What is it about?

In today's digital era, we often rely on shared information displays for decision-making. However, problems can arise when many people have access to the same predictions. Consider a display predicting a taxi driver's chance of getting a pickup based on historical data about where drivers tend to go. Drivers with different depths of strategic thinking may react differently to this information. Some may follow the predictions and search the most lucrative location shown on the display. More strategic drivers, however, may realize that if everyone uses the display, there will be a surplus of drivers at the most lucrative location, so they might search locations with lower displayed pickup probabilities to increase their chances of securing a pickup. Because of these varied strategic responses, predictions from a model that is well calibrated on historical data can still appear inaccurate to decision-makers in hindsight. To what extent can the design of shared displays help data providers avoid the problems that arise when agents respond strategically? By evaluating design choices such as visualizing prediction uncertainty and providing post-hoc feedback on prediction errors, we identified trade-offs between individual utility, social welfare, and decision-makers' trust in and reliance on predictions. Our findings and design recommendations may guide the development of predictive systems that can coordinate human behavior toward more equitable and desirable outcomes in large, complex systems.
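
To make the taxi example concrete, here is a minimal Python sketch of level-k style reasoning under a shared display. The congestion model, driver count, fare counts, and displayed probabilities are all illustrative assumptions for this sketch, not values or methods taken from the paper:

```python
# Illustrative sketch of level-k reasoning in the taxi example.
# All numbers and the congestion model below are hypothetical.

N_DRIVERS = 100
FARES = {"A": 60, "B": 30}        # expected fares per location (assumed)
DISPLAYED = {"A": 0.8, "B": 0.5}  # displayed historical pickup probabilities (assumed)

def realized_prob(fares, drivers):
    """Chance each driver gets a fare when `drivers` drivers compete for them."""
    if drivers == 0:
        return 0.0
    return min(1.0, fares / drivers)

def choice(level):
    """Location chosen by a driver reasoning at the given depth."""
    if level == 0:
        # Level-0: take the display at face value.
        return max(DISPLAYED, key=DISPLAYED.get)
    # Level-k: assume all other drivers reason at level k-1,
    # then best-respond to the congestion that choice creates.
    others = choice(level - 1)
    crowd = {loc: (N_DRIVERS - 1) if loc == others else 0 for loc in FARES}
    return max(FARES, key=lambda loc: realized_prob(FARES[loc], crowd[loc] + 1))

for k in range(3):
    print(f"Level-{k} driver searches location {choice(k)}")
```

Under these assumptions, a level-0 driver heads to the location with the best displayed probability, a level-1 driver anticipates the resulting crowd and deviates to the "worse" location, and a level-2 driver anticipates that deviation in turn. This is why a display that is well calibrated on historical data can still look wrong in hindsight once everyone reacts to it.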

Why is it important?

Although predictions on shared displays can provide decision-makers with useful information, the full benefit of this information access may not always be realized due to humans' behavioral responses (e.g., second-guessing, herding, overthinking). If these human factors are not accounted for during design, the interface can create gaps between the displayed prediction and the realized outcome. Consequently, decision-makers may come to see the displayed predictions as inaccurate, lose trust, and stop using the service altogether. Our results demonstrate the importance of accounting for these behavioral factors in the design of shared information displays. They highlight the need for solutions that respect individual autonomy while promoting trust and transparency, and they can guide the design and development of more robust decision-making tools.

Perspectives

This work opens new avenues for exploring robust design strategies. By integrating behavioral models with experimental design, we can characterize how users with different depths of strategic thinking respond to information stimuli. The trade-offs between individual utility, social welfare, and trust in the system highlight the need for designers to evaluate and optimize design choices against their objectives. We hope this work will prompt further discussion and research on designing shared information displays that support informed, data-driven decision-making.

Dongping Zhang
Northwestern University

Read the Original

This page is a summary of: Designing Shared Information Displays for Agents of Varying Strategic Sophistication, Proceedings of the ACM on Human-Computer Interaction, April 2024, ACM (Association for Computing Machinery), DOI: 10.1145/3637319.
You can read the full text via the DOI above.
