What is it about?

The objective of this paper is to develop and empirically validate a conceptual model that explains individuals' behavioral intention to accept AI-based recommendations as a function of attitude toward AI, trust, perceived accuracy, and uncertainty level. The model was tested through a between-participants experiment using a simulated AI-enabled investment recommendation system. A total of 368 participants were randomly and evenly assigned to one of two experimental conditions: one depicting a low-uncertainty investment recommendation involving blue-chip stocks, the other a high-uncertainty recommendation involving penny stocks. Results show that attitude toward AI was positively associated with behavioral intention to accept AI-based recommendations, trust in AI, and perceived accuracy of AI. Furthermore, uncertainty level moderated how attitude, trust, and perceived accuracy related to behavioral intention. When uncertainty was low, a favorable attitude toward AI seemed sufficient to promote reliance on automation. When uncertainty was high, however, a favorable attitude toward AI was a necessary but no longer sufficient condition for AI acceptance. The paper thus contributes to the human-AI interaction literature by shedding light on the psychological mechanism through which users decide to accept AI-enabled advice, and by adding to the scholarly understanding of AI recommendation systems in tasks that call for intuition in high-involvement services.

Why is it important?

The paper offers insights into how the uptake of AI recommendation systems can be promoted in high-involvement industries such as healthcare and finance, where machine-generated advice has met with considerable resistance. As new AI recommendation systems proliferate, policymakers should ensure that the public develops a realistic attitude toward AI. Furthermore, marketing communication for AI recommendation systems should be tailored to the decision-making context. In high-risk situations, for example, the systems' past successes could be recounted to inspire user confidence, and AI systems offering recommendations under high risk should be designed in ways that enhance perceptions of trust and accuracy.

Read the Original

This page is a summary of: AI-enabled investment advice: Will users buy it?, Computers in Human Behavior, January 2023, Elsevier,
DOI: 10.1016/j.chb.2022.107481.
