What is it about?
There is evidence that a user’s subjective confidence in an Artificial Intelligence (AI)-based system is crucial to how the system is used, and can be even more decisive than the system’s objective effectiveness and efficiency. Accordingly, different methods have been proposed for analyzing trust in AI. In our research, we set out to evaluate how the degree of perceived trust in an AI system can affect a user’s final decision to follow its recommendations. To this end, we established the criteria that such an evaluation should meet, following a co-creation approach with a multidisciplinary group of 10 experts. After a systematic review of 3,204 articles, we found that none of the existing tools met these inclusion criteria. We therefore introduce the “Perceived Operational Trust Degree in AI” (POTDAI) tool, which is based on the findings of the expert group and the literature analysis and was developed with a methodology more rigorous than those previously used to create similar evaluation instruments. POTDAI is a short questionnaire of six Likert-type items, inspired by the original version of the Technology Acceptance Model (TAM) and designed for quick and easy application. In this way, we also respond to the need, pointed out by authors such as Vorm and Combs, to extend the TAM to address questions of user perception in systems with an AI component. POTDAI can thus be used on its own or in combination with the TAM to obtain additional information on the system’s perceived usefulness and ease of use.
Featured Image
Photo by Igor Omilaev on Unsplash
Why is it important?
This article is important because it proposes a new, specific tool (POTDAI) to measure the extent to which people are willing to follow the recommendations of an artificial intelligence system, something that existing scales do not adequately capture. It builds on a systematic review of 3,204 articles showing that no existing instrument briefly and operationally assesses perceived trust in AI, especially in critical contexts such as police interventions. On that basis, it develops a six-item questionnaire that can be used on its own or integrated with the Technology Acceptance Model to inform the design of safer, more trustworthy AI systems.
Perspectives
From my perspective as an author, this work arose from the conviction that, in many real applications, what truly determines how AI is used is not only how accurate it is, but how much people feel they can trust it in practice. I repeatedly saw situations where systems with good technical performance were either ignored out of mistrust or followed blindly out of overtrust, and I felt we were missing a simple way to capture that operational dimension of trust in user studies. With POTDAI, my intention was to offer a concise, rigorously developed tool that helps researchers and practitioners quickly quantify whether users are inclined to follow an AI system’s recommendations, and how overtrust, mistrust or a sense of being monitored may be influencing that behavior. I hope this scale can be combined with existing acceptance models to give a more complete picture of human–AI interaction and, ultimately, guide the design and evaluation of AI systems that are not only technically robust but also used in a safer and more conscious way by their human operators.
Eduardo García Laredo
Read the Original
This page is a summary of: POTDAI: A Tool to Evaluate the Perceived Operational Trust Degree in Artificial Intelligence Systems, IEEE Access, January 2024, Institute of Electrical & Electronics Engineers (IEEE), DOI: 10.1109/access.2024.3454061.
Contributors
The following have contributed to this page