What is it about?

eXplainable AI (XAI) has been a hot topic for the past few years, with most research effort centered on the technical and methodological aspects of explainability. The design of user-centered explanation interfaces, however, has received comparatively little attention. In this paper, we present a comprehensive process that takes an XAI technique for clinical decision support through stages of prototyping, testing, and redesigning based on healthcare professionals’ feedback. We address both the technical challenges and the human-centered approach, involving healthcare professionals in co-designing the explanation interface. Our study also delves into the complex relationship between trust in AI and AI explanations, highlighting the importance of a user-centered perspective in enhancing the impact and effectiveness of AI in healthcare settings.

Why is it important?

This research addresses a critical gap in the field of eXplainable AI (XAI): the design of user-centered explanation interfaces, particularly for clinical Decision Support Systems (DSSs). By co-designing human-centered AI explanation interfaces and investigating their impact on trust calibration, the study offers insights into fostering appropriate levels of trust in AI systems while mitigating the risk of automation bias. Involving healthcare professionals in the co-design process ensures that the interfaces cater to their specific needs, enhancing their relevance and utility in real-world clinical settings. Moreover, this research highlights the value of an iterative design process, encompassing prototyping, testing, and redesigning based on user feedback, which leads to continuous improvement and more effective AI explanation interfaces.

Perspectives

Working on this publication was an insightful experience, as it allowed me to collaborate with a diverse group of experts and delve into the important intersection of AI, healthcare, and human-centered design. I hope that our findings will spark curiosity and foster discussions around trust, AI adoption, and the critical need for co-designing AI solutions with end-users in mind, ultimately leading to better healthcare outcomes.

Dr. Cecilia Panigutti
European Commission - Joint Research Centre

Read the Original

This page is a summary of: Co-design of Human-centered, Explainable AI for Clinical Decision Support, ACM Transactions on Interactive Intelligent Systems, December 2023, ACM (Association for Computing Machinery), DOI: 10.1145/3587271.
