What is it about?

How can Explainable AI (XAI) frameworks support human-centered explanation design? What do they actually offer and how do we choose among them? From a scoping review of 73 papers, we present a unified model and a set of guiding questions to help identify, compare and select relevant frameworks across design stages, making it easier to bring human-centered XAI into real-world practice.

Why is it important?

As new AI capabilities are deployed across different contexts, human-centered explainability is crucial to ensuring people can interact with novel AI systems safely and effectively. While the XAI field has produced a vast number of frameworks, it remains unclear what these frameworks entail, what drives their development, and, more importantly, how they can support human-centered practices in real-world XAI contexts.

Read the Original

This page is a summary of: Designing, Implementing, and Evaluating AI Explanations: A Scoping Review of Explainable AI Frameworks, ACM Transactions on Computer-Human Interaction, December 2025, ACM (Association for Computing Machinery). DOI: 10.1145/3769678.
