What is it about?

This work discusses research that aims to improve the transparency of artificial intelligence (AI) systems by making them explainable to users. The focus is on local model-agnostic explanation methods, which explain individual predictions to users. However, the user's perspective has received comparatively little attention in this line of research, leading to limited user involvement in the design of explanations and a limited understanding of how users visually attend to them. The researchers refined representations of local explanations from four well-established model-agnostic XAI methods in an iterative design process with users. They then evaluated the refined explanation representations in a laboratory experiment using eye-tracking technology, self-reports, and interviews. The results show that users do not necessarily prefer simple explanations and that individual characteristics, such as gender and prior experience with AI systems, strongly influence their preferences. In addition, users find some explanations useful only in certain scenarios, making the selection of an appropriate explanation highly dependent on context. The research contributes to ongoing efforts to improve the transparency of AI.
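To illustrate what a local model-agnostic explanation of an individual prediction looks like in practice, here is a minimal sketch using the open-source `shap` library; the dataset, model, and library choice are assumptions for the example and do not reflect the methods or code used in the study.

```python
# A minimal sketch, assuming scikit-learn and the open-source `shap` library;
# illustrative only, not the authors' code or the methods evaluated in the paper.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train any black-box model; a model-agnostic explainer only needs its prediction function.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Model-agnostic explainer built from the prediction function and background data.
explainer = shap.Explainer(model.predict_proba, shap.sample(X, 100))

# Explain one individual prediction (a "local" explanation).
explanation = explainer(X.iloc[:1])
print(explanation.values)  # per-feature contributions for this single instance
```

The per-feature contributions produced this way are the kind of raw output that explanation representations, such as those refined in the study, present visually to users.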


Why is it important?

Explainability is becoming increasingly important for AI systems to prevent biased decisions and their undesired consequences. The European Commission and the National Institute of Standards and Technology (NIST) have identified explainability as an essential requirement for trustworthy AI systems. Many companies also see it as a critical requirement, with 68% of business leaders expecting customers to demand more explainability from AI systems in the future. As a result, explainability is gaining prominence in commercial AI systems, with companies adopting it to manage risks and improve customers' trust. Many companies, including IBM and H2O.ai, have developed AI platforms that offer explainability as one of their main features. Additionally, several leading technology companies have released open-source libraries and toolkits that help users gain a comprehensive understanding of AI systems and the decisions they deliver.

Read the Original

This page is a summary of: Does this Explanation Help? Designing Local Model-agnostic Explanation Representations and an Experimental Evaluation Using Eye-tracking Technology, ACM Transactions on Interactive Intelligent Systems, December 2023, ACM (Association for Computing Machinery). DOI: 10.1145/3607145.
