What is it about?

Current AI solutions function as black boxes, offering no rigorous explanation of their internal processes. To enhance trust in and acceptance of AI-based technology in clinical medicine, there is a growing effort to address this challenge with eXplainable AI (XAI): a set of techniques, strategies, and algorithms explicitly focused on explaining the "hows and whys" of deep neural networks (DNNs). Here, we comprehensively review state-of-the-art XAI techniques for healthcare applications and discuss current challenges and future directions.

Why is it important?

Clarifying and understanding the inner workings of a deep network is essential for instilling greater confidence in clinicians regarding its decision-making process.

Perspectives

This paper will help readers select an explainable AI (XAI) method suited to their data type, such as image, text, tabular, or multimodal data. We describe the strengths and limitations of each method and offer several recommendations for its use.

Md Imran Hossain
University of South Florida

Read the Original

This page is a summary of: Explainable AI for Medical Data: Current Methods, Limitations, and Future Directions, ACM Computing Surveys, December 2023, ACM (Association for Computing Machinery),
DOI: 10.1145/3637487.
