What is it about?
This is a conceptual work that tries to go beyond the currently dominant view of "Explainable AI", which assumes that AI systems make decisions and humans are tasked with understanding those decisions. Instead, we suggest that AI should support humans' natural decision processes.
Why is it important?
The paper tackles a topic that is already heavily discussed and is likely to gain even more importance in the coming years.
Perspectives
We are currently experiencing rapid advances in AI, but how we will live with AI in the future is not yet set in stone. Current trends in AI system design often do not position AI as natural support for our tasks, but instead constantly create puzzlement about how the AI system works. Concentrating solely on explaining AI feels a bit like fixating on teaching first-time computer users how to use a command-line interface. Personally, I believe that we can and should do better.
Tony Zhang
fortiss
Read the Original
This page is a summary of: Forward Reasoning Decision Support: Toward a More Complete View of the Human-AI Interaction Design Space, July 2021, ACM (Association for Computing Machinery), DOI: 10.1145/3464385.3464696.