What is it about?

This paper explores different methods of explaining a (predictive) AI algorithm to users, for example through text, visualisations, or graphs. We also examined how users with different levels of expertise interpret such explanations, to see whether certain types of explanation fit certain expertise groups better.

Why is it important?

Explanations for AI algorithms often adopt a "one-size-fits-all" approach: the same type of explanation is used regardless of the end user. Our research showed that there is in fact a large gap between expertise groups, one that can even lead to problematic outcomes, such as misinterpreting explanations due to various cognitive biases. Our paper therefore also pinpoints and discusses the problems that can arise when expertise is not taken into account.

Read the Original

This page is a summary of: Visual, textual or hybrid: the effect of user expertise on different explanations, April 2021, ACM (Association for Computing Machinery),
DOI: 10.1145/3397481.3450662.
