What is it about?
In this paper, we propose a framework for categorising explanations of individual decisions, which we refer to as a typology of explanations because it remains open-ended. This typology serves as a fundamental pillar of an explainability-by-design strategy: an approach aimed at systematically and coherently generating explanations for individual decisions across various contexts, such as organisational settings. The typology is particularly useful when a decision-making process can be described through a set of metadata, which we refer to as provenance metadata. By leveraging such a framework, legal engineers working at the intersection of compliance and engineering teams can help enhance transparency and accountability, especially in complex decision-making scenarios, such as those that are partially automated and involve multiple stakeholders. We evaluate this approach through two use cases: (1) a loan application process that relies on a credit score generated by a credit reference agency, and (2) a school admission process that considers multiple criteria to allocate pupils to local schools. Using this typology, we generate examples of explanations that could be computationally derived for these two cases.
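To make the idea of computationally derived explanations more concrete, here is a minimal, hypothetical sketch for the loan use case. It represents a toy provenance record as a plain Python object rather than the paper's actual data model or typology; the field names, the threshold logic, and the explanation wording are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch only: a toy provenance record for the loan use case and a
# template-based explanation derived from it. All names and wording are hypothetical.

from dataclasses import dataclass


@dataclass
class LoanDecisionProvenance:
    """Minimal, hypothetical metadata describing one loan decision."""
    applicant_id: str
    credit_score: int     # score supplied by a credit reference agency
    score_provider: str   # agency that generated the score
    threshold: int        # score required for approval
    decided_by: str       # human reviewer or automated component
    outcome: str          # "approved" or "declined"


def explain(p: LoanDecisionProvenance) -> str:
    """Derive a one-sentence explanation from the provenance record."""
    basis = (
        f"the credit score of {p.credit_score} provided by {p.score_provider} "
        f"{'met' if p.credit_score >= p.threshold else 'fell below'} "
        f"the required threshold of {p.threshold}"
    )
    return f"Application {p.applicant_id} was {p.outcome} by {p.decided_by} because {basis}."


if __name__ == "__main__":
    record = LoanDecisionProvenance(
        applicant_id="A-1042",
        credit_score=580,
        score_provider="ExampleCRA",
        threshold=620,
        decided_by="automated scoring component",
        outcome="declined",
    )
    print(explain(record))
```

Running the sketch prints a single sentence that ties the outcome to the credit score, its source, and the threshold applied, which is the kind of explanation that metadata about the decision process can support.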
Why is it important?
This work is important because various legal frameworks, including data protection laws, impose explanation requirements on decision-makers whose decisions affect individuals. However, decision-makers do not always have an incentive to explore the full range of options available for generating these explanations and often adopt a conservative approach. So far, the debate has largely focused on how much should be disclosed about the inner workings of black-box systems. Yet not all forms of automated decision-making rely on black boxes, and governance information generated outside these systems is also crucial for explaining automated decisions. To fully understand the impact of explanation requirements, it is essential to take a comprehensive approach, one that goes beyond any single explanation method, such as counterfactual explanations. Moreover, working with concrete examples can help illustrate what is possible, rather than just what is currently considered realistic. In this paper, we thus adopt a comprehensive approach to generating explanations and provide concrete examples to facilitate the discussion on the impact of explanation requirements. We also advocate for the recognition of a new role within organisations: legal engineers, who should operate at the intersection of compliance and engineering teams.
Read the Original
This page is a summary of: A Typology of Explanations for Explainability-by-Design, ACM Journal on Responsible Computing, December 2024, ACM (Association for Computing Machinery). DOI: 10.1145/3708504.