What is it about?
When implementing AI solutions in decision-making contexts, problems may not only be caused by 'technical' limitations; they may also stem from people's unwillingness to provide the data needed for AI to succeed. In this paper, we present our ethnographic findings on this matter and discuss implications for AI-supported practice and research.
Why is it important?
Our study focuses on caseworkers' decision-making tasks in a Danish jobcentre and their reasons for not writing down their own descriptions of citizens, which are crucial to their work but invisible in the records. When classifying people, the caseworkers know that they are producing a 'type' of person. These typifications are created, used, and reused in combination, but people can and do change. Keeping information 'confidential' allows the caseworkers not only to use their classifications but also to change them. Our paper thus addresses broader and more fundamental questions: what data is (and should be) made available for AI, and for what purposes?
The following have contributed to this page: Richard Harper and Anette Petersen