What is it about?

There is growing interest in the socially just use of Artificial Intelligence (AI) and Machine Learning (ML) when developing technology that may reach marginalized people. Exploring such technologies, however, requires understanding how they can deepen or counter marginalization. AI/ML algorithms raise challenges such as privacy and security concerns, bias, unfairness, and a lack of cultural awareness, all of which especially affect marginalized people. This workshop will provide a forum to share experiences and challenges of developing AI/ML health and social wellbeing technologies with and for marginalized people, and will work towards design methods for re-envisioning AI/ML technologies with and for them. In doing so, we will create dialogues and collaborations across research areas. These discussions build a basis to (1) explore potential tools to support designing AI/ML systems with marginalized people, and (2) develop a design agenda for future research and AI/ML technology for and with marginalized people.

Read the Original

This page is a summary of: Artificially Intelligent Technology for the Margins: A Multidisciplinary Design Agenda, May 2021, ACM (Association for Computing Machinery), DOI: 10.1145/3411763.3441333.
You can read the full text via the DOI above.
