What is it about?
This article highlights the harms and discriminatory risks that marginalised groups suffer as a result of ethical failings in AI. It argues that disadvantaged and minority groups need to lend their voices to shaping AI regulation so that it reflects their circumstances.
Why is it important?
The paper recommends the guarded deployment of AI vigilantism ("AIlantism") to regulate the use of AI technologies and to prevent harm arising from the operation of AI systems.
Read the Original
This page is a summary of: AI ethical bias: a case for AI vigilantism (AIlantism) in shaping the regulation of AI, International Journal of Law and Information Technology, October 2021, Oxford University Press (OUP), DOI: 10.1093/ijlit/eaab008.