What is it about?

This article highlights the harms and discriminatory risks suffered by marginalised groups as a result of AI ethical dilemmas. It argues that disadvantaged and minority groups need to lend their voices to shaping AI regulation so that it reflects their particular circumstances.


Why is it important?

The paper recommends the guarded deployment of AI vigilantism to regulate the use of AI technologies and to prevent harms arising from the operation of AI systems.

Perspectives

It was a great pleasure to write this article on the need to include minority and discriminated-against groups in shaping AI regulation so that their particular circumstances are captured.

Dr Ifeoma Nwafor

Read the Original

This page is a summary of: AI ethical bias: a case for AI vigilantism (AIlantism) in shaping the regulation of AI, International Journal of Law and Information Technology, October 2021, Oxford University Press (OUP), DOI: 10.1093/ijlit/eaab008.
