What is it about?

This article reviews the issues raised by robots and other forms of artificial intelligence, and the concerns that would arise if they were to become autonomous moral agents - in other words, if they could learn and eventually make ethical decisions without any human interference. If policymakers do not act soon to regulate these developments, it may be too late.

Why is it important?

Robots and AI currently depend upon algorithms established by humans, so they are not autonomous decision makers: their decisions are governed by the human-designed algorithms built into them. If they ever become able to learn independently of humans, they could be in a position to make their own moral decisions, and there is no guarantee that those decisions would be ones humans prefer. If even humans cannot establish 'perfect' moral algorithms, there is no reason to suppose that robots could.

Perspectives

The technology, and the excitement about AI innovation, are moving so rapidly - for both commercial and scientific/technological reasons - that there is a danger of losing control over the outcomes. Policymakers and regulators need to consider very carefully how to monitor and control these developments. 'Wait and see' is a dangerous policy: by the time action is taken, it might be too late to prevent autonomous learning machines that have become independent moral agents from making decisions that risk harm to humans.

Dr Ron Iphofen
Independent

Read the Original

This page is a summary of: Regulating artificial intelligence and robotics: ethics by design in a digital society, Contemporary Social Science, January 2019, Taylor & Francis.
DOI: 10.1080/21582041.2018.1563803.
You can read the full text via the DOI above.
