What is it about?
This article considers an alternative approach to morality: like the first cells that joined to form bodies (a.k.a. "corporantia"), individual robots and humans could relinquish their independent moral thinking to become interdependent parts of a larger moral agent. Bodies can do what individual cells never could, and larger moral agents may likewise achieve greater morality than any of us could achieve independently. This article reviews the existing evidence for this possibility and ways to test it further.
Why is it important?
This is the age in which humans develop robots. If it is true that we should not try to teach individual robots to act as moral agents, then we should develop them differently. This is also an age in which governments are threatened by political polarization. If it is true that we should not expect individual humans to act as moral agents, that would help explain why polarization persists. And if governments should not be designed around the assumption that humans will act as individual moral agents, then recognizing that design flaw points the way to social reform.
Read the Original
This page is a summary of: Corporantia: Is moral consciousness above individual brains/robots?, Paladyn, Journal of Behavioral Robotics, February 2018, De Gruyter, DOI: 10.1515/pjbr-2018-0001.