What is it about?

Artificial intelligence-based agents, interfaces, and environments surround us. Many of these technologies are machine-like, but some cross the "social" line, where we perceive them as social and react to them as we would other social beings, at least to some degree. But when and how is this line crossed? We provide an ontology, or a series of categories and their relationships, to help explain what is involved when intelligent agents become social, and when this happens. We present two different cases -- a social robot and Apple's Siri voice assistant -- to illustrate how the ontology can describe this phenomenon.

Why is it important?

We've known for a long time that people tend to react to non-people with human-like features and behaviours as if they were people, at least to some degree. Advances in technology, especially artificial intelligence, are amping up the frequency and degree to which this occurs. But we don't have a good way of describing this phenomenon. This means that we may not fully understand it, and we may not be able to design these technologies to be social on purpose (or avoid designing them to be social, if that's needed). We also don't have a standard language for talking about this phenomenon.

Perspectives

This is a first step towards standardizing our language and mapping out this phenomenon. Let's work together to test the ontology's applicability and robustness across the vast array of existing and emerging intelligent agents, interfaces, and environments that increasingly surround us.

Dr. Katie Seaborn
Tokyo Institute of Technology

Read the Original

This page is a summary of: Crossing the Tepper Line, May 2021, ACM (Association for Computing Machinery),
DOI: 10.1145/3411763.3451783.
