What is it about?
This Neuroview, published in Neuron, highlights how the human brain - specifically, its evolutionarily shaped social systems - is likely to influence, and be influenced by, increasing interactions with AI-based conversational agents (AICAs) such as ChatGPT or virtual avatars. The perspective draws on decades of research in social neuroscience to outline how the brain processes interactions with others through specialized systems for mentalizing (inferring others’ thoughts), learning whom to trust, and adapting to group norms. These processes, which support human social cohesion and interaction, are now being activated during interactions with AI systems that mimic human communication. Initial behavioral and neuroimaging evidence suggests that people not only anthropomorphize AI agents but may also show patterns of trust, emotional engagement, and social conformity similar to those seen in human-human interactions. Importantly, these social responses occur automatically, potentially without the user's awareness. As a result, AI systems may influence beliefs, judgments, and emotional responses more strongly than expected.

The perspective emphasizes both the promises and the potential risks of this development. On one hand, AI companions and digital assistants could provide scalable support in domains such as mental health, education, and social connectedness - particularly for individuals who face barriers to traditional forms of interaction. On the other hand, the same social processing mechanisms may make users vulnerable to manipulation, overtrust, and biased information flows. Moreover, with growing exposure - especially among younger users - repeated interactions with highly personalized AI agents may induce experience-dependent plasticity in core social brain circuits.
These changes could influence interpersonal behavior, identity development, and social comparison processes, and could promote ‘personalized filter bubbles’ or impede personal development. The perspective calls for an integrated neuroscientific and psychological framework to systematically study these dynamics and to keep pace with the rapid developments in AI. Key open questions that this new field of the social cognitive and affective neuroscience of AI needs to address include how the brain distinguishes between AI and human agents, how social identity and moral authority are constructed in relation to AI, and how AI systems can be optimized to support, rather than undermine, healthy social functioning.
Featured Image
Photo by Steve Johnson on Unsplash
Why is it important?
Given the rapid pace of AI development and the strong commercial incentives driving its integration into everyday life, understanding how our evolutionarily shaped ‘social brain’ interacts with AI is critical. As conversational AI becomes more personalized, emotionally engaging, and persuasive, we need to understand how these technologies interact with core human brain systems. A science-based framework is needed to balance the transformative opportunities - such as improved mental health support and accessibility - against potential societal and psychological risks, including manipulation, overreliance, and unintended changes in social behavior. Anticipating and guiding these developments will be essential to ensure that AI serves human well-being rather than undermining it.
Perspectives
With the rapid development of AI, we have to understand how our social brain will automatically shape our interactions with AI, and how these circuits and our behavior may in turn be shaped by those interactions.
Benjamin Becker
The University of Hong Kong
Read the Original
This page is a summary of: Will our social brain inherently shape and be shaped by interactions with AI?, Neuron, June 2025, Elsevier,
DOI: 10.1016/j.neuron.2025.04.034.