What is it about?

Summary: We have all had the experience of trying to understand what others are saying when we can't see their face (e.g., when the speaker is wearing a mask). A common feature of autism is less flexible gaze to others' faces, along with problems with speech and language. Because seeing a person's face while they talk boosts understanding of what is said, this might contribute to difficulties in language development in autism. To better understand this, we asked how children with autism respond to talking faces. Using a technology called EEG, which measures electrical activity in the brain, we showed a group of children with autism and a group with typical development videos of people talking. Children with autism were slightly less influenced by the face of a talker, suggesting difficulties in processing visual and auditory speech simultaneously. Brain responses to subtle changes in speech sounds also differed between children with autism and their typically developing peers. This suggests that children with autism may have a harder time distinguishing small differences between sounds and may benefit less from looking at a talker's face.

Why is it important?

Understanding others can be especially difficult in the noisy places where children spend much of their time, such as classrooms, cafeterias, and playgrounds. Differences in understanding speech may therefore have a cascading negative impact on social and academic success.

Perspectives

We hope that a better understanding of speech and language processing in autism will allow us to develop effective therapies.

Julia Irwin
Southern Connecticut State University

Read the Original

This page is a summary of: Neural and Behavioral Differences in Speech Perception for Children With Autism Spectrum Disorders Within an Audiovisual Context, Journal of Speech, Language, and Hearing Research, June 2023, American Speech-Language-Hearing Association (ASHA).
DOI: 10.1044/2023_jslhr-22-00661.
