What is it about?
When we listen to speech sounds, our brain needs to combine information from both hemispheres. How does the brain integrate acoustic information from remote areas? In a neuroimaging study, a team of researchers led by the Max Planck Institute of Psycholinguistics, the Donders Institute and the University of Zurich applied electrical stimulation to participants’ brains during a listening task. The stimulation affected the connection between the two hemispheres, which in turn changed participants’ listening behaviour.
Why is it important?
This is the first demonstration in the auditory domain that interhemispheric connectivity is important for the integration of speech sound information. These results give us valuable insights into how the brain’s hemispheres are coordinated, and how we may use experimental techniques to manipulate this.
Read the Original
This page is a summary of: Selective modulation of interhemispheric connectivity by transcranial alternating current stimulation influences binaural integration, Proceedings of the National Academy of Sciences, February 2021.
DOI: 10.1073/pnas.2015488118
Resources
Synchronization of Brain Hemispheres Changes What We Hear
Most of the time, our brain receives different input from each of our ears, but we nevertheless perceive speech as unified sounds. This process takes place through synchronization of the areas of the brain involved with the help of gamma waves, neurolinguists at the University of Zurich have now discovered. Their findings may lead to new treatment approaches for tinnitus.
Changing the connection between the hemispheres affects speech perception
When we listen to speech sounds, the information that enters our left and right ear is not exactly the same. This may be because acoustic information reaches one ear before the other, or because the sound is perceived as louder by one of the ears. Information about speech sounds also reaches different parts of our brain, and the two hemispheres are specialised in processing different types of acoustic information. But how does the brain integrate auditory information from different areas?
Original Publication PNAS
Sensory processing depends upon the integration of widely distributed neural assemblies. During everyday listening, our two ears receive different information (due to interaural time and amplitude differences), and the two hemispheres are known to extract different acoustic features. Nonetheless, acoustic features belonging to the same source become integrated. It has been suggested that the brain overcomes this “binding problem” through synchronization of oscillatory activity across the relevant regions. Here we probe interhemispheric oscillatory synchronization as a mechanism for acoustic feature binding, using bihemispheric transcranial alternating current stimulation. Concurrent functional MRI reveals that antiphase stimulation of auditory areas changes effective connectivity between these areas, and that this change in connectivity predicts perceptual integration of dichotic stimuli.