What is it about?

This study asks whether features or gestures serve as the linguistic representation of speech sounds in speech production and perception. To address this question, we compare feature- and gesture-based models in light of recent MRI, external photoglottography, and acoustic data on Korean consonants, as well as two perception studies on how Korean listeners categorize Japanese sounds.

Why is it important?

For speech production, we consider recent MRI, new noninvasive external photoglottography (henceforth, ePGG), and acoustic data on the three-way phonation contrast in Korean stops (Kim et al. 2005, 2010, 2018), as well as MRI and acoustic data on tongue movements in Korean palatalization (H. Kim 2012). For speech perception, we draw on two recent studies of Seoul Korean listeners' categorization of Japanese geminates and plosives followed by an H or L vowel (H. Kim 2017, 2019). From these experimental data, we compare feature- and gesture-based models of the linguistic representation of speech sounds.

Perspectives

From these comparisons, we suggest that discrete features, rather than gestures, are preferred as the linguistic representation of speech sounds in both speech production and perception, and that the interaction of features with prosody in perception further favors feature-based models in Korean.

Hyunsoon Kim
Hongik University, Seoul

Read the Original

This page is a summary of: Features or gestures in speech production and perception?, June 2023, Oxford University Press (OUP),
DOI: 10.1093/oso/9780198791126.003.0006.
You can read the full text via the DOI above.
