What is it about?
While it is clear that communication can draw on many semiotic resources, research in the humanities has hitherto focused strongly on its verbal manifestations. "Multimodality" labels a variety of approaches and theories that try to remedy this bias by investigating how, for instance, visuals, music, and sound contribute to meaning-making. The contours of what is developing into a new discipline are beginning to emerge. This handbook chapter provides a brief survey of various perspectives on multimodality, addresses the thorny issue of what should count as a mode, and makes suggestions for the further development of the fledgling discipline.
Why is it important?
Communication is increasingly multimodal. The discipline of multimodality, however, is still in its early stages. One of its biggest problems is that many multimodality scholars have a background in linguistics and over-extend the analogy between verbal communication and other modes of communication, such as visuals, music, and sound. Crucially, language has a grammar and a vocabulary, while visuals and sounds do not, at least not in the literal, strict sense of "grammar" and "vocabulary" (things may be different for music). That said, these modes may have typical "structures" and "components," and each mode communicates in its own, unique way. Multimodality scholars therefore need to develop expertise in at least two different modes, and thus be knowledgeable about minimally two types of monomodal discourse, in order to theorize how these can combine to create multimodal meaning. A multimodal medium such as film requires knowledge of at least the modes of language, (moving) images, music, and sound.
Read the Original
This page is a summary of: Multimodality, May 2021, Taylor & Francis, DOI: 10.4324/9781351034708-45.