What is it about?
This study explores how computers can learn to recognize lesser-known rhetorical figures. These are language patterns like repetition, contrast, or exaggeration that humans often use to make speech or writing more persuasive, emotional, or memorable. While computers can often detect common figures like metaphor or sarcasm, they usually miss others like antithesis (“rich and poor”), hyperbole (“a million times”), or alliteration (“wild and windy”). The authors reviewed over 100 research papers to understand which techniques work, where the challenges lie, and how future models can better understand this kind of creative, non-literal language. Their findings provide a roadmap for improving how computer systems, including AI based on large language models, interpret human communication, especially in tasks like hate speech detection, propaganda analysis, or sentiment analysis.
Why is it important?
Language technologies, from content moderation tools to chatbots, often fail when people use rhetorical language. This is a major problem because such language is everywhere: in social media, news headlines, political speeches, advertising, and everyday conversations. What makes this study timely and unique is its focus on rhetorical figures that have so far been largely ignored in this research field. Instead of building yet another model for metaphor, the authors shine a light on underexplored devices like zeugma ("he stole my heart and my wallet"), litotes ("not uninteresting"), or epizeuxis ("never, never, never"). By analyzing the methods, datasets, and technical gaps in the field, they show how limited our current models really are and what is needed to improve them. Especially with the growing use of large language models like ChatGPT in high-stakes domains such as misinformation detection and education, this work has the potential to enable fairer, more intelligent, and more human-aware systems, while also fostering more authentic interaction with language models.
Perspectives
When we started working in this field, we quickly realized how fragmented and even contradictory the information was. Definitions of rhetorical figures vary widely, and the same term could mean different things depending on the source or tradition. For someone new to the field, this landscape can feel overwhelming. We also noticed that most research focuses only on three well-known figures: metaphor, irony, and sarcasm. But there are many other rhetorical devices, like antithesis, hyperbole, or chiasmus, that are just as common in language but barely studied in computational research. We wanted to change that by looking into these lesser-known figures and including examples from other languages, not just English. With this publication, we hope to contribute to a clearer vision. Our goal was to bring together scattered research, offer structured insights, and provide a starting point that both informs and inspires new researchers. If we want authentic communication with AI systems, we need to help them recognize rhetorical figures. This work is an important step in that direction.
Ramona Kuehn
Universität Passau
Read the Original
This page is a summary of: Computational Approaches to the Detection of Lesser-Known Rhetorical Figures: A Systematic Survey and Research Challenges, ACM Computing Surveys, June 2025, ACM (Association for Computing Machinery), DOI: 10.1145/3744554.