What is it about?

Electric vehicles have changed what we hear on the road. Or more precisely, they have changed what we do not hear. At low speeds, electric vehicles are much quieter than conventional cars. That may sound like progress, and in many ways it is. But it also creates a safety problem: pedestrians, cyclists, and other road users often rely on sound to notice an approaching vehicle. For this reason, Acoustic Vehicle Alerting Systems, or AVAS, have become mandatory in many regions.

The engineering problem seems simple at first: add a sound so the vehicle can be heard. The real design problem is harder: what kind of sound should an electric vehicle make? A recent study approaches that question from a useful angle. Instead of asking only what satisfies regulations, it asks what different kinds of people actually prefer. That shift matters. A warning sound is not just a technical signal; it also communicates meaning. It can sound futuristic, calm, powerful, annoying, familiar, or artificial. The paper argues that EV sound design should not be based only on acoustic measurements, but also on how listeners describe and emotionally interpret what they hear.

The researchers recorded five real EV AVAS sounds, from several electric cars and one electric truck, using a dummy head in a low-noise outdoor test environment set up according to UNECE R138 measurement conditions. These recordings were then used in an online listening study. Twenty-nine participants with prior exposure to EVs listened to the samples and rated them on 25 semantic scales such as soft–hard, calm–aggressive, weak–powerful, electric-like, futuristic, annoying, and truck-like–car-like.

In simple terms, the study tried to connect the physical features of a sound to the words people naturally use when they describe it. That idea is more important than it may first appear. Engineers often work with spectra, levels, and psychoacoustic metrics. Listeners do not.
A listener does not usually say, “this sound has high tonality in a certain frequency range.” They say, “this sounds futuristic,” or “this is too harsh,” or “this feels calm.” Semantic attributes are useful because they translate technical sound behavior into human meaning. The study builds on earlier work in vehicle sound design, but extends it into the EV and AVAS domain, where artificiality, low annoyance, safety, and brand character all have to coexist.

One of the strongest parts of the study is that it does not treat consumers as one homogeneous group. The researchers divided participants into three consumer segments using a scoring system and k-means clustering: Early Adopters, Mainstream Consumers, and a Hybrid or Undecided group. Early Adopters were more attracted to novelty, innovation, and environmental aspects. Mainstream Consumers cared more about reliability, cost, and familiarity. The Hybrid group sat between the two. In the final sample, about 29% were classified as Early Adopters, 41% as Mainstream Consumers, and 30% as Hybrid or Undecided.

This segmentation changed the story completely. Early Adopters and Mainstream Consumers did not want the same thing from an EV sound. The word clouds shown in the paper make this especially clear. Early Adopters tended to describe desirable EV sounds with words such as “futuristic,” “high-tech,” and “distinctive.” Mainstream Consumers, in contrast, leaned toward words like “quiet,” “low,” and more familiar-sounding qualities. That means the design target for AVAS is not universal. A sound that feels exciting and technologically advanced to one group may feel intrusive or unnatural to another.

The psychoacoustic analysis helps explain why. The study found that tonality had a strong positive relationship with the perception of “futuristic.” Sounds with stronger tonal content, especially in the mid-high Bark bands between about 8.5 and 12.5 Bark, were more often judged as innovative and technologically advanced.
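To get a feel for what those Bark values mean, they can be converted to ordinary frequencies. The sketch below uses Traunmüller's (1990) published approximation of the Bark critical-band scale; the paper does not state which Bark formula underlies its analysis, so the exact conversion here is an assumption for illustration only.

```python
# Converting between frequency (Hz) and the Bark critical-band scale
# using Traunmüller's (1990) approximation. The study does not say
# which Bark formula it used, so this conversion is illustrative.

def hz_to_bark(f_hz: float) -> float:
    """Approximate critical-band rate (Bark) for a frequency in Hz."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

def bark_to_hz(z: float) -> float:
    """Exact algebraic inverse of hz_to_bark."""
    return 1960.0 * (z + 0.53) / (26.28 - z)

# The mid-high tonal region linked to "futuristic" in the study:
for z in (8.5, 12.5):
    print(f"{z} Bark ≈ {bark_to_hz(z):.0f} Hz")
```

Under this approximation, 8.5–12.5 Bark corresponds to roughly 1.0–1.9 kHz, a range in which human hearing is quite sensitive, so tonal components there are easy to notice.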
Early Adopters preferred these sounds and even rated them as less annoying when the artificial character matched their expectations of advanced technology. That is a useful result because it shows that annoyance is not purely a physical property. It is partly interpretive. The same artificial sound feature may be accepted by one listener group and rejected by another, depending on what the feature seems to mean. In this case, stronger tonality was not simply “more noticeable.” For Early Adopters, it also signaled modernity.

Mainstream Consumers responded differently. They preferred quieter, less intrusive sounds. In the study, one particular sound was favored by this group because it was perceived as quieter and had lower fluctuation strength in lower Bark bands, roughly from 2.5 to 6.5 Bark. A simple way to think about fluctuation strength is this: it describes slow, clearly noticeable variations in loudness over time. When that quality was reduced, the sound felt calmer and less attention-demanding. That matched the Mainstream group’s preference for unobtrusive and familiar warning sounds.

This leads to a broader design lesson. AVAS should probably not be treated as a one-size-fits-all product feature. If a manufacturer wants to communicate cutting-edge innovation, stronger tonal cues in selected hearing-relevant bands may help. If the goal is broader public acceptance and lower perceived annoyance, reducing fluctuation-related cues in lower bands may be a better strategy. The point is not that one sound profile is objectively best. The point is that sound design should follow the intended user experience and market position.

There is also a methodological lesson here. The paper shows the value of combining three layers that are often kept separate: consumer psychology, semantic evaluation, and psychoacoustic modelling. Consumer psychology tells us that people differ. Semantic evaluation captures how those differences appear in language.
Psychoacoustic modelling then links that language back to measurable properties of the sound. That combination is much more useful than relying on regulations alone or on expert listeners alone.

The study is promising, but it also has limits. The sample was relatively small, with 29 participants, and the work was based on online listening rather than fully controlled laboratory conditions. It used five recorded vehicle sounds, which is a good start but still a narrow design space. So the results should not be treated as final design rules for all EVs in all markets. They are better understood as a strong prototype for a more consumer-centered AVAS development process.

Still, the central message is convincing: an electric vehicle sound is not only a safety signal; it is also a perceptual identity. If designers want AVAS sounds that people accept, understand, and even prefer, they need to look beyond audibility and start asking what different listeners actually want to hear. This study shows that the answer depends on who is listening.
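The segmentation step that underpins the whole analysis can be sketched in code. The study clustered participants' questionnaire scores with k-means into three segments; the feature names, scores, and group sizes below are invented for illustration and do not reproduce the paper's actual items or scoring.

```python
# Toy sketch of the consumer-segmentation step: k-means (k = 3) on
# per-participant attitude scores. All data here is synthetic; the
# study's real questionnaire items and scoring are not reproduced.
import numpy as np

def kmeans(X: np.ndarray, k: int, iters: int = 100) -> np.ndarray:
    """Plain Lloyd k-means with deterministic farthest-point init."""
    centers = [X[0]]
    for _ in range(k - 1):
        # Next center: the point farthest from all chosen centers.
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[int(d.argmax())])
    centers = np.array(centers)
    for _ in range(iters):
        # Assign each participant to the nearest centroid.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its members.
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels

rng = np.random.default_rng(1)
# Hypothetical attitude scores (1-5 scale) per participant:
# columns = novelty-seeking, eco-orientation, price-sensitivity.
early  = rng.normal([4.5, 4.0, 2.0], 0.2, size=(9, 3))   # "Early Adopter"-like
main   = rng.normal([2.0, 2.5, 4.5], 0.2, size=(12, 3))  # "Mainstream"-like
hybrid = rng.normal([3.2, 3.2, 3.2], 0.2, size=(8, 3))   # "Hybrid/Undecided"-like
X = np.vstack([early, main, hybrid])                      # 29 participants, as in the study
labels = kmeans(X, k=3)
print("segment sizes:", np.bincount(labels))
```

The sketch only shows the mechanics; in the study the clustering ran on real questionnaire scores, and the resulting segment sizes (roughly 29%, 41%, and 30%) came from the data rather than being chosen in advance.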

Read the Original

This page is a summary of: Psychoacoustic Modelling of AVAS Sounds: Consumer-Centric Semantic Attribute Development for Electric Vehicles, December 2025, European Acoustics Association, DOI: 10.61782/fa.2025.0714.
