What is it about?

We developed a new technique for predicting the mood of music from acoustic signals and social tags. The model is adaptive and exploits the specific properties of different musical genres: it handles moods in rock differently than in world music or classical music. It was more accurate than existing models and could handle dimensions of emotion that are typically challenging to predict, such as positive-negative (also known as emotional valence).
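
To make the idea concrete, here is a minimal sketch in Python (using scikit-learn) of what genre-adaptive mood prediction can look like: a separate valence regressor per genre, with a global model as a fallback. This is our illustration, not the system from the paper; the feature dimensions, genre list, ridge regressors, and the predict_valence helper are all toy placeholders.

    # A minimal sketch of genre-adaptive mood (valence) prediction.
    # Features, genres, and the per-genre ridge regressors are toy
    # placeholders, not the model described in the paper.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    genres = ["rock", "classical", "world"]

    # Toy data: 10 audio-feature dimensions per track, valence in [-1, 1].
    X = {g: rng.normal(size=(200, 10)) for g in genres}
    y = {g: rng.uniform(-1, 1, size=200) for g in genres}

    # Global baseline: one mapping from acoustics to valence for all genres.
    global_model = Ridge().fit(np.vstack([X[g] for g in genres]),
                               np.concatenate([y[g] for g in genres]))

    # Genre-adaptive variant: a separate model per genre, so the mapping
    # from acoustics to emotion can differ between, say, rock and classical.
    genre_models = {g: Ridge().fit(X[g], y[g]) for g in genres}

    def predict_valence(features, genre=None):
        # Use the genre-specific model when the genre is known,
        # otherwise fall back to the global model.
        model = genre_models.get(genre, global_model)
        return float(model.predict(features.reshape(1, -1))[0])

    print(predict_valence(rng.normal(size=10), genre="rock"))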

Why is it important?

Automatic prediction of the mood of music is fundamental for helping listeners find suitable music in streaming services. Estimating a broad emotion category can be done fairly well from acoustic features or metadata, but more subtle prediction of emotional gradients falls far short of how well listeners can distinguish emotional nuances in music. Our approach takes into account the musical differences embedded in the aesthetics of different genres. For instance, what is deemed positive emotion may mean different things in classical music, blues, and techno, and incorporating this kind of genre-internal knowledge enables the model to capture the emotional nuances of music better.

Read the Original

This page is a summary of: Genre-Adaptive Semantic Computing and Audio-Based Modelling for Music Mood Annotation, IEEE Transactions on Affective Computing, April 2016, Institute of Electrical & Electronics Engineers (IEEE).
DOI: 10.1109/taffc.2015.2462841
