What is it about?

We propose a computational model, based on deep neural networks, that learns to recognize emotional expressions from visual cues (facial expressions and body motion) and auditory cues. The model can also cluster emotion expressions on its own, learning emotion concepts by itself: it can discover happy or sad expressions without anyone telling it which categories exist.
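The paper's actual model is a deep neural architecture; as a rough illustration of the underlying idea, the sketch below fuses features from two modalities and lets an unsupervised clustering step discover emotion categories without labels. All names, feature dimensions, and the use of KMeans here are assumptions for illustration, not the paper's method.

```python
# Hypothetical sketch: stand-in "features" for each modality are simulated
# as well-separated Gaussian blobs; the real model extracts them with
# deep neural networks from faces, body motion, and audio.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Three simulated emotion expressions, 50 samples each (assumed numbers).
n_per_class, n_classes = 50, 3
face = np.concatenate([rng.normal(loc=c * 5.0, scale=0.5, size=(n_per_class, 8))
                       for c in range(n_classes)])
audio = np.concatenate([rng.normal(loc=c * 5.0, scale=0.5, size=(n_per_class, 4))
                        for c in range(n_classes)])

# Crossmodal fusion by concatenating modality features, then unsupervised
# clustering: the categories are discovered, never given as labels.
fused = np.hstack([face, audio])
labels = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(fused)

print(len(set(labels)))  # number of discovered emotion clusters
```

The key point the sketch conveys is that no label ever enters the pipeline; the grouping into "emotion concepts" emerges from the structure of the fused multimodal features alone.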


Why is it important?

We propose a deep learning model for multimodal emotion expression that can be used to describe emotion expressions in real-world settings. The model is also able to learn different emotion categories by itself, without the constraint of knowing the categories in advance.

Read the Original

This page is a summary of: Developing crossmodal expression recognition based on a deep neural model, Adaptive Behavior, October 2016, SAGE Publications. DOI: 10.1177/1059712316664017.
