What is it about?

This paper presents an approach to recognizing affective facial expressions together with hand movements in a spatio-temporal representation. Faces and hands are tracked across multiple frames and extracted as regions of interest (ROIs); their features are then concatenated and classified into seven expression classes: disgust, neutral, happy, sad, scared, angry, and surprise.
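The pipeline above can be sketched in a few lines. This is an illustrative outline only: the feature dimensions, function names, and toy data below are assumptions for demonstration, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical sketch of the described pipeline: per frame, the tracked
# face ROI and hand ROI each yield a feature vector; the two vectors are
# concatenated, and the per-frame features are stacked over time into a
# spatio-temporal sequence ready for sequence classification.
# All dimensions here are illustrative assumptions, not from the paper.

EXPRESSIONS = ["disgust", "neutral", "happy", "sad", "scared", "angry", "surprise"]

def build_sequence(face_feats, hand_feats):
    """Concatenate face and hand ROI features frame by frame.

    face_feats: (T, Df) array, hand_feats: (T, Dh) array.
    Returns a (T, Df + Dh) spatio-temporal feature sequence.
    """
    return np.concatenate([face_feats, hand_feats], axis=1)

# Toy example: 10 frames, 128-dim face features, 64-dim hand features.
rng = np.random.default_rng(0)
seq = build_sequence(rng.normal(size=(10, 128)), rng.normal(size=(10, 64)))
print(seq.shape)  # one 192-dim concatenated vector per frame
```

The sequence would then be fed to a temporal classifier that outputs one of the seven expression classes.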


Why is it important?

Combining manual and non-manual features of sign language is a complex problem, and the current study contributes to a better understanding of sign language. In this research, three modified architectures are combined into a novel hybrid architecture, MM-SLR, which recognizes non-manual features based on facial expressions along with manual gestures in the spatio-temporal domain, representing hand movements for automatic sign language recognition. Experiments are conducted on three public sign language datasets, and the results are analyzed from multiple aspects and at multiple levels. The average loss of the modified architecture is 0.34 and stabilizes after 2000 iterations. A further qualitative analysis is performed on all three datasets in terms of precision, recall, and F1 score, and the model performs promisingly on each. Overall, our model classifies gestures based on manual and non-manual features using an LSTM architecture; on the PkSLMNM dataset, the training and validation accuracy is 83% and 79%, respectively.
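For readers unfamiliar with the evaluation terms above, a minimal sketch of per-class precision, recall, and F1 follows. The labels and predictions are toy data for illustration only, not results from the paper.

```python
# Per-class precision, recall, and F1 score, as used in the qualitative
# analysis described above. Toy data only; not the paper's results.

def precision_recall_f1(y_true, y_pred, positive):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example scoring the "happy" class.
y_true = ["happy", "sad", "happy", "angry", "happy"]
y_pred = ["happy", "happy", "happy", "angry", "sad"]
p, r, f = precision_recall_f1(y_true, y_pred, "happy")
print(p, r, f)  # 2/3 for each: 2 true positives, 1 false positive, 1 false negative
```

In practice a library routine such as scikit-learn's precision/recall/F1 utilities would be used, but the definitions are exactly these counts.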

Perspectives

This research combines three modified architectures into a novel hybrid architecture, MM-SLR, which recognizes non-manual features based on facial expressions along with manual gestures in the spatio-temporal domain for automatic sign language recognition. It gives prominence to hand movements as well as body movements, which can be an important aspect of sign language understanding.

Sameena Javaid
Bahria University

Read the Original

This page is a summary of: Manual and non-manual sign language recognition framework using hybrid deep learning techniques, Journal of Intelligent & Fuzzy Systems, August 2023, IOS Press,
DOI: 10.3233/jifs-230560.
You can read the full text:


Resources

Contributors

The following have contributed to this page