What is it about?

This study proposes a computer-vision method that helps people with varying degrees of hearing impairment integrate into society and interact more conveniently, both person-to-person and person-to-robot, through sign language. Traditional sign language recognition methods struggle to achieve good results in scenes with backgrounds close to skin color, background clutter, or partial occlusion.


Why is it important?

Sign language recognition can be applied in areas such as assistive devices for people with disabilities, healthcare, and home service robots, allowing people with hearing impairments or expression difficulties to receive information, express themselves actively, and integrate more fully, both physically and psychologically, into their families and society. The research can later be extended to spelling and word-formation functions: by retaining the recognition results of a sequence of letter signs, the system can output whole words, which is especially helpful for people unfamiliar with keyboard layouts and enables faster expression and interaction.
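As a hypothetical illustration of the word-formation idea (not code from the paper), consecutive per-frame letter predictions from a detector can be collapsed into a word by accepting a letter only once it has been stable for several frames. The stream of letters and the stability threshold below are assumptions for the sketch.

```python
def letters_to_word(frame_letters, stable_frames=3):
    """Collapse a stream of per-frame letter predictions into a word.

    A letter is appended once it has been predicted for `stable_frames`
    consecutive frames, which debounces detector jitter between frames.
    `stable_frames` is an assumed tuning parameter, not a value from
    the paper.
    """
    word = []
    run_letter, run_len = None, 0
    for letter in frame_letters:
        if letter == run_letter:
            run_len += 1
        else:
            run_letter, run_len = letter, 1
        # The run length passes the threshold exactly once per run,
        # so each held sign contributes a single letter to the word.
        if run_len == stable_frames:
            word.append(letter)
    return "".join(word)
```

One limitation of this simple debounce is that a doubled letter (e.g. "LL") requires a brief pause between the two signs so that two separate runs are produced.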


To achieve faster real-time display, we compared standard single-target recognition algorithms and chose the best-performing model, YOLOv8; building on it, we propose SLR-YOLO, a lighter and more accurate network that improves on YOLOv8. First, the SPPF module in the backbone network is replaced with an RFB module to enhance the network's feature-extraction capability. Second, in the neck, BiFPN is used to strengthen feature fusion, and the Ghost module is added to make the network lighter. Finally, to introduce partial masking during training and improve generalization, three data-augmentation methods (Mixup, Random Erasing, and Cutout) were compared, and Cutout was selected. The improved SLR-YOLO model achieves 90.6% and 98.5% accuracy on the validation sets of the American Sign Language Letters Dataset and the Bengali Sign Language Alphabet Dataset, respectively. Compared with the original YOLOv8, accuracy improves by 1.3 percentage points on both datasets, while the number of parameters is reduced by 11.31% and FLOPs by 11.58%.
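The Cutout augmentation chosen above can be sketched in a few lines. This is a generic NumPy illustration of the standard technique, not the paper's exact implementation; the patch size, patch count, and fill value are assumed hyperparameters.

```python
import numpy as np

def cutout(image, patch_size=16, n_patches=1, fill=0, rng=None):
    """Mask random square regions of an HxWxC image (Cutout).

    Patches are centered at random pixels and clipped at the image
    border. Masking simulates the partial occlusion that sign
    language scenes often contain, so the detector learns to use
    context rather than any single region of the hand.
    """
    rng = rng if rng is not None else np.random.default_rng()
    out = image.copy()
    h, w = out.shape[:2]
    half = patch_size // 2
    for _ in range(n_patches):
        cy = int(rng.integers(0, h))
        cx = int(rng.integers(0, w))
        y0, y1 = max(0, cy - half), min(h, cy + half)
        x0, x1 = max(0, cx - half), min(w, cx + half)
        out[y0:y1, x0:x1] = fill  # zero out the occluding patch
    return out
```

Applied on the fly during training, each epoch sees differently masked copies of the same image, which is how the partial masking mentioned above improves generalization.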

Wanjun Jia
Xinjiang University


This page is a summary of: SLR-YOLO: An improved YOLOv8 network for real-time sign language recognition, Journal of Intelligent & Fuzzy Systems, January 2024, IOS Press. DOI: 10.3233/jifs-235132.


