What is it about?

Recognizing human activity from a video stream has become one of the most interesting applications in computer vision. In this paper, a novel hybrid technique for human action recognition is proposed, based on fast HOG3D features computed from integral videos and Smith-Waterman partial shape matching of a fused frame. The technique consists of two main stages: the first extracts a set of foreground snippets from the input video and computes histograms of 3D gradient orientations (HOG3D) from the spatio-temporal volumetric data; the second fuses a set of key frames from the current snippet and extracts the contour of the fused frame. A non-linear SVM decision tree classifies the HOG3D features into one of a fixed set of action categories. In parallel, Smith-Waterman partial shape matching compares the contour of the fused frame against the stored template contour of each action. The outputs of the SVM and the Smith-Waterman matching are then combined. The experimental results show that combining the non-linear SVM decision tree on HOG3D features with Smith-Waterman partial shape matching of fused contours improves classification accuracy while keeping training time low.
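
To give a concrete feel for the matching step in the second stage, the sketch below shows a minimal Smith-Waterman local alignment in Python. This is not the authors' implementation: the encoding of contours as 8-direction chain-code sequences, the scoring parameters, and the example contours are illustrative assumptions only.

```python
# Minimal sketch (not the paper's code) of Smith-Waterman local alignment,
# applied here to contours encoded as sequences of quantized direction symbols.

def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Return the best local-alignment score between sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            up = H[i - 1][j] + gap
            left = H[i][j - 1] + gap
            H[i][j] = max(0, diag, up, left)  # 0 restarts the alignment (local, i.e. partial, matching)
            best = max(best, H[i][j])
    return best

# Hypothetical usage: compare a chain-coded contour from the fused frame
# against a stored template contour; a higher score means a better partial match.
fused_contour = [0, 1, 1, 2, 3, 3, 4, 5, 6, 7, 0, 1]   # 8-direction chain code (assumed encoding)
template_contour = [1, 1, 2, 3, 3, 4, 5, 6]

score = smith_waterman(fused_contour, template_contour)
print(score / len(template_contour))  # length-normalized similarity
```

Normalising the alignment score by the template length gives a similarity that could then be combined with the SVM decision-tree output; the exact fusion rule used by the authors is described in the full paper.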

Why is it important?

Human action recognition is one of the most prominent areas of computer vision. It can be used in a variety of applications such as video analysis and indexing, real-time surveillance, frame transitions and manipulation, and gesture recognition.

Perspectives

The work presents a hybrid technique for human action recognition based on fast HOG3D of integral videos and Smith-Waterman partial shape matching of the fused frame.

Kareem Ahmed
Beni Suef University

Read the Original

This page is a summary of: Action Recognition Using Fast HOG3D of Integral Videos and Smith-Waterman Partial Matching, IET Image Processing, January 2018, the Institution of Engineering and Technology (the IET),
DOI: 10.1049/iet-ipr.2016.0627.
