Human activity detection and action recognition in video using convolutional neural networks

Bibliographic Details
Main Authors: Basavaiah, Jagadeesh; Patil, Chandrashekar Mohan
Format: Article
Language: English
Published: Universiti Utara Malaysia, 2020
Online Access:http://repo.uum.edu.my/26967/1/JICT%2019%202%202020%20157-183.pdf
http://repo.uum.edu.my/26967/
http://jict.uum.edu.my/index.php/currentissues#a1
Summary: Human activity recognition from video scenes has become a significant area of research in computer vision. Action recognition is one of the most challenging problems in video analysis, with applications in human-computer interaction, anomalous activity detection, crowd monitoring, and patient monitoring. Several approaches to human activity recognition using machine learning techniques have been presented. The main aim of this work is to detect and track human activity, and to classify actions, in two publicly available video databases. A novel feature extraction approach is used that combines the Scale Invariant Feature Transform (SIFT) with optical flow computation, incorporating shape, gradient, and orientation features for robust feature formulation. Tracking of human activity in the video is implemented using a Gaussian Mixture Model. A Convolutional Neural Network (CNN) based classification approach is used for training and testing on the databases. Activity recognition performance is evaluated on two public datasets, the Weizmann dataset and the Kungliga Tekniska Hogskolan (KTH) dataset, with action recognition accuracies of 98.43% and 94.96%, respectively. Experimental and comparative studies show that the proposed approach outperforms state-of-the-art techniques.
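As a concrete illustration of the feature extraction step described in the summary, the following is a minimal sketch assuming OpenCV as the implementation library. Fusing the mean SIFT descriptor with optical-flow magnitude and orientation statistics is a simplification for illustration, not the authors' exact formulation, and all parameter values are assumed defaults.

```python
import cv2
import numpy as np

def extract_features(prev_gray, gray):
    """Combine SIFT descriptors with optical-flow statistics for one frame pair."""
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)

    # Dense optical flow between consecutive grayscale frames (motion cue).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude, orientation = cv2.cartToPolar(flow[..., 0], flow[..., 1])

    # Fused vector: mean appearance descriptor plus simple motion statistics.
    appearance = descriptors.mean(axis=0) if descriptors is not None else np.zeros(128)
    motion = np.array([magnitude.mean(), magnitude.std(), orientation.mean()])
    return np.concatenate([appearance, motion])
```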
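The Gaussian Mixture Model tracking step can be approximated with OpenCV's MOG2 background subtractor, which models each pixel as a mixture of Gaussians to separate moving people from the background. The history, threshold, and area values below are illustrative assumptions, and `video.avi` is a hypothetical input path.

```python
import cv2

cap = cv2.VideoCapture("video.avi")  # hypothetical input path
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)   # foreground mask of moving regions
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:  # ignore small noise blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()
```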
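The summary does not specify the CNN architecture used for classification, so the following Keras sketch only indicates the general shape of such a classifier. The layer sizes, input resolution, and the 10-class output (matching, for example, the 10 Weizmann action classes) are assumptions, not the paper's configuration.

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),  # one unit per action class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```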