An improved open-view human action recognition with unsupervised domain adaptation

Bibliographic Details
Main Authors: Samsudin, M. S. Rizal, Syed Abu Bakar, Syed Abdul Rahman, Mohd. Mokji, Musa
Format: Article
Language:English
Published: Springer Science and Business Media B.V. 2022
Online Access:http://eprints.utm.my/103343/1/SyedAbdulRahman2022_AnImprovedOpenViewHumanAction.pdf
http://eprints.utm.my/103343/
http://dx.doi.org/10.1007/s11042-022-12822-2
Description
Summary: One of the primary concerns with open-view human action recognition (HAR) is the large difference between the data distributions of the target and source views. Such differences cause the data shift problem, which in turn degrades the performance of the system. This problem arises because real-world scenarios are unconstrained rather than constrained: they involve differences in camera resolution, field of view, and non-uniform illumination that are not found in constrained datasets. The primary goal of this paper is to improve open-view HAR by proposing an unsupervised domain adaptation approach. In particular, we demonstrated that balanced weighted unified discriminant and distribution alignment (BW-UDDA) can handle datasets with significant differences across views, such as the MCAD dataset. Using the MCAD dataset on two types of cross-view evaluations, our proposed technique outperformed other unsupervised domain adaptation methods with average accuracies of 13.38% and 61.45%. Additionally, we applied our method to the constrained multi-view IXMAS dataset and achieved an average accuracy of 90.91%. The results confirm the superiority of the proposed technique.
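
The record does not include the BW-UDDA algorithm itself; as a rough, non-authoritative illustration of the distribution-alignment idea that underlies unsupervised domain adaptation between a source and a target camera view, the sketch below applies a simple CORAL-style second-order alignment to hypothetical per-clip action descriptors. The function names, feature shapes, and the use of mean/covariance matching are assumptions for illustration only and are not the authors' method.

```python
import numpy as np

def sqrtm_psd(mat):
    """Matrix square root of a symmetric positive semi-definite matrix."""
    vals, vecs = np.linalg.eigh(mat)
    return vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

def coral_align(source_feats, target_feats, eps=1e-6):
    """Align source-view descriptors to the target-view distribution.

    source_feats, target_feats: (n_samples, n_dims) arrays of hypothetical
    action descriptors extracted from two different camera views.
    Returns source features whose mean and covariance match the target's.
    """
    d = source_feats.shape[1]
    # Regularised covariances so both matrices are invertible.
    cs = np.cov(source_feats, rowvar=False) + eps * np.eye(d)
    ct = np.cov(target_feats, rowvar=False) + eps * np.eye(d)

    # Whiten the centred source features, then re-colour with target statistics
    # and shift to the target mean (a simple mean + covariance alignment).
    whiten = np.linalg.inv(sqrtm_psd(cs))
    recolor = sqrtm_psd(ct)
    centred = source_feats - source_feats.mean(axis=0)
    return centred @ whiten @ recolor + target_feats.mean(axis=0)

# Example usage with random stand-ins for view-specific descriptors.
rng = np.random.default_rng(0)
src = rng.normal(size=(200, 64))          # source-view clips
tgt = rng.normal(loc=1.0, size=(150, 64)) # target-view clips, shifted distribution
aligned_src = coral_align(src, tgt)
```

After such an alignment, a classifier trained on the aligned source descriptors can be applied to the target view; the paper's BW-UDDA additionally balances discriminant and distribution terms, which this sketch does not attempt to reproduce.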