Frontal view gait recognition with fusion of depth features from a time-of-flight camera

Bibliographic Details
Main Authors: Zulcaffle, Tengku Mohd Afendi; Kurugollu, F.; Crookes, D.; Bouridane, A.; Farid, M.
Format: Article
Language:English
Published: Institute of Electrical and Electronics Engineers Inc. 2018
Subjects:
Online Access:http://ir.unimas.my/id/eprint/29615/1/Frontal.pdf
http://ir.unimas.my/id/eprint/29615/
https://www.scopus.com/record/display.uri?eid=2-s2.0-85053302219&doi=10.1109%2fTIFS.2018.2870594&origin=inward&txGid=88af31ec287a1d43b6c4a10e814d0979
Description
Summary: Frontal view gait recognition for people identification has been carried out using single RGB, stereo RGB, Kinect 1.0, and Doppler radar. However, existing methods based on these camera technologies suffer from several problems. Therefore, we propose a four-part method for frontal view gait recognition based on the fusion of multiple features acquired from a Time-of-Flight (ToF) camera. We have developed a gait data set captured by a ToF camera. The data set includes two sessions recorded seven months apart, with 46 and 33 subjects, respectively, each with six walks covering five covariates. The four-part method comprises: a new human silhouette extraction algorithm that reduces the multiple reflection problem experienced by ToF cameras; a frame selection method based on a new gait cycle detection algorithm; four new gait image representations; and a novel fusion classifier. Rigorous experiments are carried out to compare the proposed method with state-of-the-art methods. The results show distinct improvements in recognition rates for all covariates. The proposed method outperforms all major existing approaches for all covariates, achieving 66.1% Rank 1 and 81.0% Rank 5 recognition rates overall across covariates, compared with 35.7% and 57.7% for the best state-of-the-art method.
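
The abstract mentions that frame selection relies on a new gait cycle detection algorithm, but the record does not describe how that detection works. The sketch below is only an illustration of one common approach (estimating the gait period from the per-frame silhouette width signal via autocorrelation); it is an assumption for clarity, not the authors' algorithm, and the function name detect_gait_cycle is hypothetical.

import numpy as np

def detect_gait_cycle(silhouette_widths, min_period=10, max_period=60):
    """Estimate the gait cycle length (in frames) from a 1-D signal of
    per-frame silhouette widths using autocorrelation.

    Assumes `silhouette_widths` holds one value per frame, e.g. the
    bounding-box width of the extracted silhouette. This is a generic
    periodicity estimate, not the method proposed in the paper.
    """
    x = np.asarray(silhouette_widths, dtype=float)
    x = x - x.mean()                       # remove the DC component
    ac = np.correlate(x, x, mode="full")   # full autocorrelation
    ac = ac[ac.size // 2:]                 # keep non-negative lags only
    # Pick the lag with the strongest correlation inside a plausible range.
    lo, hi = min_period, min(max_period, ac.size - 1)
    return lo + int(np.argmax(ac[lo:hi + 1]))

# Example: a synthetic periodic width signal with a period of ~40 frames.
widths = 50 + 10 * np.sin(2 * np.pi * np.arange(200) / 40)
print(detect_gait_cycle(widths))  # -> 40

Once a cycle length is known, frames spanning one full cycle can be selected and aggregated into gait image representations before classification; the paper's actual frame selection, representations, and fusion classifier are described in the full article linked above.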