Fused features mining for depth-based hand gesture recognition to classify blind human communication
Main Authors:
Format: Article
Published: Springer London, 2017
Subjects:
Online Access:
http://eprints.utm.my/id/eprint/76989/
https://www.scopus.com/inward/record.uri?eid=2-s2.0-84960113763&doi=10.1007%2fs00521-016-2244-5&partnerID=40&md5=6fc82f4f1895e6aa4ef25122271764fe
Summary: Gesture recognition and hand pose tracking are widely used techniques in human–computer interaction. Depth data obtained from depth cameras provide a highly informative description of the body, and of the hand pose in particular, which can be used to build more accurate gesture recognition systems. Hand detection and feature extraction are challenging tasks in RGB images, but they can be solved effectively and simply with depth data. Moreover, depth data can be combined with color information for more reliable recognition. A typical hand gesture recognition system identifies the hand and its position or orientation, extracts useful features, and applies a suitable machine-learning method to recognize the performed gesture. This paper presents a novel fusion of enhanced features for classifying static signs of sign language. It first explains how the hand can be separated from the scene using depth data. Then, a combined feature extraction method is introduced to extract appropriate features from the images. Finally, an artificial neural network classifier is trained on these fused features and used to critically compare the performance of the various descriptors.
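The abstract outlines a three-stage pipeline: depth-based hand segmentation, extraction and fusion of several descriptors, and classification. The paper's actual thresholds, descriptors, and network architecture are not given in this record, so the following is only an illustrative sketch of that kind of pipeline, assuming a simple depth-range threshold for segmentation and a hypothetical fusion by concatenating two toy descriptors (NumPy):

```python
import numpy as np

def segment_hand(depth, near=400, far=600):
    """Keep pixels whose depth (mm) falls in an assumed hand range.

    Real systems typically take the connected region nearest the camera;
    a fixed range is used here purely for illustration."""
    return ((depth >= near) & (depth <= far)).astype(np.uint8)

def area_feature(mask):
    # Fraction of the image occupied by the segmented hand.
    return np.array([mask.sum() / mask.size])

def projection_features(mask):
    # Row and column occupancy profiles: a simple shape descriptor.
    rows = mask.sum(axis=1) / mask.shape[1]
    cols = mask.sum(axis=0) / mask.shape[0]
    return np.concatenate([rows, cols])

def fuse_features(mask):
    # "Fusion" here is plain concatenation of the individual descriptors;
    # the fused vector would then be fed to a neural-network classifier.
    return np.concatenate([area_feature(mask), projection_features(mask)])

# Toy example: a synthetic 8x8 depth map with a "hand" patch at 500 mm
# against a background at 1000 mm.
depth = np.full((8, 8), 1000, dtype=np.int32)
depth[2:6, 3:7] = 500
mask = segment_hand(depth)
features = fuse_features(mask)
```

On an image of size H x W this yields a fused vector of length 1 + H + W, which could then be used to train a classifier such as an MLP; the descriptors in the actual paper differ.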