Emotion Detection through Speech and Facial Expressions

Bibliographic Details
Main Authors: Kudiri, K.M., Said, A.M., Nayan, M.Y.
Format: Conference or Workshop Item
Published: Institute of Electrical and Electronics Engineers Inc. 2015
Online Access:https://www.scopus.com/inward/record.uri?eid=2-s2.0-84962082444&doi=10.1109%2fCASH.2014.22&partnerID=40&md5=cecbc5ea8f6d4b0773059954b1649b01
http://eprints.utp.edu.my/31588/
Description
Summary: Human-machine interaction is one of the most rapidly growing areas of research in information technology. To date, most research in this field has used unimodal or multimodal systems with asynchronous data, where improper synchronization is a common problem: it increases system complexity and degrades system response time. To counter this problem, a novel approach is introduced to predict human emotions from speech and facial expressions. The approach uses two feature vectors, relative bin frequency coefficient (RBFC) for speech and relative sub-image based coefficient (RSB) for visual data. A support vector machine with a radial basis function kernel is used for classification with a feature-level fusion of the two modalities. The proposed approach yields promising results across a wide variety of inputs and can be adapted to asynchronous data. © 2014 IEEE.
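The fusion scheme the abstract describes can be illustrated with a minimal sketch. The feature dimensions, sample counts, and the `gamma` value below are hypothetical (the record does not state them); the sketch only shows the two steps the abstract names: concatenating the speech (RBFC) and visual (RSB) feature vectors per sample (feature-level fusion), then evaluating the radial basis function kernel on the fused vectors, as an RBF-kernel SVM would internally.

```python
import numpy as np

# Hypothetical feature dimensions and sample count; the paper does not state them.
rng = np.random.default_rng(0)
rbfc = rng.normal(size=(4, 12))  # speech features (RBFC), 4 samples x 12 dims
rsb = rng.normal(size=(4, 20))   # visual features (RSB), 4 samples x 20 dims

# Feature-level fusion: concatenate the two modality vectors for each sample.
fused = np.concatenate([rbfc, rsb], axis=1)  # shape (4, 32)

def rbf_kernel(X, Y, gamma=0.1):
    """Radial basis function kernel: K(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

# Gram matrix over the fused vectors, as consumed by a kernel SVM.
K = rbf_kernel(fused, fused)
```

An SVM trained on `K` (e.g. via a kernel-SVM library) would then classify emotions from the fused representation; the key property of feature-level fusion is that both modalities contribute to a single kernel evaluation rather than being classified separately and merged afterwards.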