Deep learning-based approach for continuous affect prediction from facial expression images in valence-arousal space

Facial emotion recognition has attracted extensive attention from the affective computing community, and several approaches have been proposed, mainly providing classification of facial expression images using a set of discrete emotional labels. The quantification of human emotions from faces has also been studied in a continuous 2D emotional space of valence and arousal, which describe the level of pleasantness and the intensity of excitement, respectively. Emotion assessment using valence-arousal computation is a challenging topic with several possible applications in health monitoring, e-learning, mental health diagnosis, and monitoring of customer interest. Supervised learning of emotional valence-arousal for continuous affect prediction requires labeled data; however, annotating facial images with valence and arousal values requires trained human experts. In this paper, we propose a new and robust deep learning-based approach for continuous affect recognition and prediction. The novelty of our approach is that it maps discrete emotion labels and a learned facial expression representation to the continuous valence-arousal dimensional space. Given a discrete emotion class and a facial image, our deep learning-based approach can predict the valence and arousal values accurately. Our proposed approach outperforms existing approaches for arousal and valence prediction on the AffectNet dataset and shows an impressive generalization ability on an unseen dataset for valence prediction.

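The abstract gives no implementation details, but the core idea it describes (fusing a discrete emotion label with a learned facial-expression representation to regress continuous valence and arousal) can be sketched in a few lines. The sketch below is a hypothetical illustration, not the authors' published model: the ResNet-18 backbone, the 32-dimensional label embedding, and the head sizes are all assumptions.

    # Minimal sketch (assumed architecture, not the paper's model): fuse a CNN
    # facial-expression representation with a discrete emotion label and regress
    # valence and arousal, each bounded to [-1, 1] by tanh.
    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_EMOTIONS = 8  # assumption: AffectNet's eight discrete emotion classes

    class ValenceArousalRegressor(nn.Module):
        def __init__(self, num_emotions: int = NUM_EMOTIONS):
            super().__init__()
            # CNN backbone for the learned facial-expression representation
            # (weights=None keeps the sketch offline; pretrained weights would
            # normally be used).
            backbone = models.resnet18(weights=None)
            self.features = nn.Sequential(*list(backbone.children())[:-1])
            feat_dim = backbone.fc.in_features  # 512 for ResNet-18
            # Embed the discrete emotion label so it can be fused with the
            # image features.
            self.label_embed = nn.Embedding(num_emotions, 32)
            # Regression head: fused representation -> (valence, arousal).
            self.head = nn.Sequential(
                nn.Linear(feat_dim + 32, 128),
                nn.ReLU(),
                nn.Linear(128, 2),
                nn.Tanh(),
            )

        def forward(self, image: torch.Tensor, emotion: torch.Tensor) -> torch.Tensor:
            x = self.features(image).flatten(1)         # (B, 512)
            e = self.label_embed(emotion)               # (B, 32)
            return self.head(torch.cat([x, e], dim=1))  # (B, 2): valence, arousal

    # Usage: one 224x224 RGB face crop and its discrete emotion class index.
    model = ValenceArousalRegressor()
    va = model(torch.randn(1, 3, 224, 224), torch.tensor([3]))
    print(va)  # tensor of shape (1, 2) with both values in [-1, 1]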

Bibliographic Details
Main Authors: Hwooi, Stephen Khor Wen; Othmani, Alice; Sabri, Aznul Qalid Md
Format: Article (peer reviewed)
Published: Institute of Electrical and Electronics Engineers, 2022
Published in: IEEE Access, 10, pp. 96053-96065. ISSN 2169-3536
DOI: https://doi.org/10.1109/ACCESS.2022.3205018
Subjects: QA75 Electronic computers. Computer science
Online Access: http://eprints.um.edu.my/41266/
Repository: UM Research Repository, Universiti Malaya (http://eprints.um.edu.my/)