On the use of voice activity detection in speech emotion recognition


Bibliographic Details
Main Authors: Alghifari, Muhammad Fahreza, Gunawan, Teddy Surya, Wan Nordin, Mimi Aminah, Ahmad Qadri, Syed Asif, Kartiwi, Mira, Janin, Zuriati
Format: Article
Language: English
Published: Institute of Advanced Engineering and Science 2019
Online Access:http://irep.iium.edu.my/73890/1/73890_On%20the%20Use%20of%20Voice%20Activity.pdf
http://irep.iium.edu.my/73890/7/73890_On%20the%20use%20of%20voice%20activity%20detection%20in%20speech%20emotion%20recognition_Scopus.pdf
http://irep.iium.edu.my/73890/
http://www.beei.org/index.php/EEI/article/view/1646/1208
Description
Summary: Emotion recognition through speech has many potential applications; the challenge lies in achieving high recognition accuracy under limited resources or interference such as noise. In this paper we explore the possibility of improving speech emotion recognition by applying the voice activity detection (VAD) concept. Emotional voice data from the Berlin Emotion Database (EMO-DB) and a custom-made database, the LQ Audio Dataset, are first preprocessed by VAD before feature extraction. The features are then passed to a deep neural network for classification. In this paper, we chose MFCC as the sole determinant feature. Comparing results obtained with and without VAD, we found that VAD improved the recognition rate of five emotions (happy, angry, sad, fear, and neutral) by 3.7% on clean signals, while using VAD when training a network on both clean and noisy signals improved our previous results by 50%.
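The abstract does not specify which VAD algorithm precedes feature extraction, so as a minimal illustration of the preprocessing step, the following is a sketch of a simple short-time-energy VAD that trims low-energy (non-speech) frames before any MFCC extraction. The frame length, threshold, and energy criterion are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def energy_vad(signal, frame_len=400, threshold_db=-30.0):
    """Keep only frames whose short-time energy is within
    `threshold_db` of the loudest frame (assumed toy VAD)."""
    # Split into non-overlapping frames and compute per-frame energy.
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = np.sum(frames ** 2, axis=1)
    # Energy relative to the loudest frame, in dB (epsilon avoids log(0)).
    energy_db = 10.0 * np.log10(energy / (energy.max() + 1e-12) + 1e-12)
    voiced = energy_db > threshold_db
    # Concatenate the voiced frames back into one signal.
    return frames[voiced].ravel()

# Toy example: 0.5 s of near-silence followed by 0.5 s of a 440 Hz tone at 16 kHz.
sr = 16000
t = np.arange(sr // 2) / sr
silence = 1e-4 * np.random.randn(sr // 2)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
signal = np.concatenate([silence, tone])
trimmed = energy_vad(signal)  # the silent half is dropped
```

In the pipeline described above, `trimmed` rather than `signal` would then be fed to an MFCC extractor, so the features describe only the voiced portion of the utterance.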