Comparative analysis of different parameters used for optimization in the process of speaker and speech recognition using deep neural network


Bibliographic Details
Main Authors: Natarajan, Sureshkumar, Al-Haddad, Syed Abdul Rahman, Ahmad, Faisul Arif, Hassan, Mohd Khair, Raja Kamil, Azrad, Syaril, Yahya, Mohammed Nawfal, Macleans, June Francis, Salvekar, Pratiksha Prashant
Format: Conference or Workshop Item
Published: IEEE 2022
Online Access: http://psasir.upm.edu.my/id/eprint/37829/
https://ieeexplore.ieee.org/document/10040065
Description
Summary: The process of speaker recognition in a noisy, distant-microphone environment is a difficult task, as the speech signal faces numerous distortions before reaching the microphone. When developing a deep neural network for robust operation in such extreme conditions, selecting a compatible optimizer, loss function, and dropout rate is necessary. This paper presents a comparative study of the optimization process in the neural network and of how the loss function interacts with the choice of optimizer. It examines the selection of the number of input nodes and hidden layers, and the time consumed by each combination of choices. The study reports the accuracy obtained for different parameter combinations in order to arrive at a robust deep neural network structure. The work falls under improving accuracy in the speaker and speech recognition process. Experimental results show that the Adam optimizer with 150 epochs achieves around 95% accuracy for speaker classification under noisy conditions at different SNR values.
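The abstract identifies Adam as the best-performing optimizer for the speaker-classification network. As a minimal sketch of what that choice means (not the authors' code; learning rate, step count, and the toy objective below are illustrative assumptions), the Adam update rule can be written in plain NumPy:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam parameter update; returns updated (theta, m, v).

    m, v are running estimates of the gradient's first and second
    moments; t is the 1-based step count used for bias correction.
    """
    m = beta1 * m + (1 - beta1) * grad        # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Illustrative use: minimize f(x) = x^2 (gradient 2x) starting from x = 5.
x = np.array(5.0)
m = np.zeros_like(x)
v = np.zeros_like(x)
for t in range(1, 2001):
    x, m, v = adam_step(x, 2.0 * x, m, v, t, lr=0.05)
```

In a real training loop the gradient would come from backpropagating the loss through the network rather than from a closed-form derivative, but the per-parameter update is exactly this rule.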