Analysis of Auditory Evoked Potential Signals Using Wavelet Transform and Deep Learning Techniques

Bibliographic Details
Main Authors: Islam, Md Nahidul; Sulaiman, Norizam; Rashid, Mamunur; Hasan, Md Jahid; Mustafa, Mahfuzah; Abdul Majeed, Anwar P. P.
Format: Conference or Workshop Item
Language: English
Published: Springer 2020
Subjects:
Online Access:http://umpir.ump.edu.my/id/eprint/33314/1/AnalysisofAuditoryEvokedPotentialSignals.pdf
http://umpir.ump.edu.my/id/eprint/33314/2/AnalysisofAuditoryEvokedPotentialSignals1.pdf
http://umpir.ump.edu.my/id/eprint/33314/
https://doi.org/10.1007/978-981-16-4803-8_39
Description
Summary: Hearing deficiency is the world’s most common sensory impairment and impedes human communication and learning. One of the best ways to address this problem is early and accurate hearing diagnosis using the electroencephalogram (EEG). The Auditory Evoked Potential (AEP) is a form of EEG signal generated by the cortex of the brain in response to an auditory stimulus. This study aims to develop an intelligent auditory-sensation system that analyzes and evaluates the functional reliability of hearing based on the AEP response. We create a deep learning framework that enhances the training process of the deep neural network in order to achieve highly accurate diagnoses of hearing deficits. In this study, a publicly available AEP dataset is used, with responses recorded from five subjects as each subject hears an auditory stimulus in the left or right ear. First, the raw AEP data are transformed into time-frequency images through a wavelet transform. A pre-trained network is then used to extract the lower-level features, and the labeled time-frequency images are used to fine-tune the higher layers of the neural network architecture. On this AEP dataset, we achieve 92.7% accuracy. The proposed deep CNN architecture provides better outcomes with fewer learnable parameters for hearing loss diagnosis.
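
As an illustration of the pipeline summarized above, the following is a minimal Python sketch, assuming PyWavelets for the continuous wavelet transform and a torchvision ResNet-18 as the pre-trained backbone; the choice of wavelet (Morlet), backbone, scales, and all hyperparameters are illustrative assumptions, since the record does not specify them.

    # Sketch only: wavelet, backbone, scales, and hyperparameters are assumed, not taken from the paper.
    import numpy as np
    import pywt
    import torch
    import torch.nn as nn
    from torchvision import models

    def aep_to_scalogram(epoch, scales=np.arange(1, 65), wavelet="morl"):
        """Turn a 1-D AEP epoch into a 3-channel time-frequency image via the CWT."""
        coeffs, _ = pywt.cwt(epoch, scales, wavelet)
        img = np.abs(coeffs)                                         # magnitude scalogram
        img = (img - img.min()) / (img.max() - img.min() + 1e-12)    # normalize to [0, 1]
        return np.repeat(img[np.newaxis, ...], 3, axis=0)            # replicate to 3 channels for the CNN

    # Pre-trained CNN: freeze the lower layers (generic feature extraction) and
    # fine-tune only a new classifier head for the two classes (left / right ear stimulus).
    model = models.resnet18(weights="IMAGENET1K_V1")    # torchvision >= 0.13 API
    for p in model.parameters():
        p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, 2)       # new head is trainable by default

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    # One training step on a synthetic epoch; real AEP epochs would come from the dataset.
    epoch = np.random.randn(256).astype(np.float32)
    x = torch.tensor(aep_to_scalogram(epoch), dtype=torch.float32).unsqueeze(0)  # (1, 3, 64, 256)
    loss = criterion(model(x), torch.tensor([0]))
    loss.backward()
    optimizer.step()

In this sketch, freezing the backbone and training only the final layer mirrors the transfer-learning strategy described in the abstract, where a pre-trained network supplies lower-level features and only the higher levels are fine-tuned on the labeled time-frequency images.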