Performance of machine learning classifiers in distress keywords recognition for audio surveillance applications


Bibliographic Details
Main Authors: Nadhirah Johari, Mazlina Mamat, Ali Chekima
Format: Proceedings
Language: English
Published: Institute of Electrical and Electronics Engineers 2021
Subjects:
Online Access:https://eprints.ums.edu.my/id/eprint/32526/1/Performance%20of%20machine%20learning%20classifiers%20in%20distress%20keywords%20recognition%20for%20audio%20surveillance%20applications.ABSTRACT.pdf
https://eprints.ums.edu.my/id/eprint/32526/2/Performance%20of%20machine%20learning%20classifiers%20in%20distress%20keywords%20recognition%20for%20audio%20surveillance%20applications.pdf
https://eprints.ums.edu.my/id/eprint/32526/
https://ieeexplore.ieee.org/document/9573852
Description
Summary: The ability to recognize distress speech is the essence of an intelligent audio surveillance system. With this ability, the surveillance system can be configured to detect specific distress keywords and launch appropriate actions to prevent unwanted incidents from progressing. This paper aims to find potential distress keywords that the audio surveillance system could recognize. The idea is to use a machine learning classifier as the recognition engine. Five distress keywords, 'Help', 'No', 'Oi', 'Please', and 'Tolong', were selected for analysis. A total of 515 audio signals comprising these five keywords were collected and used to train and test 27 classifier models derived from the Decision Tree, Naïve Bayes, Support Vector Machine, K-Nearest Neighbour, Ensemble, and Artificial Neural Network families. The features extracted from each audio signal are the Mel-frequency Cepstral Coefficients (MFCCs), and Principal Component Analysis (PCA) was applied for feature reduction. The results show that the keyword 'Please' is the most accurately recognized, followed by 'Help', 'Oi', 'No', and 'Tolong'. These results were obtained with the Ensemble Bagged Trees classifier, which recognized 'Please' with 99% accuracy in training and 100% accuracy in testing.
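For readers who want a concrete picture of the pipeline described in the abstract (MFCC features, PCA for feature reduction, and a bagged-tree ensemble), the following is a minimal sketch, assuming librosa for MFCC extraction and scikit-learn for PCA and the classifier. The paper does not name its toolbox or parameter settings; the clips/ folder layout, the 13-coefficient MFCC summary, the 95% variance threshold, and the 30-tree ensemble below are illustrative assumptions, not the authors' configuration.

```python
import glob
import os

import numpy as np
import librosa
from sklearn.decomposition import PCA
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

KEYWORDS = ["Help", "No", "Oi", "Please", "Tolong"]


def mfcc_features(path, n_mfcc=13):
    """Summarise one recording as a fixed-length MFCC vector (mean and std over time)."""
    signal, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)  # shape: (n_mfcc, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])


# Hypothetical corpus layout: one sub-folder per keyword, e.g. clips/Help/*.wav.
corpus = [(path, kw) for kw in KEYWORDS
          for path in glob.glob(os.path.join("clips", kw, "*.wav"))]

X = np.array([mfcc_features(path) for path, _ in corpus])
y = np.array([kw for _, kw in corpus])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# PCA for feature reduction (keep components explaining 95% of the variance),
# then a bagged ensemble of decision trees (scikit-learn's BaggingClassifier
# bags decision trees by default), standing in for the "Ensemble Bagged Trees" model.
pca = PCA(n_components=0.95).fit(X_train)
clf = BaggingClassifier(n_estimators=30, random_state=0).fit(
    pca.transform(X_train), y_train)

print("test accuracy:", accuracy_score(y_test, clf.predict(pca.transform(X_test))))
```

The paper compares 27 classifier models across six families; the sketch covers only the bagged-tree style ensemble, which the abstract reports as the best performer.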