Deep learning model for 5W (What, When, Where, Who, and Why) sign language translation system / Raihah Aminuddin, Ummu Mardhiah Abdul Jalil and Norsyamimi Hasran

Bibliographic Details
Main Authors: Aminuddin, Raihah; Abdul Jalil, Ummu Mardhiah; Hasran, Norsyamimi
Format: Book Section
Language: English
Published: Faculty of Computer and Mathematical Sciences 2023
Subjects:
Online Access:https://ir.uitm.edu.my/id/eprint/93570/1/93570.pdf
https://ir.uitm.edu.my/id/eprint/93570/
https://jamcsiix.uitm.edu.my/
Description
Summary: Sign language is a method of communication that uses hand movements, enabling others to understand the message a hearing-impaired person is trying to convey. This research presents a 5W sign language identification system based on the Convolutional Neural Network technique and the You Only Look Once (YOLO) algorithm. The project follows the waterfall model, which consists of four phases: requirement analysis, design, implementation, and testing. The data was collected from the internet and a custom dataset, with 100 images gathered for each of the 5W (what, when, where, who, and why) categories. The images were labelled and divided into training and testing sets. After the pre-processing phase, the system was trained and tested using the Darknet-53 framework. The average total detection time is 7 seconds, with 98.81% accuracy. Future work aims to extend the system to other signs, including expressions of human emotions such as confusion, happiness, and anger.
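
The dataset preparation described above (100 images per 5W class, labelled and split into training and testing sets) can be sketched as follows. This is a minimal illustration, not the authors' code: the split ratio is not stated in the abstract, so an 80/20 division is assumed, and the file names are placeholders standing in for the collected images.

```python
import random

# Assumptions (not from the source): 80/20 train/test split,
# placeholder file names for the 100 images collected per class.
CLASSES = ["what", "when", "where", "who", "why"]
IMAGES_PER_CLASS = 100
TRAIN_RATIO = 0.8

def split_dataset(seed=42):
    """Shuffle each class's images and divide them into train/test sets."""
    rng = random.Random(seed)
    train, test = {}, {}
    for cls in CLASSES:
        # Placeholder names standing in for the labelled images.
        files = [f"{cls}_{i:03d}.jpg" for i in range(IMAGES_PER_CLASS)]
        rng.shuffle(files)
        cut = int(IMAGES_PER_CLASS * TRAIN_RATIO)
        train[cls] = files[:cut]
        test[cls] = files[cut:]
    return train, test

train_set, test_set = split_dataset()
print(len(train_set["what"]), len(test_set["what"]))  # 80 20
```

In a YOLO/Darknet workflow, the resulting training list would typically be written to a text file referenced by the Darknet data configuration, with one labelled image path per line.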