Lips tracking identification of a correct Quranic letters pronunciation for tajweed teaching and learning

Bibliographic Details
Main Authors: Altalmas, Tareq M., Jamil, Muhammad Ammar, Ahmad, Salmiah, Sediono, Wahju, Salami, Momoh Jimoh Eyiomika, Shahbudin Hassan, Surul, Embong, Abd Halim
Format: Article
Language: English
Published: IIUM Press, International Islamic University Malaysia 2017
Subjects:
Online Access:http://irep.iium.edu.my/57757/1/57757_Lips%20tracking%20identification%20of%20a%20correct%20Quranic%20letters.pdf
http://irep.iium.edu.my/57757/7/57757_Lips%20tracking%20identification%20of%20a%20correct%20Quranic%20letters_SCOPUS.pdf
http://irep.iium.edu.my/57757/
http://journals.iium.edu.my/ejournal/index.php/iiumej/article/view/646
Description
Summary: Mastering the recitation of the holy Quran is an obligation among Muslims. It is an important prerequisite for fulfilling other Ibadat such as prayer, pilgrimage, and zikr. However, the traditional way of teaching Quran recitation is demanding because of the extensive training time and effort required from both teacher and learner. In fact, learning the correct pronunciation of the Quranic letters or alphabets is the first step in mastering Tajweed (rules and guidance) in Quranic recitation. The pronunciation of each Arabic letter is based on its point of articulation and its characteristics. In this paper, we implement a lip-tracking technique that extracts lip-movement data from video signals acquired from experts pronouncing the Quranic letters correctly. The lip-movement data extracted from the experts helps categorize the letters into five groups and determine the final shape of the lips. The technique was then tested on a novice reciter, and the result was compared with that of the professional reciter for similarity verification. The system extracts the lip movement of an arbitrary user, plots the displacement graph, and compares it with the expert's pronunciation. If the user mispronounces a letter, the error is shown and suggestions for improvement are given. More subjects with different backgrounds will be tested in the near future with feedback instructions. Machine learning techniques will be implemented at a later stage for the real-time learning application.
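As an illustration of the comparison step described in the summary, the following Python sketch (not taken from the paper; the landmark convention, the lip-opening measure, and the similarity threshold are all assumptions) resamples a novice's lip-opening displacement curve to the expert's length and scores their similarity with a normalized correlation.

```python
import numpy as np

def lip_opening(landmarks):
    """Vertical lip opening per frame from a (frame, point, xy) landmark array.

    Assumes point 0 is the upper-lip midpoint and point 1 the lower-lip
    midpoint -- an illustrative convention, not the paper's.
    """
    return np.linalg.norm(landmarks[:, 0, :] - landmarks[:, 1, :], axis=1)

def resample(curve, length):
    """Linearly resample a 1-D displacement curve to a fixed length."""
    old = np.linspace(0.0, 1.0, num=len(curve))
    new = np.linspace(0.0, 1.0, num=length)
    return np.interp(new, old, curve)

def similarity(novice_curve, expert_curve):
    """Normalized correlation between two displacement curves (1.0 = identical shape)."""
    a = resample(np.asarray(novice_curve, dtype=float), len(expert_curve))
    b = np.asarray(expert_curve, dtype=float)
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.mean(a * b))

if __name__ == "__main__":
    # Hypothetical usage with synthetic curves and an assumed 0.8 threshold.
    rng = np.random.default_rng(0)
    expert = np.sin(np.linspace(0, np.pi, 60))                        # idealized open-close curve
    novice = np.sin(np.linspace(0, np.pi, 45)) + 0.1 * rng.standard_normal(45)
    score = similarity(novice, expert)
    print(f"similarity = {score:.2f}", "OK" if score > 0.8 else "needs improvement")
```

In practice a timing-tolerant measure such as dynamic time warping may be preferable to plain resampling, since novice and expert recitations rarely align in speed; the sketch above only illustrates the shape comparison.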