Analysis of algorithms variation in multilayer perceptron neural network for agarwood oil qualities classification

Bibliographic Details
Main Authors: Nurul Shakila Ahmad Zubir; Mohamad Aqib Haqmi Abas; N. A. Ismail; Nor Azah Mohd Ali; Mohd Hezri Fazalul Rahiman; K. M. Ng; Saiful Nizam Tajuddin
Format: Conference or Workshop Item
Language: English
Published: IEEE 2017
Subjects:
Online Access:http://umpir.ump.edu.my/id/eprint/28990/1/Analysis%20of%20algorithms%20variation%20in%20multilayer%20perceptron%20neural%20network%20.pdf
http://umpir.ump.edu.my/id/eprint/28990/2/Analysis%20of%20algorithms%20variation%20in%20multilayer%20perceptron%20neural%20network_FULL.pdf
http://umpir.ump.edu.my/id/eprint/28990/
https://doi.org/10.1109/ICSGRC.2017.8070580
Description
Summary: This study investigates the performance of the Multilayer Perceptron (MLP) classifier in discriminating agarwood oil qualities from their significant chemical compounds using three training algorithms, namely Scaled Conjugate Gradient (SCG), Levenberg-Marquardt (LM) and Resilient Backpropagation (RP), implemented in MATLAB R2013a. The dataset used in this study was obtained from the Forest Research Institute Malaysia (FRIM) and Universiti Malaysia Pahang (UMP). The areas (abundances, %) of the chemical compounds were set as inputs and the quality grade (high or low) as the output. The MLP performance was examined with different numbers of hidden neurons, ranging from 1 to 10, and the results were compared to identify the best optimization technique for the model. It was found that LM is effective in reducing the error as the number of hidden neurons increases during network development. The MSE of LM was the smallest among the three algorithms, and LM also achieved the best training, validation and testing accuracy (100%).
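As a rough illustration of the experimental loop described in the summary (not the authors' code), the Python sketch below varies the number of hidden neurons from 1 to 10 and compares training configurations by test MSE and accuracy. The feature matrix X (compound abundances), the labels y, the sample and compound counts, and the scikit-learn solvers are all placeholders: the original study used MATLAB's SCG, LM and RP routines, which have no direct scikit-learn equivalents.

```python
# Minimal sketch of the comparison procedure, assuming placeholder data.
# 'lbfgs' and 'adam' stand in for the MATLAB training algorithms (SCG, LM, RP)
# purely to illustrate the loop over solvers and hidden-neuron counts.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((660, 7))                     # placeholder: abundances (%) of significant compounds
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)    # placeholder: quality label (1 = high, 0 = low)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

results = {}
for solver in ("lbfgs", "adam"):             # stand-ins for the three training algorithms
    for n_hidden in range(1, 11):            # 1 to 10 hidden neurons, as in the study
        clf = MLPClassifier(hidden_layer_sizes=(n_hidden,), solver=solver,
                            max_iter=2000, random_state=0)
        clf.fit(X_train, y_train)
        y_pred = clf.predict(X_test)
        results[(solver, n_hidden)] = (mean_squared_error(y_test, y_pred),
                                       accuracy_score(y_test, y_pred))

best = min(results, key=lambda k: results[k][0])   # configuration with the smallest test MSE
print("best (solver, hidden neurons):", best, "-> (MSE, accuracy):", results[best])
```

In this setup, the configuration with the lowest test MSE is reported as the best, mirroring how the study ranks SCG, LM and RP by MSE and by training, validation and testing accuracy.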