Transfer learning based performance comparison of the pre-trained deep neural networks

Bibliographic Details
Main Authors: Kumar, Jayapalan Senthil; Anuar, Syahid; Hassan, Noor Hafizah
Format: Article
Language: English
Published: Science and Information Organization 2022
Online Access:http://eprints.utm.my/id/eprint/100857/1/JayapalanSenthilKumar2022_TransferLearningbasedPerformanceComparison.pdf
http://eprints.utm.my/id/eprint/100857/
http://dx.doi.org/10.14569/IJACSA.2022.0130193
Description
Summary: Deep learning has grown tremendously in recent years, having a substantial impact on practically every discipline. Transfer learning allows the knowledge of a model that was previously trained for a particular task to be transferred to a new model that is attempting to solve a related but not identical problem. To adapt a pre-trained model to a new task effectively, specific layers must be retrained while the others remain unmodified. Typical issues include deciding which layers to enable for training and which to freeze, and setting hyperparameter values; all of these choices have a substantial effect on training capability as well as classification performance. The principal aim of this study is to compare the network performance of selected pre-trained models under transfer learning, to help in choosing a suitable model for image classification. To accomplish this goal, we examined the performance of five pre-trained networks, namely SqueezeNet, GoogleNet, ShuffleNet, Darknet-53, and Inception-V3, under different epochs, learning rates, and mini-batch sizes, and evaluated each network's performance using a confusion matrix. Based on the experimental findings, Inception-V3 achieved the highest accuracy of 96.98%, along with precision, sensitivity, specificity, and F1-score of 92.63%, 92.46%, 98.12%, and 92.49%, respectively.
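
The layer-freezing and evaluation steps described in the summary can be illustrated with a short sketch. The example below uses PyTorch/torchvision with Inception-V3 purely for illustration; the model choice, class count, and hyperparameter values are assumptions and not the authors' actual implementation, which compares several networks and hyperparameter settings.

    # Transfer-learning sketch (PyTorch/torchvision); illustrative only, not the authors' code.
    import torch
    import torch.nn as nn
    from torchvision import models

    num_classes = 10  # assumed number of target classes

    # Load Inception-V3 with ImageNet weights, then freeze every pre-trained layer
    # so that only the newly added classification head is updated during training.
    model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
    for param in model.parameters():
        param.requires_grad = False

    # Replace the final fully connected layer to match the new task
    # and skip the auxiliary classifier output during training.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    model.aux_logits = False

    # Learning rate and mini-batch size are the kinds of hyperparameters varied
    # in the study; the values used here are arbitrary examples.
    optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    def metrics_from_confusion(tp, fp, fn, tn):
        """Per-class metrics computed from binary confusion-matrix counts."""
        accuracy = (tp + tn) / (tp + fp + fn + tn)
        precision = tp / (tp + fp)
        sensitivity = tp / (tp + fn)      # recall / true-positive rate
        specificity = tn / (tn + fp)
        f1 = 2 * precision * sensitivity / (precision + sensitivity)
        return accuracy, precision, sensitivity, specificity, f1

In the multi-class setting of the paper, the per-class counts would be taken from the full confusion matrix and the resulting metrics averaged across classes.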