Face recognition and machine learning at the edge


Saved in:
Bibliographic Details
Main Authors: Yee, Joanne Ling Sin, Sheikh, Usman Ullah, Mohd. Mokji, Musa, Syed Abd. Rahman, Syed Abd. Rahman
Format: Conference or Workshop Item
Language: English
Published: 2020
Subjects:
Online Access:http://eprints.utm.my/id/eprint/92133/1/UsmanUllahSheikh2020_FaceRecognitionandMachineLearningattheEdge.pdf
http://eprints.utm.my/id/eprint/92133/
http://dx.doi.org/10.1088/1757-899X/884/1/012084
Description
Summary: The number of IoT devices is expected to reach 20 billion by 2020. Data logged by sensors and cameras are all sent to the cloud for further processing, and cloud computing can no longer support such big data analytics because of network bandwidth constraints. Face recognition is chosen as a case study to demonstrate the challenges of shifting an application to the edge. The objectives of this project are, first, to develop a face recognition system suitable for use at the edge using a deep neural network; second, to investigate performance in terms of model size, speed, and inference time after fixed-point quantization of the network weights at different bit-widths; and last, to deploy the model to a Raspberry Pi 3 and test its performance. The chosen dataset is AT&T. MATLAB is used to train the network on a laptop with an i5-7300 CPU, while OpenCV-Python is used to load and test the network on the Raspberry Pi 3 and the laptop. The proposed system applies transfer learning on SqueezeNet to classify faces, and fixed-point quantization is applied to the layer weights to reduce the model size. From the experimental results, 8-bit fixed-point quantization of the weights in all layers is recommended: it compresses the network up to 2.5 times while maintaining the original accuracy of 90%, although the model achieves only a 1.1× speed-up on the Raspberry Pi 3 after weight quantization.
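The weight quantization described in the summary can be illustrated with a minimal sketch. This is not the paper's exact scheme (the abstract does not specify the scale choice or rounding mode); it assumes a per-tensor symmetric fixed-point format where each float weight tensor is mapped to signed integers with a single shared scale, which is the common approach for this kind of compression:

```python
import numpy as np

def quantize_weights(w, bits=8):
    """Quantize a float weight array to signed fixed-point with `bits` bits.

    Assumes a symmetric per-tensor scale: values are divided by the scale,
    rounded to the nearest integer, and clipped to the signed range.
    """
    qmax = 2 ** (bits - 1) - 1  # e.g. 127 for 8-bit
    max_abs = np.max(np.abs(w))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    dtype = np.int8 if bits <= 8 else np.int32
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(dtype)
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate float weights from quantized values."""
    return q.astype(np.float32) * scale

# Example: an 8-bit round trip on a small weight tensor.
w = np.array([-0.50, 0.25, 0.10, 0.49], dtype=np.float32)
q, s = quantize_weights(w, bits=8)
w_hat = dequantize(q, s)
```

Storing `int8` values instead of `float32` is where the roughly 4× raw size reduction of 8-bit quantization comes from (the reported 2.5× overall compression is plausible once unquantized layers and model overhead are included); the rounding error per weight is at most half the scale, which is why accuracy can be preserved.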