Analyzing the instructions vulnerability of dense convolutional network on GPUs

Bibliographic Details
Main Authors: Khalid, Adam, Izzeldin, I. Mohd, Ibrahim, Younis
Format: Article
Language: English
Published: Institute of Advanced Engineering and Science, 2021
Online Access: http://umpir.ump.edu.my/id/eprint/30696/1/Analyzing%20the%20instructions%20vulnerability%20of%20dense%20convolutional%20network%20on%20GPUS.pdf
http://umpir.ump.edu.my/id/eprint/30696/
http://ijece.iaescore.com/index.php/IJECE/article/view/24607/15136
http://doi.org/10.11591/ijece.v11i5.pp4481-4488
Description
Summary: Recently, Deep Neural Networks (DNNs) have been increasingly deployed in various healthcare applications, which are considered safety-critical. The reliability of these DNN models must therefore be remarkably high, because even a small error in a healthcare application can lead to injury or death. Due to their high computational demands, DNN models are often executed on Graphics Processing Units (GPUs). However, GPUs have reportedly been impacted by soft errors, which are an extremely serious issue in healthcare applications. In this paper, we show how fault injection can provide a deeper understanding of the instruction vulnerability of the DenseNet201 model on the GPU. We then analyze the vulnerable instructions of DenseNet201 on the GPU. Our results show that the most vulnerable instructions against soft errors are PR, STORE, FADD, FFMA, SETP and LD, and that their impact can be reduced from 4.42% to 0.14% of injected faults after applying our mitigation strategy.
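
The summary describes instruction-level fault injection against DenseNet201 on the GPU. As a rough, hypothetical illustration of the general idea only (not the authors' instruction-level injector), the sketch below assumes PyTorch/torchvision, which the record does not mention: it flips a single random bit in one intermediate DenseNet201 activation during inference and checks whether the top-1 prediction changes, i.e. whether the simulated soft error causes a silent data corruption.

```python
# Minimal, hypothetical sketch of application-level fault injection for
# DenseNet201. This approximates a single soft error by flipping one bit in
# one intermediate activation; it is NOT the paper's GPU instruction-level
# (SASS) injection methodology.
import random
import struct

import torch
from torchvision import models

def flip_random_bit(value: float) -> float:
    """Flip one random bit of a 32-bit float (simulates a single bit flip)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    bits ^= 1 << random.randrange(32)
    (flipped,) = struct.unpack("<f", struct.pack("<I", bits))
    return flipped

def inject_fault(module, _inputs, output):
    """Forward hook: corrupt one randomly chosen element of the layer output."""
    flat = output.view(-1)
    idx = random.randrange(flat.numel())
    flat[idx] = flip_random_bit(float(flat[idx]))
    return output

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.densenet201(weights=models.DenseNet201_Weights.DEFAULT)
model = model.eval().to(device)
x = torch.randn(1, 3, 224, 224, device=device)   # stand-in input image

with torch.no_grad():
    golden = model(x).argmax(dim=1)               # fault-free ("golden") prediction

# Attach the injector to one intermediate layer (here: the first dense block).
hook = model.features.denseblock1.register_forward_hook(inject_fault)
with torch.no_grad():
    faulty = model(x).argmax(dim=1)               # prediction under the injected fault
hook.remove()

print("Silent data corruption:", bool((golden != faulty).item()))
```

A campaign at the level the summary describes would instead inject faults into the outputs of individual GPU (SASS) instructions, typically with a dedicated GPU fault injector such as SASSIFI or NVBitFI; the record does not state which injector the authors used, so the approach above is purely illustrative.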