Incremental learning of deep neural network for robust vehicle classification
Main Authors: | , |
---|---|
Format: | Article |
Language: | English |
Published: | UKM Press, 2022 |
Subjects: | |
Online Access: | http://irep.iium.edu.my/98750/1/2_Journal_JKeJ_Accepted.pdf http://irep.iium.edu.my/98750/7/Acceptance%20Letter_JKeJ.pdf http://irep.iium.edu.my/98750/ |
Summary: | Existing single-lane free flow (SLFF) tolling systems either rely heavily on contact-based treadle sensors to detect the number of vehicle wheels or on manual operators to classify vehicles. While the former incurs high maintenance costs due to wear and tear, the latter is prone to human error. This paper proposes a vision-based solution to SLFF vehicle classification that adapts a state-of-the-art object detection model as the backbone of the proposed framework and uses an incremental training scheme to train our VehicleDetNet in a continual manner, addressing the challenge of a continuously growing dataset in a real-world environment. The evaluation involved four experimental set-ups, where the first stage used the CUTe dataset. VehicleDetNet serves as the vehicle detection framework; it is an anchorless network, which eliminates the need for candidate anchor bounding boxes. Vehicles are classified by detecting each vehicle's location and inferring its class. We augment the model with a wheel detector and enumerator for added robustness, which improves performance. The proposed method was evaluated on a live dataset collected at the Gombak toll plaza on the Kuala Lumpur-Karak Expressway. The results show that within two months of observation, the mean accuracy increased from 87.3% to 99.07%, demonstrating the efficacy of our proposed method. |
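As an illustration of the incremental (continual) training scheme mentioned in the summary, the sketch below shows how a classifier could be fine-tuned each time a new batch of labelled vehicle images is collected. This is a minimal sketch assuming a PyTorch-style setup; the names `make_model`, `incremental_train`, and `accumulated` are hypothetical and do not come from the paper, and the placeholder network stands in for the paper's anchorless VehicleDetNet detector, which is not reproduced here.

```python
import torch
import torch.nn as nn
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset


def make_model(num_classes: int) -> nn.Module:
    # Placeholder classifier; stands in for the paper's VehicleDetNet backbone.
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(3 * 32 * 32, 128),
        nn.ReLU(),
        nn.Linear(128, num_classes),
    )


def incremental_train(model, accumulated, new_data, epochs=3, lr=1e-4):
    """Fine-tune the existing model on previously seen plus newly collected data."""
    accumulated = new_data if accumulated is None else ConcatDataset([accumulated, new_data])
    loader = DataLoader(accumulated, batch_size=32, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model, accumulated


if __name__ == "__main__":
    model = make_model(num_classes=5)   # assumed number of toll vehicle classes
    accumulated = None
    for round_idx in range(2):          # two rounds of newly collected data
        new_batch = TensorDataset(
            torch.randn(64, 3, 32, 32),     # dummy images
            torch.randint(0, 5, (64,)),     # dummy class labels
        )
        model, accumulated = incremental_train(model, accumulated, new_batch)
        print(f"round {round_idx}: trained on {len(accumulated)} samples")
```

Retraining on the accumulated pool rather than only on the newest batch is one simple way to limit forgetting as the dataset grows; other continual-learning strategies (e.g. replay buffers or regularisation) could be substituted, and the paper's exact scheme may differ.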