Instance Segmentation Evaluation for Traffic Signs
This research paper focuses on developing a traffic sign recognition system based on the You Only Look At Coefficients (YOLACT) model, a one-stage instance segmentation model that offers high accuracy and reliability. However, YOLACT's performance is affected by various...
Saved in:
Main Authors: Siow, Shi Heng; Shamsudin, Abu Ubaidah; Soomro, Zubair Adil; Ahmad, Anita; Abdul Rahim, Ruzairi; Ahmad, Mohd Khairul Ikhwan; Adriansyah, Andi
Format: Article
Language: English
Published: Semarak Ilmu, 2024
Subjects: QA71-90 Instruments and machines
Online Access: http://eprints.uthm.edu.my/11728/1/J17153_357848f5090ea08a5f811eabd05736d0.pdf
http://eprints.uthm.edu.my/11728/
https://doi.org/10.37934/araset.34.2.327341
id
my.uthm.eprints.11728
record_format
eprints
spelling
my.uthm.eprints.11728 2024-11-27T07:33:10Z http://eprints.uthm.edu.my/11728/ Siow, Shi Heng and Shamsudin, Abu Ubaidah and Soomro, Zubair Adil and Ahmad, Anita and Abdul Rahim, Ruzairi and Ahmad, Mohd Khairul Ikhwan and Adriansyah, Andi (2024) Instance Segmentation Evaluation for Traffic Signs. Journal of Advanced Research in Applied Sciences and Engineering Technology, 34 (2). pp. 327-341. ISSN 2462-1943. Semarak Ilmu. Peer-reviewed article, English. http://eprints.uthm.edu.my/11728/1/J17153_357848f5090ea08a5f811eabd05736d0.pdf https://doi.org/10.37934/araset.34.2.327341
institution
Universiti Tun Hussein Onn Malaysia
building
UTHM Library
collection
Institutional Repository
continent
Asia
country
Malaysia
content_provider
Universiti Tun Hussein Onn Malaysia
content_source
UTHM Institutional Repository
url_provider
http://eprints.uthm.edu.my/
language
English
topic
QA71-90 Instruments and machines
description
This paper focuses on developing a traffic sign recognition system based on the You Only Look At Coefficients (YOLACT) model, a one-stage instance segmentation model that offers high accuracy and reliability. However, YOLACT's performance is affected by conditions such as day/night illumination and the viewing angle of objects. This study therefore evaluates the impact of different angles and environments on system performance. The paper describes the framework, backbone structure, prototype generation branch, mask coefficient branch, and mask assembly used in the system. ResNet-101 and ResNet-50 were used as backbones to extract feature maps from the input image. The prototype generation branch produces prototype masks with a fully convolutional network (FCN), and the mask coefficient branch predicts per-instance coefficients that are linearly combined with the prototypes and passed through a sigmoid nonlinearity during mask assembly. Two models, YOLACT and Mask R-CNN, were evaluated by mean average precision (mAP) and frames per second (FPS) on the front-view dataset. The results show that YOLACT outperforms Mask R-CNN in both accuracy and speed. At an image resolution of 550×550, YOLACT with ResNet-101 is considered the best model in this article, achieving over 80% precision, recall, specificity, and accuracy under various conditions, including day, night, and left, right, and forward-looking viewing angles.
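For readers unfamiliar with the mask-assembly step mentioned in the abstract, the following is a minimal sketch (not the authors' code) of how YOLACT combines FCN-generated prototype masks with per-instance mask coefficients: a linear combination followed by a sigmoid. The prototype count k = 32 at 138×138 resolution follows the original YOLACT configuration for 550×550 inputs; the `assemble_masks` helper and the random tensors standing in for network outputs are purely illustrative assumptions.

```python
# Illustrative sketch of YOLACT-style mask assembly (not the authors' code).
# Assumptions: k prototype masks P of shape (H, W, k) from the FCN prototype
# branch, and per-instance mask coefficients C of shape (n, k) from the
# prediction head. Each instance mask is a linear combination of the
# prototypes, passed through a sigmoid nonlinearity, then thresholded.
import numpy as np

def assemble_masks(prototypes: np.ndarray, coefficients: np.ndarray,
                   threshold: float = 0.5) -> np.ndarray:
    """Combine prototypes (H, W, k) with coefficients (n, k) into n binary masks."""
    h, w, k = prototypes.shape
    # Linear combination: (H*W, k) @ (k, n) -> (H*W, n)
    logits = prototypes.reshape(-1, k) @ coefficients.T
    masks = 1.0 / (1.0 + np.exp(-logits))          # sigmoid nonlinearity
    return masks.reshape(h, w, -1) > threshold     # boolean masks, one per instance

# Toy usage with random tensors in place of real network outputs.
rng = np.random.default_rng(0)
P = rng.standard_normal((138, 138, 32))    # 32 prototypes at 138x138, per YOLACT's 550x550 setup
C = rng.uniform(-1.0, 1.0, size=(5, 32))   # tanh-bounded coefficients for 5 detections
print(assemble_masks(P, C).shape)          # -> (138, 138, 5)
```

In the full model the assembled masks are additionally cropped with the predicted bounding boxes before thresholding; that step is omitted here for brevity.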
format
Article
author
Siow, Shi Heng; Shamsudin, Abu Ubaidah; Soomro, Zubair Adil; Ahmad, Anita; Abdul Rahim, Ruzairi; Ahmad, Mohd Khairul Ikhwan; Adriansyah, Andi
author_sort
Siow, Shi Heng
title
Instance Segmentation Evaluation for Traffic Signs
publisher
Semarak Ilmu
publishDate
2024
url
http://eprints.uthm.edu.my/11728/1/J17153_357848f5090ea08a5f811eabd05736d0.pdf
http://eprints.uthm.edu.my/11728/
https://doi.org/10.37934/araset.34.2.327341