Autonomous person-following telepresence robot using monocular camera and deep learning YOLO
Telepresence robots (TRs) are increasingly important for remote communication and collaboration, particularly in situations where physical presence is not possible. One key feature of TRs is person-following, which relies on the detection and distance estimation of individuals. This study proposes a...
| Main Authors: | Mat Lazim, Izzuddin; Sakri, Ahmad Amin Firdaus; Mauzi, Suffian At-Tsauri; Sahrim, Musab; Ramli, Liyana; Noordin, Aminurrashid |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | ARQII Publication, 2024 |
| Online Access: | http://eprints.utem.edu.my/id/eprint/27514/2/01084260420249539774.PDF http://eprints.utem.edu.my/id/eprint/27514/ http://arqiipubl.com/ojs/index.php/AMS_Journal/article/view/574 |
| id | my.utem.eprints.27514 |
|---|---|
| record_format | eprints |
| spelling | my.utem.eprints.27514 2024-07-25T11:33:48Z http://eprints.utem.edu.my/id/eprint/27514/ Autonomous person-following telepresence robot using monocular camera and deep learning YOLO. Mat Lazim, Izzuddin; Sakri, Ahmad Amin Firdaus; Mauzi, Suffian At-Tsauri; Sahrim, Musab; Ramli, Liyana; Noordin, Aminurrashid. Telepresence robots (TRs) are increasingly important for remote communication and collaboration, particularly in situations where physical presence is not possible. One key feature of TRs is person-following, which relies on the detection and distance estimation of individuals. This study proposes an autonomous person-following TR using a monocular camera and deep-learning YOLO for person detection and distance estimation. To compensate for the monocular camera's inability to provide depth information, a novel distance estimation algorithm based on focal length and person width is introduced. The estimated width information of the detected person is extracted from the bounding box generated by YOLO. A pre-trained model using the MS COCO dataset is employed with YOLO for the person detection task. For robot movement control, a region-based controller is proposed to enable the robot to move based on the detected person's location in the image captured by the camera. Finally, integration and deployment of the proposed method in the TR is carried out using the Robot Operating System (ROS). Experimental results demonstrate that the TR can successfully follow a person using the proposed algorithm, thus highlighting its effectiveness for person-following tasks. ARQII Publication, 2024-04. Article, PeerReviewed, text, en. http://eprints.utem.edu.my/id/eprint/27514/2/01084260420249539774.PDF Mat Lazim, Izzuddin and Sakri, Ahmad Amin Firdaus and Mauzi, Suffian At-Tsauri and Sahrim, Musab and Ramli, Liyana and Noordin, Aminurrashid (2024) Autonomous person-following telepresence robot using monocular camera and deep learning YOLO. Applications of Modelling and Simulation, 8. pp. 101-109. ISSN 2600-8084 http://arqiipubl.com/ojs/index.php/AMS_Journal/article/view/574 |
| institution | Universiti Teknikal Malaysia Melaka |
| building | UTEM Library |
| collection | Institutional Repository |
| continent | Asia |
| country | Malaysia |
| content_provider | Universiti Teknikal Malaysia Melaka |
| content_source | UTEM Institutional Repository |
| url_provider | http://eprints.utem.edu.my/ |
| language | English |
| description | Telepresence robots (TRs) are increasingly important for remote communication and collaboration, particularly in situations where physical presence is not possible. One key feature of TRs is person-following, which relies on the detection and distance estimation of individuals. This study proposes an autonomous person-following TR using a monocular camera and deep-learning YOLO for person detection and distance estimation. To compensate for the monocular camera's inability to provide depth information, a novel distance estimation algorithm based on focal length and person width is introduced. The estimated width information of the detected person is extracted from the bounding box generated by YOLO. A pre-trained model using the MS COCO dataset is employed with YOLO for the person detection task. For robot movement control, a region-based controller is proposed to enable the robot to move based on the detected person's location in the image captured by the camera. Finally, integration and deployment of the proposed method in the TR is carried out using the Robot Operating System (ROS). Experimental results demonstrate that the TR can successfully follow a person using the proposed algorithm, thus highlighting its effectiveness for person-following tasks. (Illustrative code sketches of these steps are given after the record fields below.) |
| format | Article |
| author | Mat Lazim, Izzuddin; Sakri, Ahmad Amin Firdaus; Mauzi, Suffian At-Tsauri; Sahrim, Musab; Ramli, Liyana; Noordin, Aminurrashid |
| spellingShingle | Mat Lazim, Izzuddin; Sakri, Ahmad Amin Firdaus; Mauzi, Suffian At-Tsauri; Sahrim, Musab; Ramli, Liyana; Noordin, Aminurrashid; Autonomous person-following telepresence robot using monocular camera and deep learning YOLO |
| author_facet | Mat Lazim, Izzuddin; Sakri, Ahmad Amin Firdaus; Mauzi, Suffian At-Tsauri; Sahrim, Musab; Ramli, Liyana; Noordin, Aminurrashid |
| author_sort | Mat Lazim, Izzuddin |
| title | Autonomous person-following telepresence robot using monocular camera and deep learning YOLO |
| title_short | Autonomous person-following telepresence robot using monocular camera and deep learning YOLO |
| title_full | Autonomous person-following telepresence robot using monocular camera and deep learning YOLO |
| title_fullStr | Autonomous person-following telepresence robot using monocular camera and deep learning YOLO |
| title_full_unstemmed | Autonomous person-following telepresence robot using monocular camera and deep learning YOLO |
| title_sort | autonomous person-following telepresence robot using monocular camera and deep learning yolo |
| publisher | ARQII Publication |
| publishDate | 2024 |
| url | http://eprints.utem.edu.my/id/eprint/27514/2/01084260420249539774.PDF http://eprints.utem.edu.my/id/eprint/27514/ http://arqiipubl.com/ojs/index.php/AMS_Journal/article/view/574 |
| _version_ | 1806455616471826432 |
| score | 13.214267 |
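The description above states that a YOLO model pre-trained on MS COCO detects the person and that the person's width is read from the resulting bounding box. The paper does not name the YOLO version or framework, so the following is only a minimal sketch assuming the Ultralytics Python package and an OpenCV camera capture; the `yolov8n.pt` weights file and the webcam index are illustrative choices, not the authors' setup.

```python
# Minimal sketch: COCO-pretrained YOLO person detection and extraction of the
# bounding-box centre and width in pixels. The Ultralytics package, the
# yolov8n.pt weights and the OpenCV webcam capture are assumptions for
# illustration; the paper does not specify its YOLO variant or camera driver.
import cv2
from ultralytics import YOLO

PERSON_CLASS_ID = 0  # "person" is class index 0 in the MS COCO label set

model = YOLO("yolov8n.pt")

def detect_person(frame):
    """Return (u_center_px, box_width_px) of the most confident person, or None."""
    result = model(frame, verbose=False)[0]
    best = None
    for box, cls, conf in zip(result.boxes.xyxy, result.boxes.cls, result.boxes.conf):
        if int(cls) != PERSON_CLASS_ID:
            continue
        x1, _, x2, _ = (float(v) for v in box)
        score = float(conf)
        if best is None or score > best[0]:
            best = (score, (x1 + x2) / 2.0, x2 - x1)
    return None if best is None else (best[1], best[2])

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)  # the monocular camera
    ok, frame = cap.read()
    cap.release()
    if ok:
        det = detect_person(frame)
        print("no person detected" if det is None else
              f"person centre: {det[0]:.1f} px, box width: {det[1]:.1f} px")
```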
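The distance estimator combines the camera focal length with the width of the detected person. The exact formulation is in the paper; an estimator of this kind is normally the pinhole-camera similar-triangles relation D = W·f / w, with real person width W in metres, focal length f in pixels and bounding-box width w in pixels. The sketch below uses that relation with placeholder calibration values, which are assumptions rather than the paper's figures.

```python
# Sketch of focal-length / known-width distance estimation from a monocular camera.
# The relation D = (W_real * f_px) / w_px follows from the pinhole camera model;
# FOCAL_LENGTH_PX and PERSON_WIDTH_M are placeholder calibration values, not
# figures taken from the paper.
FOCAL_LENGTH_PX = 600.0   # focal length in pixels (from camera calibration)
PERSON_WIDTH_M = 0.45     # assumed shoulder width of an average person, metres

def estimate_distance(box_width_px, focal_px=FOCAL_LENGTH_PX, real_width_m=PERSON_WIDTH_M):
    """Estimate camera-to-person distance in metres from the bounding-box width."""
    if box_width_px <= 0:
        raise ValueError("bounding-box width must be positive")
    return real_width_m * focal_px / box_width_px

# Example: a 150-pixel-wide person box at f = 600 px gives 0.45 * 600 / 150 = 1.8 m.
print(estimate_distance(150.0))  # -> 1.8
```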
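The region-based controller commands the robot from the detected person's location in the image. The paper's region boundaries, target distance and velocity gains are not reproduced here; the sketch below simply splits the frame into left, centre and right bands and gates forward motion on the estimated distance, with all numeric values chosen for illustration.

```python
# Sketch of a region-based follower: the image is split into three horizontal
# regions; the person's horizontal position selects a turn command and the
# estimated distance gates the forward motion. Region widths, target distance
# and velocities are illustrative placeholders, not the paper's values.
IMAGE_WIDTH = 640          # pixels
LEFT_EDGE = IMAGE_WIDTH / 3
RIGHT_EDGE = 2 * IMAGE_WIDTH / 3
TARGET_DISTANCE_M = 1.5    # desired following distance
DISTANCE_DEADBAND_M = 0.2

def region_controller(u_center, distance_m):
    """Return (linear_velocity, angular_velocity) from the person's image position."""
    # Angular command: turn toward the region containing the person.
    if u_center < LEFT_EDGE:
        angular = 0.4        # person in left region -> rotate left
    elif u_center > RIGHT_EDGE:
        angular = -0.4       # person in right region -> rotate right
    else:
        angular = 0.0        # person centred -> no rotation

    # Linear command: approach if too far, back off if too close.
    if distance_m > TARGET_DISTANCE_M + DISTANCE_DEADBAND_M:
        linear = 0.3
    elif distance_m < TARGET_DISTANCE_M - DISTANCE_DEADBAND_M:
        linear = -0.2
    else:
        linear = 0.0
    return linear, angular

# Example: person at u = 500 px (right region), 2.4 m away -> move forward, turn right.
print(region_controller(500, 2.4))   # -> (0.3, -0.4)
```

A simple banded rule like this is the most basic form of region-based control; the band edges, deadband and speeds would be tuned to the actual robot and camera field of view.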
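Integration and deployment are done with ROS, but the description does not say which ROS distribution or topics are used. The sketch below assumes ROS 1 with rospy and the conventional geometry_msgs/Twist command on /cmd_vel; the node name, topic and loop rate are illustrative.

```python
#!/usr/bin/env python
# Sketch of the ROS side: publish the controller's output as velocity commands.
# Assumes ROS 1 (rospy) and the conventional /cmd_vel Twist interface; the paper
# only states that ROS is used, so topic names and rates are illustrative.
import rospy
from geometry_msgs.msg import Twist

def publish_command(pub, linear, angular):
    """Wrap (linear, angular) velocities in a Twist message and publish it."""
    cmd = Twist()
    cmd.linear.x = linear
    cmd.angular.z = angular
    pub.publish(cmd)

def main():
    rospy.init_node("person_follower")                      # hypothetical node name
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)
    rate = rospy.Rate(10)                                    # 10 Hz control loop
    while not rospy.is_shutdown():
        # In the full system these values would come from the detector and the
        # region-based controller sketched above; constants are used here so
        # the node runs standalone.
        publish_command(pub, 0.0, 0.0)
        rate.sleep()

if __name__ == "__main__":
    main()
```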