An automatic text recognition tool in signage for the visually impaired
Main Authors:
Format: Proceeding Paper
Language: English
Published: IEEE, 2024
Subjects:
Online Access: http://irep.iium.edu.my/115587/1/115587_An%20automatic%20text%20recognition%20tool.pdf
http://irep.iium.edu.my/115587/2/115587_An%20automatic%20text%20recognition%20tool_SCOPUS.pdf
http://irep.iium.edu.my/115587/
https://ieeexplore.ieee.org/abstract/document/10652391
Summary: Text comprehension poses a significant challenge for visually impaired individuals, as they lack visual capabilities. Moreover, visually impaired individuals often encounter crucial text signage that requires immediate attention, such as warnings for hazardous areas, open holes, wet floors, or restricted access zones, thereby jeopardizing their safety. While existing text recognition tools aid in perceiving text, they frequently rely on physical actions like button presses or camera shaking, lacking automatic functionality and thereby limiting their usefulness. This proof-of-concept paper presents an automatic text recognition tool designed to enhance accessibility to crucial signage information for visually impaired individuals. The tool integrates real-time object recognition, text recognition, and text-to-speech conversion. It consists of a shoulder-mounted web camera, earphones for audio output, and a portable processing unit. The camera captures a continuous video feed, which is processed to detect and extract text from signage. Preliminary tests under various lighting conditions yielded accuracy rates ranging from 68.25% to 94.11%, with the highest accuracy under indirect lighting. Future work will address factors such as walking speed, user movement patterns, and environmental conditions.
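The summary describes a capture-recognize-speak pipeline: a camera feeds frames into text recognition, and recognized text is read aloud. As a rough illustration only, here is a minimal sketch of such a loop; the library choices (OpenCV for capture, pytesseract for recognition, pyttsx3 for speech) are assumptions for illustration, since this record does not name the paper's actual components.

```python
# Minimal sketch of a camera -> OCR -> text-to-speech loop.
# OpenCV, pytesseract, and pyttsx3 are illustrative stand-ins; the
# paper's actual implementation is not detailed in this record.
import cv2
import pytesseract
import pyttsx3

def run_signage_reader(camera_index: int = 0) -> None:
    """Continuously capture frames, extract text, and speak new text aloud."""
    cap = cv2.VideoCapture(camera_index)
    tts = pyttsx3.init()
    last_spoken = ""
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # Grayscale + Otsu thresholding makes sign text easier to OCR.
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            _, binarized = cv2.threshold(
                gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU
            )
            text = pytesseract.image_to_string(binarized).strip()
            # Speak only when the detected text changes, to avoid repetition.
            if text and text != last_spoken:
                tts.say(text)
                tts.runAndWait()
                last_spoken = text
    finally:
        cap.release()

if __name__ == "__main__":
    run_signage_reader()
```

A real system along the lines the summary sketches would also need the object-recognition stage (to locate signage before OCR) and handling for motion blur and lighting variation, which the summary names as factors affecting accuracy.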