An end-to-end model for multi-view scene text recognition
Main Authors: , , , ,
Format: Article
Published: Elsevier, 2024
Subjects:
Online Access: http://eprints.um.edu.my/45920/ ; https://doi.org/10.1016/j.patcog.2023.110206
Summary: With the growing use of surveillance and monitoring applications such as person re-identification, vehicle re-identification, and sports event tracking, the need for text detection and end-to-end recognition is also increasing. Although past deep learning-based models have addressed several challenges, such as arbitrary-shaped text, multiple scripts, and variations in the geometric structure of characters, their scope is limited to a single view. This paper presents an end-to-end model for text recognition that refines multiple views of the same scene, called E2EMVSTR (End-to-End Model for Multi-View Scene Text Recognition). Exploiting the characteristics shared across multi-view texts, we propose a cycle-consistency, pairwise-similarity-based deep learning model to detect texts more efficiently in three input views. The extracted texts are then supplied to a combined Siamese network and semi-supervised attention-embedding network to obtain recognition results. The proposed model combines natural language processing and genetic algorithm models to restore missing character information and correct erroneous recognition results. In experiments on our multi-view dataset and several benchmark datasets, the proposed method proves effective compared with state-of-the-art methods. The dataset and code will be made publicly available upon acceptance.