Multi-task learning for scene text image super-resolution with multiple transformers
Scene text image super-resolution aims to improve readability by recovering text shapes from low-resolution degraded text images. Although recent developments in deep learning have greatly improved super-resolution (SR) techniques, recovering text images with irregular shapes, heavy noise, and blurriness is still challenging.
Main Authors: Honda, Kosuke; Kurematsu, Masaki; Fujita, Hamido; Selamat, Ali
Format: Article
Language: English
Published: MDPI, 2022
Subjects: T Technology (General)
Online Access: http://eprints.utm.my/103563/1/AliSelamat2022_MultiTaskLearningforSceneTextImage.pdf ; http://eprints.utm.my/103563/ ; http://dx.doi.org/10.3390/electronics11223813
id: my.utm.103563
record_format: eprints
last_updated: 2023-11-19T07:53:59Z
citation: Honda, Kosuke; Kurematsu, Masaki; Fujita, Hamido; Selamat, Ali (2022). Multi-task learning for scene text image super-resolution with multiple transformers. Electronics (Switzerland), 11 (22), pp. 1-18. ISSN 2079-9292. DOI: 10.3390/electronics11223813
institution: Universiti Teknologi Malaysia
building: UTM Library
collection: Institutional Repository
continent: Asia
country: Malaysia
content_provider: Universiti Teknologi Malaysia
content_source: UTM Institutional Repository
url_provider: http://eprints.utm.my/
language: English
topic: T Technology (General)
description: Scene text image super-resolution aims to improve readability by recovering text shapes from low-resolution degraded text images. Although recent developments in deep learning have greatly improved super-resolution (SR) techniques, recovering text images with irregular shapes, heavy noise, and blurriness is still challenging. This is because networks with Convolutional Neural Network (CNN)-based backbones cannot sufficiently capture the global long-range correlations of text images or detailed sequential information about the text structure. To address this issue, this paper proposes a Multi-task learning-based Text Super-resolution (MTSR) Network. MTSR is a multi-task architecture for image reconstruction and SR. It uses transformer-based modules to transfer complementary features of the reconstruction model, such as noise removal capability and text structure information, to the SR model. In addition, another transformer-based module using 2D positional encoding is used to handle irregular deformations of the text. The feature maps generated from these two transformer-based modules are fused to improve the visual quality of images with heavy noise, blurriness, and irregular deformations. Experimental results on the TextZoom dataset and several scene text recognition benchmarks show that MTSR significantly improves the accuracy of existing text recognizers.
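The "2D positional encoding" mentioned in the abstract is not specified further in this record. A common realization in image transformers concatenates independent sinusoidal encodings of the row and column indices, so that half the channels encode vertical position and half encode horizontal position. The following NumPy sketch illustrates that generic scheme; the function names and the channel split are assumptions for illustration, not the paper's exact formulation:

```python
import math
import numpy as np

def positional_encoding_1d(length: int, dim: int) -> np.ndarray:
    """Standard sinusoidal positional encoding for one axis: shape (length, dim)."""
    pe = np.zeros((length, dim))
    position = np.arange(length)[:, None]                            # (length, 1)
    div = np.exp(np.arange(0, dim, 2) * (-math.log(10000.0) / dim))  # (dim/2,)
    pe[:, 0::2] = np.sin(position * div)  # even channels: sine
    pe[:, 1::2] = np.cos(position * div)  # odd channels: cosine
    return pe

def positional_encoding_2d(h: int, w: int, dim: int) -> np.ndarray:
    """2D encoding of an h x w feature map: first dim/2 channels encode the
    row index, the remaining dim/2 encode the column index."""
    assert dim % 4 == 0, "dim must be divisible by 4"
    pe = np.zeros((h, w, dim))
    pe[:, :, : dim // 2] = positional_encoding_1d(h, dim // 2)[:, None, :]
    pe[:, :, dim // 2 :] = positional_encoding_1d(w, dim // 2)[None, :, :]
    return pe
```

Unlike the 1D encoding used for token sequences, this variant preserves both axes of the image grid, which is why it can help a transformer attend to irregularly deformed text whose characters do not lie on a single row.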
format: Article
author: Honda, Kosuke; Kurematsu, Masaki; Fujita, Hamido; Selamat, Ali
author_sort: Honda, Kosuke
title: Multi-task learning for scene text image super-resolution with multiple transformers
publisher: MDPI
publishDate: 2022
url: http://eprints.utm.my/103563/1/AliSelamat2022_MultiTaskLearningforSceneTextImage.pdf ; http://eprints.utm.my/103563/ ; http://dx.doi.org/10.3390/electronics11223813