Automated image annotation with novel features based on deep ResNet50-SLT.

Bibliographic Details
Main Authors: Adnan, Myasar Mundher; Mohd. Rahim, Mohd. Shafry; Khan, Amjad Rehman; Alkhayyat, Ahmed; Alamri, Faten S.; Saba, Tanzila; Bahaj, Saeed Ali
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers Inc. 2023
Online Access:http://eprints.utm.my/104871/1/MyasarMundherAdnanMohdShafryMohdRahimAmjadRehmanKhan2023_AutomatedImageAnnotationWithNovelFeatures.pdf
http://eprints.utm.my/104871/
http://dx.doi.org/10.1109/ACCESS.2023.3266296
Description
Summary: The growing number of digital images in personal archives and on websites has made these collections difficult to manage, and accurately retrieving images from such large databases is a challenge. Although these collections are popular for their convenience, they often lack proper indexing information, so users struggle to find what they need. Image annotation, the task of labeling images with descriptive keywords, is one of the most significant challenges in computer vision and multimedia. Computers cannot grasp the essence of an image the way humans do; they can only identify images by their visual attributes, not their deeper semantic meaning. Image annotation therefore relies on keywords to communicate the contents of an image to a computer system. Raw pixels, however, do not carry enough information to generate semantic concepts, which makes annotation a complex task. Unlike text annotation, where the dictionary linking words to semantics is well established, image annotation has no clear definition of the "words" or "sentences" that can be associated with the meaning of an image; this mismatch is known as the semantic gap.

To address this challenge, this study aimed to characterize image content meaningfully so that information retrieval becomes easier. An improved automatic image annotation (AIA) system was proposed to bridge the semantic gap between low-level computer features and the human interpretation of images by assigning one or more labels to each image. The proposed AIA system converts raw image pixels into semantic-level concepts, providing a clearer representation of the image content. The study combined ResNet50 and the slantlet transform (SLT) with word2vec, principal component analysis (PCA), and t-distributed stochastic neighbor embedding (t-SNE) to balance precision and recall and to determine the optimal model for the proposed ResNet50-SLT AIA framework. The word2vec model, applied with PCA and t-SNE to the ResNet50-SLT features, improved AIA prediction accuracy; this distributed representation approach encoded and stored information about the image features. The system then used a seq2seq model to generate sentences from the feature vectors.

The system was implemented on three widely used datasets (Flickr8k, Corel-5k, ESP-Game). The results showed that the newly developed AIA scheme overcame the training-phase computational complexity that burdens most existing image annotation models on large datasets. The performance evaluation demonstrated flexible annotation, improved accuracy, and reduced computational cost, outperforming existing state-of-the-art methods. In conclusion, this AIA framework can provide immense benefits in accurately selecting and extracting image features and in easily retrieving images from large databases. The extracted features can effectively represent the image, accelerating the annotation process and minimizing the computational complexity.
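To make the first stage of the described pipeline concrete, the sketch below illustrates the general idea of fusing deep ResNet50 features with transform-domain coefficients. It is not the authors' implementation: it uses torchvision's pretrained ResNet50 with the classification head removed, and, because the slantlet transform is not available in common Python libraries, it substitutes a one-level 2-D Haar wavelet decomposition from PyWavelets as a labeled stand-in for the SLT coefficients.

    import numpy as np
    import pywt
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    # Pretrained ResNet50 with the final classification layer removed,
    # leaving the 2048-d globally pooled feature vector.
    resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    backbone = torch.nn.Sequential(*list(resnet.children())[:-1]).eval()

    preprocess = T.Compose([
        T.Resize(256), T.CenterCrop(224), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def extract_features(path: str) -> np.ndarray:
        img = Image.open(path).convert("RGB")
        with torch.no_grad():
            deep = backbone(preprocess(img).unsqueeze(0)).flatten().numpy()
        # Stand-in for the slantlet transform: a one-level 2-D Haar DWT on the
        # grayscale image. The true SLT is an orthogonal, wavelet-like transform
        # with better time localization; it is assumed here that its
        # coefficients would be vectorized in the same way.
        gray = np.asarray(img.convert("L").resize((128, 128)), dtype=np.float32)
        cA, (cH, cV, cD) = pywt.dwt2(gray, "haar")
        wavelet = np.concatenate([c.ravel() for c in (cA, cH, cV, cD)])
        # Fuse deep and transform-domain features into one descriptor.
        return np.concatenate([deep, wavelet])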
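On the label side, the abstract pairs word2vec embeddings with PCA and t-SNE. The following minimal sketch shows that PCA-then-t-SNE pipeline on a toy caption corpus; the corpus, vector size, and perplexity are illustrative assumptions, not values taken from the paper.

    import numpy as np
    from gensim.models import Word2Vec
    from sklearn.decomposition import PCA
    from sklearn.manifold import TSNE

    # Toy caption corpus; in practice this would be the tokenized
    # annotation vocabulary of Flickr8k / Corel-5k / ESP-Game.
    sentences = [
        ["dog", "running", "on", "grass"],
        ["child", "playing", "with", "dog"],
        ["sunset", "over", "the", "sea"],
    ]

    # Train word2vec; vector_size and window are illustrative defaults.
    w2v = Word2Vec(sentences, vector_size=100, window=5, min_count=1, epochs=50)
    vectors = np.array([w2v.wv[w] for w in w2v.wv.index_to_key])

    # PCA first to compress and denoise, then t-SNE for a low-dimensional
    # embedding; this PCA -> t-SNE ordering is the usual pairing.
    compressed = PCA(n_components=5).fit_transform(vectors)
    embedded = TSNE(n_components=2, perplexity=3, init="pca",
                    random_state=0).fit_transform(compressed)
    print(dict(zip(w2v.wv.index_to_key, embedded.round(2).tolist())))

Running PCA before t-SNE keeps t-SNE's pairwise-distance computation on a compact, less noisy representation, which is the usual rationale for combining the two.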
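Finally, sentences are generated from the feature vectors with a seq2seq model. The schematic PyTorch sketch below shows one common shape such a decoder can take; all dimensions, the toy vocabulary, and the greedy decoding loop are assumptions for illustration and do not reproduce the paper's architecture or training procedure.

    import torch
    import torch.nn as nn

    VOCAB = ["<start>", "<end>", "a", "dog", "runs", "on", "grass"]  # toy vocab
    FEAT_DIM, EMB_DIM, HID_DIM = 2048, 64, 128  # assumed sizes

    class CaptionDecoder(nn.Module):
        """Maps an image feature vector to a word sequence (seq2seq decoder)."""
        def __init__(self):
            super().__init__()
            self.init_h = nn.Linear(FEAT_DIM, HID_DIM)  # features -> initial state
            self.embed = nn.Embedding(len(VOCAB), EMB_DIM)
            self.gru = nn.GRU(EMB_DIM, HID_DIM, batch_first=True)
            self.out = nn.Linear(HID_DIM, len(VOCAB))

        @torch.no_grad()
        def greedy_decode(self, feats: torch.Tensor, max_len: int = 10) -> list[str]:
            # Condition the recurrent state on the image features, then emit
            # one word at a time, feeding each prediction back in.
            h = torch.tanh(self.init_h(feats)).unsqueeze(0)   # (1, 1, HID_DIM)
            token = torch.tensor([[VOCAB.index("<start>")]])
            words = []
            for _ in range(max_len):
                emb = self.embed(token)                       # (1, 1, EMB_DIM)
                out, h = self.gru(emb, h)
                token = self.out(out[:, -1]).argmax(-1, keepdim=True)
                word = VOCAB[token.item()]
                if word == "<end>":
                    break
                words.append(word)
            return words

    decoder = CaptionDecoder()
    feats = torch.randn(1, FEAT_DIM)  # stand-in for a ResNet50-SLT feature vector
    print(decoder.greedy_decode(feats))  # untrained, so the output is arbitrary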