Improving the performance of multi-modality ontology image retrieval system using DBpedia
Main Authors:
Format: Conference or Workshop Item
Language: English
Published: 2013
Subjects:
Online Access: http://irep.iium.edu.my/39063/1/1973-7276-2-PB.pdf
http://irep.iium.edu.my/39063/
http://www.world-education-center.org/index.php/P-ITCS/article/viewArticle/1973
Summary: Image Retrieval System (IRS) is commonly based on searching keywords in the surrounding text of images by employing content-independent metadata, or data that is not directly concerned with image content. Content-Based Image Retrieval (CBIR) focuses on image features, extracting features such as dominant color, color histogram, texture, and object shape. The main problem with CBIR is the semantic gap between low-level image features and high-level human-understandable concepts: there is a lack of agreement between the information extracted from visual data and its text description. Ontologies are at the heart of all Semantic Web applications, representing domain concepts and relations in the form of a semantic network. In this study, we applied ontology to bridge the semantic gap by developing a prototype multi-modality ontology IRS based on sports news and by integrating it with DBpedia to enrich the knowledge base while overcoming the problem of semantic heterogeneity. We evaluate our approach using precision and recall and compare it with a single-modality ontology. The results show that the multi-modality ontology IRS gives higher precision (P) and recall (R) than the visual-based and keyword-based ontologies.
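The record does not reproduce the evaluation details; for reference, the standard precision (P) and recall (R) measures named in the abstract, stated over the sets of retrieved and relevant images, are:

$$
P = \frac{|\text{relevant} \cap \text{retrieved}|}{|\text{retrieved}|}, \qquad
R = \frac{|\text{relevant} \cap \text{retrieved}|}{|\text{relevant}|}
$$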
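The abstract states only that the ontology is integrated with DBpedia to enrich the knowledge base, without describing the mechanism. A minimal sketch of one common way to perform such enrichment, via DBpedia's public SPARQL endpoint, is shown below; the endpoint URL is real, but the chosen entity, the queried properties, and the helper function are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative sketch only: the abstract does not specify how DBpedia is queried,
# so the entity, properties, and helper below are assumptions for demonstration.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://dbpedia.org/sparql"  # public DBpedia SPARQL endpoint


def fetch_dbpedia_facts(resource_uri: str) -> list:
    """Fetch English labels, abstracts, and types for one DBpedia resource,
    as candidate facts for enriching a local (e.g. sports-news) ontology."""
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setReturnFormat(JSON)
    sparql.setQuery(f"""
        SELECT ?label ?abstract ?type WHERE {{
            <{resource_uri}> rdfs:label ?label ;
                             dbo:abstract ?abstract ;
                             rdf:type ?type .
            FILTER (lang(?label) = "en" && lang(?abstract) = "en")
        }}
        LIMIT 20
    """)
    results = sparql.query().convert()
    return results["results"]["bindings"]


# Hypothetical usage: pull facts about one athlete mentioned in a sports-news item.
for row in fetch_dbpedia_facts("http://dbpedia.org/resource/Lionel_Messi"):
    print(row["type"]["value"], "-", row["label"]["value"])
```

In a setup along these lines, the returned labels, abstracts, and types would then be mapped onto the corresponding concepts of the local multi-modality ontology, supplying additional textual evidence for retrieval beyond what the original sports-news annotations provide.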