Improving the performance of multi-modality ontology image retrieval system using DBpedia



Bibliographic Details
Main Authors: Mohd Khalid, Yanti Idaya Aspura, Mohd Noah, Shahrul Azman
Format: Conference or Workshop Item
Language: English
Published: 2013
Online Access: http://irep.iium.edu.my/39063/1/1973-7276-2-PB.pdf
http://irep.iium.edu.my/39063/
http://www.world-education-center.org/index.php/P-ITCS/article/viewArticle/1973
Description
Summary: Image Retrieval System (IRS) is commonly based on searching keywords in the surrounding text of images by employing content-independent metadata, or data that is not directly concerned with image content. Content Based Image Retrieval (CBIR) focuses on image features, as it extracts features such as dominant color, color histogram, texture, and object shape. The main problem with CBIR is the semantic gap between low-level image features and high-level human-understandable concepts: there is a lack of agreement between the information extracted from visual data and the textual description. Ontologies are at the heart of all Semantic Web applications, representing domain concepts and relations in the form of a semantic network. In this study, we applied ontology to bridge the semantic gap by developing a prototype multi-modality ontology IRS based on sports news and by integrating it with DBpedia to enrich the knowledge base while overcoming the problem of semantic heterogeneity. We evaluate our approach using precision and recall, and compare it with a single-modality ontology. The results show that the multi-modality ontology IRS gives higher precision (P) and recall (R) than both visual-based and keyword-based ontologies.
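The abstract evaluates retrieval quality using precision and recall. The sketch below shows how these two measures are conventionally computed for a single query; the image identifiers and relevance judgments are invented for illustration and do not come from the paper's dataset.

```python
def precision_recall(retrieved, relevant):
    """Standard set-based retrieval measures for one query.

    precision = |retrieved ∩ relevant| / |retrieved|
    recall    = |retrieved ∩ relevant| / |relevant|
    """
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical example: a sports-news image query returns 4 images,
# 3 of which are relevant, out of 6 relevant images in the collection.
p, r = precision_recall(
    ["img1", "img2", "img3", "img4"],
    ["img1", "img2", "img3", "img5", "img6", "img7"],
)
print(p, r)  # 0.75 0.5
```

Comparing these per-query values across the multi-modality, visual-based, and keyword-based ontology configurations is the style of evaluation the abstract reports.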