Multi-granularity semantic information integration graph for cross-modal hash retrieval

Bibliographic Details
Main Authors: Han, Zhichao, Azman, Azreen, Khalid, Fatimah, Mustaffa, Mas Rina
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers 2024
Online Access: http://psasir.upm.edu.my/id/eprint/112890/1/112890.pdf
http://psasir.upm.edu.my/id/eprint/112890/
https://ieeexplore.ieee.org/document/10477417
Item Description
Summary: With the development of intelligent collection technology and the popularization of intelligent terminals, multi-source heterogeneous data are growing rapidly. Effectively exploiting the rich semantic information contained in massive amounts of multi-source heterogeneous data to provide users with high-quality cross-modal information retrieval services has become an urgent problem in the field of information retrieval. In this paper, we propose a novel cross-modal retrieval method, named MGSGH, which deeply explores the internal correlation between data of different granularities by integrating coarse-grained global semantic information and fine-grained scene graph information, modeling global semantic concepts and local semantic relationship graphs within a modality respectively. By enforcing cross-modal consistency constraints and intra-modal similarity preservation, we effectively integrate the visual features of image data and the semantic information of text data to overcome the heterogeneity between the two types of data. Furthermore, we propose a new method for learning hash codes directly, thereby reducing the impact of quantization loss. Our comprehensive experimental evaluation demonstrates the effectiveness and superiority of the proposed model in achieving accurate and efficient cross-modal retrieval. © 2013 IEEE.
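The summary names the key objectives of such a hashing model: cross-modal consistency between paired image and text codes, and direct learning of discrete codes to limit quantization loss. A minimal NumPy sketch of how these two objectives can be written down follows; all names, dimensions, and the 0.5 weight are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

# Hypothetical toy setup: random features stand in for the modality encoders.
rng = np.random.default_rng(0)
n, d_img, d_txt, k = 8, 16, 12, 4          # pairs, feature dims, hash bits
img_feats = rng.standard_normal((n, d_img))  # stand-in for visual features
txt_feats = rng.standard_normal((n, d_txt))  # stand-in for text semantics
W_img = 0.1 * rng.standard_normal((d_img, k))
W_txt = 0.1 * rng.standard_normal((d_txt, k))

# Relaxed (continuous) hash codes per modality, squashed into (-1, 1).
H_img = np.tanh(img_feats @ W_img)
H_txt = np.tanh(txt_feats @ W_txt)

# Cross-modal consistency: paired image/text codes should agree.
consistency = np.mean((H_img - H_txt) ** 2)

# Discrete codes taken directly from the relaxed ones; the quantization
# term measures how far the relaxed codes sit from the {-1, +1} vertices.
B = np.where(H_img + H_txt >= 0, 1.0, -1.0)
quantization = np.mean((H_img - B) ** 2) + np.mean((H_txt - B) ** 2)

total_loss = consistency + 0.5 * quantization
```

In a real model the two projections would be trained to minimize `total_loss` (plus an intra-modal similarity-preservation term over neighbor pairs, omitted here); the point of the quantization term is that the smaller it gets, the less information is lost when the continuous codes are snapped to binary.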