MULTIMODAL FAKE NEWS DETECTION
Main Author:
Format: Article
Language: English
Published: Universiti Malaysia Terengganu, 2024
Subjects:
Online Access: http://umt-ir.umt.edu.my:8080/handle/123456789/20680
Summary: In recent years, social media has become one of the most popular ways for people to
consume news. Because the proliferation of fake news on social media harms individuals and
society, automatic fake news detection has been explored by several research communities. With
the development of multimedia technology, more and more social media news carries information
in multiple modalities, e.g., text, images, and video. These modalities provide richer evidence
of news events and present new opportunities for detecting fake news, but they also raise two
challenges. First, a multimodal fake news detector must preserve the unique properties of each
modality while fusing the relevant information across modalities. Second, for some news items,
fusing information across modalities may introduce noise that degrades the model's performance.
Existing methods fail to handle these challenges. To address them, we propose a multimodal fake
news detection framework based on Crossmodal Attention Residual and Multichannel convolutional
neural Networks (CARMN). The Crossmodal Attention Residual Network (CARN) selectively extracts
information relevant to a target modality from another source modality while preserving the
unique information of the target modality. The Multichannel Convolutional neural Network (MCN)
mitigates the influence of noise that may be generated by the crossmodal fusion component, by
extracting textual feature representations from the original and the fused textual information
simultaneously. Extensive experiments on four real-world datasets demonstrate that the proposed
model outperforms state-of-the-art methods and learns more discriminable feature representations.
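
The record contains no code, but the summary's description of CARN maps onto a standard
attention-with-residual pattern. Below is a minimal, hypothetical sketch of such a block in
PyTorch; the class name, dimensions (d_model, n_heads), and structure are assumptions made for
illustration, not the authors' implementation.

    import torch
    import torch.nn as nn

    class CrossmodalAttentionResidual(nn.Module):
        # Hypothetical sketch: the target modality (e.g. text) attends to a
        # source modality (e.g. image), and a residual connection preserves
        # the target modality's own, unique information.
        def __init__(self, d_model=256, n_heads=4):
            super().__init__()
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.norm = nn.LayerNorm(d_model)

        def forward(self, target, source):
            # target: (batch, len_t, d_model); source: (batch, len_s, d_model)
            fused, _ = self.attn(query=target, key=source, value=source)
            # Adding the attention output back onto the target keeps the
            # target's unique properties while injecting crossmodal evidence.
            return self.norm(target + fused)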
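
Likewise, a multichannel text CNN of the kind the summary attributes to MCN could, under the
same caveats, look roughly as follows: the original and the crossmodally fused textual
representations are convolved side by side, so fusion noise cannot fully displace the original
textual signal. All names and hyperparameters here are assumed for illustration only.

    class MultichannelTextCNN(nn.Module):
        # Hypothetical sketch: convolve over the original and the fused text
        # features together, then max-pool each filter's response.
        def __init__(self, d_model=256, n_filters=100, kernel_sizes=(3, 4, 5)):
            super().__init__()
            self.convs = nn.ModuleList(
                [nn.Conv1d(2 * d_model, n_filters, k) for k in kernel_sizes]
            )

        def forward(self, original, fused):
            # Both inputs: (batch, seq_len, d_model). Stack them as one
            # 2*d_model-channel signal of shape (batch, 2*d_model, seq_len).
            x = torch.cat([original, fused], dim=-1).transpose(1, 2)
            pooled = [conv(x).relu().max(dim=-1).values for conv in self.convs]
            # Concatenated pooled features would feed a downstream classifier.
            return torch.cat(pooled, dim=-1)

In a full detector, a small classification head (for instance, a linear layer over the
concatenated pooled features) would sit on top of these components to produce the real/fake
prediction.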