Twitter Fake News Detection Using Graph Neural Networks
DOI:
https://doi.org/10.18372/1990-5548.78.18259
Keywords:
fake news detection, graph neural networks, Twitter, binary classification, graph pooling
Abstract
This article is devoted to the intelligent processing of text information for the purpose of detecting fake news. To solve this task, the use of deep graph neural networks is proposed. Fake news detection based on user preferences is augmented with deeper graph neural network topologies, including Hierarchical Graph Pooling with Structure Learning, to improve the graph convolution operation and capture richer contextual relationships in news graphs. The paper shows how the user-preference-based fake news detection framework can be extended with deep graph neural networks to improve fake news recognition. Evaluation on the Gossipcop subset of the FakeNewsNet dataset, using the PyTorch Geometric and PyTorch Lightning frameworks, demonstrates that the developed deep graph neural network model achieves 94% accuracy in fake news classification. The results show that deeper graph neural networks with integrated text and graph features offer a promising path to reliable and accurate fake news detection, paving the way for improved information quality in social networks and beyond.
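To make the described pipeline concrete, the sketch below shows how such a graph classifier could be assembled in PyTorch Geometric on the Gossipcop graphs of the UPFD benchmark (the user-preference-aware data built from FakeNewsNet). It is an illustrative sketch only, not the model reported in the article: TopKPooling stands in for the Hierarchical Graph Pooling with Structure Learning operator, which is distributed as a separate implementation rather than as a built-in PyTorch Geometric layer, and the hidden size, pooling ratio, learning rate, and batch size are assumed values rather than the configuration behind the reported 94% accuracy.

# Minimal sketch (not the authors' exact model): a graph-level fake news
# classifier on the UPFD Gossipcop data in PyTorch Geometric.
import torch
import torch.nn.functional as F
from torch_geometric.datasets import UPFD
from torch_geometric.loader import DataLoader
from torch_geometric.nn import GCNConv, TopKPooling, global_max_pool

class FakeNewsGNN(torch.nn.Module):
    def __init__(self, in_dim, hidden=128):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.pool1 = TopKPooling(hidden, ratio=0.8)   # stand-in for HGP-SL pooling
        self.conv2 = GCNConv(hidden, hidden)
        self.pool2 = TopKPooling(hidden, ratio=0.8)
        self.lin = torch.nn.Linear(hidden, 2)         # binary: fake vs. real

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x, edge_index, _, batch, _, _ = self.pool1(x, edge_index, batch=batch)
        x = F.relu(self.conv2(x, edge_index))
        x, edge_index, _, batch, _, _ = self.pool2(x, edge_index, batch=batch)
        return self.lin(global_max_pool(x, batch))    # graph-level readout

# 'bert' node features carry the encoded text, combining textual and
# structural information as described in the abstract.
train_set = UPFD(root='data/UPFD', name='gossipcop', feature='bert', split='train')
loader = DataLoader(train_set, batch_size=128, shuffle=True)

model = FakeNewsGNN(train_set.num_features)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for data in loader:                                   # one training pass as an illustration
    opt.zero_grad()
    out = model(data.x, data.edge_index, data.batch)
    F.cross_entropy(out, data.y).backward()
    opt.step()

In the same spirit as the article, the encoder and pooling layers can be stacked deeper, and the training loop would normally be wrapped in PyTorch Lightning with a held-out test split for evaluation.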
References
"Americans Who Mainly Get Their News on Social Media Are Less Engaged, Less Knowledgeable," Pew Research Center, 2020. https://www.pewresearch.org/journalism/2020/07/30/americans-who-mainly-get-their-news-on-social-media-are-less-engaged-less-knowledgeable/
K. Shu, A. Sliva, S. Wang, J. Tang, and H. Liu, “Fake news detection on social media: A data mining perspective,” ACM SIGKDD Explorations Newsletter, vol. 19, no. 1, pp. 22–36, 2017. https://doi.org/10.1145/3137597.3137600
T. Bian, X. Xiao, T. Xu, P. Zhao, W. Huang, Y. Rong, and J. Huang, “Rumor detection on social media with bi-directional graph convolutional networks,” in AAAI, vol. 34, no. 01, 2020, pp. 549–556. https://doi.org/10.1609/aaai.v34i01.5393
M. M. Bronstein, J. Bruna, T. Cohen, and P. Veličković, "Geometric deep learning: Grids, groups, graphs, geodesics, and gauges," arXiv preprint arXiv:2104.13478, 2021. https://doi.org/10.48550/arXiv.2104.13478
T. N. Kipf and M. Welling, “Semi-supervised classification with graph convolutional networks,” in Proc. of ICLR, 2017. https://doi.org/10.48550/arXiv.1609.02907
P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio, "Graph attention networks," arXiv preprint arXiv:1710.10903, 2017. https://doi.org/10.48550/arXiv.1710.10903
W. Hamilton, Z. Ying, and J. Leskovec, “Inductive representation learning on large graphs,” NeurIPS, 2017. https://doi.org/10.48550/arXiv.1706.02216
K. Xu, W. Hu, J. Leskovec, and S. Jegelka, "How powerful are graph neural networks?," arXiv preprint arXiv:1810.00826, 2018. https://doi.org/10.48550/arXiv.1810.00826
F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini, "The graph neural network model," IEEE Transactions on Neural Networks, vol. 20, no. 1, pp. 61–80, 2009. https://doi.org/10.1109/TNN.2008.2005605
A. Micheli, "Neural network for graphs: A contextual constructive approach," IEEE Transactions on Neural Networks, vol. 20, no. 3, pp. 498–511, 2009. https://doi.org/10.1109/TNN.2008.2010350
D. Bahdanau, K. Cho, and Y. Bengio, "Neural machine translation by jointly learning to align and translate," in Proc. of ICLR, 2015. https://doi.org/10.48550/arXiv.1409.0473
D. Grattarola, D. Zambon, F. M. Bianchi, and C. Alippi, "Understanding pooling in graph neural networks," arXiv preprint arXiv:2110.05292, 2021. https://doi.org/10.48550/arXiv.2110.05292
Z. Zhang, J. Bu, M. Ester, J. Zhang, C. Yao, Z. Yu, and C. Wang, "Hierarchical graph pooling with structure learning," arXiv preprint arXiv:1911.05954, 2019. https://doi.org/10.48550/arXiv.1911.05954
K. Church and P. Hanks, “Word association norms, mutual information, and lexicography,” Computational linguistics, vol. 16, no. 1, pp. 22–29, 1990. https://dl.acm.org/doi/10.3115/981623.981633
https://doi.org/10.1145/3485447.3512163
C. Burfoot and T. Baldwin, "Automatic satire detection: Are you having a laugh?," in Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, 2009, pp. 161–164. https://doi.org/10.3115/1667583.1667633
V. Vaibhav, R. Mandyam, and E. Hovy, “Do sentence interactions matter? leveraging sentence level representations for fake news classification,” in Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing, 2019, pp. 134–139. https://doi.org/10.48550/arXiv.1910.12203
K. Shu, D. Mahudeswaran, S. Wang, D. Lee, and H. Liu, “Fakenewsnet: A data repository with news content, social context, and spatiotemporal information for studying fake news on social media,” Big data, vol. 8, no. 3, pp. 171–188, 2020. https://doi.org/10.48550/arXiv.1809.01286
Y. Dou, K. Shu, C. Xia, P. S. Yu, and L. Sun, “User preference aware fake news detection,” in Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2021, pp. 2051–2055. https://doi.org/10.48550/arXiv.2104.12259
T. Mikolov, K. Chen, G. Corrado, and J. Dean, "Efficient estimation of word representations in vector space," arXiv preprint arXiv:1301.3781, 2013. https://doi.org/10.48550/arXiv.1301.3781
J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805, 2018. https://doi.org/10.48550/arXiv.1810.04805