Fake News Detection via NLP is Vulnerable to Adversarial Attacks
Zhixuan Zhou, Huankang Guan, Meghana Bhat, Justin Hsu
2019
Abstract
News plays a significant role in shaping people’s beliefs and opinions. Fake news has always been a problem, but it was not widely exposed to the public until the past election cycle for the 45th President of the United States. While quite a few detection methods have been proposed to combat fake news since 2015, they focus mainly on the linguistic aspects of an article without any fact checking. In this paper, we argue that these models have the potential to misclassify fact-tampering fake news as well as under-written real news. Through experiments on Fakebox, a state-of-the-art fake news detector, we show that fact-tampering attacks can be effective. To address these weaknesses, we argue that fact checking should be adopted in conjunction with linguistic characteristics analysis, so as to truly separate fake news from real news. A crowdsourced knowledge graph is proposed as a straw man solution to collecting timely facts about news events.
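To illustrate the kind of fact-tampering attack the abstract refers to, the following is a minimal sketch, not the paper's actual attack code: it perturbs the facts of an article (numbers and named entities) while leaving its wording and style untouched, which is exactly the change a linguistics-only detector is not expected to notice. The detector interface score_article is a hypothetical placeholder; the paper's experiments query Fakebox instead.

# Minimal sketch of a fact-tampering attack on a linguistics-only
# fake news detector. score_article is a hypothetical stand-in for
# any such detector (the paper uses Fakebox).
import re
import random

def tamper_numbers(text: str, seed: int = 0) -> str:
    """Replace every number in the article with a different random number,
    altering the facts while leaving style and wording intact."""
    rng = random.Random(seed)
    return re.sub(r"\d+", lambda m: str(rng.randint(1, 999)), text)

def tamper_entities(text: str, swaps: dict) -> str:
    """Swap named entities (people, places, organizations) according to
    `swaps`, again changing facts without changing linguistic style."""
    for original, replacement in swaps.items():
        text = text.replace(original, replacement)
    return text

def score_article(text: str) -> float:
    """Hypothetical detector interface: returns the probability that
    `text` is real news. In practice this would call a model such as
    Fakebox; no real API is assumed here."""
    raise NotImplementedError

if __name__ == "__main__":
    article = "The senator met 3 advisers in Ohio on March 12."
    attacked = tamper_entities(tamper_numbers(article), {"Ohio": "Texas"})
    # A purely linguistic detector is expected to assign nearly the same
    # score to both versions, even though the facts now differ:
    # print(score_article(article), score_article(attacked))

Because the perturbation leaves the surface form of the text essentially unchanged, any classifier trained only on linguistic characteristics should, as the paper argues, be unable to distinguish the tampered article from the original; detecting the change requires checking the claims against external facts.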
Paper Citation
in Harvard Style
Zhou Z., Guan H., Bhat M. and Hsu J. (2019). Fake News Detection via NLP is Vulnerable to Adversarial Attacks. In Proceedings of the 11th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART, ISBN 978-989-758-350-6, pages 794-800. DOI: 10.5220/0007566307940800
in BibTeX Style
@conference{icaart19,
author={Zhixuan Zhou and Huankang Guan and Meghana Bhat and Justin Hsu},
title={Fake News Detection via NLP is Vulnerable to Adversarial Attacks},
booktitle={Proceedings of the 11th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART},
year={2019},
pages={794-800},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0007566307940800},
isbn={978-989-758-350-6},
}
in EndNote Style
TY - CONF
JO - Proceedings of the 11th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART
TI - Fake News Detection via NLP is Vulnerable to Adversarial Attacks
SN - 978-989-758-350-6
AU - Zhou Z.
AU - Guan H.
AU - Bhat M.
AU - Hsu J.
PY - 2019
SP - 794
EP - 800
DO - 10.5220/0007566307940800