
• Cross-Linguistic Studies. Analyze disinformation patterns across different languages and regions.
• Network Analysis. Investigate user behaviors and network structures that facilitate disinformation spread.
• Refining Few-Shot Techniques. Improve prompt designs and methodologies for large language models to reduce reliance on manual annotations (see the prompt sketch after this list).
• Intervention Strategies. Develop effective counter-disinformation measures based on identified emotional and linguistic patterns.
• Intelligent Agents. Design and implement intelligent agents capable of real-time detection and mitigation of disinformation. These agents could combine machine learning techniques with rule-based systems to analyze linguistic and emotional cues, detect malicious content, and take automated countermeasures (see the agent sketch after this list). For instance, intelligent agents could:
– Flag potential disinformation for review or further analysis.
– Provide users with context or verified information to counter false claims.
– Interact with social media algorithms to limit the spread of harmful content while promoting verified, accurate information.
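As an illustration of the few-shot direction above, the following Python sketch composes a prompt that asks a language model to assign one basic emotion to a post. The label set, example posts, and the build_prompt helper are hypothetical placeholders, not the prompts or annotations used in this study.

EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]

# Hypothetical annotated posts used as in-context examples.
FEW_SHOT_EXAMPLES = [
    ("The authorities knew the danger and said nothing!", "anger"),
    ("Praying for everyone affected by the floods tonight.", "sadness"),
]

def build_prompt(post: str) -> str:
    """Compose a few-shot prompt asking a language model for one emotion label."""
    header = ("Label the dominant emotion of each post with exactly one of: "
              + ", ".join(EMOTIONS) + ".")
    shots = [f"Post: {text}\nEmotion: {label}" for text, label in FEW_SHOT_EXAMPLES]
    query = f"Post: {post}\nEmotion:"
    return "\n\n".join([header, *shots, query])

print(build_prompt("They are hiding the real number of victims."))

The resulting prompt string could then be sent to any instruction-tuned model; varying the number and wording of the in-context examples is one way to reduce the amount of manual annotation required.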
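A minimal sketch of the intelligent-agent loop described above is shown below: a learned disinformation score is combined with rule-based emotional cues to select one of the countermeasures listed in the bullet. The cue lexicon, thresholds, and the assess_post and Decision names are illustrative assumptions, not components of this study.

from dataclasses import dataclass
from typing import Callable

# Illustrative lexicon of outrage/alarm cues; a real system would use validated resources.
OUTRAGE_CUES = {"they are lying", "cover-up", "wake up", "hiding the truth"}

@dataclass
class Decision:
    action: str   # "allow", "flag_for_review", "attach_context", or "limit_reach"
    score: float  # model-estimated probability that the post is disinformation

def assess_post(text: str, model_score: Callable[[str], float]) -> Decision:
    """Combine a learned disinformation score with rule-based emotional cues."""
    score = model_score(text)
    has_cue = any(cue in text.lower() for cue in OUTRAGE_CUES)
    if score > 0.9:
        return Decision("limit_reach", score)       # high confidence: limit further spread
    if score > 0.6 or (score > 0.4 and has_cue):
        return Decision("attach_context", score)    # surface verified information alongside
    if has_cue:
        return Decision("flag_for_review", score)   # emotional cue only: route to human review
    return Decision("allow", score)

# Usage with a stand-in scorer; a deployment would call a trained classifier instead.
print(assess_post("They are lying about the flood warnings", lambda text: 0.72))

Keeping the rule-based cues separate from the learned score makes the agent's decisions easier to audit, which matters when automated countermeasures affect what users see.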
Addressing these areas will enhance the detection and mitigation of disinformation on social media, thereby strengthening the integrity of information ecosystems during critical events.
ACKNOWLEDGEMENTS
This work was done in the framework of the Iberian Digital Media Observatory (IBERIFIER Plus), co-funded by the EC under the Call DIGITAL-2023-DEPLOY-04 (Grant 101158511), and of the Malicious Actors Profiling and Detection in Online Social Networks Through Artificial Intelligence (MARTINI) research project, funded by MCIN/AEI/10.13039/501100011033 and by NextGenerationEU/PRTR (Grant PCI2022-135008-2).