Authors:
Tim Polzehl 1,2; Vera Schmitt 2; Nils Feldhus 1; Joachim Meyer 3 and Sebastian Möller 1,2
Affiliations:
1 German Research Center for Artificial Intelligence, Berlin, Germany
2 Technische Universität Berlin, Berlin, Germany
3 Tel Aviv University, Tel Aviv, Israel
Keyword(s):
Disinformation, Fake Detection, Multimodal Multimedia Text Audio Speech Video Analysis, Trust, XAI, Bias, Human in the Loop, Crowd, HCI.
Abstract:
Methods for automatic disinformation detection have gained much attention in recent years, as false information can have a severe impact on societal cohesion. Disinformation can influence the outcome of elections, the spread of diseases by hindering the adoption of adequate countermeasures, and the formation of alliances, as the Russian invasion of Ukraine has shown. Here, not only text but also audio recordings, video content, and images need to be taken into consideration to fight fake news. However, automatic fact-checking tools cannot handle all modalities at once and face difficulties embedding the context of information, handling sarcasm and irony, and dealing with statements that have no clear truth value. Recent research has shown that collaborative human-machine systems can identify false information more successfully than humans or machine learning methods alone. Thus, in this paper, we present a short yet comprehensive overview of current automatic disinformation detection approaches for text, audio, video, images, and multimodal combinations, their extension into intelligent decision support systems (IDSS), as well as forms and roles of human collaborative co-work. In real life, such systems are increasingly applied by journalists, which sets the specifications for human roles according to the two most prominent types of use cases, namely daily news dossiers and investigative journalism.