nalists, allowing them to concentrate on complex re-
porting while AI handles routine tasks. It highlights
how AI can reduce variable costs in journalism by au-
tomating data analysis, fact-checking, news produc-
tion and personalising content for readers. However,
it notes that implementing AI can require significant
technological investment.
The article also looks at how AI tools such as ChatGPT can help with journalistic tasks, including fact-checking, writing news articles, creating headlines and analysing data. It emphasises the importance of using AI tools judiciously and verifying information from any source.
Overall, while AI presents opportunities to simplify journalistic processes and reduce costs, the article emphasises the need for ethical considerations and human oversight when using these technologies.
Another article (Kotenidis and Veglis, 2021)
notes that algorithmic technology has advanced con-
siderably in recent years, but faces challenges, espe-
cially in the automated production of content. One
crucial limitation is the reliance on structured data. In
addition, although algorithms can mimic human writing, they still fall short in areas such as analytical thinking, flexibility and creativity. This creates a disconnect
between algorithms and humans, especially in auto-
mated newsrooms.
In addition to automated content production, there
are challenges in other areas, such as data mining,
where the results can be insignificant or even incor-
rect.
Despite this, algorithmic technology is promis-
ing for solving contemporary problems in journal-
ism, such as information overload and credibility. Al-
though the introduction of more sophisticated algo-
rithms may cause turbulence, it is hoped that they
will help produce news faster and on a larger scale,
expanding coverage to events that would otherwise be unprofitable to cover. However,
this could lead to information overload, exacerbated
by the spread of fake news.
A recent study (Lermann Henestrosa et al.,
2023) investigated how readers perceive content pro-
duced automatically by algorithms, focussing specif-
ically on science journalism articles. Although much
content is already generated automatically, there is lit-
tle knowledge about how artificial intelligence (AI)
authoring affects audience perception, especially in
more complex texts.
The researchers highlighted technological ad-
vances in automated text generation, citing large-scale
language models such as OpenAI’s GPT-3, Microsoft
and NVIDIA’s Megatron-Turing as examples of ad-
vanced natural language generation (NLG) capabili-
ties. These models illustrate the ability to simulate
human writing, but there is still a lack of studies on
the impact of AI authoring on complex texts.
The studies conducted analysed readers’ percep-
tions of science journalism articles written by algo-
rithms. Surprisingly, even when an AI author pre-
sented information in an evaluative way on a scientific
topic, there was no decrease in the credibility or trust
attributed to the text. The presentation of the infor-
mation was identified as the main factor influencing
readers’ perceptions, regardless of the declared au-
thorship.
An interesting point was that, although the participants considered the AI author to be less "human" in its writing, this perception did not impact the eval-
uation of the messages. This raised questions about
the importance of the nature of the author for readers
in terms of credibility and trust in the content.
The results indicated an acceptance of AI as an
author of scientific texts, provided certain conditions
were met. In addition, the studies revealed positive
attitudes towards automation, suggesting a favourable disposition towards the use of algorithms in content production.
However, the studies had some limitations, includ-
ing the influence of participants’ prior beliefs and the
perceived neutrality of the information presented. Fu-
ture research should further explore how readers un-
derstand the workings of text production algorithms
and the relationship between the perceived "humanisation" of AI and the credibility of the content generated.
In summary, the study has contributed to a deeper
understanding of how AI authorship affects the pub-
lic’s perception of complex content, paving the way
for reflection on the acceptance of algorithmically
generated texts in more diverse contexts.
The study (Sirén-Heikel et al., 2023) examines
the influence of AI technologies on journalism by
studying the logics underpinning the construction of
technical solutions. It uses a theoretical framework
to understand the interrelationships between institu-
tions, individuals, and organisations in social systems. The integration of AI technologies into news organisations impacts how work is organised and reshapes journalism. The study explores companies that develop and sell NLG services for journalism, revealing how technologists view their interactions with news organisations.
The participants in the study represent different
educational backgrounds, cultures, and languages, yet
share a common sensemaking of their relationship with jour-
nalism. The presupposition that technologists and
journalists occupy separate fields of logic is validated
through the interviews. The companies involved in