Bits and Biases: Exploring Perceptions in Human-like AI Interactions Using the Stereotype Content Model
Fernando Jorge F. Macieira¹, Diego Costa Pinto¹, Tiago Oliveira¹ and Mitsuru Yanaze²
¹NOVA Information Management School (NOVA IMS), Universidade NOVA de Lisboa, Campus de Campolide, Lisboa, Portugal
²Escola de Comunicações e Artes, Universidade de São Paulo, Av. Prof. Lúcio Martins Rodrigues, 443, São Paulo, Brazil
Keywords: SCM, CASA, AI, Chatbot, Anthropomorphism.
Abstract: In an AI-infused world, user trust in responses generated by autonomous systems is of critical importance.
Building upon the work of Ahn, Kim, and Sung (2022), this study examines the impact of stereotypes
attributed to chatbots on user trust using the Stereotype Content Model (SCM), which relies on dimensions
like warmth and competence for universal cross-cultural social judgment. This research investigates how age-
related stereotypes influence user perceptions of anthropomorphic AI, specifically chatbots, and their
perceived warmth and competence. We conducted two experiments: Study 1 used AI-generated illustrations
to present "young" and "old" chatbot personas, while Study 2 used realistic photos. Participants watched pre-
recorded interactions with the chatbot "Dave" and evaluated its warmth and competence on a 9-point Likert
scale. Data were collected through Prolific, ensuring a diverse sample. Study 1 found no significant
differences in perceptions of warmth and competence between the young and old chatbot personas. However,
Study 2 revealed that the younger persona was perceived as warmer than the older one, indicating that the
realism of the chatbot's appearance affects stereotype activation. These results underscore the importance of
aligning chatbot personas with user expectations to enhance trust and satisfaction.
1 INTRODUCTION
Artificial Intelligence (AI) language models are the
most recent “hype” in technology. New models and
generative AI are “born” every day, and AI usage has
become almost ubiquitous. However, its acceptance may depend on how trustworthy (Choung, David, and Ross, 2023) and friendly it is perceived to be (Tay, Jung, and Park, 2014), as well as on its "human likeness" (Kim, Kang, and Bae, 2022).
To facilitate this interaction and improve AI's perceived trustworthiness, research suggests that anthropomorphism (i.e., applying human characteristics to inanimate objects) can make human-AI agent interactions more familiar, while still being guided by the same norms governing interpersonal relationships (Aggarwal and McGill, 2012; Ahn, Kim, and Sung, 2022; Sreejesh and Anusree, 2017).
Hence, like human interactions, these relationships are also influenced by how we perceive and evaluate others' "personality" traits (Ahn, Kim, and Sung, 2022). Since people interact with anthropomorphic robots and computers as if they were real humans (Nass et al., 1995; Nass, Steuer, and Tauber, 1994), social traits such as gender and personality, which matter in interpersonal relationships, also shape these interactions and evoke social stereotypes (Tay, Jung, and Park, 2014).
Though most studies of AI and stereotypes focus on biases that appear in agents' responses, a few scholars have studied how these stereotypes may affect trust in an AI agent's responses. The study of Ahn, Kim, and Sung (2022) is an example; it explores the effects of gender stereotypes on the evaluation of AI recommendations, using the Stereotype Content Model (SCM) of Fiske, Cuddy, and Glick (2007), which relies on dimensions like warmth and competence for universal cross-cultural social judgment (Fiske, 2017; Fiske, Cuddy, and Glick, 2007). Others like Liu et al. (2022) have
and Glick, 2007). Others like Liu et al. (2022) have
used the same scale applied to brands (Kervyn, Fiske,
and Malone, 2022) and different anthropomorphic
representations (El Hedhli et al., 2023).
Despite their interesting findings, the authors
acknowledge the need for more research using other
types of stereotypes, such as age, religion, social
status, etc., and cross-cultural versions (Ahn, Kim, and Sung, 2022). Building on this, we conducted experiments focusing on age as the stereotype to be elicited.
This paper is structured in six sections: Section 2 offers a brief theoretical background on AI and human interaction and a deeper one on the "Computers Are Social Actors" (CASA) paradigm, as well as on the Stereotype Content Model, the scale used to measure the expected results. In Section 3 and its subsections, we explain the experiments used and discuss the findings. Section 4 comprises conclusions and a brief general discussion; Sections 5 and 6 address implications, followed by limitations and future research suggestions.
2 THEORETICAL FRAMEWORK
AI technology is reshaping the service industry, creating new service experiences with substantial implications for both customers and managers (Doorn et al., 2017). To make these human-machine interactions more familiar and easier, companies and developers use anthropomorphism, i.e., applying human characteristics to inanimate objects (Aggarwal and McGill, 2012; Ahn, Kim, and Sung, 2022; Sreejesh and Anusree, 2017).
Individuals tend to attribute human-like qualities
to entities that exhibit distinct human traits, like
smiling expressions (Epley, Waytz, and Cacioppo,
2007). Prior research has highlighted the
effectiveness of specific cues in conferring human-
like attributes on inanimate objects and social agents.
Notably, characteristics like human-like physical
forms, perceived animacy, and interactivity induce
anthropomorphism in objects (El Hedhli et al., 2023).
Researchers on human-machine interaction like Nass, Steuer, and Tauber (1994) and Nass et al. (1995) have discovered that people interact with anthropomorphic robotic/machine entities as if they were human, resembling regular human-human interactions, leading them to propose a paradigm called CASA: Computers Are Social Actors.
The CASA paradigm avers that people will infer
computers’ “personalities” through cues received
during interactions and will respond to these
personalities as if they were human interactions (Nass
et al., 1995). Accordingly, the CASA paradigm is
often used to understand how people perceive AI in
the context of human-computer interaction (Hong,
Choi, and Williams, 2020). More recently, De Kervenoael et al. (2024) reinforced the CASA paradigm in the context of consumer interactions with service robots in retail environments, showing that the concept still applies to human-computer interactions.
CASA studies consider human-human interactions
and expect similar results when replicating them in a
human-computer interaction (Edwards et al., 2019).
Hence, the present study focuses on experimenting
with human age-related stereotypes and technology
knowledge in a chatbot interaction, as it would occur
in a common human-human interaction.
Considering these similarities with human-human
interactions, we propose to use the Stereotype
Content Model (Fiske et al., 2002; Fiske, Cuddy, and
Glick, 2007; Fiske, 2017; Kervyn, Fiske, and Malone,
2022) to assess how people evaluate machine entities
regarding prejudices and social groups.
The Stereotype Content Model (SCM) posits that stereotypes are encapsulated by two fundamental
dimensions: warmth and competence (Fiske et al.,
2002; Fiske, Cuddy, and Glick, 2007; Cuddy et al.,
2009).
When individuals interact with someone for the
first time, they instinctively assess whether the person
exhibits benevolent and cooperative intentions,
reflecting qualities associated with warmth. These
qualities encompass traits like honesty, friendliness,
and sincerity. Concurrently, they evaluate the person's
capacity to translate these intentions into action,
marking competence—characterized by attributes like
knowledge, creativity, and efficiency (Cuddy, Glick, and Beninger, 2011; El Hedhli et al., 2023).
Thus, warmth signifies how individuals perceive
others’ intentions. When someone is perceived as
well-intentioned, they are generally considered
trustworthy; in contrast, competence pertains to an
individual's ability to actualize their intentions.
Despite some concerns expressed by Friehs et al. (2022), the SCM has been used as a validated scale in various stereotype studies regarding humans and brands (Kervyn, Fiske, and Malone, 2022; Fournier and Alvarez, 2012; Liu et al., 2022), artificial intelligence (Kim, Kang, and Bae, 2022; Ahn, Kim, and Sung, 2022), robots (Tay, Jung, and Park, 2014), and even virtual influencers (El Hedhli et al., 2023).
Fiske et al. (2002) explicitly report on the age stereotype in SCM studies, notably that older people tend to be rated lower on perceived competence and higher on warmth. Thus, the main contribution of this article is to integrate SCM and CASA, using experimental design studies to assess how people evaluate the dimensions of warmth and competence when interacting with different chatbot personas (younger vs. older).
We hypothesize that the chatbot's persona (young vs. old) has a more positive impact on perceived warmth when older and a more positive impact on perceived competence when younger. Accordingly, younger chatbot personas will enhance the perception of competence while simultaneously diminishing perceptions of warmth; conversely, at least regarding technology, older AI personas will be evaluated as warmer but less competent than younger ones.
3 OVERVIEW OF STUDIES
In our research, we designed two experiments (Study
1 and Study 2) to assess the influence of the age of AI
personas on their evaluation, grounded in the SCM
framework (Fiske, Cuddy, and Glick, 2007). In these studies, we systematically varied the AI personas' age cues to assess their respective impacts on the perceived dimensions of warmth and competence.
Both studies asked participants to evaluate a pre-recorded interaction between a chatbot named "Dave" and a user seeking recommendations for purchasing a new wireless mouse. We chose the wireless mouse based
on previous studies where the product was found to
be relevant but utilitarian, lacking significant
emotional involvement or hedonic features (Ahn,
Kim, and Sung, 2022). To provide hints about the
chatbot's personality, we used AI-generated images.
All chatbots presented in the pre-recorded
interactions were referred to simply as "Dave." Other
chatbot characteristics, such as the text used,
remained identical for all interactions.
In both studies, participants were randomly
directed to one of the interactions (young vs. old) and
responded to six questions related to the SCM using
a 9-point Likert scale, attention checks, and
manipulation checks. Additionally, some questions
related to participant profiles, such as age and gender,
were included.
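To make the scoring step concrete, the sketch below shows how six 9-point Likert responses can be collapsed into the two SCM composites analyzed in Studies 1 and 2. It is a minimal Python illustration with hypothetical item names and made-up ratings; the actual questionnaire wording differs, and the original analyses were run in SPSS.

```python
import pandas as pd

# Hypothetical responses: six SCM items on a 9-point Likert scale,
# three loading on warmth and three on competence. Item names and
# ratings are illustrative, not the actual questionnaire data.
df = pd.DataFrame({
    "condition": ["young", "old", "young", "old"],
    "warm_1": [7, 6, 8, 5], "warm_2": [6, 7, 8, 6], "warm_3": [7, 6, 9, 5],
    "comp_1": [8, 7, 7, 8], "comp_2": [7, 8, 6, 7], "comp_3": [8, 7, 7, 8],
})

# Composite scores: mean of the items belonging to each SCM dimension.
df["WARMTH"] = df[["warm_1", "warm_2", "warm_3"]].mean(axis=1)
df["COMPETENCE"] = df[["comp_1", "comp_2", "comp_3"]].mean(axis=1)

# Per-condition descriptives, analogous to Tables 1 and 2.
print(df.groupby("condition")[["WARMTH", "COMPETENCE"]].agg(["mean", "std", "count"]))
```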
3.1 Study 1
In this study, we aimed to identify the impacts on the
"warmth" and "competence" dimensions resulting
from variations in the age of the persona adopted by
the chatbot.
Using the Prolific platform, we collected 203
observations, one of which did not pass the attention
check (“What was the name of the chatbot on the
interaction you just saw?”). This was excluded from
the study, leaving 202 valid observations, which were
divided into 100 in the "Young Dave" condition
(watching the interaction with the young chatbot) and
102 in the "Old Dave" condition (watching the
interaction with the old chatbot). Figure 1 shows the
pictures used to represent the chatbot’s persona in the
first experiment.
Besides seeking confirmation of the chatbot's
name (Dave), we created a manipulation check, in
which the participant had to evaluate the chatbot's
age. Participants rated Old Dave's age as 59 on average (Std. Deviation = 19.8) and Young Dave's age as 28 on average (Std. Deviation = 13.5).
Therefore, we considered all 202 observations as
valid. Female participants comprised 60% of the
sample and male participants 40% (one participant
did not disclose gender), all of them declaring English
to be their main language. The average age of the participants was 37.1 years: 37.7 for female participants and 36.3 for males.
Figure 1: Illustrations used as stimuli to represent the Old/Young Dave chatbot persona in Study 1.
Preliminary analysis using SPSS indicated that the questionnaire related to the SCM demonstrated satisfactory reliability (COMPETENCE Cronbach's Alpha = 0.914; WARMTH Cronbach's Alpha = 0.910), confirming the scale's applicability.
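For readers who want to reproduce this reliability check outside SPSS, Cronbach's alpha follows directly from its textbook formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale). The sketch below is a minimal Python implementation applied to hypothetical ratings, not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 4 respondents x 3 warmth items (illustrative values only).
warmth_items = [[7, 6, 7], [6, 7, 6], [8, 8, 9], [5, 6, 5]]
print(round(cronbach_alpha(warmth_items), 3))
```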
3.2 Results
We conducted a MANOVA to identify significant
differences in the means of the "warmth" and
"competence" dimensions among participants
assigned to each of the conditions of AI_PERSONA
(Old Dave and Young Dave). The multivariate
analysis did not reveal any significant differences in
the perceptions of these two dimensions.
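As an illustration of this analysis step, a one-way MANOVA with warmth and competence as joint dependent variables can be specified as below. This is a sketch using statsmodels on synthetic data whose means and standard deviations merely echo Table 1; it is not the original SPSS analysis.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Synthetic stand-in for the Study 1 data (one row per participant);
# means/SDs loosely echo Table 1. The original analysis used SPSS.
rng = np.random.default_rng(0)
n = 100
df = pd.DataFrame({
    "AI_PERSONA": ["old"] * n + ["young"] * n,
    "COMPETENCE": np.r_[rng.normal(6.49, 1.72, n), rng.normal(6.57, 1.67, n)],
    "WARMTH":     np.r_[rng.normal(6.01, 1.88, n), rng.normal(5.94, 1.88, n)],
})

# One-way MANOVA: do warmth and competence jointly differ by persona?
fit = MANOVA.from_formula("WARMTH + COMPETENCE ~ AI_PERSONA", data=df)
print(fit.mv_test())  # reports Wilks' lambda, Pillai's trace, etc.
```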
Contrary to previous work on SCM and CASA, it seems that, at least for illustration-like chatbot personas, age stereotypes are not elicited. Therefore, our main hypothesis, that the "older" Dave chatbot would elicit the elderly stereotype (being warmer and less competent than "young" Dave), was not supported. Given these non-significant differences, we redesigned the experiment using more realistic photos instead of illustrations, as the use of illustration-like stimuli could interfere with the perceived level of anthropomorphism.
Table 1: Descriptive statistics and MANOVA results for Study 1 (Young Dave vs. Old Dave).

Dependent Variable   Chatbot Persona   Mean     Std. Deviation   N     df   F       Sig.
COMPETENCE           Old Dave          6.4854   1.71992          103   1    0.113   0.737
                     Young Dave        6.5657   1.66834          99
WARMTH               Old Dave          6.0065   1.88272          103   1    0.064   0.800
                     Young Dave        5.9394   1.88283          99
3.3 Study 2
Given the results and considerations of Study 1, we replicated the same experiment, with the same pre-recorded interaction, but with more realistic anthropomorphic images (Figure 2) instead of illustration-like pictures. As in the previous experiment, the images were AI-generated.
Figure 2: More realistic pictures used as stimuli to represent the Real Old/Young Dave chatbot in Study 2.
Using Prolific again, we collected 206 completed
questionnaires. Four respondents did not pass the
attention check (“What was the name of the chatbot
on the interaction you just saw?”) and were excluded,
leaving 202 valid observations: 100 in the "Young
Dave" condition (watching the interaction with the
young chatbot) and 102 in the "Old Dave" condition
(watching the interaction with the old chatbot).
Female participants constituted 60.8% and male participants 39.1% (two participants identified as non-binary), all of whom declared that English was their primary language. The average age of the participants was 38.2 years: 38.0 for female participants and 38.9 for males.
Besides seeking confirmation of the chatbot's
name (Dave) as an attention check, we also conducted
a manipulation check, in which the participant had to
evaluate the chatbot's age (N=200). Participants rated Old Dave's age as 44.8 on average (N=100; Std. Deviation = 16.0) and Young Dave's age as 27.2 on average (N=100; Std. Deviation = 11.3).
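Although only the descriptive means are reported here, a formal manipulation check could compare perceived ages with Welch's t-test, as sketched below on synthetic ratings generated to match the reported means and standard deviations (the raw data are not reproduced here).

```python
import numpy as np
from scipy import stats

# Synthetic perceived-age ratings matching the reported means and SDs
# (Old Dave: M = 44.8, SD = 16.0; Young Dave: M = 27.2, SD = 11.3).
rng = np.random.default_rng(1)
old_age = rng.normal(44.8, 16.0, 100)
young_age = rng.normal(27.2, 11.3, 100)

# Welch's t-test (unequal variances): do the personas differ in perceived age?
t_stat, p_value = stats.ttest_ind(old_age, young_age, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
```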
3.4 Results
Again, after verifying the SCM scale's reliability (COMPETENCE Cronbach's Alpha = 0.883; WARMTH Cronbach's Alpha = 0.904), we
conducted a MANOVA to identify significant
differences in the means of the "warmth" and
"competence" dimensions among respondents
assigned to each of the conditions of chatbot’s
persona (Old Dave vs. Young Dave).
Table 2: Descriptive statistics and MANOVA results for Study 2 (young vs. old with more realistic images).

Dependent Variable   Chatbot Persona   Mean     Std. Deviation   N     df   F       Sig.
COMPETENCE           Old Dave          6.2680   1.82938          102   1    3.034   0.083
                     Young Dave        6.6800   1.51455          100
WARMTH               Old Dave          5.6340   2.06789          102   1    5.046   0.026
                     Young Dave        6.2000   1.45412          100
In accordance with our initial analysis and understanding, the more realistic representations used as stimuli were indeed able to create differences in respondents' perceptions. However, while we expected the older persona to be rated warmer, REAL YOUNG DAVE was rated warmer than REAL OLD DAVE (OLD DAVE = 5.6340; YOUNG DAVE = 6.1961).
Since the average estimated age of REAL OLD DAVE was 45, we postulate that the perceived age was not high enough to characterize the picture as "elderly"; therefore, the associated prejudices did not arise.
4 CONCLUSIONS
Our research explores the impact of age-related
variations in a chatbot's persona on the two
fundamental dimensions of "warmth" and
"competence." In two separate studies, we presented
participants with interactions featuring a chatbot
named "Dave" portrayed as young and older
individuals.
However, a preliminary analysis in the first study
did not reveal any significant differences in the
perceptions of these dimensions between the two age
groups. This outcome suggests that the abstract and
less anthropomorphic nature of illustrations might
have hindered the activation of age-related
stereotypes. This result goes against previous SCM and CASA research, implying that, at least when experimenting with illustration-like AI-generated images as chatbot personas, age stereotypes are not elicited. Consequently, considering these results and our
theoretical base, we conducted a follow-up
experiment replacing the illustrations with more
realistic photos to better understand whether the
persona's realism would influence the participants'
judgments. In contrast, in the second experiment, the more realistic stimuli did have an effect on perceptions, highlighting how more lifelike depictions may prompt stronger stereotype-related judgments.
However, contrary to the hypothesized outcome
that the older persona would exhibit higher warmth,
the younger persona was rated warmer, likely because
the perceived age of the older persona was
insufficient to evoke elderly-related biases. These
findings highlight the importance of ensuring that
stimuli used in stereotype studies effectively convey
the intended traits, since prior research suggests that
stereotype activation depends on clear and
pronounced cues (Fiske, 2017).
5 MANAGERIAL IMPLICATIONS
On a theoretical level, our study contributes to the
ongoing discourse surrounding human-AI interactions
by shedding light on the complex interplay between
anthropomorphism, stereotypes, and user perceptions.
By exploring the impact of age stereotypes on
evaluations of chatbot personas, we extend existing
research on the "computers are social actors"
paradigm, demonstrating its relevance in
contemporary AI applications. Moreover, our
findings raise questions about the role of stereotypes in shaping users' behaviors and evaluations in human-AI interactions.
Partially reinforcing the principles outlined in the "computers are social actors" paradigm (Nass et al., 1995), our study offers insights and questions for managers who are deploying, or have already deployed, customer-service chatbots, a technology seen as promising for service providers (Nicolescu and Tudorache, 2022). The choice of which "persona" or personality to attribute to a chatbot deserves serious research, because it directly influences consumers' perceptions of warmth and competence (Lian and Lian, 2023), which in turn affect their trust (Choung et al., 2023) and satisfaction levels (Hsu and Lin, 2023). Thus, we
emphasize the importance of matching chatbot
personas with user expectations to optimize trust and
engagement, a critical insight for digital customer
service managers seeking to enhance user satisfaction
and the effectiveness of their channels.
6 LIMITATIONS
Despite its limitations, our study contributes to the understanding of the dynamics of human-AI interactions, particularly regarding the implications of anthropomorphism and
stereotypes on user perceptions. While the use of pre-
recorded interactions in this study offers a controlled
environment for assessing respondents’ perceptions,
it lacks the complexity of real-world interactions.
Future research could benefit from incorporating live
interactions between participants and AI systems to
capture the dynamic nature of human-AI interactions
more accurately. By doing so, researchers could
observe how perceptions of warmth and competence
evolve over the course of a conversation, providing
insights into the sustained impact of
anthropomorphism and stereotypes in longer
interactions.
Moreover, the study's focus on a single utilitarian product (a wireless mouse) limits the generalizability of its findings to other contexts, including how different personas may be evaluated differently depending on the subject of the interaction. To address this limitation,
future research could explore interactions with a
broader range of products, including those with
varying levels of emotional involvement or hedonic
features.
Furthermore, while the study examines age
stereotypes, there is a notable absence of exploration
into other types of stereotypes that may influence
perceptions of chatbot personas. Future research
could expand upon this by investigating how
stereotypes related to gender, ethnicity, socio-
economic status, or technological expertise impact
users' willingness to accept suggestions and engage
with AI systems. By examining a wider array of
stereotypes, researchers can gain a more
comprehensive understanding of the factors that
shape users' perceptions and behaviors in human-AI
interactions.
REFERENCES
Aggarwal, Pankaj, and Ann L. McGill. 2012. “When
Brands Seem Human, Do Humans Act Like Brands?
Automatic Behavioral Priming Effects of Brand
Anthropomorphism.” Journal of Consumer Research
39 (2): 307–23.
Ahn, Jungyong, Jungwon Kim, and Yongjun Sung. 2022.
“The Effect of Gender Stereotypes on Artificial
Intelligence Recommendations.” Journal of Business
Research 141: 50–59.
Choung, Hyesun, Prabu David, and Arun Ross. 2023. “Trust in AI and Its Role in the Acceptance of AI Technologies.” International Journal of Human–Computer Interaction 39 (9): 1727–39.
Cuddy, Amy J. C., Susan T. Fiske, Virginia S. Y. Kwan,
Peter Glick, Stéphanie Demoulin, Jacques-Philippe
Leyens, Michael Harris Bond, et al. 2009. “Stereotype
Content Model across Cultures: Towards Universal
Similarities and Some Differences.” British Journal of
Social Psychology 48 (1): 1–33.
Cuddy, Amy J.C., Peter Glick, and Anna Beninger. 2011.
“The Dynamics of Warmth and Competence
Judgments, and Their Outcomes in Organizations.”
Research in Organizational Behavior 31 (January): 73–
98.
De Kervenoael, Ronan, Alexandre Schwob, Rajibul Hasan,
and Evangelia Psylla. 2024. “SIoT Robots and
Consumer Experiences in Retail: Unpacking Repeat
Purchase Intention Drivers Leveraging Computers Are
Social Actors (CASA) Paradigm.” Journal of Retailing
and Consumer Services 76 (January): 103589.
Doorn, Jenny van, Martin Mende, Stephanie M. Noble,
John Hulland, Amy L. Ostrom, Dhruv Grewal, and J.
Andrew Petersen. 2017. “Domo Arigato Mr. Roboto:
Emergence of Automated Social Presence in Organiza-
tional Frontlines and Customers’ Service Experiences.”
Journal of Service Research 20 (1): 43–58.
Edwards, Chad, Autumn Edwards, Brett Stoll, Xialing Lin,
and Noelle Massey. 2019. “Evaluations of an Artificial
Intelligence Instructor’s Voice: Social Identity Theory
in Human-Robot Interactions.” Computers in Human
Behavior 90 (January): 357–62.
El Hedhli, Kamel, Haithem Zourrig, Amr Al Khateeb, and
Ibrahim Alnawas. 2023. “Stereotyping Human-like
Virtual Influencers in Retailing: Does Warmth Prevail
over Competence?” Journal of Retailing and Consumer
Services 75 (November): 103459.
Epley, Nicholas, Adam Waytz, and John T. Cacioppo.
2007. “On Seeing Human: A Three-Factor Theory of
Anthropomorphism.” Psychological Review 114 (4):
864–86.
Fiske, Susan T. 2017. “Prejudices in Cultural Contexts:
Shared Stereotypes (Gender, Age) Versus Variable
Stereotypes (Race, Ethnicity, Religion).” Perspectives
on Psychological Science 12 (5): 791–99.
Fiske, Susan T., Amy J. C. Cuddy, Peter Glick, and Jun Xu.
2002. “A Model of (Often Mixed) Stereotype Content:
Competence and Warmth Respectively Follow from
Perceived Status and Competition.” Journal of
Personality and Social Psychology 82 (6): 878–902.
Fiske, Susan T., Amy J.C. Cuddy, and Peter Glick. 2007.
“Universal Dimensions of Social Cognition: Warmth
and Competence.” Trends in Cognitive Sciences 11 (2):
77–83.
Fournier, Susan, and Claudio Alvarez. 2012. “Brands as
Relationship Partners: Warmth, Competence, and in-
Between.” Journal of Consumer Psychology 22 (2):
177–85.
Friehs, Maria-Therese, Patrick F. Kotzur, Johanna
Böttcher, Ann-Kristin C. Zöller, Tabea Lüttmer, Ulrich
Wagner, Frank Asbrock, and Maarten H. W. Van Zalk.
2022. “Examining the Structural Validity of Stereotype
Content Scales: A Preregistered Re-Analysis of
Published Data and Discussion of Possible Future
Directions.” International Review of Social Psychology
35 (1): 1.
Hong, Joo-Wha, Sukyoung Choi, and Dmitri Williams.
2020. “Sexist AI: An Experiment Integrating CASA
and ELM.” International Journal of Human–Computer
Interaction 36 (20): 1928–41.
Hsu, Chin-Lung, and Judy Chuan-Chuan Lin. 2023.
“Understanding the User Satisfaction and Loyalty of
Customer Service Chatbots.” Journal of Retailing and
Consumer Services 71 (March): 103211.
Kervyn, Nicolas, Susan T. Fiske, and Chris Malone. 2022.
“Social Perception of Brands: Warmth and Competence
Define Images of Both Brands and Social Groups.”
Consumer Psychology Review 5 (1): 51–68.
Kim, Juran, Seungmook Kang, and Joonheui Bae. 2022.
“Human Likeness and Attachment Effect on the
Perceived Interactivity of AI Speakers.” Journal of
Business Research 144 (May): 797–804.
Lian, Lee Kim, and Song Bee Lian. 2023. “Examining
Anthropomorphism of Chatbots and Its Effect on User
Satisfaction and User Loyalty in the Service Industry.”
Electronic Journal of Business and Management 8 (1):
1–14.
Liu, Fu, Haiying Wei, Zhenzhong Zhu, and Haipeng
(Allan) Chen. 2022. “Warmth or Competence: Brand
Anthropomorphism, Social Exclusion, and
Advertisement Effectiveness.” Journal of Retailing and
Consumer Services 67 (July): 103025.
Nass, Clifford, Youngme Moon, B. J. Fogg, Byron Reeves,
and D. Christopher Dryer. 1995. “Can Computer
Personalities Be Human Personalities?” International
Journal of Human-Computer Studies 43 (2): 223–39.
Nass, Clifford, Jonathan Steuer, and Ellen R. Tauber. 1994.
“Computers Are Social Actors.” In Proceedings of the
SIGCHI Conference on Human Factors in Computing
Systems Celebrating Interdependence - CHI ’94, 72–
78. Boston, Massachusetts, United States: ACM Press.
Nicolescu, Luminița, and Monica Teodora Tudorache.
2022. “Human-Computer Interaction in Customer
Service: The Experience with AI Chatbots—A
Systematic Literature Review.” Electronics 11 (10):
1579.
Sreejesh, S., and M.R. Anusree. 2017. “Effects of
Cognition Demand, Mode of Interactivity and Brand
Anthropomorphism on Gamers’ Brand Attention and
Memory in Advergames.” Computers in Human
Behavior 70 (May): 575–88.
Tay, Benedict, Younbo Jung, and Taezoon Park. 2014.
“When Stereotypes Meet Robots: The Double-Edge
Sword of Robot Gender and Personality in Human–
Robot Interaction.” Computers in Human Behavior 38
(September): 75–84.