Authors:
Kevin Warwick and Huma Shah
Affiliation:
Coventry University, United Kingdom
Keyword(s):
Artificial Intelligence, Conversation, Imitation Game, Intelligent Agents, Linguistic Devices.
Related Ontology Subjects/Areas/Topics:
Agents; AI and Creativity; Applications; Artificial Intelligence; Cognitive Systems; Computational Intelligence; Conversational Agents; Evolutionary Computing; Knowledge Engineering and Ontology Development; Knowledge-Based Systems; Natural Language Processing; Pattern Recognition; Soft Computing; Symbolic Systems
Abstract:
What do humans say or ask beyond initial greetings? Are humans always the best at conversation? How easy is it to distinguish an intelligent human from an ‘intelligent agent’ purely from their responses to unrestricted questions during a conversation? This paper presents an insight into the nature of human communication, including behaviours and interactions, drawn from one type of interaction: stranger-to-stranger discourse, realised by implementing Turing’s question-answer imitation games at Bletchley Park, UK, in 2012 as part of the Turing centenary commemorations. The authors contend that the effects of lying, misunderstanding, humour and lack of shared knowledge during human-machine and human-human interactions can provide an impetus for building better conversational agents, which are increasingly deployed as virtual customer service agents. Applying the findings could improve human-robot interaction, for example through conversational companions for the elderly or unwell. But do we always want these agents to talk the way humans do? Suggestions for advancing intelligent agent conversation are provided.