such as A.L.I.C.E. Another, Talkbot, has a cartoon robot as its character. Others, like
Jabberwock, have no image.
Conversationally, a few responses were interesting, but none gave the impression of
human-like analogy-making or metaphor use. One ACE produced the human-like response
"pottering about in the garden" when discussing what to do when the weather is good
(Frizella, CBC).
Jabberwacky gave the response "I play in the evenings. The piano mostly" to the
question "what do you play?". Jabberwacky is a 'captured thoughts' system, the sum of
all its interactions with human users. For further discussion of Jabberwacky see
'Constraining Random Dialogue in a Modern Eliza' [11]. Project Zandra ACE claims
22,700 patterns with a capacity for short-term learning, allowing it to "express all the
ways humans express a thought" whilst "tracking current topic" to maintain context in
conversation (source: CBC). However, its dialogue provided no evidence of this: it
repeated "By the way, who am I talking to anyway? / What's your name?" throughout
the conversation.
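The behaviour described above is characteristic of pattern-based ACE: when no stored pattern matches the user's input, the system falls back on a stock prompt, which is why the same question can recur throughout a conversation. A minimal sketch of this mechanism (a hypothetical illustration, not Zandra's actual implementation; the patterns and responses are invented for the example) might look like:

```python
import re

# Hypothetical sketch of a pattern-matching ACE: a list of
# (pattern, canned response) pairs, plus a stock fallback prompt
# that is returned whenever no pattern matches.
PATTERNS = [
    (re.compile(r"\bweather\b", re.IGNORECASE),
     "I like pottering about in the garden when the weather is good."),
    (re.compile(r"\bwhat do you play\b", re.IGNORECASE),
     "I play in the evenings. The piano mostly."),
]

FALLBACK = "By the way, who am I talking to anyway? What's your name?"

def respond(utterance: str) -> str:
    """Return the first matching canned response, else the fallback."""
    for pattern, response in PATTERNS:
        if pattern.search(utterance):
            return response
    return FALLBACK
```

With only a finite pattern set, any utterance outside its coverage triggers the fallback, so a system with even tens of thousands of patterns can appear repetitive once the conversation strays from them.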
One ACE (Zero) is distinguished by its creator's claim that it was designed by a computer
as a means to develop natural language processing and fuzzy-logic script. Its knowledge
is said to comprise numerous logs of other people's conversations as a means of learning
[12], or 'convo-logging', a technique also used in other designs, such as Jabberwacky.
CBC 2005 overall winner Jabberwock, which won the 2003 Loebner bronze prize for the
most human-like machine, has entertainment as its sole purpose. Juergen Pirner, Jabberwock's
creator, makes no pretension that the programme is intelligent or contains knowledge.
However, he has used his background in journalism to produce a standard conversational
system that can discuss any topic.
3 Discussion
Chatterbox Challenge, as a competition to test artificial conversational systems, is merely
a culture-specific assessment of how ACE are faring against each other. Both the
question phase and the conversational phase, in attempting to gauge human-like
qualities from each ACE, put the systems at a disadvantage. For example, what if they were
judged by asking: if human, what type of human did you feel you were talking to, for
instance a normal human or one with a linguistic or psychological impairment?
Fundamentally, ACE designers tackle their system creation with the idea of imitating
what they think a human would say, along the lines Turing advocated in his 1950 paper.
Some, such as Carpenter's Jabberwacky, log all dialogues: convo-logging. According
to Carpenter, this is not simply regurgitating human users' utterances into other
conversations; Jabberwacky claims learning through interaction. Others, such as A.L.I.C.E.,
are modern Elizas, occasionally generating utterances that appear clever and at other
times ones that are meaningless and random. Most ACE lacked the human traits of sharing
personal information and revealing emotions. An important feature missing from all ACE,
a problem that designers may deem too hard to solve, is metaphor use. Analogy and
metaphor are aspects of human conversation that help convey information about an
event or experience not shared by all participants in the conversation. This is not
to say there are no uses for ACE.
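Convo-logging, as described above, can be pictured as storing each human reply against the bot utterance that preceded it, so the system can later answer a prompt with something a human once said in that same position. The following is a deliberately simplified sketch of that idea (an assumption for illustration, not Carpenter's actual design; the class name and fallback text are invented):

```python
from collections import defaultdict

class ConvoLogger:
    """Toy illustration of 'convo-logging': reuse human replies."""

    def __init__(self) -> None:
        # Maps a bot utterance to the human replies seen after it.
        self.log: defaultdict[str, list[str]] = defaultdict(list)

    def record(self, bot_said: str, user_replied: str) -> None:
        # "Learning through interaction": remember what a human said next.
        self.log[bot_said].append(user_replied)

    def reply(self, prompt: str) -> str:
        # Reuse a logged human reply to this prompt, if any exists.
        replies = self.log.get(prompt)
        return replies[0] if replies else "Tell me more."
```

For instance, after `record("What do you play?", "The piano mostly.")`, a later `reply("What do you play?")` returns the human's captured answer. Even this toy version shows why the technique is more than regurgitation only if the system also generalises across similar prompts, which the real Jabberwacky is claimed to do.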