not human-like. I will call this the objection of the un-
known agent.
This objection is superficially plausible, since an
unknown mind might not display its intelligence in
ways that are humanly perceivable or detectable by
our tests. Then again, it might; precisely because the
mind is unknown, we cannot rule out that possibility either.
Furthermore, if we had independent evidence that the
mind in question did in fact generate artifacts similar
to those produced by humans, then the force of the ob-
jection would vanish. Even barring such independent
evidence, the degree to which the unknown agent pro-
duced artifacts similar to those produced by humans is
the degree to which our tests might (correctly) detect
its intelligence. Thus, with regard to intelligent design
in nature, we face an empirical question: how simi-
lar are the artifacts of nature to human artifacts? The
fact that springs (Shin and Tam, 2007), gears (Bur-
rows and Sutton, 2013), compasses (Qin et al., 2015),
Boolean logic networks (Robinson, 2006), digital
codes (Hood and Galas, 2003) and other human in-
ventions have been found to preexist in biology at
least suggests the possibility that a mind behind natu-
ral phenomena would be sufficiently similar to human
intelligence to allow for its detection through observa-
tion of its engineered artifacts.
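To make one of these parallels concrete, consider a toy Boolean logic network of the sort studied in gene regulation. The three-node wiring below, given in Python, is invented purely for illustration and corresponds to no particular biological system:

    def step(state):
        # One synchronous update of a toy three-node Boolean network.
        # Each node's next value is a logic function of the current
        # state, as in simple models of gene regulation.
        return {
            "A": state["B"] and not state["C"],  # A induced by B, repressed by C
            "B": state["A"] or state["C"],       # B induced by either A or C
            "C": not state["A"],                 # C repressed by A
        }

    state = {"A": True, "B": False, "C": False}
    for _ in range(4):
        state = step(state)
        print(state)

Even such a toy network computes with the same logical vocabulary as human-engineered circuits, which is the sense of similarity at issue.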
Finally, the objection, at its core, concerns false
negatives, not false positives. The objection is basi-
cally that an unknown agent may escape detection,
not that an unknowing system might be mistaken for
an intelligent one. Given the danger of falsely attributing in-
telligence to unintelligent systems, this is exactly the
form of bias we want. Furthermore, if we did detect
an unknown intelligence behind a set of objects, the
detection itself would be evidence that the intelligence
is sufficiently similar to our own, since such tests
only output positive classifications when artifacts
resemble those anticipated by
humans.
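This bias can be sketched directly. Assuming only that artifacts can be assigned some similarity score against anticipated human designs (the score below is a hypothetical stand-in for a real measure, and all names are illustrative), a detector of the kind described would take the following shape in Python:

    THRESHOLD = 0.95  # deliberately strict: tolerate false negatives,
                      # never court false positives

    def similarity_to_human_design(artifact):
        # Hypothetical score in [0, 1]; a real detector would derive
        # this from structural features of the artifact.
        return artifact.get("similarity", 0.0)

    def detect_design(artifact):
        # A positive verdict is issued only when the artifact
        # sufficiently resembles human designs. A dissimilar
        # intelligence may escape detection (a false negative), but an
        # unintelligent system is unlikely to be flagged (a false
        # positive).
        return similarity_to_human_design(artifact) >= THRESHOLD

    print(detect_design({"similarity": 0.97}))  # True: detected
    print(detect_design({"similarity": 0.40}))  # False: possibly a miss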
Therefore, the possibility of unknown agents does
not create sufficient separation between the Turing
Test and other methods of inferring design to warrant
any distinction.
5.4 The Interrogation Objection
Lastly, there remains a seemingly powerful argument
that can be raised against the full equivalence of the
Turing Test to other design detection methods: the
call-and-response nature of the Turing Test seems ab-
sent from other methodologies, and thus might be
used to validate one while invalidating the other. The
Turing Test presupposes that one can interrogate the
subject in question, gathering specific responses to
specific questions, whereas one cannot demand spe-
cific answers from nature.
To ensure this argument does not point to a dis-
tinction without a difference, we must specify exactly
what it is about the call-and-response that makes it
uniquely suitable to convey the existence of intelli-
gence, and ensure that this same quality cannot be at-
tained by other means. One such feature could be that
in answering posed questions, a system is forced to
confront and overcome unanticipated or novel chal-
lenges. However, the same can be said of biological
systems, which have shown adaptive architectures for
overcoming a variety of fluctuating and novel envi-
ronmental challenges. When viewed from this per-
spective, we have systems that are given input con-
figurations (e.g., questions from a judge or environ-
mental structure in nature) and create response config-
urations that convey intelligence (e.g., an apropos re-
sponse or clever environmental adaptation). Further-
more, nature can be probed for specific answers in the
form of experiments and tested hypotheses, thus mim-
icking, albeit imperfectly, the call-and-response struc-
ture of the Turing Test. Thus, this distinction might
not present any real difference.
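The shared shape can be put schematically. In the sketch below, every name is a hypothetical stand-in and the judge is a crude relevance check rather than a serious measure of intelligence; the point is only that interrogating a chatbot and experimenting on nature both probe a system and evaluate its response:

    from typing import Callable

    # A "system" maps an input configuration (a judge's question, an
    # environmental condition) to a response configuration (an answer,
    # an observed adaptation).
    System = Callable[[str], str]
    Judge = Callable[[str, str], bool]

    def probe(system: System, challenge: str, judge: Judge) -> bool:
        # Pose a challenge, then judge whether the response conveys
        # intelligence relative to that challenge.
        response = system(challenge)
        return judge(challenge, response)

    # Trivial stand-ins so the sketch runs end to end.
    def chatbot(question):
        return "I would rather talk about your occupation."  # evasive

    def organism(condition):
        return "adaptive response to " + condition

    def relevance_judge(challenge, response):
        # Crude proxy: does the response engage with the challenge?
        return any(word in response for word in challenge.lower().split())

    print(probe(chatbot, "Why do birds suddenly appear?", relevance_judge))  # False
    print(probe(organism, "fluctuating temperature", relevance_judge))       # True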
Examining this critique in greater depth, we see
that even given the call-and-response structure of
the Turing Test, one cannot fully control the re-
sponses given, thus lessening the force of this objec-
tion. While an interrogator is free to ask the system
in a Turing Test any question, there is no guarantee
that the response will be sufficient or even rel-
evant. For example, consider the responses of Eu-
gene Goostman, a computer system claimed to have
passed the Turing Test by fooling a small set of human
judges into thinking it was human (Auerbach, 2014).
The following excerpts were taken from transcripts of
conversations held with Goostman (Auerbach, 2014):
Judge: Why do birds suddenly appear?
Eugene: Just because 2 plus 2 is 5! By the way, what’s your occupation? I mean - could you tell me about your work?
...
Judge: It is ok, I get sick of sick people. How is your stomach feeling today? Is it upset maybe?
Eugene: I think you can’t smile at all. I bet you work in a funeral agency.
Clearly, the output of such a system may have lit-
tle or no correlation with the input questions. When the
responses are unrelated to the questions asked, we are
essentially given an arbitrary collection of sentences,
having no say in their contents, similar to the situa-
tion in nature. However, even such a rigid collection