featuring a centralised exchange and remote graphical trading terminals was employed. Each experiment had a fixed duration, and pitted 6 robot traders against 6 human traders; both robots and humans were split equally into 3 buyers and 3 sellers. The experiments simulated sales trading: real-world sales traders aim to maximise their own profit, which is the sum of the commissions they charge their clients for every sale or purchase, executed on the clients' behalf, of a specific amount of a certain commodity at a given price. In the simulated sales trading sessions, a dedicated component of the automated experimental economics system simulated the agents' clients, communicating to the agents (both human and robot) the clients' intention to buy or sell the virtual commodity, together with the quantity and the price; De Luca and Cliff refer to such instructions as assignments, and to the predetermined sequence of assignments distributed to each agent over the course of an experiment as that agent's schedule. At the start of the experiment, the system releases the first assignment in the agent's schedule, and the agent starts trading it; when (and if) the assignment is traded, the system distributes the second assignment to the agent, and so on, until no assignments are left for that agent or the experiment time is up.
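The completion-gated release of an agent's schedule can be sketched as follows; this is our own illustrative pseudocode, not the actual OpEx implementation, and all names are hypothetical.

```python
# Hypothetical sketch of completion-gated assignment release: the next
# assignment is distributed only once the current one has been fully traded.
from collections import deque

def run_assignment_schedule(schedule, trade_assignment, time_left):
    """Release assignments one at a time from the agent's schedule."""
    pending = deque(schedule)
    completed = []
    while pending and time_left():
        current = pending.popleft()      # system releases the next assignment
        if trade_assignment(current):    # agent trades it to completion
            completed.append(current)
        else:
            break                        # experiment time ran out mid-trade
    return completed

# Toy usage: an agent that completes every trade within the session.
schedule = [("buy", 5, 100), ("sell", 3, 102)]
done = run_assignment_schedule(schedule, lambda a: True, lambda: True)
# done == schedule: both assignments were released and traded in order
```

The key property is that release is demand-driven: an agent that cannot trade its current assignment never receives the next one.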
De Luca et al. (De Luca et al., 2011) subsequently ran further human vs. robot (ZIP, AA) experiments on a more realistic market previously explored by Cliff and Preist (Cliff and Preist, 2001): in it, the schedule of each agent still consisted of a fixed number of clients' instructions, but the instructions were released periodically at predetermined times, until the market simulator was stopped. Using Cliff and Preist's nomenclature, we will call such timed instructions permits, and we will refer to markets operating on a permit schedule as continuous-replenishment, or simply continuous, markets. Crucially, unlike assignments, permits are released regardless of whether or not the agent has finished trading the previous permit: they are triggered solely by time. Yet the results De Luca et al. found stood in sharp contrast to those obtained previously: humans outperformed robots in the continuous market simulated in (De Luca et al., 2011), although their victory was not as clear-cut as that of the robots in (Das et al., 2001).
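The timed-release property of permits, as opposed to completion-gated assignments, can be illustrated with a minimal sketch; the function and its parameters are our own illustration, not part of any cited system.

```python
# Hypothetical sketch of continuous replenishment: permits are released
# purely on a timer, so untraded permits can accumulate in an agent's queue.
def replay_permits(permit_times, trade_times, stop_time):
    """Count permits released, and still untraded, when the market stops."""
    released = sum(1 for t in permit_times if t <= stop_time)
    traded = sum(1 for t in trade_times if t <= stop_time)
    return released, released - traded

# Permits scheduled every 10 time units; the agent traded only twice.
released, backlog = replay_permits([0, 10, 20, 30], [4, 17], 25)
# By t=25 three permits are out but only two are traded: a backlog of one.
```

Under this regime, a slow trader falls behind its schedule rather than delaying it, which is what makes the continuous market more realistic.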
The finding of such a trading performance imbalance in favour of humans was as controversial as it was unexpected: first, because the two preceding human vs. robot trading studies had shown an undisputed victory of the robots; second, because with the realism added by the novel continuous-replenishment mechanism, one would anticipate a scenario closer to the real world, where the use of automated traders is widespread precisely because of their excellent performance; and third, because, based on common sense, one would generally expect machines to be better than humans at numerical tasks such as trading.
The matter was later studied by Cartlidge and Cliff (Cartlidge and Cliff, 2012; Cartlidge and Cliff, 2013), who confirmed that, in a market continuously replenished with currency and stock, human traders perform better than robot traders (AA).
Cartlidge and Cliff also revealed an undesired behaviour in OpEx's AA implementation, whereby AA robot buyers (sellers) would systematically trade with the seller (buyer) offering (bidding) the best price whenever the difference between the two outstanding bid and ask prices, divided by the mean of those two prices, dropped below a fixed threshold. In this context, it is useful to recall that in a CDA the outstanding bid price and ask price are often referred to as the best bid and best ask; the difference between the best ask and the best bid is commonly called the spread; and we refer to the spread divided by the mean of the best prices as the relative spread. AA's behaviour is then usually referred to as crossing the spread or jumping the spread.
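The quantities just defined, and the triggering condition, can be written out explicitly; note that the 1% threshold below is an arbitrary placeholder for illustration, not the value used in OpEx's AA implementation.

```python
# Illustrative computation of the spread quantities defined in the text.
def relative_spread(best_bid, best_ask):
    """Spread divided by the mean of the two outstanding (best) prices."""
    spread = best_ask - best_bid
    return spread / ((best_ask + best_bid) / 2.0)

def would_jump_spread(best_bid, best_ask, threshold=0.01):
    """AA-style condition: trade at the opposing best price whenever the
    relative spread drops below a fixed threshold (placeholder value)."""
    return relative_spread(best_bid, best_ask) < threshold

# Best bid 99.5, best ask 100.5: spread = 1.0, mid = 100, relative = 1%,
# which is not strictly below the threshold, so no jump is triggered.
assert not would_jump_spread(99.5, 100.5)
# Best bid 99.6, best ask 100.4: relative spread 0.8% < 1%, so AA jumps.
assert would_jump_spread(99.6, 100.4)
```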
Thus, in further experiments, Cartlidge and Cliff pitted human traders against robot traders implementing a revised version of the AA strategy: one that was free of the unwanted spread-jumping behaviour¹. They found that, under those conditions, robots performed better than humans, and thus concluded that the spread-jumping bug had caused the robot traders to perform worse, both in their own experiments and in De Luca et al.'s previous work (De Luca et al., 2011). Indeed, this reassuring victory of robot traders over human traders was the most recent finding on mixed human/robot experimental financial markets at the time we wrote this paper.
We have seen how, in continuous markets, all players receive permits to buy or sell continuously throughout the simulation. We will call orders the instructions sent by the trading agents (human and robot) to the market; agents send orders to the market to trade the permits they receive from their clients: a new order is first sent to the market, and can then be amended (i.e., its quantity and price can be modified) or cancelled (i.e., removed from the market). Here, we
¹ In reality, the AA strategy would still jump the spread methodically, but the minimum value of the relative spread that triggered the aggressive behaviour had been reduced considerably with respect to the value previously used. For more details on the spread-crossing behaviour of the AA robot, refer to section 3.
Why Robots Failed - Demonstrating the Superiority of Multiple-order Trading Agents in Experimental Human-agent Financial Markets