allowing bidder participation to be affected by tender
experience. In summary, the agent-based modelling study simulated auction environments and design features that could not be explored through the field trials.
The paper is organized as follows. The next
section presents the case for agent-based modelling
in the design of auctions. Agent-based computa-
tional approaches are being increasingly utilized in
the economics literature to complement analytical and
human-experimental approaches (Epstein and Axtell,
1996; Tesfatsion, 2002). The distinguishing feature
of agent-based modelling is that it is based on exper-
imentation or simulation in a computational environ-
ment using an artificial society of agents that emulate
the behaviours of the economic agents in the system
being studied (Tesfatsion, 2002). These features make
the technique a convenient tool in contexts where an-
alytical solutions are intractable and the researcher
has to resort to simulation and/or in contexts where
modelling outcomes need to be enriched through the
incorporation of agent heterogeneity, agent interac-
tions, inductive learning, or other features. Section
3 presents the auction design features explored in the
study. These include budget levels, scope of conser-
vation activities, endogeneity of participation levels,
and two alternative pricing formats. Simulated results
are presented and discussed in Section 4. The final
section summarizes the study and draws conclusions.
2 AGENT-BASED AUCTION MODEL
Auction theory has focused on optimal auction de-
sign, but its results are usually valid only under very
restrictive assumptions on the auction environment
and the rationality of the players. Theoretical analysis
rarely incorporates computational limitations, of ei-
ther the mechanisms or the agents (Arifovic and Led-
yard, 2002). Experimental results (Erev and Roth,
1998; Camerer, 2003) demonstrate that the way peo-
ple play is better captured by learning models rather
than by the Nash equilibrium predictions of economic theory. In practice, then, players learn over time rather than arriving at the Nash equilibrium at the outset of the game. The need
to use alternative methods to generate the outcomes
of the learning processes has led to an increasing use
of human experimental as well as computational ap-
proaches such as agent-based modelling.
Our agent-based model has two types of agents
representing the players in a procurement auction,
namely one buyer (the government) and multiple sell-
ers (landholders) competing to sell conservation ser-
vices. Each landholder has an opportunity cost that is
private knowledge. The procuring agency or govern-
ment agent has a conservation budget that determines
the number of environmental service contracts that can be awarded.
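To make this setup concrete, the two agent types can be sketched as simple data structures. The following Python sketch is illustrative only; the field names (opportunity_cost, benefit_score, budget) are hypothetical stand-ins for the quantities described above, not the study's actual implementation.

```python
from dataclasses import dataclass

@dataclass(eq=False)
class Landholder:
    """Seller agent competing to supply conservation services."""
    opportunity_cost: float  # private knowledge: cost of supplying the service
    benefit_score: float     # environmental benefit score of the offered service
    bid: float = 0.0         # bid price submitted in the current round

@dataclass
class Government:
    """Buyer agent whose budget constrains the number of contracts."""
    budget: float
```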
Each simulated auction round involves the follow-
ing three major steps. First, landholder agents formu-
late and submit their bids. Second, the government
agent ranks the submitted bids based on their environ-
mental benefit score to cost ratios and selects winning
bids. The number of successful bids depends on the
size of the budget and the auction price format. In the
case of discriminatory or pay-as-bid pricing, the gov-
ernment agent allocates the money starting with the
highest ranked bidder until the budget is exhausted.
In a uniform pricing auction, all winning bidders are
paid the same amount per unit of environmental benefit. The
cutoff point (marginal winner) for this auction is de-
termined by searching for the bid price that would ex-
haust the budget if all equally and better ranked bids
were awarded contracts. Third, landholder agents apply
learning algorithms that take into account auction out-
comes to update their bids for the next round. In the initial rounds, bids are truthful; in subsequent rounds, they may remain truthful or involve mark-ups above opportunity costs.
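A minimal sketch of the second step, ranking and payment under the two pricing formats, might look as follows. It assumes the hypothetical Landholder structure above with positive bids and benefit scores, and simplifies away ties and partial contracts; it is a sketch of the described mechanics, not the implementation used in the study.

```python
def run_auction_round(bidders, budget, pricing="discriminatory"):
    """Select winning bids within the budget; return (agent, payment) pairs.

    Bids are ranked by environmental benefit score to bid price ratio.
    """
    ranked = sorted(bidders, key=lambda a: a.benefit_score / a.bid, reverse=True)

    if pricing == "discriminatory":
        # Pay-as-bid: fund bids from the top of the ranking until the
        # remaining budget cannot cover the next bid.
        winners, remaining = [], budget
        for agent in ranked:
            if agent.bid > remaining:
                break
            winners.append((agent, agent.bid))
            remaining -= agent.bid
        return winners

    # Uniform pricing: search down the ranking for the marginal winner whose
    # per-benefit price, paid to all equally or better ranked bids, still
    # fits within the budget.
    winners = []
    for k in range(1, len(ranked) + 1):
        price_per_benefit = ranked[k - 1].bid / ranked[k - 1].benefit_score
        cost = price_per_benefit * sum(a.benefit_score for a in ranked[:k])
        if cost > budget:
            break  # extending the cutoff further would overshoot the budget
        winners = [(a, price_per_benefit * a.benefit_score) for a in ranked[:k]]
    return winners
```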
Bids are updated through learning. Different
learning models have been developed over the last
several decades and can inform simulated agent be-
haviour in the model. A typology of learning mod-
els presented by Camerer (2003) shows the relationship between these learning algorithms. Our model combines two types of learning models: a direction
learning model (Hailu and Schilizzi, 2004; Hailu and
Schilizzi, 2005) and a reinforcement learning algo-
rithm (Hailu and Thoyer, 2006; Hailu and Thoyer,
2007). These two algorithms are attractive for mod-
elling bid adjustment because they do not require that bidders know the forgone payoffs of alternative strategies (or bid levels) that they did not use in previous rounds.
Learning direction theory asserts that ex-post ra-
tionality is the strongest influence on adaptive be-
haviour (Selten and Stoecker, 1986; Selten et al.,
2001). According to this theory, behavioural changes, when they occur, are directed more often than randomly towards the additional payoffs that could have been gained by alternative actions. For example, a successful bidder who changes a bid is likely to raise subsequent bid levels.
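A minimal sketch of a directional bid update consistent with this account is shown below; the multiplicative step size and the floor at opportunity cost are illustrative assumptions, not the specification of the cited models.

```python
import random

def direction_learning_update(bid, opportunity_cost, won, step=0.05):
    """Move the next bid in the ex-post rational direction.

    A winner could have earned a higher payoff with a higher bid, so it
    probes upward; a loser may have missed a contract it could have won
    with a lower bid, so it shades downward. Bids never fall below the
    private opportunity cost.
    """
    if won:
        new_bid = bid * (1 + step * random.random())
    else:
        new_bid = bid * (1 - step * random.random())
    return max(new_bid, opportunity_cost)
```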
Reinforcement learning (Roth and Erev, 1995; Erev and Roth, 1998) does not impose a direction on behaviour but builds on the reinforcement principle that is widely accepted in the psychology literature: an agent's tendency to select a strategy is strengthened in proportion to the payoffs that the strategy has earned in the past.
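A compact Roth-Erev style reinforcement rule can be sketched as follows; the discrete grid of mark-up levels and the recency (forgetting) parameter are illustrative assumptions rather than the exact specification used in the cited work.

```python
import random

class ReinforcementBidder:
    """Roth-Erev style reinforcement over a discrete grid of mark-up levels."""

    def __init__(self, markups, initial_propensity=1.0, recency=0.1):
        self.markups = list(markups)            # candidate mark-ups over cost
        self.propensities = [initial_propensity] * len(self.markups)
        self.recency = recency                  # gradual forgetting of old payoffs
        self.last_choice = None

    def choose_markup(self):
        """Pick a mark-up with probability proportional to its propensity."""
        self.last_choice = random.choices(
            range(len(self.markups)), weights=self.propensities, k=1)[0]
        return self.markups[self.last_choice]

    def update(self, payoff):
        """Decay all propensities, then reinforce the mark-up just played."""
        self.propensities = [(1 - self.recency) * p for p in self.propensities]
        self.propensities[self.last_choice] += payoff
```

In each round, such a bidder would bid its opportunity cost plus the chosen mark-up and then call update() with the realized payoff (zero for a losing bid).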