is expected to save up to m time units (depending, as above, on the number of agents ahead), since SFB must wait for the FB responses (maximal message time) before passing on the CPA. Note that the time c added to SFB is not insignificant, but A_{i+1} consumes the same time in AFB as it expands the CPA (and responds to the FB-request). Upon failure of forward-bounding, AFB's run time incurs a significant cost. This is because some agent A_j (j > i+1) may be occupied with irrelevant computation, caused by messages from the agents A_{i+1}, A_{i+2}, ..., A_{j-1}, at the time the next message from A_i arrives. The actual delay differs according to the precise relation between c and m and the number of agents ahead. Consider, for instance, the delay imposed by A_{i+3}: it may receive irrelevant messages from A_{i+1} as well as from A_{i+2}, making it unavailable to process the next (relevant) FB-request from A_i when it arrives, so the search process could be delayed by up to 2c time units. Under the assumption that c is roughly m/2 (the expected message time), the expected delay imposed by A_{i+3} for each failure is slightly over m/7 (≈ 0.14731·m).² Thus, for a success/failure ratio of 1:7 or worse, SFB runs faster.
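To make this break-even point concrete, the following minimal sketch computes AFB's net gain over SFB under the cost model above: each forward-bounding success saves AFB up to m time units, while each failure costs roughly 0.14731·m. The constant comes from the analysis above; everything else is illustrative.

# Break-even sketch: AFB vs. SFB (illustrative, per the analysis above).
# Assumption: a forward-bounding success saves AFB up to m time units,
# while each failure delays the search by roughly 0.14731 * m time units.

DELAY_PER_FAILURE = 0.14731  # fraction of m lost per failed forward-bound

def afb_advantage(successes: int, failures: int, m: float = 1.0) -> float:
    """Net time AFB gains over SFB (positive means AFB is faster)."""
    return successes * m - failures * DELAY_PER_FAILURE * m

if __name__ == "__main__":
    # At a 1:7 success/failure ratio the advantage turns slightly negative,
    # matching the claim that SFB runs faster at 1:7 or worse.
    print(afb_advantage(successes=1, failures=7))  # -0.031... (SFB wins)
    print(afb_advantage(successes=1, failures=6))  #  0.116... (AFB wins)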
4.2 Comparing to ConcFB
Let us turn now to the method of decreasing idle time that is employed by the third algorithm, ConcFB. In ConcFB, the search space is divided into several disjoint sub-spaces, each explored by an independent Synchronous Search-Process (SP). Each SP has a random or dynamically created order of agents, so while some agent awaits a reply or a CPA in one SP, it is kept busy computing for other SPs. This requires some additional memory to keep track of each SP, but idle time is reduced and there is less irrelevant computation.
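As a rough illustration of this multiplexing, consider the following minimal sketch of an agent's message loop. The message kinds and field names here are hypothetical, not ConcFB's actual interface: each message carries the identifier of the SP it belongs to, so the agent keeps per-SP state and stays busy whenever any SP has work pending.

# Minimal sketch of per-SP multiplexing in a ConcFB-style agent.
# Message kinds and field names are hypothetical illustrations.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Message:
    sp_id: int          # which search-process this message belongs to
    kind: str           # "CPA" or "FB_REQUEST" in this sketch
    cpa: dict = field(default_factory=dict)

class Agent:
    def __init__(self):
        self.sp_state = {}    # sp_id -> local state for that SP
        self.inbox = deque()  # messages of all SPs share one queue

    def extend_cpa(self, sp_id, state, cpa):
        pass  # assign a value and forward the CPA (omitted)

    def estimate_bound(self, sp_id, state, cpa):
        pass  # compute an FB estimate for the request (omitted)

    def run(self):
        # The agent is busy as long as *any* SP has a pending message,
        # so waiting in one SP does not leave the agent idle.
        while self.inbox:
            msg = self.inbox.popleft()
            state = self.sp_state.setdefault(msg.sp_id, {})
            if msg.kind == "CPA":
                self.extend_cpa(msg.sp_id, state, msg.cpa)
            elif msg.kind == "FB_REQUEST":
                self.estimate_bound(msg.sp_id, state, msg.cpa)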
The only case in which a computation may be irrelevant is when a New Solution message sits in the inbox queue of some agent A_i, carrying a UB so low that it is about to prune the CPA that A_i is currently computing upon. New Solution messages are far rarer than both FB-requests and CPA messages. Empirically, problems with 12 agents and 6 domain values each produce about 10 New Solution messages during the entire search, while the total number of messages in the same ConcFB search is roughly 200,000.
Consequently, this type of concurrency does not create massive irrelevant computation, and since it also spreads the computation load (and order) evenly, it is less susceptible to the impact of message delays.

² Based on a probabilistic aggregation over the delays imposed in each possible message arrival time and order, for m = 10, (n − i) = 10, and c = 5.
For better intuition, consider ConcFB's time-utilization potential. As each SP is an SFB-like protocol, at any given time a search-process may be (1) at some agent, expanding the CPA, or (2) calculating FB estimates. A third option exists, in which the SP awaits attention in an occupied agent's inbox; this is why the mechanism must balance the number of concurrent SPs, but it is not directly relevant to the current analysis, which focuses on minimizing idle time: keeping agents busy on the one hand, and not increasing the amount of computation needed on the other. While an SP is in the assign phase (1), a single agent is computing; thus only 1/n of the agents are active and the rest are idle (ignoring the existence of other SPs for the moment). While an SP is in an FB phase, taking the average case of an FB estimate for a median agent A_i (i = n/2), a fraction α of the unassigned agents are neighbors of A_i, and thus α·(0.5·n) agents are active at that time while the rest are idle. Had we known the relation between the time consumption of (1) and (2), we could calculate the expected number of idle agents at a random time, and moreover the number of concurrent SPs needed to maximize agent activity at all times (recall that agents are dynamically ordered, so the load is expected to be evenly distributed). Increased system delay times obviously lower the system's activity level, and therefore call for more concurrent SPs to compensate.
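A back-of-the-envelope version of this calculation, under assumed values for the unknown quantities (the share of time an SP spends in each phase and the neighbor fraction α, both hypothetical here), might look as follows:

# Sketch: expected fraction of busy agents per SP, and a naive estimate of
# how many concurrent SPs would keep all n agents busy.
# The phase-time split and alpha are assumed values, not measured ones.

def busy_fraction(n: int, alpha: float, assign_time_share: float) -> float:
    """Expected fraction of agents kept busy by a single SP.

    assign_time_share: fraction of time the SP spends in the assign
    phase (1); the rest is spent in the FB phase (2).
    """
    assign_busy = 1.0 / n   # one agent expands the CPA
    fb_busy = alpha * 0.5   # alpha * (n/2) active agents, out of n
    return assign_time_share * assign_busy + (1 - assign_time_share) * fb_busy

if __name__ == "__main__":
    n, alpha = 10, 0.4                 # hypothetical values
    f = busy_fraction(n, alpha, assign_time_share=0.5)
    print(f)                           # 0.15 -> each SP occupies ~15% of agents
    print(round(1 / f))                # naive SP count needed to saturate agents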
5 EXPERIMENTAL EVALUATION
The first set of experiments, depicted in Figures 4 and 5, shows a categorical partition of the algorithms into synchronization classes and the clear correlation between synchronization level and performance, measured by non-concurrent constraint checks (Meisels et al., 2002) and network load. This experiment was run on problems with 10 agents and 6 domain values per agent. p_1 marks the probability for two agents to share a constraint, and constraint costs are randomly distributed in [0, 1, ..., 100]. For each p_1 value, 100 random problems were generated and the results averaged.
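For reference, a minimal sketch of a random problem generator matching this description (the representation of constraints as pairwise cost tables is our own illustrative choice, not taken from the paper's code):

# Sketch of the random problem setup described above: n agents, d values
# per agent, a constraint between each pair with probability p1, and
# uniformly random costs in 0..100. The data layout is illustrative.
import random
from itertools import combinations, product

def generate_problem(n=10, d=6, p1=0.5, seed=None):
    rng = random.Random(seed)
    constraints = {}
    for i, j in combinations(range(n), 2):
        if rng.random() < p1:  # agents i and j share a constraint
            constraints[(i, j)] = {
                (vi, vj): rng.randint(0, 100)
                for vi, vj in product(range(d), repeat=2)
            }
    return constraints

# e.g., measurements averaged over 100 random problems per p1 value:
problems = [generate_problem(seed=s) for s in range(100)]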
To correlate the synchronization class with the performance level, recall that BnB-Adopt (Gutierrez and Meseguer, 2010) was categorized as a depth-first class, which is stronger than ADOPT's class. ADOPT could not complete the search on problems of this size under our simulation limits. Higher than BnB-Adopt in synchronization level are backwards-consistent algorithms such as SyncBB, which is shown to perform better as problems become denser. The other three algorithms