Figure 3: Transition of the number of parallel updates of the new basis (par, a = 2, o = 1).
number of cycles is effectively reduced, as shown in Table 1(a), have a relatively large degree of parallelism. The number of terms in a constraint is shown in Table 1(c). The result shows that the size of the constraints increases as the solution method progresses. Although the maximum number of terms is less than the number of variables, the parallelism is lost as shown above. Table 1(d) shows the number of agents related to an update of a new basis. The fact that the maximum number equals the number of variables indicates that the locality of updates was lost in later cycles.
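As a rough illustration of how such a locality measure can be obtained, the sketch below counts the agents touched by a single pivot; the tableau layout, the ownership arrays, and the function name are assumptions made for this example and are not the instrumentation used in the experiments.

import numpy as np

def agents_touched_by_pivot(tableau, pivot_col, row_owner, col_owner):
    """Count the distinct agents involved in one pivot (update of a new basis).

    In the tableau form of the simplex method, a row changes only if its
    entry in the pivot column is nonzero, so every agent owning such a row,
    plus the agent owning the entering column, takes part in the update.
    """
    touched = {col_owner[pivot_col]}
    for r in range(tableau.shape[0]):
        if tableau[r, pivot_col] != 0.0:
            touched.add(row_owner[r])
    return len(touched)

# Toy instance: three constraints owned by agents 0-2, four variables owned by agents 0-3.
tableau = np.array([[1.0, 0.0, 2.0, 0.0],
                    [0.0, 3.0, 0.0, 0.0],
                    [0.0, 0.0, 1.0, 4.0]])
row_owner = [0, 1, 2]
col_owner = [0, 1, 2, 3]
print(agents_touched_by_pivot(tableau, pivot_col=2,
                              row_owner=row_owner, col_owner=col_owner))  # -> 2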
The transition of the number of parallel updates of the new basis for an example problem with a = 2, o = 1 is shown in Figure 3. The degree of parallelism is relatively large in the first steps and decreases in later cycles. There are two reasons for this decrease. One is that possible new bases are eliminated by the solution method. The other is that the number of variables in the updated constraints increases.
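The decrease can be made concrete with a minimal sketch: if each candidate basis update is described by the set of variables it would touch, updates with disjoint sets do not interfere and can be applied in parallel, and a greedy selection yields the kind of count plotted in Figure 3. The data layout and names below are illustrative assumptions, not the procedure used to produce the figure.

def count_parallel_updates(candidate_updates):
    """Greedily select candidate basis updates whose variable sets are disjoint.

    Each candidate is given as the set of variable indices it would touch
    (the entering/leaving variables and the variables in the updated
    constraints). Disjoint candidates can be applied in parallel.
    """
    used = set()
    parallel = 0
    for touched in candidate_updates:
        if used.isdisjoint(touched):
            used |= touched
            parallel += 1
    return parallel

# Early cycles: constraints are short, so candidates rarely overlap.
early = [{0, 1}, {2, 3}, {4, 5}]
# Later cycles: updated constraints contain more variables, so candidates collide.
late = [{0, 1, 2, 3}, {2, 3, 4}, {1, 4, 5}]
print(count_parallel_updates(early))  # -> 3
print(count_parallel_updates(late))   # -> 1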
5 CONCLUSIONS
In this work, we studied a framework of distributed cooperative problem solving based on the linear programming method. The essential processing for distributed cooperation and the extraction of parallelism were shown. While there is a possibility of parallel updates of new bases in sparse problems, global aggregation of the information is necessary for the selection of new bases and for the extraction of the parallelism. Instead of a single mediator, there are opportunities to decompose this aggregation using a tree structure of agents.
Considering that the locality of the problem is lost as the solution method progresses, there is the possibility of an approach in which agents store the revealed information and employ it to reduce distributed processing. In (Burger et al., 2011), each step of the simplex method is not decomposed; instead, each agent solves local problems and exchanges the sets of current bases. Although we focused on sparse problems and a more distributed solver, the possibility of exploiting such characteristics to divide columns and to avoid synchronization should be investigated. Decomposition of the processing of the mediator using a structured group of agents, application of efficient methods, and comparison and integration with related works will be addressed in future work.
ACKNOWLEDGEMENTS
This work was supported in part by a Grant-in-Aid for
Young Scientists (B), 22700144.
REFERENCES
Burger, M., Notarstefano, G., Allgower, F., and Bullo, F.
(2011). A distributed simplex algorithm and the multi-
agent assignment problem. In American Control Con-
ference, pages 2639–2644.
Chvatal, V. (1983). Linear Programming. W. H. Freeman and Company.
Ho, J. K. and Sundarraj, R. P. (1994). On the efficacy
of distributed simplex algorithms for linear program-
ming. Computational Optimization and Applications,
3:349–363.
Mailler, R. and Lesser, V. (2004). Solving distributed con-
straint optimization problems using cooperative me-
diation. In 3rd International Joint Conference on Au-
tonomous Agents and Multiagent Systems, pages 438–
445.
Modi, P. J., Shen, W., Tambe, M., and Yokoo, M. (2005).
Adopt: Asynchronous distributed constraint optimiza-
tion with quality guarantees. Artificial Intelligence,
161(1-2):149–180.
Petcu, A. and Faltings, B. (2005). A scalable method for multiagent constraint optimization. In 19th International Joint Conference on Artificial Intelligence, pages 266–271.
Wei, E., Ozdaglar, A., and Jadbabaie, A. (2010). A distributed newton method for network utility maximization. In 49th IEEE Conference on Decision and Control, CDC 2010, pages 1816–1821.
Yarmish, G. and Van Slyke, R. (2009). A distributed,
scaleable simplex method. The Journal of Supercom-
puting, 49:373–381.