(e). After the initial planning, the search space of
the purple agent can be seen in Figure 3 (a) (initial
search space). The green states are the states in the
Closed set, and the yellow states are the states in the
Open set. The path of the purple agent can be easily
obtained by performing a backward search from the
state (A,2,2); the resulting path is (C,2,0) → (B,2,1)
→ (A,2,2).
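For concreteness, the backward extraction could be sketched as follows in Python. The parent map, which records for each state in the Closed set the predecessor through which it was reached, is an assumption made for illustration; the paper only describes the behaviour.

def extract_path(goal, parent):
    """Follow predecessor links backwards from the goal state and
    reverse the result to obtain the start-to-goal path."""
    path = [goal]
    while path[-1] in parent:  # the start state has no recorded predecessor
        path.append(parent[path[-1]])
    path.reverse()
    return path

# States are (row, column, time) tuples as in the example (assumed encoding).
parent = {('A', 2, 2): ('B', 2, 1), ('B', 2, 1): ('C', 2, 0)}
print(extract_path(('A', 2, 2), parent))  # [('C', 2, 0), ('B', 2, 1), ('A', 2, 2)]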
Figure 3: State spaces of the example run. (a) Initial search space; (b) after adding constraint; (c) iteration-1; (d) iteration-2; (e) iteration-3; (f) iteration-4; (g) final.
It can be easily seen that there is a conflict at the
state (B,2,1). The high-level search constrains one of
the agents; in the example, we assume that the purple
agent is constrained from being at (B,2,1). The
addConstraint function deletes the state (B,2,1) from
the Closed set and pushes (A,2,2) to the Open set, as
it is in the Closed set and it is a child of (B,2,1). Figure
3 (b) (search space after adding the constraint) shows
the search space of the purple agent after the
addConstraint function is performed. The red square
indicates the constrained state.
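A minimal sketch of this behaviour, assuming a set-based Closed set, a binary-heap Open set, and hypothetical children and priority helpers (none of which are specified in the paper), is:

import heapq

def add_constraint(constrained, closed, open_heap, children, priority):
    """Delete the constrained state from the Closed set and re-open every
    child of it that is currently in the Closed set."""
    closed.discard(constrained)
    for child in children(constrained):
        if child in closed:
            heapq.heappush(open_heap, (priority(child), child))

In the example, children((B,2,1)) contains (A,2,2), which is in the Closed set and is therefore pushed to the Open set.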
The algorithm pops the state (A,2,2) as it has the
highest priority. The state has no valid parent; in other
words, (A,2,1) is not in the Closed set, (B,2,1) is
constrained, and there is an obstacle at both (A,1,1)
and (A,3,1). So, the state (A,2,2) is deleted from the
Closed set. None of the children of the state is in the
Closed set, so the algorithm does not add any state
to the Open set. Figure 3 (c) (iteration-1) shows the
search space after the state (A,2,2) is popped and examined.
At the next iteration, (C,2,1) and (B,2,2) have the
highest priority. Assume that (B,2,2) is chosen. The
algorithm pops it from the Open set. As it has neither
a valid parent nor a child in the Closed set, and it is not
itself in the Closed set, it is simply removed from the
Open set. Figure 3 (d) (iteration-2) shows the search
space after the iteration.
Then, the algorithm picks the state (C,2,1). Since its
parent (C,2,0) is in the Closed set, it is added to the
Closed set, and its children (B,2,2) and (C,2,2) are
pushed to the Open set. Figure 3 (e) (iteration-3)
shows the search space after this iteration.
In the following iterations, the states (B,2,2) and
(A,2,3) are popped from the Open set in turn, added
to the Closed set, and their children are pushed to the
Open set. Figure 3 (f) (iteration-4) shows the search
space after these iterations.
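The rule applied in iterations 1-4 can be summarised by the following sketch, reconstructed from the example; the parents, children, and priority helpers and the explicit constrained and obstacles sets are illustrative assumptions, not the authors' implementation.

import heapq

def repair_step(closed, open_heap, parents, children, priority,
                constrained, obstacles):
    """Pop the highest-priority state from the Open set and process it."""
    _, state = heapq.heappop(open_heap)
    valid_parents = [p for p in parents(state)
                     if p in closed and p not in constrained and p not in obstacles]
    if valid_parents:
        # The state is still supported: keep (or put) it in the Closed set
        # and push its children to the Open set (iterations 3 and 4).
        closed.add(state)
        for child in children(state):
            heapq.heappush(open_heap, (priority(child), child))
    else:
        # The state lost its support: delete it from the Closed set and
        # re-open any of its children that are in the Closed set
        # (iterations 1 and 2).
        closed.discard(state)
        for child in children(state):
            if child in closed:
                heapq.heappush(open_heap, (priority(child), child))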
The new path of the purple agent can now be found
by performing a backward search from (A,2,3).
The resulting path is (C,2,0) → (C,2,1) → (B,2,2) →
(A,2,3). The final search space can be seen in Figure 3 (g)
(final iteration). As the computed path of the green agent
was (B,1,0) → (B,2,1) → (B,3,2), there is no conflict
between the paths, so LIMP terminates and returns these
paths.
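The termination check can be pictured with a small vertex-conflict test over the returned paths. This is only an illustrative sketch, restricted to same-cell, same-time conflicts, and not the paper's implementation.

def first_conflict(paths):
    """Return the first state (row, column, time) occupied by two different
    agents, or None if the given paths are conflict-free."""
    seen = {}
    for agent, path in enumerate(paths):
        for state in path:
            if state in seen and seen[state] != agent:
                return state
            seen[state] = agent
    return None

purple = [('C', 2, 0), ('C', 2, 1), ('B', 2, 2), ('A', 2, 3)]
green = [('B', 1, 0), ('B', 2, 1), ('B', 3, 2)]
print(first_conflict([purple, green]))  # None, so LIMP terminates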
3.2 Theoretical Analysis
In this section, we show that DLPA* is optimal and
that both DLPA* and LIMP are complete. DLPA* has
the same flow as LPA*. In a discrete domain with unit
costs, a vertex is either connected to another vertex,
meaning that the edge between them has cost one, or
disconnected from it, meaning that the edge has cost
infinity. Setting the cost of an edge to infinity
therefore has the same effect as deleting the edge from
the map.
In the space-time domain, a state has an exact
g-value (due to the time property) and an exact f-value
(due to the coordinates). Hence, changing the cost of a
state is only possible when the state is deleted or newly
examined; a cost change in LPA* corresponds to a
state deletion or addition in DLPA*.
Adding a constraint in DLPA* corresponds to
setting the costs of all incoming edges of the constrained
state to infinity in LPA*. Performing an environmental
change likewise corresponds to setting the costs of the
affected edges to infinity.
Although the advance function in DLPA* has no
equivalent in LPA*, adding a constraint to all states at
time t except a state s results in the same search tree
as a search rooted at (s, t). Hence, it does not break the
optimality or completeness of DLPA*.
As every operation corresponds to an operation in
LPA*, DLPA* is simply a particular instantiation
of LPA*. This implies that DLPA* is also complete,
sound, and optimal, as LPA* is, given that the
underlying heuristic is consistent. However, if no
solution exists, the algorithm does not terminate;
the termination condition should be checked beforehand.
CBS-D*-lite is not optimal; hence, LIMP is also not
optimal, but it is complete.