Table 1: User study results with (+) and without (−) agent assistance.

                               −agent    +agent
  Total time (sec)              300       262.2
  Total query time (sec)         48.1      10.7
  Query time ratio                0.16      0.04
  # of moves                     13.2      14.6
  # of steps away from goal       6.3       3
sources. During the experiments, each human subject was given 5 minutes to
solve a game, either with or without agent assistance. In total, 13 games
were played by 7 subjects.
Results. The results are summarized in Table 1, which compares user
performance under two conditions: with and without agent assistance. In the
table, the total time measures the duration of a game; a game ended when the
subject either reached the goal or used up the given time. The results
indicate that the subjects without agent assistance (−agent in Table 1) were
not able to reach the goal within the given time, whereas the subjects with
agent assistance (+agent) achieved the goal within the time limit in 6 out of
13 games. The total query time refers to the time that a subject spent on
information gathering, averaged over all subjects under the same condition
(i.e., with or without agent assistance), and the query time ratio represents
how much time a subject spent on information gathering relative to the total
time. The agent assistance reduced the user's information-gathering time to
less than one quarter of the time spent without assistance.
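For concreteness, the query time ratio in Table 1 is simply the total query
time divided by the total time; using the averaged values reported above,

\[
\text{query time ratio} = \frac{\text{total query time}}{\text{total time}},
\qquad
\frac{48.1}{300} \approx 0.16 \ (-\text{agent}),
\qquad
\frac{10.7}{262.2} \approx 0.04 \ (+\text{agent}).
\]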
In this experiment, we interpret the number of moves that the user made
during the game (# of moves) as a measure of the user's search space in
seeking a solution. On the other hand, the length of the shortest path from
the user's ending state to the goal (# of steps away from goal) can be
considered a measure of solution quality. The number of test subjects is too
small to draw statistical conclusions. These initial results are nonetheless
promising, since they indicate that intelligent information management
generally increased the user's search space and improved the user's
performance with respect to solution quality.
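As an illustration of how this metric can be computed (the paper does not
specify the implementation), the following is a minimal breadth-first-search
sketch; the neighbors(state) helper is a hypothetical stand-in for the game's
move generator.

    from collections import deque

    def steps_away_from_goal(end_state, goal_state, neighbors):
        """Shortest-path length (in moves) from the user's ending state to the goal.

        `neighbors(state)` is a hypothetical helper returning the states reachable
        in one legal move; it stands in for the game's actual move generator.
        """
        if end_state == goal_state:
            return 0
        frontier = deque([(end_state, 0)])
        visited = {end_state}
        while frontier:
            state, dist = frontier.popleft()
            for nxt in neighbors(state):
                if nxt == goal_state:
                    return dist + 1
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, dist + 1))
        return None  # the goal is unreachable from the ending state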
6 CONCLUSIONS
The main contributions of this paper are the fol-
lowing. We presented an intelligent information
agent, ANTIPA, that anticipates the user’s informa-
tion needs using probabilistic plan recognition and
performs information gathering prioritized by the pre-
dicted user constraints. In contrast to reactive assis-
tive agent models, ANTIPA is designed to provide
proactive assistance by predicting the user’s time-
constrained information needs. The ANTIPA archi-
tecture allows the agent to reason about time con-
straints of its information-gathering actions; accomplishing equivalent
behavior using a POMDP would require an exponentially larger state space,
since the state space must include the retrieval status of every information
need in the problem domain (with n information needs, this alone multiplies
the state space by a factor of 2^n). We empirically evaluated ANTIPA through
a proof-of-concept experiment in an information-intensive game setting and
obtained promising preliminary results: proactive agent assistance
significantly reduced the information-gathering time and enhanced the user's
performance during the games.
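To make the prioritization idea concrete, the following is a minimal sketch,
not ANTIPA's actual mechanism: predicted information needs are ordered in a
priority queue by an illustrative score that combines the hypothetical
parameters need_probability, deadline, and retrieval_cost.

    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class InfoRequest:
        # Lower priority value means fetched earlier; the scoring rule below is
        # a simple illustrative heuristic, not the prioritization used by ANTIPA.
        priority: float
        name: str = field(compare=False)
        need_probability: float = field(compare=False)  # P(user will need this item)
        deadline: float = field(compare=False)          # predicted time of need (sec)
        retrieval_cost: float = field(compare=False)    # estimated fetch time (sec)

    def make_request(name, need_probability, deadline, retrieval_cost, now=0.0):
        # Slack: how much time remains before fetching would finish too late.
        slack = max(deadline - now - retrieval_cost, 1e-6)
        priority = -need_probability / slack  # likely and urgent items come first
        return InfoRequest(priority, name, need_probability, deadline, retrieval_cost)

    # Queue of predicted information needs, gathered proactively in priority order.
    queue = []
    heapq.heappush(queue, make_request("room_B_map", 0.8, deadline=30.0, retrieval_cost=5.0))
    heapq.heappush(queue, make_request("door_code", 0.4, deadline=120.0, retrieval_cost=2.0))
    while queue:
        req = heapq.heappop(queue)
        print(f"fetch {req.name} (p={req.need_probability}, deadline={req.deadline}s)")

Ordering by probability per unit of slack favors items that are both likely to
be needed and close to their deadline, mirroring the time-constrained
prioritization described above.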
In this paper, we have not considered the case
where the agent has to explore and learn about an
unknown (or previously incorrectly estimated) state
space. We made a specific assumption that the agent
knows the complete state space from which the user
may explore only a subset. In real-life scenarios, users generally work in a
dynamic environment in which they must constantly collect new information
about changes in the environment while sharing resources and information with
other users. To address the issues that arise in such dynamic settings, our
future work will investigate techniques for detecting environmental changes,
incorporating new information, and alerting the user to changes in the
environment.
ACKNOWLEDGEMENTS
This research was sponsored by the U.S. Army Re-
search Laboratory and the U.K. Ministry of De-
fence and was accomplished under Agreement Num-
ber W911NF-06-3-0001. The views and conclusions
contained in this document are those of the authors
and should not be interpreted as representing the offi-
cial policies, either expressed or implied, of the U.S.
Army Research Laboratory, the U.S. Government, the
U.K. Ministry of Defence or the U.K. Government.
The U.S. and U.K. Governments are authorized to re-
produce and distribute reprints for Government pur-
poses notwithstanding any copyright notation hereon.