1. After pointing with the right hand at the robot to which the user wants the robot agent to move, the user raises the left hand.
2. Kinect recognizes that the user has performed a gesture.
3. Kinect determines which robot the user is pointing at from the coordinates of the markers and the coordinates of the user's joints (a minimal sketch of this computation is given after the list).
4. Kinect sends the identity of the pointed-at robot to the host agent.
5. The host agent rewrites the file that holds the display-management information for the virtual objects in the extended ARToolKit (an illustrative sketch of this rewrite appears below).
6. The extended ARToolKit projects the virtual object representing the mobile agent onto the robot to which the robot agent is supposed to move.
7. The host agent moves to the PC on which the robot agent resides.
8. The host agent tells the robot agent which robot it should move to.
9. The robot agent moves to the robot that the user has pointed at.
10. The host agent returns to the host PC.
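The paper relies on, but does not spell out, the pointing computation in step 3. The following minimal sketch (Python) shows one plausible realization, assuming the skeleton joints reported by Kinect and the marker positions tracked by the extended ARToolKit are expressed in a common 3-D frame; the function name and data layout are illustrative rather than the system's actual API.

import numpy as np

def pointed_marker(shoulder, hand, markers):
    """Return the id of the marker closest to the pointing ray.

    shoulder, hand : 3-D right-arm joint coordinates from the Kinect skeleton.
    markers        : dict mapping marker id -> 3-D marker position.
    All coordinates are assumed to share one world frame.
    """
    direction = hand - shoulder
    direction = direction / np.linalg.norm(direction)

    best_id, best_dist = None, float("inf")
    for marker_id, pos in markers.items():
        to_marker = pos - shoulder
        t = np.dot(to_marker, direction)
        if t <= 0:          # marker lies behind the user; ignore it
            continue
        # perpendicular distance from the marker to the pointing ray
        dist = np.linalg.norm(to_marker - t * direction)
        if dist < best_dist:
            best_id, best_dist = marker_id, dist
    return best_id

The same scheme works with the elbow in place of the shoulder; the choice mainly affects how sensitive the ray is to small hand movements.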
Through the operations described above, the system visually represents the software agents moving between the robots while the user directs the agents by gestures. Figure 3 shows a user performing the gestures to direct a software agent.
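Step 5 above is the only coupling between the agent system and the extended ARToolKit: the host agent edits a management file, and the toolkit re-reads it to decide which virtual object to draw on which marker. The file format is not given in the paper; the sketch below assumes, purely for illustration, one "marker-id virtual-object-name" pair per line.

def move_virtual_object(path, agent_object, target_marker):
    """Reassign the virtual object that represents the mobile agent
    to the marker of the robot the user pointed at.

    Assumes each line of the management file has the (hypothetical)
    form "<marker_id> <virtual_object_name>".
    """
    with open(path) as f:
        entries = [line.split() for line in f if line.strip()]

    # Detach the agent's object from its old marker, then attach it
    # to the target marker; the extended ARToolKit re-reads the file
    # and projects the object on the newly pointed-at robot.
    entries = [(m, obj) for m, obj in entries if obj != agent_object]
    entries.append((str(target_marker), agent_object))

    with open(path, "w") as f:
        for marker_id, obj in entries:
            f.write(f"{marker_id} {obj}\n")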
5 CONCLUSIONS AND FUTURE
DIRECTIONS
We have proposed an intelligent interface for the mobile software agent systems that we have developed. Through the interface, users can intuitively grasp the activities of the mobile agents. In order to obtain proactive input from the user, we have utilized the Kinect motion capture camera to capture the user's intentions as expressed by gestures. Since the Kinect motion capture camera is mounted just under the ceiling of the room, we currently have the following problems.
1. The system is confined to indoor use. Kinect must be placed where it can monitor both the robots and the user; therefore the system cannot be used outdoors.
2. ARToolKit cannot recognize markers once a robot leaves the Kinect's field of view. In addition, if Kinect is about 3.0 m or more away from a marker, ARToolKit cannot detect the marker.
3. Kinect sometimes misrecognizes gestures. It judges that the user is performing a gesture whenever the user raises the left hand, so even when the user does not intend to gesture, Kinect may falsely detect one (a sketch of this rule, with a simple mitigation, follows the list).
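A plausible mitigation for problem 3, in line with the refinement mentioned below, is to accept a left-hand raise as a gesture only after it has been held for a short dwell time. The sketch below is illustrative only; the dwell threshold and the raised-hand test (left hand above the head) are assumptions rather than the system's actual rule.

import time

DWELL_SECONDS = 0.8   # assumed hold time before a raise counts as a gesture

class GestureDetector:
    """Treat a left-hand raise as a gesture only after it is held,
    filtering out the incidental hand movements of problem 3."""

    def __init__(self):
        self._raised_since = None

    def update(self, left_hand_y, head_y):
        raised = left_hand_y > head_y   # naive trigger used by the system
        now = time.monotonic()
        if not raised:
            self._raised_since = None
            return False
        if self._raised_since is None:
            self._raised_since = now
        return now - self._raised_since >= DWELL_SECONDS

Even a dwell time well under a second filters out most incidental raises while keeping the interaction responsive.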
In order to mitigate these problems, we are extending ARToolKit so that Kinect can recognize the users' gestures more precisely, and we are also looking for other media to capture the scene in the open field.