Optical projection-based vision systems use
surfaces of the real environment onto which images
of the virtual objects are projected, so that the
result can be visualized without any auxiliary
equipment. Although interesting, these systems are
heavily constrained by the conditions of the real
space, because they depend on suitable projection
surfaces.
Direct vision systems are appropriate for
situations where losing the image can be
dangerous, such as a person walking down the
street, driving a car, or piloting an airplane.
For closed places, where the user has
control of the situation, video-based vision is
suitable, because if the image is lost the user
can safely remove the helmet, for example. Direct
video-based vision systems are also cheaper and
easier to adjust.
2.2 The ARToolKit Software
ARToolKit (Lamb, 2007) is free software for
developing augmented reality applications. It uses
video to mix the captured real scenes with
computer-generated virtual objects. To adjust the
position of the virtual objects in the scene, the
software uses a marker (a plate with a square frame
and a symbol inside it), which works like a bar
code (see Figure 2).
Figure 2: ARToolKit marker and virtual object.
The frame is used to calculate the marker's
position in space, based on the perspective
distortion of the square image. The marker must be
registered beforehand in front of the webcam. The
internal symbol works as an identifier of the
virtual object, which is associated with the marker
in a previous stage of the system. When the marker
enters the webcam's field of view, the software
identifies its position and the associated virtual
object, generating and positioning the virtual
object on the plate. When the plate moves, the
associated virtual object moves with it, as if
attached to the plate. This behavior allows the
user to manipulate the virtual object with his/her
hands.
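The anchoring described above can be sketched as a single rigid transform. The function name, matrix layout, and numeric values below are illustrative assumptions, not ARToolKit's actual API: a marker tracker typically reports a 3x4 matrix (rotation and translation) mapping marker coordinates to camera coordinates, and applying it to the object's vertices makes the virtual object follow the plate.

```python
import numpy as np

def apply_marker_transform(trans, vertices):
    """Map object-space vertices (Nx3) into camera space using the
    marker's 3x4 transform [R | t] (hypothetical layout)."""
    R, t = trans[:, :3], trans[:, 3]
    return vertices @ R.T + t

# Assumed pose: marker lying 500 mm in front of the camera, no rotation.
trans = np.hstack([np.eye(3), [[0.0], [0.0], [500.0]]])

# A unit cube centered on the marker origin, standing in for the object.
cube = np.array([[x, y, z] for x in (-1, 1)
                            for y in (-1, 1)
                            for z in (-1, 1)], dtype=float)

camera_space = apply_marker_transform(trans, cube)
print(camera_space[0])  # first vertex, offset 500 mm along the camera z axis
```

Moving the plate changes only `trans`; re-applying it each frame is what makes the object appear "grabbed" onto the plate.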
ARToolKit can be used both with direct video-based
vision devices, such as helmets, and with
monitor-based video vision systems. With direct
video-based vision, the user sees the real scene
with the virtual objects through a video camera
mounted on the helmet and aligned with the user's
line of sight, giving the impression of real
manipulation and promoting a sensation of
immersion. With the monitor-based system, the user
sees the mixed scene on the monitor while
manipulating the plates in his/her physical space.
If the webcam that captures the real scene is on
top of the monitor, pointing toward the user's
space, the monitor works as a mirror: as the plate
approaches or moves away from the webcam, the image
of the virtual object grows or shrinks,
respectively. If the webcam is beside the user or
on his/her head, pointing at the physical space
between the user and the monitor, the effect is
similar to the user's direct vision: as objects are
placed closer to or farther from the user, they
appear larger or smaller in the mixed scene shown
on the monitor.
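The growing and shrinking of the virtual object with the plate's distance follows directly from perspective projection. A minimal pinhole-camera sketch (the focal length and sizes below are assumed values, not measured parameters) shows that the projected size is inversely proportional to the distance from the camera:

```python
def projected_size(real_size_mm, distance_mm, focal_px=800.0):
    """Apparent size in pixels under a simple pinhole model:
    size_px = focal_px * real_size / distance."""
    return focal_px * real_size_mm / distance_mm

near = projected_size(80.0, 300.0)  # plate close to the webcam
far = projected_size(80.0, 600.0)   # plate moved twice as far away

print(near, far)  # doubling the distance halves the apparent size
```

This is why the monitor behaves as a mirror in the first webcam arrangement: bringing the plate toward the camera enlarges both its image and the virtual object rendered on it.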
3 COLLABORATIVE
ENVIRONMENTS WITH
AUGMENTED REALITY
Nowadays, research is being conducted on the use of
computers in collaborative activities, mainly
involving remote participants. The area of
Computer Supported Cooperative Work (CSCW)
offers many application examples, involving chat,
audio and video conferencing, collaborative virtual
reality systems, and hybrid systems
(Billinghurst, 2002; Billinghurst, 2003).
However, to achieve computer-supported
collaboration that includes the natural
manipulation of objects, innovative interfaces
using augmented reality have been developed more
recently. These interfaces support both
face-to-face and remote collaboration, involving
real and virtual objects.
Face-to-face collaboration with augmented
reality (Schmalstieg, 2003) is based on sharing a
physical environment mixed with virtual objects,
visualized through a helmet or monitor. The
participants in the collaborative work act on real
and virtual objects placed in the same environment.
Each one has his/her own view, depending on his/her
position, using a helmet with a