of the situation, manifesting command decisions)
(Alberts, Garstka and Stein, 1999).
As the number of commanded platforms increases, a C2 operator can easily reach a state of information overload, in which the information flow rate exceeds the operator’s processing rate; this situation can produce a wrong mental model of the mission scenario and, consequently, wrong decisions that could lead to catastrophic situations (Shanker and Richtel, 2011).
Thus, the HCI becomes a key factor when
developing the architecture of a C2.
The focus of this paper is on the display issues of an HCI and how it can be improved in order to reduce information overload and enhance the usability of information. In particular, we evaluate the usability of an immersive synthetic environment for understanding an NCW scenario.
This research is part of the LOKI Project, a Command and Control (C2) system for Electronic Warfare (EW) developed by Elettronica S.p.A. (ELT). Although this work focuses on a warfare topic, we believe that any time-pressure system operated by a human (e.g., an HCI for a network Intrusion Detection System used in network operations) can benefit from this research (Cox, Eick and He, 1996).
2 RELATED WORKS
Gaining a detailed understanding of the modern
battle space is essential for the success of any
military operation.
In these applications, the main function of a human-computer interface is to present the current situation, the relevant information and the intentions to the operator (e.g., the location of own forces, reconnoitered opponent troops and facilities, commands and orders from superiors, and platform status); this information is generally displayed on scaled maps showing the regional properties of the mission area.
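Purely as an illustration (and not taken from any of the cited systems), the situational information listed above can be thought of as a set of track records projected onto a scaled map; the Python sketch below uses field names and a map projection that are our own assumptions.

    # Hypothetical sketch of the situational data a C2 HCI might display on a scaled map.
    from dataclasses import dataclass

    @dataclass
    class Track:
        callsign: str  # platform identifier
        side: str      # "own", "opponent" or "unknown"
        lat: float     # geographic position (degrees)
        lon: float
        status: str    # e.g., "operational", "damaged"

    def to_map_pixels(track, map_origin, scale_px_per_deg):
        """Project a track onto a scaled 2D map (simple equirectangular assumption)."""
        x = (track.lon - map_origin[1]) * scale_px_per_deg
        y = (map_origin[0] - track.lat) * scale_px_per_deg
        return x, y

    own = Track("ALPHA-1", "own", 41.9, 12.5, "operational")
    print(to_map_pixels(own, map_origin=(43.0, 11.0), scale_px_per_deg=100.0))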
Several research groups have focused their
activities on the design and development of new
display paradigms and technologies for advanced
information visualization.
Dragon (Julier et al., 1999) was one of the first research projects to formalize requirements for systems that need to visualize a large amount of information on tactical maps for real-time applications. A real-time situational-awareness virtual environment for battlefield visualization was realized with an architecture composed of interaction devices, display platforms and information sources.
Other solutions have been proposed by
Pettersson, Spak and Seipel (2004) and Alexander,
Renkewitz and Conradi (2006). In the former, the
proposed visualization environment is based on the projection of four independent stereoscopic image pairs at full resolution onto a custom-designed optical screen; this system suffers from apparent crosstalk between the stereo image pairs. The latter presents examples of Augmented Reality and Virtual Reality technologies, discussing their benefits and flaws, and reports experiments evaluating visibility and interactivity performance.
Kapler and Wright (2005) have developed a
novel visualization technique for displaying and
tracking events, objects and activities within a
combined temporal and geospatial display. The
events are represented within an X, Y, T coordinate
space, in which the X and Y plane shows flat
geographic space and the T-axis represents time into
the future and past. This technique is not adequate for an immersive 3D virtual environment because dedicating an axis to the time evolution constrains the spatial representation to a flat surface; altitude, which is an important piece of information in avionic scenarios, cannot be displayed. However, it is worth noting that separating geographical information from logical information (e.g., the health of a platform) can enhance the usability of the system.
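The following Python fragment is a minimal sketch, under our own simplifying assumptions, of this kind of (X, Y, T) event record; it is not the original implementation by Kapler and Wright, but it makes explicit why altitude has no display axis left.

    # Illustrative (X, Y, T) event record in the spirit of Kapler and Wright (2005);
    # field names and values are our own assumptions.
    from dataclasses import dataclass

    @dataclass
    class GeoTemporalEvent:
        x: float    # easting on the flat geographic plane
        y: float    # northing on the flat geographic plane
        t: float    # time (negative = past, positive = future) on the third axis
        label: str  # logical information kept apart from the geometry

    # All three display axes are taken by (x, y, t), so altitude cannot be mapped.
    track = [GeoTemporalEvent(100.0, 250.0, -5.0, "detected"),
             GeoTemporalEvent(120.0, 260.0, 0.0, "identified"),
             GeoTemporalEvent(150.0, 270.0, 10.0, "predicted")]
    for event in track:
        print((event.x, event.y, event.t), event.label)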
3 STEREOSCOPIC VISION
Stereoscopic vision can improve the understanding of a modern battle space by providing depth perception and enhancing the level of realism and the sense of presence.
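As a generic illustration (not the specific rendering pipeline used in this work), stereoscopic depth perception can be obtained by rendering the scene from two virtual cameras separated by the interocular distance; the Python sketch below, with assumed names and values, shows the eye-offset computation.

    # Generic sketch of stereo view generation: two virtual cameras offset
    # horizontally by the interocular separation. Values are illustrative.
    import numpy as np

    def stereo_eye_positions(head_pos, right_dir, eye_separation=0.065):
        """Return the left/right virtual camera positions for a given head pose."""
        offset = 0.5 * eye_separation * np.asarray(right_dir, dtype=float)
        head = np.asarray(head_pos, dtype=float)
        return head - offset, head + offset

    left_eye, right_eye = stereo_eye_positions([0.0, 1.7, 0.0], [1.0, 0.0, 0.0])
    # Each position drives a separate render pass; the horizontal disparity
    # between the two resulting images is what the viewer perceives as depth.
    print(left_eye, right_eye)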
Different technologies have been developed for
generating 3D stereoscopic visualization. Some of these are related to entertainment, such as cinema (Lipton, 2007) and video games (Mahoney, Oikonomou and Wilson, 2011), while others target serious, work-related applications such as medical interventions and telerobotics (Dey et al., 2002; Livatino et al., 2010).
Stereoscopic visualization, or simply stereo, can be active or passive (Cyganek and Siebert, 2009). In short, passive stereo is a solution in which light is polarized differently for the left and right eyes. The polarization can be obtained in various ways; the best known is colour polarization, first used in cinemas in the 1950s. Nowadays,
the most used polarizations within virtual reality