An Augmented Environment for Command and Control Systems
Alessandro Zocco¹, Lucio T. De Paolis², Lorenzo Greco³ and Cosimo L. Manes²

¹ Product Innovation & Advanced EW Solutions, Elettronica S.p.A., Rome, Italy
² Department of Engineering for Innovation, University of Salento, Lecce, Italy
³ Mobile Development, Leto Ltd, London, U.K.

Keywords: Augmented Reality, Head Mounted Display, Human Computer Interaction, Network Centric Operations, Military Decision Support System.
Abstract: In the information age, the ability to develop high-level situational awareness is essential for the success of any military operation. The power of network centric warfare comes from the linking of knowledgeable entities, which allows information sharing and collaboration. As the number of commanded platforms increases, the volume of the data that can be accessed grows exponentially. When this volume is displayed to an operator, there is a high risk of reaching a state of information overload, and great care must be taken to make sure that what is provided is actually information and not noise. In this paper we propose a novel interaction environment that leverages augmented reality technology to provide a digitally enhanced view of a real command and control table. The operator, equipped with an optical see-through head mounted display, controls the virtual context while remaining connected to the real world. Technical details of the system are described together with the evaluation method. Twelve users evaluated the usability of the augmented environment, comparing it with a wall-sized stereoscopic human computer interface and with a multi-screen system. Results showed the effectiveness of the proposed system in understanding complex electronic warfare scenarios and in supporting the decision-making process.
1 INTRODUCTION
Gaining a detailed understanding of the battlespace is nowadays essential for the success of any military operation. Network Centric Warfare (NCW) is a military doctrine developed by the United States Department of Defense in the 1990s (Alberts et al., 1999; Braulinger, 2005). The power of NCW derives from the effective linking of forces that are geographically or hierarchically distributed. The network allows knowledgeable entities to share information and collaborate to build up a shared situational awareness.
A Network Centric Operation (NCO) is an operational situation conducted according to the NCW doctrine. Figure 1 shows an example of NCO: different platforms have the capability of sensing some limited areas, and each one has its own limited awareness of its proximity. In Figure 1(a) each platform sends collected data to a specific platform, known as Command and Control (C2). The C2 has the special task of fusing data, in a manual or automatic way. In Figure 1(b) the C2 sends the fused data to the different platforms, and in this way they share the same enhanced situational awareness. This process is continuously repeated during any NCO.

Figure 1: Example of a Network Centric Operation.
As can be inferred from the example in Figure 1, the C2 holds an important role in these networks: it is the system devoted to the decision-making process for the operational aspects of the warfare. Commanders operate such systems by means of a Human Computer Interface (HCI) in order to access the Common Operational Picture (COP) and to issue decisions (e.g., plans, orders).

The COP represents a single identical display of
relevant information concerning friendly, enemy, and
neutral forces. Examples of information that can be
integrated in a COP include location (e.g., current po-
sitions, rate of movements), environment (e.g., current
weather conditions, terrain features), status (e.g., ca-
pabilities of offensive and defensive enemy weapon
systems).
Since the visualized information is the result of several inputs conveyed to the same display area, a state of information overload is likely to occur as the number of commanded platforms increases (Shanker and Richtel, 2011). In particular, the information flow rate may exceed the operator's processing rate, leading to the creation of a wrong mental model of the mission scenario. This can result in wrong decisions that may lead to catastrophic situations.
This paper proposes a novel interaction environment that exploits Augmented Reality (AR) to increase situational awareness and reduce information overload. The proposed solution is assessed on realistic Electronic Warfare (EW) scenarios.
The next section introduces the related work (Sect. 2). The proposed solution is then presented (Sect. 3), followed by a description of our experiments (Sect. 4). Conclusions are finally drawn (Sect. 5).
2 RELATED WORK
Several research efforts have been carried out to design and develop new display paradigms and technologies for advanced information visualization in tactical command and control.
Dragon was one of the first research projects to formalize requirements for systems that need to visualize a huge amount of information on tactical maps in real time (Julier et al., 1999). A virtual environment for battlefield visualization was realized with an architecture composed of interaction devices, display platforms and information sources.
Pettersson et al. have proposed a visualization en-
vironment based on the projection of four indepen-
dent stereoscopic image pairs at full resolution upon
a custom designed optical screen (Pettersson et al.,
2004). This system suffers from apparent crosstalk
between stereo image pairs.
Kapler and Wright have developed a novel visual-
ization technique for displaying and tracking events,
objects and activities within a combined temporal and
geospatial display (Kapler and Wright, 2004). The
events are represented within an X, Y, T coordinate
space, in which the X and Y plane shows flat geo-
graphic space and the T-axis represents time into the
future and past. This technique is not adequate for an immersive 3D virtual environment because it dedicates an axis to the time evolution, constraining the spatial representation to a flat surface; the altitude information, which is important in avionic scenarios, cannot be displayed. However, it is remarkable that the separation of geographical and logical information (e.g., health of a platform) can enhance the usability of the system.
In NextVC2 the main idea is to leverage Virtual Reality (VR) technology to create a shared collaborative environment with a customized view of real-world objects and events (Carvalho and Ford, 2012).
Hodicky and Frantis have conducted a research program to investigate ways to increase the level and quality of information about the battlefield by means of VR devices (Hodicky and Frantis, 2009). The commander, equipped with an HMD, can operate the virtual environment by head and body movements.
The use of a system based only on VR has the disadvantage that the virtual world isolates the operator, hindering the connection with the real world. In addition, this technology requires complex and expensive scene modeling and rendering to faithfully reproduce the real world, as well as artificial navigation methods.
Adithya has presented a paper-based augmented map for military use that adopts the ARToolKit marker-tracking approach and a video see-through Head Mounted Display (HMD) to manage interaction and visualization (Adithya, 2010). The dynamic information of the terrain, the placement of virtual objects and the interaction are features of the digital world that are superimposed onto the paper map. The main limitation of this type of system is that the field of action is restricted to a specific area.
3 PROPOSED INVESTIGATION
This paper proposes a novel environment for the visualization of modern battlespaces, which exploits AR technology to increase the understanding of the scenario and reduce information overload. This is expected to improve operators' performance in terms of both reaction time and number of errors made during the execution of complex tasks.
3.1 High-level Architecture of Loki
The system proposed is part of Loki, an advanced C2 system for EW that coordinates a set of heterogeneous platforms (air, surface, subsurface) having on-board sensors and actuators in the domain of electronic defence. Figure 2 shows the high-level architectural view of the Loki system.

Figure 2: Loki architecture in the large.
The Core component continuously executes an advanced multi-sensor data fusion process on the data retrieved from cooperating systems. Once these data are properly fused, the system is capable of inferring new important information, such as a better localization of emitters and a countermeasures strategy. This information is transferred to the presentation layer using a communication middleware based on the Data Distribution Service (DDS) paradigm (OMG, 2007).
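As an illustration, the following minimal C++ sketch shows how fused track updates might be handed to the presentation layer over a publish-subscribe middleware in the DDS style; the TrackUpdate type and the Publisher wrapper are hypothetical stand-ins, not the actual Loki data types or a real DDS API.

```cpp
// Minimal sketch of publishing fused track data over a DDS-style
// publish-subscribe middleware. TrackUpdate and Publisher are
// hypothetical; a real deployment would use the IDL-generated types
// and DataWriter of a concrete DDS implementation.
#include <string>

struct TrackUpdate {            // hypothetical fused-track sample
    int         trackId;
    double      latitude;       // degrees
    double      longitude;      // degrees
    double      altitude;       // metres
    std::string symbolCode;     // placeholder for a MIL-STD-2525C identifier
};

template <typename T>
class Publisher {               // hypothetical thin wrapper over a DDS DataWriter
public:
    explicit Publisher(const std::string& topic) : topic_(topic) {}
    void write(const T& sample) { /* enqueue sample for distribution on topic_ */ (void)sample; }
private:
    std::string topic_;
};

void publishFusedPicture(Publisher<TrackUpdate>& tracks) {
    // After each fusion cycle the Core publishes the refined picture;
    // the Augmented Environment subscribes to the same topic.
    tracks.write({42, 40.35, 18.17, 9500.0, "FRIEND-AIR"});
}
```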
The Augmented Environment (AE) component provides a digitally enhanced view of a real C2 table, configuring the visual appearance of the COP and accepting and validating user input. Moreover, it provides a persistence mechanism to decouple the data-access logic from the core logic.
3.2 Augmented Environment
The mission area is visualized in a new way that increases situational awareness. The operator, looking through the lens of an optical see-through HMD (NVIS nVisor ST50), sees the virtual world superimposed on the real world (Figure 3). The precise alignment is obtained through the use of an electromagnetic tracker (Polhemus Patriot) that detects, in real time, the position and the orientation of the observer's head.
The virtual environment consists of a geo-
referenced 3D map of the mission area on which the
EW scenario is positioned. The localized platforms
(characterized by latitude, longitude and altitude) are
placed on the scene faithfully reflecting their geo-
graphic coordinates and are represented according to
MIL-STD-2525C (Department of Defense, 2008). If
the Direction of Arrival (DOA) of a threat is known
with a margin of error, the uncertainty volume is shown as a pyramid with its vertex at the platform that has performed the detection (Figure 4).

Figure 3: Operator equipped with a see-through HMD.

Figure 4: Geo-referenced 3D map.
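A hedged sketch of how such an uncertainty pyramid could be built with OGRE's ManualObject follows; the apex/base geometry and the material name are illustrative assumptions, not the system's actual construction.

```cpp
// Sketch: building a DOA uncertainty pyramid as an OGRE ManualObject.
// The apex sits at the detecting platform; the square base spans the
// angular uncertainty at a chosen range. Material name is assumed.
#include <OgreSceneManager.h>
#include <OgreSceneNode.h>
#include <OgreManualObject.h>
#include <OgreVector3.h>

Ogre::ManualObject* buildUncertaintyPyramid(Ogre::SceneManager* scene,
                                            const Ogre::Vector3& apex,
                                            const Ogre::Vector3& baseCentre,
                                            Ogre::Real halfWidth)
{
    Ogre::ManualObject* pyramid = scene->createManualObject("doaPyramid");
    pyramid->begin("Loki/UncertaintyVolume",              // assumed material
                   Ogre::RenderOperation::OT_TRIANGLE_LIST);

    pyramid->position(apex);                                              // 0: apex
    pyramid->position(baseCentre + Ogre::Vector3(-halfWidth, 0, -halfWidth)); // 1
    pyramid->position(baseCentre + Ogre::Vector3( halfWidth, 0, -halfWidth)); // 2
    pyramid->position(baseCentre + Ogre::Vector3( halfWidth, 0,  halfWidth)); // 3
    pyramid->position(baseCentre + Ogre::Vector3(-halfWidth, 0,  halfWidth)); // 4

    // Four side faces from the apex to the base edges.
    pyramid->triangle(0, 1, 2);
    pyramid->triangle(0, 2, 3);
    pyramid->triangle(0, 3, 4);
    pyramid->triangle(0, 4, 1);

    pyramid->end();
    scene->getRootSceneNode()->createChildSceneNode()->attachObject(pyramid);
    return pyramid;
}
```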
Different kinds of views are provided through the activation of layers normal to the C2 table. In Figure 5 the intersection of the uncertainty areas relating to a specific threat is shown by a top view of the 3D model. A small viewport, placed in the upper left corner of the display, provides critical information (e.g., detection of warning emitters) to the operator. This type of visualization allows the different elements of the scene to be displayed correctly and eliminates any form of information overload.
Figure 5: Activation of a layer normal to the C2 table.
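The picture-in-picture warning area described above can be realized with a secondary OGRE viewport rendered by a dedicated camera; a minimal sketch, in which the window handle, z-order and viewport fractions are assumptions:

```cpp
// Sketch: a small secondary viewport in the upper-left corner for
// critical warnings. Camera, window and sizing values are assumed.
#include <OgreRenderWindow.h>
#include <OgreViewport.h>
#include <OgreCamera.h>
#include <OgreColourValue.h>

void addWarningViewport(Ogre::RenderWindow* window, Ogre::Camera* warningCam)
{
    // z-order 1 draws on top of the main viewport (z-order 0);
    // left/top/width/height are fractions of the window size.
    Ogre::Viewport* vp = window->addViewport(warningCam,
                                             1,      // z-order
                                             0.0f,   // left
                                             0.0f,   // top
                                             0.25f,  // width
                                             0.25f); // height
    vp->setBackgroundColour(Ogre::ColourValue::Black);
    warningCam->setAspectRatio(Ogre::Real(vp->getActualWidth()) /
                               Ogre::Real(vp->getActualHeight()));
}
```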
The users around the C2 table can collaborate
face-to-face maintaining the ability to use real-world
objects.
AnAugmentedEnvironmentforCommandandControlSystems
211
3.3 SW Design and Implementation
The AE has been designed with high modularity by applying User Interface Design Patterns (UIDP). These patterns help to ensure that key human factors concepts are quickly and correctly implemented.
The SW has been developed in C++ using OGRE,
an open-source 3D engine that abstracts the details
of the underlying system libraries (e.g., OpenGL, Di-
rectX) and provides an interface based on high-level
classes (Ogre 3D, 2001; Rocha et al., 2010). The Qt Core API has been integrated to take advantage of its powerful mechanism for object communication, called Signals & Slots, and to handle, by means of the QtSerialPort add-on module, the serial port to which the electromagnetic tracker is attached (Digia, 2015).
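A minimal sketch of this wiring is shown below; QSerialPort and the signal/slot connection are standard Qt 5 API, while the TrackerReader class, device name, baud rate and line-based framing are assumptions made for illustration.

```cpp
// Sketch: reading the electromagnetic tracker through QtSerialPort and
// forwarding parsed records via Signals & Slots.
#include <QObject>
#include <QSerialPort>
#include <QByteArray>

class TrackerReader : public QObject {
    Q_OBJECT
public:
    explicit TrackerReader(QObject* parent = nullptr) : QObject(parent) {
        port_.setPortName("/dev/ttyUSB0");           // assumed device name
        port_.setBaudRate(QSerialPort::Baud115200);  // assumed baud rate
        // The readyRead signal fires whenever new tracker bytes arrive.
        connect(&port_, &QSerialPort::readyRead,
                this, &TrackerReader::onReadyRead);
        port_.open(QIODevice::ReadOnly);
    }

signals:
    void poseReceived(const QByteArray& rawRecord);  // consumed by the renderer

private slots:
    void onReadyRead() {
        buffer_ += port_.readAll();
        // Assumed framing: one pose record per newline-terminated line.
        int end;
        while ((end = buffer_.indexOf('\n')) != -1) {
            emit poseReceived(buffer_.left(end));
            buffer_.remove(0, end + 1);
        }
    }

private:
    QSerialPort port_;
    QByteArray  buffer_;
};
```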
The high-definition map of the mission area has
been generated through several steps. First, the 3D
model has been generated starting from Digital Ter-
rain Elevation Data (DTED), using Autodesk Infras-
tructure Design Suite. After that, Autodesk 3DS Max
has been used to add textures, details and colors. Finally, the resultant model has been converted into a format supported by OGRE. An important aspect to be considered is that the geo-referencing information is lost in this process; therefore, a mapping algorithm (Bowditch, 2012) has been implemented to associate a specific point inside the map with each pair of longitude and latitude values.
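One possible minimal form of such a mapping, assuming the map model is axis-aligned and the geographic bounds of the DTED tile are known, is a simple linear interpolation; the actual algorithm (Bowditch, 2012) may use a more rigorous projection.

```cpp
// Sketch: mapping (latitude, longitude) to scene coordinates by linear
// interpolation over the known geographic bounds of the terrain model.
// Bounds, model extents and the vertical axis are illustrative.
#include <OgreVector3.h>

struct GeoBounds {
    double latMin, latMax;   // degrees
    double lonMin, lonMax;   // degrees
};

Ogre::Vector3 geoToScene(double lat, double lon, double altitudeMetres,
                         const GeoBounds& b,
                         Ogre::Real mapWidth, Ogre::Real mapDepth)
{
    // Normalize into [0,1] across the tile, then scale to model size.
    const double u = (lon - b.lonMin) / (b.lonMax - b.lonMin);
    const double v = (lat - b.latMin) / (b.latMax - b.latMin);
    return Ogre::Vector3(static_cast<Ogre::Real>(u * mapWidth),
                         static_cast<Ogre::Real>(altitudeMetres), // assumed up axis
                         static_cast<Ogre::Real>((1.0 - v) * mapDepth));
}
```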
To avoid a mismatch between the spatial perception of the virtual context through the HMD and that of the real world, the camera view frustum has been calibrated to the display view frustum. The calibration method adopted (Kellner et al., 2012) requires each user to align tracked real-world markers with virtual target positions and can be completed in approximately one minute for both eyes.
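Once per-eye calibration parameters are known, they can be applied to the rendering camera; a sketch assuming the calibration yields off-axis frustum extents (the EyeCalibration fields are hypothetical, while setFrustumExtents is OGRE's public API):

```cpp
// Sketch: applying per-eye calibration results to an OGRE camera via
// an off-axis frustum, so the rendered image matches the optical
// see-through display geometry for this user's eye.
#include <OgreCamera.h>

struct EyeCalibration {                    // assumed calibration output
    Ogre::Real left, right, top, bottom;   // frustum extents at the near plane
    Ogre::Real nearClip, farClip;
};

void applyCalibration(Ogre::Camera* eyeCamera, const EyeCalibration& c)
{
    eyeCamera->setNearClipDistance(c.nearClip);
    eyeCamera->setFarClipDistance(c.farClip);
    eyeCamera->setFrustumExtents(c.left, c.right, c.top, c.bottom);
}
```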
4 USABILITY EVALUATION
4.1 Test Procedure
The purpose of the evaluation was to assess the usability of the proposed AR system. The formal usability study was carried out at the facilities of ELT in Rome and involved 12 members of the armed forces of different countries. The case study required about three hours per participant to be completed.
The test procedure started with a brief presenta-
tion of the system and the aim of the investigation.
Each operator was involved in the understanding of complex NCW scenarios, using the proposed AE, a wall-sized stereoscopic HCI (Zocco et al., 2014a; Zocco et al., 2014b) and a multi-screen system. Three visual detection tasks of increasing complexity were assigned to the participants (i.e., identification, correlation and triangulation).

To measure the operational behaviours, the number of failures and the completion time were automatically acquired for each task. In addition, the following qualitative data, referring to the users' experience, were collected using questionnaires and interviews:
- depth impression;
- viewing comfort;
- level of immersion;
- sense of isolation;
- understanding perception.
4.2 Results Analysis
The results for quantitative parameters are shown in
Figure 6 through line graphs.
Figure 6: Quantitative data related to users’ performance.
Regarding the simplest task T1, only minor differences have been detected between the considered display systems. The impact of AR becomes evident as the level of task complexity increases. The reduction in the number of failures and in the completion time under AR conditions is impressive for tasks T2 and T3.
The above results represent the major outcome of our experiments. Under AR conditions users acquire greater situational awareness, and this leads to a reduction of both the reaction time and the number of errors committed.
VISAPP2015-InternationalConferenceonComputerVisionTheoryandApplications
212
Figure 7: Qualitative data related to users’ experience.
The results of the questionnaires and interviews
are consistent with quantitative data (Figure 7).
Most of the participants had no doubts that the depth impression and the level of immersion are higher in the case of stereo visualization (AE or wall-sized stereo system). With complex EW scenarios, when monocular depth cues are ambiguous, stereo viewing enhances spatial judgments: it becomes possible to distinguish very closely spaced icons (representing platforms or threats).
Positive judgments about viewing comfort were provided for all display approaches. Many operators (42%) preferred the multi-screen system because no wearable device (e.g., shutter glasses, HMD) is needed.
The sense of isolation is approximately equal between the AR system and the wall-sized stereo system. In the AR system, the perception of the virtual world as part of the real world relieves the seclusion that derives from the use of an HMD.
The majority of participants (59%) perceived a better understanding of the scenarios in the case of the AR system. This indicates that the methods of handling the information overload and of reporting warning notifications have a significant impact on the decision-making process.
5 CONCLUSION
Inadequate situational awareness in NCW may lead an operator to make wrong choices with potentially disastrous outcomes. Situational awareness is challenged by information overload. This makes the HCI a key element in the design and development of C2 systems for NCO.
This paper proposed a novel interaction environment that leverages AR technology to provide a digitally enhanced view of a real C2 table.
The formal usability study showed very clear trends, with users performing significantly better under AR conditions in terms of understanding perception, depth impression and level of immersion. A significant reduction of both the number of failures and the completion time has been obtained.
The results presented constitute the initial experimentation phase of continuing research into user interaction for military purposes. The lack of a comparative evaluation with respect to other works specifically addressing NCW is due to the complexity of this domain.
In the near future, a comparison between optical and video see-through approaches will be conducted, and gesture and motion control by means of low-cost devices (e.g., Leap Motion, Myo) will be explored.
AnAugmentedEnvironmentforCommandandControlSystems
213
REFERENCES
Adithya, C. (2010). Augmented Reality Approach for Pa-
per Map Visualization. In INCOCCI 2010, Interna-
tional Conference on Communication and Computa-
tional Intelligence, Erode, India.
Alberts, D. S., Garstka, J. J., and Stein, F. P. (1999). Net-
work Centric Warfare: Developing and Leveraging
Information Superiority. CCRP Publication Series.
Bowditch, N. (2012). The American Practical Navigator.
National Imagery and Mapping Agency.
Braulinger, T. K. (2005). Network Centric Warfare Im-
plementation and Assessment. Master’s thesis, U.S.
Army Command and General Staff College.
Carvalho, M. M. and Ford, R. (2012). NextVC2 - A next
generation virtual world command and control. In
MILCOM 2012, IEEE Military Communications Con-
ference, Orlando, Florida, USA.
Department of Defense (2008). MIL-STD-2525C: Common Warfighting Symbology. http://www.mapsymbs.com/ms2525c.pdf. [Accessed on 03 January 2015].
Digia (2015). Qt Project. http://qt-project.org. [Accessed
on 03 January 2015].
Hodicky, J. and Frantis, P. (2009). Decision Support System
for a Commander at the Operational Level. In KEOD
2009, 1st International Conference on Knowledge En-
gineering and Ontology Development, Madeira, Por-
tugal.
Julier, S., King, R., Colbert, B., Durbin, J., and Rosen-
blum, L. (1999). The Software Architecture of a
Real-Time Battlefield Visualization Virtual Environ-
ment. In IEEE VR’99, IEEE Virtual Reality Confer-
ence, Houston, Texas, USA.
Kapler, T. and Wright, W. (2004). Geotime Information
Visualization. In INFO VIS, IEEE Symposium on In-
formation Visualization, Austin, Texas, USA.
Kellner, F., Bolte, B., Bruder, G., Rautenberg, U., Steinicke,
F., Lappe, M., and Koch, R. (2012). Geometric Cali-
bration of Head-Mounted Displays and its Effects on
Distance Estimation. IEEE Transactions on Visualiza-
tion and Computer Graphics.
Ogre 3D (2001). http://www.ogre3d.org. [Accessed on 03
January 2015].
OMG (2007). Data Distribution Service Portal.
http://portals.omg.org/dds/. [Accessed on 03
January 2015].
Pettersson, L. W., Spak, U., and Seipel, S. (2004). Collab-
orative 3D Visualizations of Geo-Spatial Information
for Command and Control. In SIGRAD 2004, Con-
ference of the Swedish Eurographics Chapter, Gävle,
Sweden.
Rocha, R. V., Rocha, R. V., and Araújo, R. B. (2010). Se-
lecting the Best Open Source 3D Games Engines. In
SBGAMES 2010, Brazilian Symposium on Games and
Digital Entertainment, Florianópolis, Santa Catarina,
Brazil.
Shanker, T. and Richtel, M. (2011). In New Military, Data
Overload Can Be Deadly. http://www.nytimes.com/.
[Accessed on 03 January 2015].
Zocco, A., Cannone, D., and De Paolis, L. T. (2014a). Ef-
fects of Stereoscopy on a Human-Computer Interface
for Network Centric Operations. In VISAPP 2014, 9th
International Conference on Computer Vision Theory
and Applications, Lisbon, Portugal.
Zocco, A., Livatino, S., and De Paolis, L. T. (2014b).
Stereoscopic-3D Vision to Improve Situational
Awareness in Military Operations. In AVR 2014, 1st
International Conference on Augmented and Virtual
Reality, volume 8853 of Lecture Notes in Computer
Science, Lecce, Italy.
VISAPP2015-InternationalConferenceonComputerVisionTheoryandApplications
214