3 NEW PRESENTATION LAYER
ARCHITECTURE
The essential GFTCCS applications that must be used
in a new presentation layer with virtual reality
devices (PLVR) are the tactical situation (TS),
identification of friendly positions (FP), identification
of enemy positions (EP), and electronic overlays (EO).
These applications provide inputs from GFTCCS to the
PLVR and are used to visualize quantitative and
qualitative information about friendly and enemy
forces and other battlefield objects. The figure
below shows the global architecture of the
new GFTCCS presentation layer.
Figure 4: PLVR architecture.
The input layer is designed as a web service. It can be
implemented as a Service Oriented Architecture
(SOA) to achieve interoperability in the case
of interconnection with another C2 system. This
layer provides services such as the current position of
friendly units. In the implementation scope, this layer
connects to MS SQL Server, which stores the positions
of units together with the codes used to visualize their
APP-6a tactical symbols. MS SQL Server also contains
complementary data about units and their hierarchy;
this information can be used by an aggregation
function. The input layer also provides bitmap
representations of the APP-6a tactical symbols.
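As an illustration, the position service of the input layer could be sketched as follows. The record fields, unit identifiers, and APP-6a codes here are assumptions made for the example, not the actual GFTCCS schema:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record shape for the input-layer position service;
# the field names are illustrative, not the actual GFTCCS schema.
@dataclass
class UnitPosition:
    unit_id: str
    lat: float
    lon: float
    app6a_code: str                  # selects the APP-6a symbol bitmap
    parent_id: Optional[str] = None  # hierarchy link used by aggregation

# In-memory stand-in for the MS SQL Server tables described above.
UNITS = [
    UnitPosition("A-1", 50.01, 14.42, "SFGPUCI", "A"),
    UnitPosition("A-2", 50.02, 14.44, "SFGPUCI", "A"),
]

def current_friendly_positions() -> list:
    """Service operation: current positions of friendly units."""
    return [vars(u) for u in UNITS]

def aggregate_by_parent(units) -> dict:
    """Aggregation over the unit hierarchy (subordinate counts)."""
    counts = {}
    for u in units:
        if u.parent_id:
            counts[u.parent_id] = counts.get(u.parent_id, 0) + 1
    return counts
```

In a real deployment the two functions would be exposed as SOA service operations backed by SQL queries rather than an in-memory list.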
The terrain database generator offers a 3D model of
the area of interest that is prepared from map
sources. These digital data are shared in GFTCCS or
can be located in data storage. Generation is a
computationally demanding operation and therefore runs
separately on a local computer based on a 64-bit platform.
Connectivity with the graphics engine is solved by the SOA
implementation.

The VR HW input layer supports the
interconnection between the data glove and the
graphics engine and interprets the glove's positions in the
VR environment coordinate system. It is
implemented as an API with local function calls. This
layer also implements the gesture recognition and
translation that the commander can use to
operate the VR environment.
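The coordinate translation and gesture recognition performed by the VR HW input layer might be sketched as below; the calibration transform, flexion thresholds, and gesture names are illustrative assumptions, not the actual glove API:

```python
from typing import Dict, Tuple

Vec3 = Tuple[float, float, float]

def glove_to_vr(pos: Vec3, scale: float = 1.0,
                offset: Vec3 = (0.0, 0.0, 0.0)) -> Vec3:
    """Translate a tracker-space glove position into VR world
    coordinates; a uniform scale plus offset stands in for the
    full calibration transform."""
    return tuple(scale * p + o for p, o in zip(pos, offset))

# Hypothetical mapping from gestures to commander controls.
GESTURES: Dict[str, str] = {
    "fist": "grab_map",
    "point": "select_unit",
    "open": "release",
}

def recognise(flexion: Tuple[float, ...]) -> str:
    """Classify finger flexion readings (0 = open, 1 = fully bent)
    into a control gesture; thresholds are illustrative."""
    bent = [f > 0.6 for f in flexion]
    if all(bent):
        return GESTURES["fist"]
    if not any(bent):
        return GESTURES["open"]
    if not bent[0] and all(bent[1:]):  # index extended, rest bent
        return GESTURES["point"]
    return "none"
```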
The VR graphics engine visualizes the area of interest
in 3D using data from the input devices, the input layer, and
preprocessed 3D models of trees, roads, buildings,
etc. The final scene is sent to the VR HW output layer,
which must adapt it for visualization on an HMD, LCD
systems, a projection system, or a combination of
these devices.
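A minimal sketch of the output-layer dispatch, assuming one correction function per device class; the device names and correction flags are placeholders, not the real HMD/LCD/projection pipelines:

```python
from typing import Callable, Dict, List

# A scene frame is abstracted as a plain dict; the per-device
# functions stand in for lens distortion correction, stereo
# splitting, or edge blending done by the VR HW output layer.
Frame = dict

def correct_for_hmd(frame: Frame) -> Frame:
    return {**frame, "stereo": True, "lens_corrected": True}

def correct_for_lcd(frame: Frame) -> Frame:
    return {**frame, "stereo": False}

def correct_for_projection(frame: Frame) -> Frame:
    return {**frame, "edge_blended": True}

OUTPUTS: Dict[str, Callable[[Frame], Frame]] = {
    "hmd": correct_for_hmd,
    "lcd": correct_for_lcd,
    "projection": correct_for_projection,
}

def dispatch(frame: Frame, devices: List[str]) -> List[Frame]:
    """Send the final scene to every attached device, applying the
    correction that device requires (combinations are allowed)."""
    return [OUTPUTS[d](frame) for d in devices]
```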
Figure 5: Panoramic visualization.
4 DATA REPRESENTATION
The current state of the art in data representation is
set by the US Force XXI Battle Command Brigade
and Below (FBCB2) system and its new presentation
layer component, Command and Control in 3
Dimensions, which renders battlefield information
into a 3D environment in real time (“CG2
C3D”, 2007). However, this solution uses neither VR
devices nor 3D tactical symbol representation.

Data representation is based on an ontology that was
designed to interpret knowledge of the NEC concept for
human beings (Hodicky, 2009). The topic maps method
was chosen to describe information of the NEC domain.
Tactical symbols are visualized as semi-transparent blocks
or spatial objects. They also carry the APP-6a bitmaps on
their surfaces together with other important information
about the current status of the unit. Additional information
(combat efficiency, velocity, fuel, etc.) is visualized as
bar graphs.
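The symbol representation described above could be captured by a structure like the following; the field names, bitmap key, and normalized bar values are illustrative assumptions:

```python
from dataclasses import dataclass, field
from typing import Dict

# Illustrative structure for one visualized tactical symbol:
# a semi-transparent block carrying the APP-6a bitmap on its
# faces and bar graphs for the additional status information.
@dataclass
class TacticalSymbol3D:
    app6a_bitmap: str     # key of the APP-6a bitmap texture
    alpha: float = 0.5    # semi-transparency of the block
    bars: Dict[str, float] = field(default_factory=dict)

    def set_status(self, combat_efficiency: float,
                   velocity: float, fuel: float) -> None:
        """Store the additional information rendered as bar
        graphs; values are normalized to the 0..1 bar range."""
        self.bars = {
            "combat_efficiency": combat_efficiency,
            "velocity": velocity,
            "fuel": fuel,
        }

sym = TacticalSymbol3D(app6a_bitmap="SFGPUCI.png")
sym.set_status(combat_efficiency=0.8, velocity=0.3, fuel=0.6)
```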
5 POSSIBLE ENHANCEMENTS
A new presentation layer with VR devices is a
supplementary tool for obtaining the common operational
picture of ground forces. The main tool that supports the
command and control process remains the old
presentation layer; the new one enhances the ability of
GFTCCS to show the common operational
picture in 3D. Information from the air forces domain
can also be sent into this presentation layer. The
current communication between aircraft and ground