of the paper contain a description of the procedure
setup for the active scenario realization and the data
employed to enrich the underwater scene. Finally, a
case study will be presented, where the described
method will be applied to the representation of an
underwater archaeological site, based on a data
capture campaign performed in a realistic underwater
scenario.
2 THE SYSTEM, METHODS AND
TECHNOLOGIES
A system designed for research and dissemination
in underwater archaeology must fulfil many
requirements: it must offer an accurate reproduction
of the site and of the objects lying in it; it must let
users explore the scene at their own level of
expertise, with an interaction as natural as possible,
and possibly allow some actions (e.g. exporting 3D
models). An example of a virtual reality tool devoted
to the visualization of the underwater environment
and to the simulation of underwater robotics is
UWSim (M. Prats, 2012). Our system is a more
sophisticated tool; indeed, it meets the need for an
easy, but not simplistic, visualization of the data
collected and available about an underwater
archaeological site.
The major features of the system, as a tool for
dissemination and research in the underwater cultural
heritage, are listed below:
⋅ Differentiated usage according to user type;
⋅ Interactive, informative and immersive;
⋅ Accurate virtual reconstruction.
In order to realize a system able to fulfil the
requirements and purposes described above, the
system has been designed and developed exploiting
advanced technologies and enriched with a set of
dedicated functionalities. Indeed, the system is
capable of adapting to the various needs of users,
providing different functionalities for different
user approaches. By distinguishing two different
kinds of users and, along with them, two different
approaches, the system has been developed both as a
technical tool for specialized users and as a powerful
dissemination tool for the general public.
The system provides a set of scenes that the user can
freely explore. These scenes are the result of a
processing pipeline dedicated to 3D reconstruction,
starting from raw data acquired by suitably equipped
Autonomous Underwater Vehicles (AUVs) during
dedicated campaigns.
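The acquisition-to-model flow just described can be sketched as follows; the stage names and the toy intensity-based detector are illustrative assumptions, not the actual ARROWS algorithms:

```python
# Hypothetical sketch of the acquisition-to-model pipeline: raw frames
# from an AUV mission are screened for interesting content, and the
# selected frames feed a reconstruction stage. All names and data
# shapes are assumptions for illustration.

def detect_artefacts(frames):
    # Stand-in detector: flag frames whose mean grey level deviates
    # strongly from the sequence average (not the real algorithm).
    means = [sum(f) / len(f) for f in frames]
    overall = sum(means) / len(means)
    return [i for i, m in enumerate(means) if abs(m - overall) > 50.0]

def reconstruct(frames, selected):
    # Stand-in for photogrammetric reconstruction: emit one coloured
    # 3D point (x, y, z, r, g, b) per selected frame.
    return [(float(i), 0.0, 0.0, 128, 128, 128) for i in selected]

frames = [[100] * 4, [100] * 4, [200] * 4]   # toy grey-level frames
interesting = detect_artefacts(frames)        # -> [2]
cloud = reconstruct(frames, interesting)
```

In practice each stage would be a separate tool in the pipeline; the point of the sketch is only the data flow from raw frames to a coloured point cloud.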
The user can interact with the scene objects and
access additional data concerning them. These data
are:
⋅ videos captured during the ARROWS
missions or already available from pre-
existing resources;
⋅ raw data captured by different sensors
(sonograms, etc.);
⋅ the complete reconstructed 3D mesh of the
objects, displayed separately from the scene
and available for observations from multiple
points of view;
⋅ any supplementary information.
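The per-object resources listed above suggest a simple record structure; the field names and file paths below are hypothetical, not the system's actual schema:

```python
from dataclasses import dataclass, field

# Illustrative record linking a scene object to its associated
# resources (mesh, videos, raw sensor data, supplementary notes).
@dataclass
class SceneObject:
    name: str
    mesh_path: str                                # reconstructed 3D mesh (.ply)
    videos: list = field(default_factory=list)    # mission or archival videos
    raw_data: list = field(default_factory=list)  # sonograms, sensor logs
    notes: dict = field(default_factory=dict)     # supplementary information

jar = SceneObject(name="amphora_01", mesh_path="meshes/amphora_01.ply")
jar.videos.append("missions/2014_07/video_03.mp4")
jar.notes["material"] = "terracotta"
```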
Moreover, the system is connected to a database
that manages historical information about the objects
represented in the scenes. These data describe several
features of each object, such as its dimensions,
material and history (of particular interest to
archaeologists).
The system also provides a set of functionalities
dedicated to measuring the models. This tool can be
very useful to obtain further information about the
discovered artefacts and, as in the case of jars, to
classify their type and/or infer their function (i.e.
distinguish between funerary, wine and food jars).
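As a minimal sketch of such a measurement, assuming the model coordinates are expressed in metres, the distance between two points picked on a mesh reduces to a Euclidean norm; the sample coordinates are invented:

```python
import math

# Distance between two picked points on a 3D model, e.g. to estimate
# a jar's height for classification. Coordinates assumed in metres.
def distance(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

rim = (0.12, 0.05, 0.84)    # hypothetical picked point on the rim
base = (0.10, 0.03, 0.31)   # hypothetical picked point on the base
height = distance(rim, base)
```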
2.1 From Data to the 3D Model:
The Pipeline
As mentioned before, data about the underwater
environment are acquired by AUVs during dedicated
missions. The AUVs are equipped with several sensors
(such as optical cameras, side scan sonar, etc.) whose
data contribute to the reconstruction of the scene. The
processing pipeline in charge of the 3D reconstruction
is described in the following.
Once the mission is completed, the data are
downloaded from the AUVs' internal memories. The
downloaded data are then processed with algorithms
devoted to the detection of artefacts (D. Moroni, 2013
and D. Moroni, 2013). System operators can select the
most interesting scenes in order to perform further,
more accurate analysis and to reconstruct the artefact
3D models from them. The sequences are analysed
frame by frame. The system automatically calibrates
the video frames, balances the colours and corrects
other aberrations introduced during the acquisition
phase (R. Prados, 2014). Exploiting advanced
photogrammetry algorithms and tools, the system
correlates each extracted frame with the others and
generates a point cloud, stored as a .ply file, a
standard file format for point clouds as well as meshes
that also contains RGB information for every point (O.
IMTA-5 2015 - 5th International Workshop on Image Mining. Theory and Applications
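The pipeline stores each point cloud as a .ply file carrying RGB values for every point; a minimal ASCII PLY writer illustrating that layout (an assumed example for clarity, not the project's actual exporter) is:

```python
# Write a coloured point cloud in the ASCII PLY format: a header
# declaring float x/y/z and uchar red/green/blue properties, followed
# by one line per point.
def write_ply(path, points):
    """points: iterable of (x, y, z, r, g, b) tuples."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        for axis in "xyz":
            f.write(f"property float {axis}\n")
        for channel in ("red", "green", "blue"):
            f.write(f"property uchar {channel}\n")
        f.write("end_header\n")
        for x, y, z, r, g, b in points:
            f.write(f"{x} {y} {z} {r} {g} {b}\n")

write_ply("cloud.ply", [(0.0, 0.0, 0.0, 200, 180, 150)])
```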