Authors:
Nerea Aranjuelo 1,2; Jorge García 2; Luis Unzueta 2; Sara García 2; Unai Elordi 1,2 and Oihana Otaegui 2
Affiliations:
1 Basque Country University (UPV/EHU), San Sebastian, Spain
2 Vicomtech, Basque Research and Technology Alliance (BRTA), San Sebastian, Spain
Keyword(s):
Simulated Environments, Synthetic Data, Deep Neural Networks, Object Detection, Video Surveillance.
Abstract:
Synthetic simulated environments are gaining popularity in the Deep Learning era, as they can alleviate the effort and cost of two critical tasks in building multi-camera systems for surveillance applications: setting up the camera system to cover the use cases and generating the labeled dataset to train the required Deep Neural Networks (DNNs). However, there are no simulated environments ready to solve these tasks for all kinds of scenarios and use cases. Typically, ad hoc environments are built, which cannot be easily applied to other contexts. In this work, we present a methodology to build synthetic simulated environments general enough to be usable in different contexts with little effort. Our methodology tackles the challenges of appropriately parameterizing scene configurations, of strategies to randomly generate a wide and balanced range of situations of interest for training DNNs with synthetic data, and of fast image capture from virtual cameras, considering rendering bottlenecks. We show a practical implementation example for the detection of incorrectly placed luggage in aircraft cabins, including a qualitative and quantitative analysis of the data generation process and its influence on DNN training, as well as the modifications required to adapt it to other surveillance contexts.