False positives accounted for just 4.5% of predictions and contained only forest images, most of them of low quality, which may indicate some overfitting of the detection model. Still, given its AUC of 94.8%, we conclude that the model identifies the generated synthetic fire images very well, further reinforcing the quality of the generator.
Integrated into the simulation platform, this module raises a number of questions, particularly concerning the generation procedure, which is computationally more expensive than detection. A workable trade-off was found to keep the module usable, but there is still room for improvement.
The implemented Fire Module interoperates with the external Bing Maps REST Services through HTTP requests. This communication may suffer from overhead, mostly because of the variable latency of the respective servers, which may also be overloaded and thus subject to longer response times. To tackle this problem it is essential to reduce the number of requests issued, either by caching the tiles of frequently used routes or by acquiring tiles of larger resolutions. The latter cover a larger surface area, which can then be segmented to match the onboard camera's field of view at the aircraft's position. Prefetching, whereby tiles are retrieved in advance along the predefined trajectory, could also prove beneficial; a sketch combining these ideas follows.
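As an illustration only, the Python sketch below assumes a hypothetical fetch_fn callback standing in for the actual Bing Maps REST request. It combines an LRU cache, which keeps the tiles of frequently used routes in memory, with a prefetch step that warms the cache along a planned trajectory.

```python
from collections import OrderedDict

class TileCache:
    """LRU cache for map tiles keyed by (zoom, x, y).

    fetch_fn is any callable that retrieves a tile from the remote
    provider (e.g. an HTTP request to the tiles service); it is only
    invoked on a cache miss.
    """

    def __init__(self, fetch_fn, capacity=256):
        self.fetch_fn = fetch_fn
        self.capacity = capacity
        self._tiles = OrderedDict()

    def get(self, zoom, x, y):
        key = (zoom, x, y)
        if key in self._tiles:
            self._tiles.move_to_end(key)       # mark as recently used
            return self._tiles[key]
        tile = self.fetch_fn(zoom, x, y)       # network request on miss
        self._tiles[key] = tile
        if len(self._tiles) > self.capacity:
            self._tiles.popitem(last=False)    # evict least recently used
        return tile

    def prefetch(self, zoom, trajectory):
        """Warm the cache with the tiles covering a planned trajectory,
        given as an iterable of (x, y) tile coordinates derived from
        the aircraft's flight plan."""
        for x, y in trajectory:
            self.get(zoom, x, y)
```

Acquiring tiles of larger resolutions fits the same structure: the segmentation step would simply operate on the cached image instead of issuing a fresh request.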
The insertion of fire into bird's eye frames currently relies on an internal feature of the Bing Maps API that draws polygons on demand, which must then be extracted manually. As a consequence, every drawing on the bird's eye perspective triggers an additional request to the external tiles provider, which is infeasible. This issue should be resolved, and taken into account in all other solutions integrated later on.
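One possible direction, sketched below with the Pillow library, is to composite the fire polygon locally onto a tile that was already retrieved and cached, so that no extra request to the provider is needed. The polygon coordinates and fill colour are placeholders, not the module's actual parameters.

```python
from PIL import Image, ImageDraw

def composite_fire(tile: Image.Image, polygon, color=(255, 69, 0, 160)):
    """Draw a semi-transparent fire polygon onto a tile locally.

    tile    -- the bird's eye image already retrieved (and cached)
    polygon -- list of (x, y) pixel coordinates of the fire front
    color   -- RGBA fill; the alpha channel keeps the terrain visible
    """
    overlay = Image.new("RGBA", tile.size, (0, 0, 0, 0))
    ImageDraw.Draw(overlay).polygon(polygon, fill=color)
    # Alpha-blend locally instead of asking the provider to draw it.
    return Image.alpha_composite(tile.convert("RGBA"), overlay)
```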
Since image generation proved to perform differently depending on the environment, further developments could also split the classification task into two specialised models, one trained on forest scenarios and the other on urban ones.
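A minimal sketch of this split, assuming two hypothetical pre-trained models exposing a common predict() method and an environment label derived from the underlying map data:

```python
def classify(frame, environment, forest_model, urban_model):
    """Route a frame to the model trained on its environment.

    environment -- 'forest' or 'urban', assumed known from map data
    """
    model = forest_model if environment == "forest" else urban_model
    return model.predict(frame)
```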
The incorporated lightweight detector based on ERNet shows very promising results and opens up the opportunity to generalise the pipeline concept to other types of disturbances. It could help identify, for example, the building collapses, floods, or traffic incidents already targeted by the detector. This would enable the comparison of different multi-vehicle approaches and help build a deeper understanding of which works best in each case. One could then study these catastrophic scenarios from the air in order to define a sequence of priority actions to be carried out by the formation of aircraft.
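As a rough illustration of this generalisation (every type name and action below is hypothetical, not part of the current pipeline), each disturbance class reported by the detector could be mapped to an ordered sequence of priority actions for the formation:

```python
from enum import Enum

class Disturbance(Enum):
    FIRE = "fire"
    BUILDING_COLLAPSE = "building_collapse"
    FLOOD = "flood"
    TRAFFIC_INCIDENT = "traffic_incident"

# Hypothetical priority actions per disturbance type, to be refined
# by studying each catastrophic scenario from the air.
PRIORITY_ACTIONS = {
    Disturbance.FIRE: ["map_perimeter", "track_spread", "relay_to_ground"],
    Disturbance.BUILDING_COLLAPSE: ["survey_rubble", "locate_survivors"],
    Disturbance.FLOOD: ["trace_waterline", "find_stranded_vehicles"],
    Disturbance.TRAFFIC_INCIDENT: ["monitor_congestion", "guide_responders"],
}

def plan_for(disturbance: Disturbance) -> list[str]:
    """Return the ordered action sequence for a detected disturbance."""
    return PRIORITY_ACTIONS[disturbance]
```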