ages included in the Bonn Dataset (Chebrolu et al., 2017), using a modified version of the RGB SegNet (Badrinarayanan et al., 2015) as the convolutional neural network (CNN), together with a synthetically generated dataset of RGB images resized to 480×360 pixels.
6 CONCLUSIONS AND FUTURE WORK
The main contribution of this paper is the addition of a near infrared sensor to a simulation environment in order to generate datasets of plant-weed images that include red, green, blue and near infrared data. The results were tested against real data with a convolutional neural network (CNN) designed for plant-weed segmentation classification. The classification was evaluated using the mean intersection over union (IoU) and the accuracy: both metrics increase when the near infrared data are added, with the most prominent improvement obtained when using the synthetic data.
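As a reference for how these two metrics can be computed, the following is a minimal sketch (not the evaluation code used in this work; the function name, array layout and class labels are assumptions):

import numpy as np

def mean_iou_and_accuracy(pred, target, num_classes=3):
    """Compute mean IoU and pixel accuracy for two integer label maps
    of identical shape (e.g. 0 = soil, 1 = crop, 2 = weed -- assumed labels)."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                      # skip classes absent from both maps
            ious.append(inter / union)
    accuracy = (pred == target).mean()     # fraction of correctly labelled pixels
    return float(np.mean(ious)), float(accuracy)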
The results obtained suggest that this method can serve as a basis for the simulation of other types of plants and weeds, with the purpose of providing a reliable source of data for generating CNN training datasets. Simulation parameters such as plant size, illumination intensity and color variations can be adjusted to fit a specific environmental condition and thus achieve similar classification results. Additional types of plants and weeds can also be added through new textures or new 3D meshes, depending on their complexity. If the general shape of a new plant is the same, only additional textures and simple adjustments to the 3D meshes would be required. The developed simulator generates sugar beet leaves using a simple rectangular bent mesh, which then takes the shape of the leaf through a transparency shader that renders only the leaf pixels. Carrots, for example, could therefore be added easily, since they also have a bent stem with leaves that follow, to some degree, the pattern of the bent stem. For some studies, simple new textures could be enough; if more detail is required, extra bent meshes could be spawned on top of the main stem to obtain more precise leaf shapes. Plants with considerably different shapes, such as sunflowers, could also be added with appropriate meshes that follow the general shape of the plant and their required textures.
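To illustrate the transparency-shader idea outside of Unity, the following is a minimal CPU-side sketch (an illustrative stand-in, not the simulator's shader; the texture file name is hypothetical):

import numpy as np
from PIL import Image

def cutout_leaf(texture_path="leaf_rgba.png"):
    """Apply the transparency-shader idea on the CPU: keep only the texels
    whose alpha channel marks them as part of the leaf, so a rectangular
    texture mapped onto a bent mesh appears with the leaf's actual shape."""
    rgba = np.asarray(Image.open(texture_path).convert("RGBA"), dtype=np.uint8)
    leaf_mask = rgba[..., 3] > 0          # alpha > 0 -> leaf pixel
    rgb = rgba[..., :3].copy()
    rgb[~leaf_mask] = 0                   # discard non-leaf texels
    return rgb, leaf_mask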
The simulator could also be modified to include more points of view, representing sensors mounted on different types of robots such as ground robots or UAVs. These robots could also be added to the simulator to perform inspection patterns and collect data for machine learning training and testing.
REFERENCES
Acker, O. V., Lachish, O., and Burnett, G. (2017). Cellular automata simulation on FPGA for training neural networks with virtual world imagery. CoRR, abs/1711.07951.
Akiyama, R., Araujo, T., Chagas, P., Miranda, B., Santos,
C., Morais, J., and Meiguins, B. (2018). Synthetic
chart image generator: An application for generating
chart image datasets. In 2018 22nd International Con-
ference Information Visualisation (IV). IEEE.
Badrinarayanan, V., Kendall, A., and Cipolla, R.
(2015). Segnet: A deep convolutional encoder-
decoder architecture for image segmentation. CoRR,
abs/1511.00561.
Bah, M., Hafiane, A., and Canals, R. (2018). Deep learn-
ing with unsupervised data labeling for weed detec-
tion in line crops in UAV images. Remote Sensing,
10(11):1690.
Carbone, C., Garibaldi, O., and Kurt, Z. (2018). Swarm
robotics as a solution to crops inspection for precision
agriculture. KnE Engineering, 3(1):552.
Carvajal, J. A., Romero, D. G., and Sappa, A. D. (2017).
Fine-tuning based deep convolutional networks for
lepidopterous genus recognition. In Progress in Pat-
tern Recognition, Image Analysis, Computer Vision,
and Applications, pages 467–475. Springer Interna-
tional Publishing.
Chebrolu, N., Lottes, P., Schaefer, A., Winterhalter, W.,
Burgard, W., and Stachniss, C. (2017). Agricultural
robot dataset for plant classification, localization and
mapping on sugar beet fields. The International Jour-
nal of Robotics Research, 36(10):1045–1052.
Cicco, M. D., Potena, C., Grisetti, G., and Pretto, A. (2016). Automatic model based dataset generation for fast and accurate crop and weeds detection. CoRR, abs/1612.03019.
Deng, W., Zhao, C., and Wang, X. (2014). Discrimina-
tion of crop and weeds on visible and visible/near-
infrared spectrums using support vector machine, ar-
tificial neural network and decision tree. Sensors &
Transducers, 26:26–34.
Duhan, J. S., Kumar, R., Kumar, N., Kaur, P., Nehra, K., and
Duhan, S. (2017). Nanotechnology: The new perspec-
tive in precision agriculture. Biotechnology Reports,
15:11–23.
Fawakherji, M., Youssef, A., Bloisi, D., Pretto, A., and
Nardi, D. (2019). Crop and weeds classification for
precision agriculture using context-independent pixel-
wise segmentation. In 2019 Third IEEE International
Conference on Robotic Computing (IRC). IEEE.
Hattori, H., Boddeti, V. N., Kitani, K., and Kanade, T.
(2015). Learning scene-specific pedestrian detectors
without real data. In 2015 IEEE Conference on Com-
puter Vision and Pattern Recognition (CVPR), pages
3819–3827.
ISPA (2020). Home | International Society of Precision Agriculture.
Juliani, A., Berges, V., Vckay, E., Gao, Y., Henry, H., Mattar, M., and Lange, D. (2018). Unity: A general platform for intelligent agents. CoRR, abs/1809.02627.