In addition, for satisfactory training and inference results, the dataset must be divided into small tiles, which reduces the number of classes present in each image.
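For concreteness, the following is a minimal sketch of such a tiling step, assuming images stored as NumPy arrays; the tile size and stride are illustrative, not the values used in our experiments.

```python
import numpy as np

def tile_image(image: np.ndarray, tile: int = 64, stride: int = 64):
    """Split an H x W x C image into square tiles.

    Tile size and stride are illustrative. Edge regions smaller than
    `tile` are discarded here for simplicity; overlapping tiles can be
    produced by choosing a stride smaller than the tile size.
    """
    h, w = image.shape[:2]
    tiles = []
    for y in range(0, h - tile + 1, stride):
        for x in range(0, w - tile + 1, stride):
            tiles.append(image[y:y + tile, x:x + tile])
    return tiles

# Example: a 512 x 512 mosaic yields 64 non-overlapping 64 x 64 tiles.
mosaic = np.zeros((512, 512, 3), dtype=np.uint8)
print(len(tile_image(mosaic)))  # 64
```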
The theoretical and experimental study of solutions for training a classifier on unbalanced data is of great importance for future work, since most environmental monitoring applications rely on disproportionately distributed class data.
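One common remedy, sketched below under the assumption of a PyTorch classifier (the framework choice is illustrative, not part of our method), is to weight the cross-entropy loss by inverse class frequency; resampling schemes such as SMOTE (Chawla et al., 2002) are an alternative.

```python
import torch
import torch.nn as nn

def inverse_frequency_weights(labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    """Weight each class inversely to its frequency in the training labels."""
    counts = torch.bincount(labels, minlength=num_classes).float()
    return counts.sum() / (num_classes * counts.clamp(min=1))

# Hypothetical label distribution in which class 0 dominates.
labels = torch.tensor([0] * 900 + [1] * 80 + [2] * 20)
weights = inverse_frequency_weights(labels, num_classes=3)
criterion = nn.CrossEntropyLoss(weight=weights)  # rare classes now count more
```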
We intend to apply the concepts and lessons learned to new datasets with more classes and more data. Also, for future work, pixel-wise semantic segmentation deep learning models (Badrinarayanan et al., 2015; Chen et al., 2018; Ronneberger et al., 2015) may be used to classify the plant species, which would make it possible to classify whole images containing multiple classes at once, without the need to crop them into small tiles.
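As a rough illustration of that direction, the sketch below builds a toy encoder-decoder in PyTorch that outputs per-pixel class logits for a whole image. It is a stand-in for the SegNet, DeepLab, and U-Net architectures cited above, not a reproduction of any of them, and the channel sizes and class count are arbitrary.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Toy encoder-decoder: downsample twice, upsample twice, and emit
    one logit map per class, so a full image is labeled pixel-wise in a
    single forward pass, with no tiling required."""

    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # H/2 x W/2
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # H/4 x W/4
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, num_classes, 1),        # per-pixel class logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# A whole 256 x 256 image yields a 5-channel logit map of the same size.
model = TinySegNet(num_classes=5)
logits = model(torch.zeros(1, 3, 256, 256))
print(logits.shape)  # torch.Size([1, 5, 256, 256])
```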
ACKNOWLEDGEMENT
This work was partially funded by a Master's scholarship supported by the National Council for Scientific and Technological Development (CNPq) at the Pontifical Catholic University of Rio de Janeiro, Brazil.
REFERENCES
Aitkenhead, M., Dalgetty, I., Mullins, C., McDonald, A. J. S., and Strachan, N. J. C. (2003). Weed and crop discrimination using image analysis and artificial intelligence methods. Computers and Electronics in Agriculture, 39(3):157–171.
Badrinarayanan, V., Kendall, A., and Cipolla, R. (2015). SegNet: A deep convolutional encoder-decoder architecture for image segmentation. arXiv preprint arXiv:1511.00561.
Chawla, N. V., Bowyer, K. W., Hall, L. O., and Kegelmeyer, W. P. (2002). SMOTE: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16:321–357.
Chen, L.-C., Papandreou, G., Kokkinos, I., Murphy, K., and Yuille, A. L. (2018). DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4):834–848.
Cireşan, D. C., Meier, U., Gambardella, L. M., and Schmidhuber, J. (2010). Deep, big, simple neural nets for handwritten digit recognition. Neural Computation, 22(12):3207–3220.
Goyal, P., Dollár, P., Girshick, R., Noordhuis, P., Wesolowski, L., Kyrola, A., Tulloch, A., Jia, Y., and He, K. (2018). Accurate, large minibatch SGD: Training ImageNet in 1 hour. arXiv preprint arXiv:1706.02677.
Gu, J., Wang, Z., Kuen, J., Ma, L., Shahroudy, A., Shuai, B., Liu, T., Wang, X., Wang, G., Cai, J., et al. (2018). Recent advances in convolutional neural networks. Pattern Recognition, 77:354–377.
He, K. and Sun, J. (2015). Convolutional neural networks at constrained time cost. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5353–5360.
He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778.
Horler, D., Dockray, M., and Barber, J. (1983). The red edge of plant leaf reflectance. International Journal of Remote Sensing, 4(2):273–288.
Jolliffe, I. (2011). Principal component analysis. In International Encyclopedia of Statistical Science, pages 1094–1096. Springer.
Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105.
Liu, G. and Gifford, D. (2017). Visualizing feature maps in deep neural networks using DeepResolve: A genomics case study. ICML Visualization Workshop.
Maaten, L. v. d. and Hinton, G. (2008). Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605.
Nogueira, K., Dos Santos, J. A., Fornazari, T., Silva, T. S. F., Morellato, L. P., and Torres, R. d. S. (2016). Towards vegetation species discrimination by using data-driven descriptors. In Pattern Recognition in Remote Sensing (PRRS), 2016 9th IAPR Workshop on, pages 1–6. IEEE.
Pass, G., Zabih, R., and Miller, J. (1997). Comparing images using color coherence vectors. In Proceedings of the Fourth ACM International Conference on Multimedia, pages 65–73. ACM.
Refaeilzadeh, P., Tang, L., and Liu, H. (2009). Cross-validation. In Encyclopedia of Database Systems, pages 532–538. Springer.
Ronneberger, O., Fischer, P., and Brox, T. (2015). U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234–241. Springer.
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al. (2015). ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252.
Simonyan, K., Vedaldi, A., and Zisserman, A. (2013). Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034.
Stehling, R. O., Nascimento, M. A., and Falcão, A. X. (2002). A compact and efficient image retrieval approach based on border/interior pixel classification. In Proceedings of the Eleventh International Conference on Information and Knowledge Management (CIKM). ACM.