networks to learn how to recognize sorghum plants.
They describe the rather involved preprocessing and
formatting steps that are necessary before learning
can take place. They also had to develop a technique
to increase the number of labelled training images.
Learning itself took between 50,000 and 500,000 iterations, which entails a very heavy computing load. They obtained a Mean Absolute Percentage Error (MAPE) of 6.7%. It is not possible to tell whether the data sets used included overlapping plants.
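To make the reported metric concrete, the short Python sketch below (illustrative only, not taken from the cited work) shows how a Mean Absolute Percentage Error over per-image plant counts is computed; the counts used are made-up values.

import numpy as np

def mape(true_counts, predicted_counts):
    # Mean Absolute Percentage Error, expressed in percent.
    t = np.asarray(true_counts, dtype=float)
    p = np.asarray(predicted_counts, dtype=float)
    return 100.0 * np.mean(np.abs(t - p) / t)

# Hypothetical ground-truth vs. predicted plant counts for three images.
print(mape([120, 95, 143], [112, 101, 139]))  # approximately 5.3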
The density-based approach is illustrated in (Gnädinger and Schmidhalter, 2017). They first eliminate what can be presumed to be weeds and parasitic signals using a clustering method. Then they set thresholds on different wavelengths in order to classify pixels as belonging to plants or not, which requires some fine tuning. They obtain error rates around 5%, with fairly large standard deviations. Here too, plant overlap leads to a deterioration in performance.
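For concreteness, the sketch below illustrates the general idea of classifying pixels by thresholding a colour index, here the excess-green index combined with an automatic Otsu threshold (Otsu, 1979); this is a simplified illustration under our own assumptions, not the exact wavelength thresholds used by Gnädinger and Schmidhalter.

import numpy as np
from skimage.filters import threshold_otsu

def plant_mask(rgb):
    # rgb: float array of shape (H, W, 3) with values in [0, 1].
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2.0 * g - r - b            # excess-green colour index
    t = threshold_otsu(exg)          # automatic threshold (Otsu, 1979)
    return exg > t                   # True where a pixel is likely vegetation

In practice such thresholds usually need per-field adjustment, which corresponds to the fine tuning mentioned above.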
5 CONCLUSIONS
With the generalization of image-capturing devices, it is increasingly critical to develop reliable and transparent vision systems (Olszewska, 2019). This paper has introduced a new method to count objects while satisfying these constraints. It is applicable when objects are spatially organized according to a regular pattern. The method first detects the pattern and then uses it to seed agents in a MAS. The method is simple, requiring neither complex fine tuning of parameters, nor tricky template definitions, nor costly learning. In fact, it requires very modest computing resources. In a series of extensive experiments on controlled data sets and real aerial images of crop fields, the method yielded state-of-the-art or better performance when the objects are well separated and exceeded the best known performance when they overlap. For future work, we plan to test the method on other object-counting problems with different geometries, such as counting people in stadiums or performance halls, or vehicles in parking lots.
ACKNOWLEDGEMENTS
We thank Terres Inovia for sharing their dataset of crop field images captured with a UAV.
REFERENCES
García-Martínez, H., Flores-Magdaleno, H., Khalil-Gardezi, A., Ascencio-Hernández, R., Tijerina-Chávez, L., Vázquez-Peña, M. A., and Mancilla-Villa, O. R. (2020). Digital count of corn plants using images taken by unmanned aerial vehicles and cross correlation of templates. Agronomy, 10(4):469.
Gnädinger, F. and Schmidhalter, U. (2017). Digital counts of maize plants by unmanned aerial vehicles (UAVs). Remote Sensing, 9(6):544.
Guerrero, J. M., Pajares, G., Montalvo, M., Romeo, J., and Guijarro, M. (2012). Support vector machines for crop/weeds identification in maize fields. Expert Systems with Applications, 39(12):11149–11155.
Guijarro, M., Pajares, G., Riomoros, I., Herrera, P., Burgos-Artizzu, X., and Ribeiro, A. (2011). Automatic segmentation of relevant textures in agricultural images. Computers and Electronics in Agriculture, 75(1):75–83.
Han, S., Zhang, Q., Ni, B., and Reid, J. (2004). A guidance directrix approach to vision-based vehicle guidance systems. Computers and Electronics in Agriculture, 43(3):179–195.
Hofmann, P. (2019). Multi-agent systems in remote sensing image analysis. In Proceedings of the 11th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART 2019, pages 178–185. INSTICC, SciTePress.
Olszewska, J. (2019). Designing transparent and autonomous intelligent vision systems. In Proceedings of the 11th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART 2019, pages 850–856. INSTICC, SciTePress.
Otsu, N. (1979). A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics, 9(1):62–66.
Pérez-Ortiz, M., Peña, J. M., Gutiérrez, P. A., Torres-Sánchez, J., Hervás-Martínez, C., and López-Granados, F. (2016). Selecting patterns and features for between- and within-crop-row weed mapping using UAV-imagery. Expert Systems with Applications, 47:85–94.
Perlin, K. (1985). An image synthesizer. ACM SIGGRAPH Computer Graphics, 19(3):287–296.
Ribera, J., Chen, Y., Boomsma, C., and Delp, E. J. (2017). Counting plants using deep learning. In 2017 IEEE Global Conference on Signal and Information Processing (GlobalSIP), pages 1344–1348. IEEE.
Unity Technologies (2020). Unity 2019.4.1.
Zou, Z., Shi, Z., Guo, Y., and Ye, J. (2019). Object detection in 20 years: A survey. arXiv:1905.05055 [cs].