
different characteristics, it can suggest reasonable
model choices in other domains.
In future work, it is important to expand the analysis by including other image datasets, making it more comprehensive. The investigation could also be extended to pre-trained models that were not considered within the scope of this work. Furthermore, future work could examine the relationship between the underlying principles of each architecture, the properties of the datasets used to pre-train these models, and the properties of the target datasets from which the pre-trained models extract features. Such an investigation could reveal what makes a given pre-trained model best suited to each task.
ACKNOWLEDGMENTS
The authors would like to thank the Brazilian National Council for Scientific and Technological Development (CNPq) and Petrobras for the financial support of this work.
An Evaluation of Pre-Trained Models for Feature Extraction in Image Classification