In future work, we will further investigate possible biases present in medical imaging analysis problems. We will extend the evaluation to other datasets, employing interpretability techniques to help us categorize existing biases in the data, as sketched below. Additionally, we will investigate which preprocessing techniques are viable for reducing the impact of noise and acquisition artifacts on model performance.
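As an illustration of the kind of interpretability analysis we have in mind, the sketch below applies Grad-CAM to highlight the image regions that drive a CNN's prediction; heatmaps concentrated on acquisition artifacts or annotations rather than anatomy would point to dataset bias. The backbone (a torchvision ResNet-50 with ImageNet weights), the choice of model.layer4 as the hooked layer, and the grad_cam helper are illustrative assumptions, not the exact setup of our experiments.

```python
# Minimal Grad-CAM sketch for bias inspection. The backbone and the hooked
# layer are assumptions for illustration only.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.eval()

activations, gradients = {}, {}

def save_activation(module, inputs, output):
    activations["value"] = output.detach()

def save_gradient(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

# Hook the last convolutional block; other layers can be probed the same way.
model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

def grad_cam(image, class_idx=None):
    """Return an (H, W) heatmap in [0, 1] for a (1, 3, H, W) normalized image."""
    logits = model(image)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()

    acts = activations["value"][0]        # (C, h, w) feature maps
    grads = gradients["value"][0]         # (C, h, w) gradients w.r.t. the maps
    weights = grads.mean(dim=(1, 2))      # global-average-pooled gradients
    cam = F.relu((weights[:, None, None] * acts).sum(dim=0))
    cam = F.interpolate(cam[None, None], size=image.shape[-2:],
                        mode="bilinear", align_corners=False)[0, 0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

# Example usage (hypothetical input): heatmap = grad_cam(preprocessed_xray_batch)
```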
ACKNOWLEDGMENTS
This research was supported by São Paulo Research Foundation (FAPESP) [grant numbers 2015/11937-9, 2017/12646-3 and 2017/21957-2], and the National Council for Scientific and Technological Development (CNPq) [grant numbers 140929/2021-5, 161015/2021-2 and 304380/2018-0].