
This would help ensure that anatomical features at diverse scales do not degrade the accuracy of deep learning-based systems, thereby improving the reliability of diagnostic inference in clinical practice.