
ACKNOWLEDGEMENTS
This work was funded by the FH-KOOP project of the Weiden-Erlangen Cooperation for Sparse AI in Life Sensing under an internal grant from the Fraunhofer-Gesellschaft, and was supported by the Fraunhofer Institute for Integrated Circuits (IIS), which provided the infrastructure for carrying out the research.
REFERENCES
Akkus, Z., Galimzianova, A., Hoogi, A., Rubin, D. L., and Erickson, B. J. (2017). Deep learning for brain MRI segmentation: state of the art and future directions. Journal of Digital Imaging, 30(4):449–459.
Al-Masni, M. A. and Kim, D.-H. (2021). CMM-Net: Contextual multi-scale multi-level network for efficient biomedical image segmentation. Scientific Reports, 11(1):1–18.
Badrinarayanan, V., Kendall, A., and Cipolla, R. (2017). SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(12):2481–2495.
Beheshti, N. and Johnsson, L. (2020). Squeeze U-Net: A memory and energy efficient image segmentation network. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 1495–1504.
Cao, Y., Liu, S., Peng, Y., and Li, J. (2020). DenseUNet: Densely connected UNet for electron microscopy image segmentation. IET Image Processing, 14(12):2682–2689.
Chen, L.-C., Papandreou, G., Kokkinos, I., Murphy, K., and Yuille, A. L. (2018). DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4):834–848.
El-Assiouti, H. S., El-Saadawy, H., Al-Berry, M. N., and Tolba, M. F. (2023). Lite-SRGAN and Lite-UNet: Toward fast and accurate image super-resolution, segmentation, and localization for plant leaf diseases. IEEE Access, 11:67498–67517.
He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep resid-
ual learning for image recognition. In 2016 IEEE Con-
ference on Computer Vision and Pattern Recognition
(CVPR), pages 770–778.
Li, X., Wang, Y., Tang, Q., Fan, Z., and Wu, J. (2019). Dual
U-Net for the segmentation of overlapping glioma nu-
clei. IEEE Access, 7:84040–84052.
Long, J., Shelhamer, E., and Darrell, T. (2015). Fully convolutional networks for semantic segmentation. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3431–3440. IEEE.
Mahbod, A., Polak, C., Feldmann, K., Khan, R., Gelles, K., Dorffner, G., Woitek, R., Hatamikia, S., and Ellinger, I. (2024). NuInsSeg: A fully annotated dataset for nuclei instance segmentation in H&E-stained histological images. Scientific Data, 11(1):1–7.
Meng, X., Yang, Y., Wang, L., Wang, T., Li, R., and Zhang, C. (2022). Class-guided Swin Transformer for semantic segmentation of remote sensing imagery. IEEE Geoscience and Remote Sensing Letters, 19:1–5.
Milletari, F., Navab, N., and Ahmadi, S.-A. (2016). V-Net: Fully convolutional neural networks for volumetric medical image segmentation. In 2016 Fourth International Conference on 3D Vision (3DV), pages 565–571, Stanford, CA, USA. IEEE.
Nam, M., Oh, S., and Lee, J. (2024). Quantization of U-Net model for self-driving. In 2024 10th International Conference on Applied System Innovation (ICASI), pages 1–3.
Nawaratne, R., Alahakoon, D., De Silva, D., and Yu, X.
(2020). Spatiotemporal anomaly detection using deep
learning for real-time video surveillance. IEEE Trans-
actions on Industrial Informatics, 16(1):393–402.
Ronneberger, O., Fischer, P., and Brox, T. (2015). U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015). Springer.
Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., and Chen, L.-C. (2019). MobileNetV2: Inverted residuals and linear bottlenecks. arXiv preprint arXiv:1801.04381.
Sawant, S. S., Bauer, J., Erick, F. X., Ingaleshwar, S., Holzer, N., Ramming, A., Lang, E. W., and Götz, T. (2022a). An optimal-score-based filter pruning for deep convolutional neural networks. Applied Intelligence, 52:17557–17579.
Sawant, S. S., Erick, F. X., Göb, S., Holzer, N., Lang, E. W., and Götz, T. (2023). An adaptive binary particle swarm optimization for solving multi-objective convolutional filter pruning problem. Journal of Supercomputing, 79:13287–13306.
Sawant, S. S., Wiedmann, M., Göb, S., Holzer, N., Lang, E. W., and Götz, T. (2022b). Compression of deep convolutional neural network using additional importance-weight-based filter pruning approach. Applied Sciences, 12(21).
Su, R., Zhang, D., Liu, J., and Cheng, C. (2021). MSU-Net:
Multi-scale U-Net for 2D medical image segmenta-
tion. Frontiers in Genetics, 12:639930.
Vagollari, A., Hirschbeck, M., and Gerstacker, W. (2023).
An end-to-end deep learning framework for wideband
signal recognition. IEEE Access, 11:52899–52922.
Vaze, S., Xie, W., and Namburete, A. I. L. (2020). Low-memory CNNs enabling real-time ultrasound segmentation towards mobile deployment. IEEE Journal of Biomedical and Health Informatics, 24(4):1059–1069.
Zhang, J., Zhu, H., Wang, P., and Ling, X. (2021). Att Squeeze U-Net: A lightweight network for forest fire detection and recognition. IEEE Access, 9:10858–10870.
Zhao, P., Li, Z., You, Z., Chen, Z., Huang, T., Guo, K., and Li, D. (2024). SE-U-Lite: Milling tool wear segmentation based on lightweight U-Net model with squeeze-and-excitation module. IEEE Transactions on Instrumentation and Measurement, 73:1–8.
Zhou, Z., Siddiquee, M. M. R., Tajbakhsh, N., and Liang, J. (2020). UNet++: Redesigning skip connections to exploit multiscale features in image segmentation. IEEE Transactions on Medical Imaging, 39:1856–1867.