
Fang, Y., Zhu, H., Zeng, Y., Ma, K., and Wang, Z. (2020).
Perceptual quality assessment of smartphone photog-
raphy. In IEEE Conference on Computer Vision and
Pattern Recognition, pages 3674–3683.
Gankhuyag, G., Yoon, K., Park, J., Son, H. S., and Min, K.
(2023). Lightweight real-time image super-resolution
network for 4k images. In IEEE Conference on Com-
puter Vision and Pattern Recognition, pages 1746–
1755.
Gu, J., Cai, H., Chen, H., Ye, X., Ren, J., and Dong, C.
(2020). PIPAL: a large-scale image quality assessment
dataset for perceptual image restoration. In European
Conference on Computer Vision, pages 633–651.
Gupta, A., Anpalagan, A., Guan, L., and Khwaja, A. S.
(2021). Deep learning for object detection and scene
perception in self-driving cars: Survey, challenges,
and open issues. Array, 10:100057.
Ha, Y., Du, Z., and Tian, J. (2022). Fine-grained in-
teractive attention learning for semi-supervised white
blood cell classification. Biomedical Signal Process-
ing and Control, 75:103611.
Han, Z., Zhai, G., Liu, Y., Gu, K., and Zhang, X. (2016).
A reduced-reference quality assessment scheme for
blurred images. In IEEE Visual Communications and
Image Processing, pages 1–4.
He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep
residual learning for image recognition. In IEEE Con-
ference on Computer Vision and Pattern Recognition,
pages 770–778.
Hosu, V., Lin, H., Sziranyi, T., and Saupe, D. (2020).
KonIQ-10k: An ecologically valid database for deep
learning of blind image quality assessment. IEEE
Transactions on Image Processing, 29:4041–4056.
Hsu, W.-Y. and Chen, P.-C. (2022). Pedestrian detec-
tion using stationary wavelet dilated residual super-
resolution. IEEE Transactions on Instrumentation and
Measurement, 71:1–11.
Jackson, P. T., Bonner, S., Jia, N., Holder, C., Stonehouse,
J., and Obara, B. (2021). Camera bias in a fine grained
classification task. In IEEE International Joint Con-
ference on Neural Networks, pages 1–8.
Ke, J., Wang, Q., Wang, Y., Milanfar, P., and Yang, F.
(2021). MUSIQ: Multi-scale image quality transformer.
In IEEE International Conference on Computer Vision,
pages 5128–5137.
Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012).
ImageNet classification with deep convolutional neural
networks. In Advances in Neural Information Processing
Systems, pages 1106–1114.
Lyerly, S. B. (1952). The average Spearman rank correlation
coefficient. Psychometrika, 17(4):421–428.
Mittal, A., Moorthy, A. K., and Bovik, A. C. (2012).
No-reference image quality assessment in the spatial
domain. IEEE Transactions on Image Processing,
21:4695–4708.
Mittal, A., Soundararajan, R., and Bovik, A. C. (2013).
Making a “completely blind” image quality analyzer.
IEEE Signal Processing Letters, 20(3):209–212.
Nayar, S. K. and Nakagawa, Y. (1994). Shape from focus.
IEEE Transactions on Pattern Analysis and Machine
Intelligence, 16(8):824–831.
Pech-Pacheco, J., Cristobal, G., Chamorro-Martinez, J., and
Fernandez-Valdivia, J. (2000). Diatom autofocusing
in brightfield microscopy: a comparative study. In In-
ternational Conference on Pattern Recognition, pages
314–317.
Peng, X., Hoffman, J., Yu, S. X., and Saenko, K. (2016).
Fine-to-coarse knowledge transfer for low-res image
classification. In IEEE International Conference on
Image Processing, pages 3683–3687.
Pertuz, S., Puig, D., and Garcia, M. A. (2013). Analysis of
focus measure operators for shape-from-focus. Pat-
tern Recognition, 46(5):1415–1432.
Sabbatini, L., Palma, L., Belli, A., Sini, F., and Pierleoni, P.
(2021). A computer vision system for staff gauge in
river flood monitoring. Inventions, 6(4):79.
Sara, U., Akter, M., and Uddin, M. S. (2019). Image quality
assessment through FSIM, SSIM, MSE and PSNR—a
comparative study. Journal of Computer and Commu-
nications, 7(3):8–18.
Sharif Razavian, A., Azizpour, H., Sullivan, J., and
Carlsson, S. (2014). CNN features off-the-shelf: an
astounding baseline for recognition. In IEEE Conference
on Computer Vision and Pattern Recognition Workshops,
pages 806–813.
Simonyan, K. and Zisserman, A. (2015). Very deep con-
volutional networks for large-scale image recognition.
In International Conference on Learning Representa-
tions, pages 1–14.
Stępień, I. and Oszust, M. (2022). A brief survey on
no-reference image quality assessment methods for
magnetic resonance images. Journal of Imaging,
8(6):160–178.
Su, S., Yan, Q., Zhu, Y., Zhang, C., Ge, X., Sun, J., and
Zhang, Y. (2020). Blindly assess image quality in the
wild guided by a self-adaptive hyper network. In IEEE
Conference on Computer Vision and Pattern Recogni-
tion, pages 3664–3673.
Varga, D. (2022). No-reference image quality assessment
with convolutional neural networks and decision fu-
sion. Applied Sciences, 12(1):101–118.
Wang, M., Zhao, P., Lu, X., Min, F., and Wang, X.
(2023). Fine-grained visual categorization: A spatial–
frequency feature fusion perspective. IEEE Transac-
tions on Circuits and Systems for Video Technology,
33(6):2798–2812.
Wang, Y., Cao, Y., Zha, Z.-J., Zhang, J., and Xiong, Z.
(2020). Deep degradation prior for low-quality im-
age classification. In IEEE Conference on Computer
Vision and Pattern Recognition, pages 11049–11058.
Xu, Y., Wei, M., and Kamruzzaman, M. (2021). Inter/intra-
category discriminative features for aerial image clas-
sification: A quality-aware selection model. Future
Generation Computer Systems, 119:77–83.
Yang, G. and Nelson, B. J. (2003). Wavelet-based autofo-
cusing and unsupervised segmentation of microscopic
images. In IEEE/RSJ International Conference on In-
telligent Robots and Systems, volume 3, pages 2143–
2148.
VISAPP 2024 - 19th International Conference on Computer Vision Theory and Applications