
Clay, V., König, P., and Koenig, S. (2019). Eye tracking in virtual reality. Journal of Eye Movement Research, 12(1).
Franke, L., Fink, L., Martschinke, J., Selgrad, K., and Stamminger, M. (2021). Time-warped foveated rendering for virtual reality headsets. Computer Graphics Forum, 40(1):110–123.
Geisler, W. S. and Perry, J. S. (2008). Space Variant Imaging System (SVIS). https://svi.cps.utexas.edu/svistoolbox-1.0.5.zip.
Hoffman, D., Meraz, Z., and Turner, E. (2018). Limits of peripheral acuity and implications for VR system design. Journal of the Society for Information Display, 26(8):483–495.
Hussain, R., Chessa, M., and Solari, F. (2020). Modelling foveated depth-of-field blur for improving depth perception in virtual reality. In 4th IEEE International Conference on Image Processing, Applications and Systems, pages 71–76.
Hussain, R., Chessa, M., and Solari, F. (2021). Mitigating cybersickness in virtual reality systems through foveated depth-of-field blur. Sensors, 21(12).
Hussain, R., Chessa, M., and Solari, F. (2023). Improving depth perception in immersive media devices by addressing vergence-accommodation conflict. IEEE Transactions on Visualization and Computer Graphics, pages 1–13.
Hussain, R., Solari, F., and Chessa, M. (2019). Simulated foveated depth-of-field blur for virtual reality systems. In 16th ACM SIGGRAPH European Conference on Visual Media Production, London, United Kingdom.
Jabbireddy, S., Sun, X., Meng, X., and Varshney, A. (2022). Foveated rendering: Motivation, taxonomy, and research directions. arXiv preprint arXiv:2205.04529.
Jin, Y., Chen, M., Bell, T. G., Wan, Z., and Bovik, A. (2020). Study of 2D foveated video quality in virtual reality. In Tescher, A. G. and Ebrahimi, T., editors, Applications of Digital Image Processing XLIII, volume 11510, page 1151007. International Society for Optics and Photonics, SPIE.
Jin, Y., Chen, M., Goodall, T., Patney, A., and Bovik, A. C. (2021). Subjective and objective quality assessment of 2D and 3D foveated video compression in virtual reality. IEEE Transactions on Image Processing, 30:5905–5919.
Lin, Y.-X., Venkatakrishnan, R., Venkatakrishnan, R., Ebrahimi, E., Lin, W.-C., and Babu, S. V. (2020). How the presence and size of static peripheral blur affects cybersickness in virtual reality. ACM Transactions on Applied Perception, 17(4):1–18.
Maiello, G., Chessa, M., Bex, P. J., and Solari, F. (2020). Near-optimal combination of disparity across a log-polar scaled visual field. PLOS Computational Biology, 16(4):1–28.
Mantiuk, R. K., Denes, G., Chapiro, A., Kaplanyan, A., Rufo, G., Bachy, R., Lian, T., and Patney, A. (2021). FovVideoVDP: A visible difference predictor for wide field-of-view video. ACM Transactions on Graphics, 40(4).
Meng, X., Du, R., Zwicker, M., and Varshney, A. (2018). Kernel foveated rendering. Proceedings of ACM on Computer Graphics and Interactive Techniques, 1(1).
Mittal, A., Moorthy, A. K., and Bovik, A. C. (2011). Blind/referenceless image spatial quality evaluator. In 45th ASILOMAR Conference on Signals, Systems and Computers, pages 723–727. IEEE.
Mittal, A., Soundararajan, R., and Bovik, A. C. (2012). Making a “completely blind” image quality analyzer. IEEE Signal Processing Letters, 20(3):209–212.
Mohanto, B., Islam, A. T., Gobbetti, E., and Staadt, O. (2022). An integrative view of foveated rendering. Computers & Graphics, 102:474–501.
Patney, A., Salvi, M., Kim, J., Kaplanyan, A., Wyman, C., Benty, N., Luebke, D., and Lefohn, A. (2016). Towards foveated rendering for gaze-tracked virtual reality. ACM Transactions on Graphics, 35(6).
Romero-Rondón, M. F., Sassatelli, L., Precioso, F., and Aparicio-Pardo, R. (2018). Foveated streaming of virtual reality videos. In 9th ACM Multimedia Systems Conference, MMSys ’18, pages 494–497, New York, NY, USA. Association for Computing Machinery.
Roth, T., Weier, M., Hinkenjann, A., Li, Y., and Slusallek, P. (2017). A quality-centered analysis of eye tracking data in foveated rendering. Journal of Eye Movement Research, 10(5).
Sheikh, H. and Bovik, A. (2006). Image information and visual quality. IEEE Transactions on Image Processing, 15(2):430–444.
Solari, F., Chessa, M., and Sabatini, S. P. (2012). Design strategies for direct multi-scale and multi-orientation feature extraction in the log-polar domain. Pattern Recognition Letters, 33(1):41–51.
Tariq, T., Tursun, C., and Didyk, P. (2022). Noise-based enhancement for foveated rendering. ACM Transactions on Graphics, 41(4).
Tursun, O. T., Arabadzhiyska-Koleva, E., Wernikowski, M., Mantiuk, R., Seidel, H.-P., Myszkowski, K., and Didyk, P. (2019). Luminance-contrast-aware foveated rendering. ACM Transactions on Graphics, 38(4).
Venkatanath, N., Praneeth, D., Maruthi Chandrasekhar, B., Channappayya, S. S., and Medasani, S. S. (2015). Blind image quality evaluation using perception based features. In 21st National Conference on Communications, pages 1–6.
Wang, Z., Bovik, A., Sheikh, H., and Simoncelli, E. (2004). Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612.
Weier, M., Roth, T., Hinkenjann, A., and Slusallek, P. (2018). Foveated depth-of-field filtering in head-mounted displays. ACM Transactions on Applied Perception, 15(4):1–14.