REFERENCES
Barnes, C., et al. (2009). “PatchMatch: a randomized
correspondence algorithm for structural image editing,”
ACM Tr. on Graphics 28 (3).
Bertalmio, M., et al. (2000). “Image inpainting,”
SIGGRAPH 2000, New Orleans, USA.
Boissonade, P., Jung, J. (2018). “Proposition of new
sequences for Windowed-6DoF experiments on
compression, synthesis, and depth estimation,” Doc.
ISO/IEC JTC1/SC29/WG11 MPEG, M43318.
Boyce, J., et al. (2021). “MPEG Immersive Video Coding
Standard,” Proc. IEEE 109 (9), pp. 1521-1536.
Buyssens, P., et al. (2017). “Depth-guided disocclusion
inpainting of synthesized RGB-D images,” IEEE Tr. on
Image Proc. 26 (2), pp. 525-538.
Cho, J.H., et al. (2017). “Hole filling method for depth
image based rendering based on boundary decision,”
IEEE Signal Proc. Letters 24 (3), pp. 329-333.
Criminisi, A., et al. (2004). “Region filling and object
removal by exemplar-based image inpainting,” IEEE
Tr. on Image Proc. 13 (9), pp. 1200-1212.
Damelin, S.B., Hoang, N. (2018). “On Surface Completion
and Image Inpainting by Biharmonic Functions:
Numerical Aspects,” Int. Journal of Mathematics and
Mathematical Sciences, 2018 (3950312).
Daribo, I., et al. (2010). “Depth-aided image inpainting for
novel view synthesis,” MMSP 2010, Saint-Malo, France.
Domański, M., et al. (2016). “Multiview test video
sequences for free navigation exploration obtained
using pairs of cameras,” Doc. ISO/IEC
JTC1/SC29/WG11 MPEG, M38247.
Doré, R. (2018). “Technicolor 3DoF+ test materials,” Doc.
ISO/IEC JTC1/SC29/WG11 MPEG, M42349, San
Diego, CA, USA.
Doré, R., et al. (2020). “InterdigitalFan0 content proposal
for MIV,” Doc. ISO/IEC JTC1/SC29/WG04 MPEG
VC, M54732, Online.
Doyen, D., et al. (2018). “[MPEG-I Visual] New Version
of the Pseudo-Rectified Technicolor Painter Content,”
Doc. ISO/IEC JTC1/SC29/WG11 MPEG, M43366.
Dziembowski, A., et al. (2016). “Multiview Synthesis –
improved view synthesis for virtual navigation,” PCS
2016, Nuremberg, Germany.
Dziembowski, A., Domański, M. (2018). “Adaptive color
correction in virtual view synthesis,” 3DTV Conf.
2018, Stockholm – Helsinki.
Dziembowski, A., Stankowski, J. (2018). “Real-time CPU-
based virtual view synthesis,” 2018 ICSES Conf.,
Kraków, Poland.
Dziembowski, A., et al. (2019). “Virtual view synthesis for
3DoF+ video,” PCS 2019, Ningbo, China.
Dziembowski, A., et al. (2022). “IV-PSNR—The Objective
Quality Metric for Immersive Video Applications,”
IEEE Tr. Circ. & Syst. Vid. Tech. 32 (11), pp. 7575-7591.
Fachada, S., et al. (2018). “Depth image based view
synthesis with multiple reference views for virtual
reality,” 3DTV Conf. 2018, Helsinki, Finland.
Fujii, T., et al. (2006). “Multipoint measuring system for
video and sound – 100-camera and microphone
system,” IEEE Int. Conf. on Multimedia and Expo.
Huang, H., et al. (2019). “System and VLSI implementation
of phase-based view synthesis,” 2019 ICASSP Conf.,
Brighton, UK.
ISO. (2018). “Reference View Synthesizer (RVS) manual,”
Doc. ISO/IEC JTC1/SC29/WG11 MPEG, N18068.
ISO. (2023). “Common test conditions for MPEG
immersive video,” Doc. ISO/IEC JTC1/SC29/WG04
MPEG VC, N0332, Antalya, Turkey.
Jeong, J.Y., et al. (2021). “[MIV] ERP Content Proposal for
MIV ver.1 Verification Test,” Doc. ISO/IEC
JTC1/SC29/WG04 MPEG VC, M58433, Online.
Khatiullin, A., et al. (2018). “Fast occlusion filling method
for multiview video generation,” 3DTV Conf. 2018,
Stockholm, Sweden.
Kroon, B. (2018). “3DoF+ test sequence ClassroomVideo,”
Doc. ISO/IEC JTC1/SC29/WG11 MPEG, M42415,
San Diego, CA, USA.
Lai, Y., et al. (2017). “Three-dimensional video inpainting
using gradient fusion and clustering,” ICNC-FSKD
Conf. 2017, Guilin, China.
Levin, A., et al. (2003). “Learning how to inpaint from
global image statistics,” 9th Int. Conf. on Computer
Vision, Nice, France.
Li, Y., et al. (2019). “A real-time high-quality complete
system for depth image-based rendering on FPGA,”
IEEE Tr. Circ. & Syst. Vid. Tech. 29 (4), pp. 1179-1193.
Liu, H., et al. (2012). “Global-background based view
synthesis approach for multi-view video,” 3DTV Conf.
2012, Zurich, Switzerland.
Luo, G., Zhu, Y. (2017). “Foreground removal approach
for hole filling in 3D video and FVV synthesis,” IEEE
Tr. Circ. & Syst. Vid. Tech. 27 (10), pp. 2118-2131.
Mao, Y., et al. (2014). “Image interpolation for DIBR view
synthesis using graph Fourier transform,” 3DTV Conf.
2014, Budapest, Hungary.
Microsoft Developer Network Library. (2020). “Acquiring
high-resolution time stamps,” https://msdn.microsoft.
com/en-us/library/windows/desktop/dn553408.
Mieloch, D., et al. (2020). “Depth Map Estimation for Free-
Viewpoint Television and Virtual Navigation,” IEEE
Access, vol. 8, pp. 5760-5776.
Mieloch, D., et al. (2020). “[MPEG-I Visual] Natural
Outdoor Test Sequences,” Doc. ISO/IEC
JTC1/SC29/WG11 MPEG, M51598, Brussels, Belgium.
Mieloch, D., et al. (2023). “[MIV] New natural content –
MartialArts,” Doc. ISO/IEC JTC1/SC29/WG04 MPEG
VC, M61949, Online.
Müller, K., et al. (2011). “3-D Video Representation Using
Depth Maps,” Proc. IEEE 99 (4), pp. 643-656.
Nonaka, K., et al. (2018). “Fast plane-based free-viewpoint
synthesis for real-time live streaming,” 2018 VCIP
Conf., Taichung, Taiwan, pp. 1-4.
Oh, K.J., et al. (2009). “Hole filling method using depth
based inpainting for view synthesis in free viewpoint
television and 3-D video,” PCS 2009, Chicago, IL, USA.