
Authors: Saem Park 1,2; Donghoon Han 2 and Nojun Kwak 2

Affiliations: 1 LG Electronics, Seoul, South Korea ; 2 Seoul National University, Seoul, South Korea

Keyword(s): Video Frame Interpolation, Optical-Flow Free, Invertible Network, U-Net Based, Generative Flow

Abstract: Video frame interpolation is the task of creating an intermediate frame between two adjacent frames along the time axis. Rather than simply averaging the two adjacent frames, this operation should maintain semantic continuity with them. Most conventional methods rely on optical flow and therefore require auxiliary tools such as occlusion handling and object smoothing. Since these tools introduce complex problems of their own, we tackle video interframe generation without the problematic optical flow. To this end, we use a deep neural network with an invertible structure and develop a U-Net based Generative Flow, a modified normalizing flow. In addition, we propose a learning method with a new consistency loss in the latent space to maintain semantic temporal consistency between frames. The invertible network guarantees that the generated image has the same resolution as the original images. Furthermore, since the output is not a random sample as in typical generative models, our network produces stable outputs without flicker. Through experiments, we confirm the feasibility of the proposed algorithm and suggest the U-Net based Generative Flow as a new baseline for video frame interpolation. This paper is meaningful as a new attempt to use invertible networks instead of optical flow for video interpolation.
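The core idea in the abstract — map frames into a latent space with an invertible network, interpolate there, and map back without losing resolution — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: an orthogonal linear map stands in for the invertible U-Net based flow, and the "consistency loss" here is only the generic latent-distance penalty the abstract alludes to.

```python
import numpy as np

# Toy sketch (hypothetical, not the paper's architecture): an orthogonal
# matrix Q gives a trivially invertible map f(x) = Qx, standing in for the
# invertible U-Net based Generative Flow.
rng = np.random.default_rng(0)
d = 16  # flattened toy "frame" size
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))

def f(x):
    # forward pass: image space -> latent space
    return Q @ x

def f_inv(z):
    # exact inverse: latent space -> image space (Q is orthogonal)
    return Q.T @ z

frame_prev = rng.standard_normal(d)
frame_next = rng.standard_normal(d)

# Interpolate in latent space, then invert back to image space.
z_mid = 0.5 * (f(frame_prev) + f(frame_next))
frame_mid = f_inv(z_mid)

# A latent-space consistency loss penalises the distance between the
# latent code of the generated middle frame and the averaged latent.
consistency_loss = np.mean((f(frame_mid) - z_mid) ** 2)

# Invertibility preserves the resolution: frame_mid has the same shape
# as the input frames, and the loss is ~0 for an exact inverse.
print(frame_mid.shape)
```

Because the map is exactly invertible, no information is discarded in either direction, which is the property the paper uses to guarantee that output resolution matches the input.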

CC BY-NC-ND 4.0


Paper citation in several formats:
Park, S.; Han, D. and Kwak, N. (2022). The U-Net based GLOW for Optical-Flow-Free Video Interframe Generation. In Proceedings of the 11th International Conference on Pattern Recognition Applications and Methods - ICPRAM; ISBN 978-989-758-549-4; ISSN 2184-4313, SciTePress, pages 80-89. DOI: 10.5220/0010869400003122

@conference{icpram22,
author={Saem Park and Donghoon Han and Nojun Kwak},
title={The U-Net based GLOW for Optical-Flow-Free Video Interframe Generation},
booktitle={Proceedings of the 11th International Conference on Pattern Recognition Applications and Methods - ICPRAM},
year={2022},
pages={80-89},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010869400003122},
isbn={978-989-758-549-4},
issn={2184-4313},
}

TY - CONF

JO - Proceedings of the 11th International Conference on Pattern Recognition Applications and Methods - ICPRAM
TI - The U-Net based GLOW for Optical-Flow-Free Video Interframe Generation
SN - 978-989-758-549-4
IS - 2184-4313
AU - Park, S.
AU - Han, D.
AU - Kwak, N.
PY - 2022
SP - 80
EP - 89
DO - 10.5220/0010869400003122
PB - SciTePress
ER -