Inverse Rendering Based on Compressed Spatiotemporal Information by Neural Networks
Eito Itonaga, Fumihiko Sakaue, Jun Sato
2023
Abstract
This paper proposes a method for simultaneously estimating the time variation of the light source distribution and the shape of a target object from time-series images. The method exploits the representational capability of neural networks, which can approximate arbitrarily complex functions, to represent the light source distribution, the object shape, and the reflectance properties efficiently. We show that this representation allows the time-varying light source distribution and the object shape to be estimated stably and simultaneously.
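To make the idea concrete, the sketch below shows one plausible way to set up such a joint estimation: separate small MLPs represent the time-varying light source distribution, the surface normals, and the reflectance, and they are fitted together to time-series images through a differentiable shading model. This is a hypothetical illustration under simplifying assumptions (a Lambertian shading model, a 2D pixel-coordinate parameterization of shape, placeholder synthetic frames, and arbitrary network sizes), not the authors' implementation.

```python
# Hypothetical sketch: joint estimation of a time-varying light distribution,
# surface normals, and albedo from time-series images via neural networks.
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=64, layers=3):
    mods, d = [], in_dim
    for _ in range(layers):
        mods += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    mods.append(nn.Linear(d, out_dim))
    return nn.Sequential(*mods)

# Light distribution L(direction, t), surface normal n(x, y), albedo rho(x, y).
light_net  = mlp(4, 1)   # input: (dx, dy, dz, t) -> radiance
normal_net = mlp(2, 3)   # input: (x, y) -> unnormalized normal
albedo_net = mlp(2, 3)   # input: (x, y) -> RGB albedo

# Fixed set of sampled light directions used to integrate the distribution.
dirs = torch.randn(128, 3)
dirs = dirs / dirs.norm(dim=1, keepdim=True)

def render(xy, t):
    """Lambertian shading of pixel coordinates xy at time t (illustrative)."""
    n = normal_net(xy)
    n = n / (n.norm(dim=1, keepdim=True) + 1e-8)
    rho = torch.sigmoid(albedo_net(xy))
    # Evaluate the light distribution for every sampled direction at time t.
    t_col = t.expand(dirs.shape[0], 1)
    radiance = torch.relu(light_net(torch.cat([dirs, t_col], dim=1)))  # (D, 1)
    cos = torch.clamp(n @ dirs.T, min=0)                               # (P, D)
    shading = cos @ radiance / dirs.shape[0]                           # (P, 1)
    return rho * shading

# Joint optimization against (placeholder, illustrative) time-series images.
params = (list(light_net.parameters()) + list(normal_net.parameters())
          + list(albedo_net.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)
xy = torch.rand(256, 2)                          # sampled pixel coordinates
frames = [torch.rand(256, 3) for _ in range(5)]  # placeholder observed frames
for step in range(200):
    loss = 0.0
    for k, img in enumerate(frames):
        t = torch.tensor([[k / len(frames)]])
        loss = loss + ((render(xy, t) - img) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this sketch, all three quantities are optimized only through the photometric loss on the rendered frames; the time input to the light network is what lets a single compact model capture the temporal variation of the illumination.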
Paper Citation
in Harvard Style
Itonaga E., Sakaue F. and Sato J. (2023). Inverse Rendering Based on Compressed Spatiotemporal Information by Neural Networks. In Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 4: VISAPP; ISBN 978-989-758-634-7, SciTePress, pages 467-474. DOI: 10.5220/0011792200003417
in BibTeX Style
@conference{visapp23,
author={Eito Itonaga and Fumihiko Sakaue and Jun Sato},
title={Inverse Rendering Based on Compressed Spatiotemporal Information by Neural Networks},
booktitle={Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 4: VISAPP},
year={2023},
pages={467-474},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0011792200003417},
isbn={978-989-758-634-7},
}
in EndNote Style
TY - CONF
JO - Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 4: VISAPP
TI - Inverse Rendering Based on Compressed Spatiotemporal Information by Neural Networks
SN - 978-989-758-634-7
AU - Itonaga E.
AU - Sakaue F.
AU - Sato J.
PY - 2023
SP - 467
EP - 474
DO - 10.5220/0011792200003417
PB - SciTePress