
Cao, R., Galor, D., Kohli, A., Yates, J. L., and Waller, L.
(2024). Noise2image: Noise-enabled static scene re-
covery for event cameras.
Chen, A., Xu, Z., Geiger, A., Yu, J., and Su, H. (2022).
Tensorf: Tensorial radiance fields. In European Con-
ference on Computer Vision (ECCV).
García, G. P., Camilleri, P., Liu, Q., and Furber, S. (2016).
pydvs: An extensible, real-time dynamic vision sensor
emulator using off-the-shelf hardware. In IEEE Sym-
posium Series on Computational Intelligence, pages
1–7.
Gehrig, D., Gehrig, M., Hidalgo-Carrió, J., and Scaramuzza, D. (2020). Video to events: Recycling video datasets for event cameras. In IEEE/CVF Conference on Computer Vision and Pattern Recognition.
Guo, S. and Delbruck, T. (2023). Low cost and latency
event camera background activity denoising. IEEE
Transactions on Pattern Analysis and Machine Intel-
ligence, 45(1):785–795.
Horé, A. and Ziou, D. (2010). Image quality metrics: Psnr vs. ssim. In International Conference on Pattern Recognition, pages 2366–2369.
Hu, Y., Liu, S. C., and Delbruck, T. (2021). v2e: From
video frames to realistic DVS events. In IEEE/CVF
Conference on Computer Vision and Pattern Recogni-
tion Workshops (CVPRW). IEEE.
Hwang, I., Kim, J., and Kim, Y. M. (2023). Ev-nerf:
Event based neural radiance field. In Proceedings of
the IEEE/CVF Winter Conference on Applications of
Computer Vision (WACV), pages 837–847.
Joubert, D., Marcireau, A., Ralph, N., Jolley, A., van
Schaik, A., and Cohen, G. (2021). Event camera
simulator improvements via characterized parameters.
Frontiers in Neuroscience, 15.
Kerbl, B., Kopanas, G., Leimkühler, T., and Drettakis,
G. (2023). 3d gaussian splatting for real-time radi-
ance field rendering. ACM Transactions on Graphics,
42(4).
Klenk, S., Koestler, L., Scaramuzza, D., and Cremers, D.
(2023). E-nerf: Neural radiance fields from a moving
event camera. IEEE Robotics and Automation Letters.
Li, W., Saeedi, S., McCormac, J., Clark, R., Tzoumanikas,
D., Ye, Q., Huang, Y., Tang, R., and Leutenegger, S.
(2018). Interiornet: Mega-scale multi-sensor photo-
realistic indoor scenes dataset. In British Machine Vi-
sion Conference.
Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T.,
Ramamoorthi, R., and Ng, R. (2020). Nerf: Repre-
senting scenes as neural radiance fields for view syn-
thesis. In ECCV.
Mueggler, E., Rebecq, H., Gallego, G., Delbruck, T., and Scaramuzza, D. (2017). The event-camera dataset and simulator: Event-based data for pose estimation, visual odometry, and SLAM. International Journal of Robotics Research, 36:142–149.
Müller, T., Evans, A., Schied, C., and Keller, A. (2022).
Instant neural graphics primitives with a multiresolu-
tion hash encoding. ACM Trans. Graph., 41(4):102:1–
102:15.
Rebecq, H., Gehrig, D., and Scaramuzza, D. (2018). ESIM:
an open event camera simulator. Conference on Robot
Learning (CoRL).
Rudnev, V., Elgharib, M., Theobalt, C., and Golyanik, V.
(2023). Eventnerf: Neural radiance fields from a sin-
gle colour event camera. In Computer Vision and Pat-
tern Recognition (CVPR).
Schönberger, J. L. and Frahm, J.-M. (2016). Structure-from-motion revisited. In IEEE Conference on Computer Vision and Pattern Recognition.
Tagliasacchi, A. and Mildenhall, B. (2022). Volume rendering digest (for NeRF). arXiv preprint arXiv:2209.02417.
International Telecommunication Union (2011). Recommendation ITU-R BT.601-7: Studio encoding parameters of digital television for standard 4:3 and wide-screen 16:9 aspect ratios. https://www.itu.int/rec/R-REC-BT.601.
Zhang, Z., Cui, S., Chai, K., Yu, H., Dasgupta, S., Mahbub,
U., and Rahman, T. (2024). V2ce: Video to continu-
ous events simulator.
Zhu, A. Z., Wang, Z., Khant, K., and Daniilidis, K. (2019).
Eventgan: Leveraging large scale image datasets for
event cameras. arXiv preprint arXiv:1912.01584.
VISAPP 2025 - 20th International Conference on Computer Vision Theory and Applications