
Evaluation of Deep Learning based 3D-Point-Cloud Processing Techniques for Semantic Segmentation of Neuromorphic Vision Sensor Event-streams

Authors: Tobias Bolten 1; Felix Lentzen 1; Regina Pohle-Fröhlich 1 and Klaus D. Tönnies 2

Affiliations: 1 Institute of Pattern Recognition, Niederrhein University of Applied Sciences, Reinarzstr. 49, Krefeld, Germany; 2 Department of Simulation and Graphics, University of Magdeburg, Universitätsplatz 2, Magdeburg, Germany

Keyword(s): Semantic Segmentation, 3D Space-time Event Cloud, PointNet++, Dynamic Vision Sensor.

Abstract: Dynamic Vision Sensors are neuromorphically inspired cameras whose pixels operate independently and asynchronously from each other, triggered by illumination changes within the scene. The output of these sensors is a stream of triggered events occurring at a variable rate, with a sparse spatial but high temporal representation. Many prior approaches convert the stream into other representations, such as classic 2D frames, to adapt known computer vision techniques. However, the sensor output is natively and directly interpretable as a 3D space-time event cloud, without this lossy conversion. We therefore propose processing it directly with 3D point cloud approaches. We provide an evaluation of different deep neural network structures for semantic segmentation of these 3D space-time point clouds, based on PointNet++ (Qi et al., 2017b) and three published successor variants. This evaluation on a publicly available dataset includes experiments with different data preprocessing, the optimization of network meta-parameters, and a comparison to the results obtained by a 2D frame-conversion based CNN baseline. In summary, the 3D-based processing achieves better results in terms of quality, network size and required runtime.
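The abstract's core idea, interpreting the DVS event stream as a 3D space-time point cloud rather than converting it to frames, can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: the event layout (x, y, timestamp, polarity) and the time-scaling factor are assumptions chosen so that the spatial and temporal axes have comparable ranges, which neighborhood queries in PointNet++-style networks rely on.

```python
import numpy as np

def events_to_space_time_cloud(events, time_scale=1e-3):
    """Map DVS events (x, y, t_us, polarity) to an N x 3 point cloud.

    events: array with one row per event and columns
            x, y, timestamp (microseconds), polarity.
    Returns the space-time points and the polarity as a per-point feature.
    """
    events = np.asarray(events, dtype=np.float64)
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    t = (t - t.min()) * time_scale          # shift to 0 and rescale the time axis
    cloud = np.stack([x, y, t], axis=1)     # each event becomes one 3D point
    polarity = events[:, 3]                 # kept as an optional point feature
    return cloud, polarity

# Toy usage: four events at increasing timestamps
ev = np.array([[10, 20,    0, 1],
               [11, 20,  500, 0],
               [12, 21, 1000, 1],
               [13, 22, 1500, 0]])
pts, pol = events_to_space_time_cloud(ev)
print(pts.shape)  # (4, 3)
```

Unlike frame conversion, no events are merged or discarded here; the stream is passed to the segmentation network losslessly, one point per event.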

CC BY-NC-ND 4.0


Paper citation in several formats:
Bolten, T.; Lentzen, F.; Pohle-Fröhlich, R. and Tönnies, K. (2022). Evaluation of Deep Learning based 3D-Point-Cloud Processing Techniques for Semantic Segmentation of Neuromorphic Vision Sensor Event-streams. In Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2022) - Volume 4: VISAPP; ISBN 978-989-758-555-5; ISSN 2184-4321, SciTePress, pages 168-179. DOI: 10.5220/0010864700003124

@conference{visapp22,
author={Tobias Bolten and Felix Lentzen and Regina Pohle{-}Fröhlich and Klaus D. Tönnies},
title={Evaluation of Deep Learning based 3D-Point-Cloud Processing Techniques for Semantic Segmentation of Neuromorphic Vision Sensor Event-streams},
booktitle={Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2022) - Volume 4: VISAPP},
year={2022},
pages={168-179},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010864700003124},
isbn={978-989-758-555-5},
issn={2184-4321},
}

TY - CONF

JO - Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2022) - Volume 4: VISAPP
TI - Evaluation of Deep Learning based 3D-Point-Cloud Processing Techniques for Semantic Segmentation of Neuromorphic Vision Sensor Event-streams
SN - 978-989-758-555-5
IS - 2184-4321
AU - Bolten, T.
AU - Lentzen, F.
AU - Pohle-Fröhlich, R.
AU - Tönnies, K.
PY - 2022
SP - 168
EP - 179
DO - 10.5220/0010864700003124
PB - SciTePress