
Semantic Segmentation on Neuromorphic Vision Sensor Event-Streams Using PointNet++ and UNet Based Processing Approaches

Authors: Tobias Bolten 1, Regina Pohle-Fröhlich 1 and Klaus Tönnies 2

Affiliations: 1 Institute for Pattern Recognition, Hochschule Niederrhein, Krefeld, Germany; 2 Department of Simulation and Graphics, University of Magdeburg, Germany

Keyword(s): Dynamic Vision Sensor, Semantic Segmentation, PointNet++, UNet.

Abstract: Neuromorphic Vision Sensors, also called Dynamic Vision Sensors, are bio-inspired optical sensors with a completely different output paradigm compared to classic frame-based sensors. Each pixel of these sensors operates independently and asynchronously, detecting only local changes in brightness. The output of such a sensor is a spatially sparse stream of events with high temporal resolution. However, this novel output paradigm raises challenges for processing in computer vision applications, as standard methods are not directly applicable to the sensor output without conversion. Therefore, we consider different event representations by converting the sensor output into classical 2D frames, highly multichannel frames, 3D voxel grids, as well as a native 3D space-time event cloud representation. Using PointNet++ and UNet, these representations and processing approaches are systematically evaluated to generate a semantic segmentation of the sensor output stream. This involves experiments on two different publicly available datasets from different application contexts (urban monitoring and autonomous driving). In summary, PointNet++ based processing was found advantageous over a UNet approach on lower resolution recordings with a comparatively lower event count. Conversely, for recordings with ego-motion of the sensor and a resulting higher event count, UNet-based processing is advantageous.
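To make the conversion step concrete, the sketch below shows two of the event representations named in the abstract: accumulating a DVS event stream into a classical 2D frame, and discretizing the time axis into a 3D voxel grid. This is an illustrative sketch on synthetic events, not the paper's implementation; the sensor resolution, time window, and function names are assumptions.

```python
import numpy as np

# Hypothetical event stream: each event is (x, y, t, polarity), with pixel
# coordinates, a microsecond timestamp, and polarity in {-1, +1}.
rng = np.random.default_rng(0)
W, H = 128, 128                 # assumed sensor resolution
N = 10_000                      # events in the time window
events = np.stack([
    rng.integers(0, W, N).astype(float),     # x
    rng.integers(0, H, N).astype(float),     # y
    np.sort(rng.uniform(0, 50_000, N)),      # t (sorted timestamps)
    rng.choice([-1.0, 1.0], N),              # polarity
], axis=1)

def events_to_frame(ev, w, h):
    """Accumulate events into one 2D frame (per-pixel event count)."""
    frame = np.zeros((h, w), dtype=np.int32)
    np.add.at(frame, (ev[:, 1].astype(int), ev[:, 0].astype(int)), 1)
    return frame

def events_to_voxel_grid(ev, w, h, bins):
    """Split the time axis into `bins` slices -> (bins, h, w) voxel grid
    holding the signed polarity sum per voxel."""
    t = ev[:, 2]
    span = t.max() - t.min() + 1e-9
    t_idx = np.clip(((t - t.min()) / span * bins).astype(int), 0, bins - 1)
    grid = np.zeros((bins, h, w), dtype=np.float32)
    np.add.at(grid, (t_idx, ev[:, 1].astype(int), ev[:, 0].astype(int)), ev[:, 3])
    return grid

frame = events_to_frame(events, W, H)
voxels = events_to_voxel_grid(events, W, H, bins=5)
print(frame.sum(), voxels.shape)  # total event count preserved; (5, 128, 128)
```

The native space-time event cloud representation used for PointNet++ would skip this binning entirely and feed the raw (x, y, t) triples as a point cloud.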

CC BY-NC-ND 4.0


Paper citation in several formats:
Bolten, T.; Pohle-Fröhlich, R. and Tönnies, K. (2023). Semantic Segmentation on Neuromorphic Vision Sensor Event-Streams Using PointNet++ and UNet Based Processing Approaches. In Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 4: VISAPP; ISBN 978-989-758-634-7; ISSN 2184-4321, SciTePress, pages 168-178. DOI: 10.5220/0011622700003417

@conference{visapp23,
author={Tobias Bolten and Regina Pohle{-}Fröhlich and Klaus Tönnies},
title={Semantic Segmentation on Neuromorphic Vision Sensor Event-Streams Using PointNet++ and UNet Based Processing Approaches},
booktitle={Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 4: VISAPP},
year={2023},
pages={168-178},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0011622700003417},
isbn={978-989-758-634-7},
issn={2184-4321},
}

TY - CONF

JO - Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 4: VISAPP
TI - Semantic Segmentation on Neuromorphic Vision Sensor Event-Streams Using PointNet++ and UNet Based Processing Approaches
SN - 978-989-758-634-7
IS - 2184-4321
AU - Bolten, T.
AU - Pohle-Fröhlich, R.
AU - Tönnies, K.
PY - 2023
SP - 168
EP - 178
DO - 10.5220/0011622700003417
PB - SciTePress