Authors: Amélie Gruel ¹; Jean Martinet ¹; Teresa Serrano-Gotarredona ² and Bernabé Linares-Barranco ²
Affiliations: ¹ Université Côte d’Azur, CNRS, I3S, France; ² Instituto de Microelectrónica de Sevilla IMSE-CNM, Sevilla, Spain
Keyword(s):
Event Cameras, Computer Vision, Data Reduction, Preprocessing, Visualisation.
Abstract:
Event cameras (or silicon retinas) are a new kind of sensor that measures pixel-wise changes in brightness and outputs asynchronous events accordingly. This novel technology enables sparse, energy-efficient recording and storage of visual information. While such data are sparse by definition, the event rate can be very high, up to 25M events per second, which requires significant processing resources and therefore impedes embedded applications. Neuromorphic computer vision and applications based on event sensors (classification, detection, tracking, segmentation, etc.) are receiving increasing interest from the computer vision community, especially for robotics or autonomous driving scenarios. Downscaling event data is an important feature of a system, especially an embedded one, as it allows the complexity of the data to be adjusted to the available resources, such as processing capability and power consumption. To the best of our knowledge, this work is the first attempt to formalize event data downscaling. In order to study the impact of spatial resolution downscaling, we compare several features of the resulting data as assessment criteria: the total number of events, event density, information entropy, computation time and optical consistency. Our code is available online at https://github.com/amygruel/EvVisu.
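To make the idea of spatial downscaling and one of the assessment criteria concrete, here is a minimal sketch. It assumes events are stored as an (N, 4) NumPy array of (x, y, timestamp, polarity) rows; the function names and this representation are illustrative assumptions, not the paper's actual method or the EvVisu API. The downscaling shown is the simplest possible strategy (integer division of coordinates), and the entropy function computes the Shannon entropy of the per-pixel event-count distribution, one of the criteria listed in the abstract.

```python
import numpy as np


def downscale_events(events, factor):
    """Spatially downscale an event stream by an integer factor.

    Maps each (x, y) coordinate onto a coarser grid via integer
    division, so all events from each factor-by-factor block of
    pixels land on a single output pixel. Timestamps and polarities
    are kept unchanged. (Illustrative baseline only; the paper
    compares several downscaling strategies.)
    """
    out = events.copy()
    out[:, 0] = events[:, 0] // factor
    out[:, 1] = events[:, 1] // factor
    return out


def event_entropy(events, width, height):
    """Shannon entropy (in bits) of the per-pixel event counts.

    Builds a 2D histogram of event positions, normalises it into a
    probability distribution, and returns -sum(p * log2(p)) over the
    non-empty pixels.
    """
    counts, _ = np.histogramdd(
        events[:, :2].astype(float),
        bins=(width, height),
        range=((0, width), (0, height)),
    )
    p = counts.ravel() / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

Comparing `event_entropy` (along with event count and density) before and after `downscale_events` is one way to quantify how much information a given downscaling factor discards, which is the kind of comparison the abstract describes.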