

VK-SITS: Variable Kernel Speed Invariant Time Surface for Event-Based Recognition

Authors: Laure Acin 1; Pierre Jacob 2; Camille Simon-Chane 1 and Aymeric Histace 1

Affiliations: 1 ETIS UMR 8051, CY Cergy Paris University, ENSEA, CNRS, F-95000, Cergy, France; 2 Univ. Bordeaux, CNRS, Bordeaux INP, LaBRI, UMR 5800, F-33400 Talence, France

Keyword(s): Event-based Camera, Event-based Vision, Asynchronous Camera, Machine Learning, Time-surface, Recognition.

Abstract: Event-based cameras are a recent type of non-conventional sensor that offers a new way to perceive motion, with low latency, high power efficiency, high dynamic range and high temporal resolution. However, event data are asynchronous and sparse, so standard machine learning and deep learning tools are not well suited to this data format. A first step of event-based processing therefore often consists in generating image-like representations from events, such as time surfaces. Such event representations are usually proposed for specific applications, and the representations and learning algorithms are most often evaluated together. Furthermore, these methods are often evaluated in a non-rigorous way (i.e. by performing the validation on the testing set). We propose a generic event representation for multiple applications: a trainable extension of the Speed Invariant Time Surface, coined VK-SITS. This speed- and spatially-invariant framework is computationally fast and GPU-friendly. A second contribution is a new benchmark based on 10-fold cross-validation to better evaluate event-based representations on the DVS128 Gesture and N-Caltech101 recognition datasets. Our VK-SITS event-based representation improves the recognition performance of state-of-the-art methods.
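For context on the time-surface representations mentioned in the abstract, the Python sketch below illustrates a generic exponential-decay time surface computed from an event stream. It is only a minimal illustration under assumed names and parameters (time_surface, width, height, t_ref, tau), not the SITS or VK-SITS method proposed in the paper.

# Minimal sketch of a generic exponential-decay time surface built from events.
# This is NOT the paper's SITS/VK-SITS; function name, shapes and parameters are illustrative.
import numpy as np

def time_surface(events, width, height, t_ref, tau=50e3):
    """events: iterable of (x, y, t, polarity) tuples, timestamps in microseconds.
    Returns a (height, width) map where recently active pixels are close to 1."""
    last_t = np.full((height, width), -np.inf)  # most recent event time per pixel
    for x, y, t, _p in events:
        if t <= t_ref:
            last_t[y, x] = max(last_t[y, x], t)
    surface = np.exp(-(t_ref - last_t) / tau)   # exponential decay of elapsed time
    surface[np.isneginf(last_t)] = 0.0          # pixels that never received an event
    return surface

# Toy usage: three synthetic events, reference time 1000 us.
events = [(2, 3, 100, 1), (2, 3, 800, 0), (5, 5, 950, 1)]
ts = time_surface(events, width=8, height=8, t_ref=1000, tau=500)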

CC BY-NC-ND 4.0

Paper citation in several formats:
Acin, L.; Jacob, P.; Simon-Chane, C. and Histace, A. (2023). VK-SITS: Variable Kernel Speed Invariant Time Surface for Event-Based Recognition. In Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 5: VISAPP; ISBN 978-989-758-634-7; ISSN 2184-4321, SciTePress, pages 754-761. DOI: 10.5220/0011779400003417

@conference{visapp23,
author={Acin, Laure and Jacob, Pierre and Simon-Chane, Camille and Histace, Aymeric},
title={VK-SITS: Variable Kernel Speed Invariant Time Surface for Event-Based Recognition},
booktitle={Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 5: VISAPP},
year={2023},
pages={754-761},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0011779400003417},
isbn={978-989-758-634-7},
issn={2184-4321},
}

TY - CONF

JO - Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 5: VISAPP
TI - VK-SITS: Variable Kernel Speed Invariant Time Surface for Event-Based Recognition
SN - 978-989-758-634-7
IS - 2184-4321
AU - Acin, L.
AU - Jacob, P.
AU - Simon-Chane, C.
AU - Histace, A.
PY - 2023
SP - 754
EP - 761
DO - 10.5220/0011779400003417
PB - SciTePress
ER -