Authors:
Veís Oudjail
and
Jean Martinet
Affiliation:
Univ. Lille, CNRS, Centrale Lille, UMR 9189 – CRIStAL, France
Keyword(s):
Video Analysis, Spiking Neural Networks, Motion Analysis, Address-event Representation.
Related Ontology Subjects/Areas/Topics:
Computer Vision, Visualization and Computer Graphics; Early and Biologically-Inspired Vision; Image and Video Analysis
Abstract:
This paper presents an original approach to analyzing the motion of a moving pattern with a Spiking Neural Network, using visual data encoded in the Address-Event Representation. Our objective is to identify a minimal network structure able to recognize the motion direction of a simple binary pattern. For this purpose, we generated synthetic data of 3 different patterns moving in 4 directions, and we designed several variants of a one-layer fully-connected feed-forward spiking neural network with a varying number of neurons in the output layer. The networks are trained in an unsupervised manner by presenting the synthetic temporal data to the network for a few seconds. The experimental results show that such networks quickly converge to a state where input classes can be successfully distinguished for 2 of the considered patterns, while no network configuration converged for the third pattern. In the convergence cases, the network proved remarkably stable across several output layer sizes. We also show that the sequential order of presentation of classes impacts the ability of the network to learn the input.
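The pipeline outlined in the abstract (synthetic moving patterns presented as event streams to a one-layer fully-connected spiking network trained without supervision) can be illustrated with a minimal sketch. Everything below is an assumption, not the paper's actual setup: the 8×8 resolution, the bar stimulus, the leaky integrate-and-fire dynamics, and the simplified STDP rule with winner-take-all inhibition are illustrative stand-ins for the patterns and learning rule the authors used.

```python
import numpy as np

rng = np.random.default_rng(0)
SIZE = 8  # assumed toy sensor resolution (SIZE x SIZE pixels)

def moving_bar_frames(direction, steps=SIZE):
    """Binary event frames for a bar sweeping across the grid, a crude
    stand-in for an AER stream: one (SIZE*SIZE,) spike vector per timestep."""
    frames = np.zeros((steps, SIZE, SIZE))
    for t in range(steps):
        if direction == "right":
            frames[t, :, t] = 1.0                # vertical bar at column t
        elif direction == "left":
            frames[t, :, SIZE - 1 - t] = 1.0
        elif direction == "down":
            frames[t, t, :] = 1.0                # horizontal bar at row t
        else:                                    # "up"
            frames[t, SIZE - 1 - t, :] = 1.0
    return frames.reshape(steps, -1)

class OneLayerSNN:
    """One fully-connected layer of leaky integrate-and-fire neurons with a
    simplified STDP rule and winner-take-all inhibition (all parameter
    values here are illustrative guesses, not the paper's)."""

    def __init__(self, n_in, n_out, threshold=3.0, leak=0.8,
                 a_plus=0.05, a_minus=0.02):
        self.w = rng.uniform(0.2, 0.8, size=(n_out, n_in))
        self.v = np.zeros(n_out)                 # membrane potentials
        self.threshold, self.leak = threshold, leak
        self.a_plus, self.a_minus = a_plus, a_minus

    def step(self, spikes):
        """Integrate one input spike vector; return the index of the firing
        neuron (or None), applying STDP to the winner's weights."""
        self.v = self.leak * self.v + self.w @ spikes
        if self.v.max() < self.threshold:
            return None
        winner = int(np.argmax(self.v))
        # Simplified STDP: strengthen weights from inputs that just spiked,
        # weaken weights from silent inputs, then clamp to [0, 1].
        self.w[winner] += self.a_plus * spikes - self.a_minus * (1 - spikes)
        np.clip(self.w[winner], 0.0, 1.0, out=self.w[winner])
        self.v[:] = 0.0                          # winner-take-all reset
        return winner

# Unsupervised training: repeatedly present the four direction sequences.
net = OneLayerSNN(n_in=SIZE * SIZE, n_out=4)
directions = ["right", "left", "up", "down"]
for _ in range(20):
    for d in directions:
        net.v[:] = 0.0                           # reset between sequences
        for frame in moving_bar_frames(d):
            net.step(frame)

# After training, inspect which output neuron fires for each direction.
responses = {}
for d in directions:
    net.v[:] = 0.0
    winners = [net.step(f) for f in moving_bar_frames(d)]
    responses[d] = [w for w in winners if w is not None]
print(responses)
```

If the learning rule separates the classes, each direction ends up dominated by a different winning neuron; as the abstract notes, such convergence is not guaranteed for every pattern or presentation order.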