capturing, transmission and processing with single-pixel information (Higgins and Koch, 2000; Chi et al., 2007; Lichtsteiner et al., 2008; Camunas-Mesa et al., 2010).
2 SCD SYSTEM AND PROCESSING
The main element of an SCD system is an SCD camera and, for this purpose, a new visual sensor that implements the SCD behavior has been developed. The sensor, built in CMOS technology, has a resolution of 32×32 pixels and, although this resolution is low for many applications, it is sufficient for demonstration purposes and can even be useful in some cases.
In the SCD sensor every pixel works independently of the others. In each pixel, a capacitor is charged to a voltage during an integration time that is simultaneous for all pixels. Every pixel also has an analogue memory that stores its last read-out value. The absolute difference between the current and the stored value is compared across all pixels in the sensor; the pixel that differs most is selected by a Winner-Take-All (WTA) circuit (Zuccarello et al., 2010), and its illumination level and coordinates are read out for processing. It is important to note that the concept of a snapshot at instant t is preserved, since all the photodiode integrations begin and end at the same time; the WTA circuit only determines the subsequent read-out order. All the sensor control signals are generated by a 32-bit PIC microcontroller running at 80 MHz, which is connected to a computer through a USB link. Further details about this new sensor, the camera and its use in resource-limited systems can be found in (Pardo et al., 2011).
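The read-out cycle described above can be summarised with a minimal software sketch. It assumes an already-digitised 32×32 frame; the data layout and the names (scd_event_t, scd_read_next) are illustrative, not the actual sensor or camera interface described in (Pardo et al., 2011).

```c
/* Standalone sketch of the SCD read-out cycle (hypothetical names). */
#include <stdlib.h>

#define N 32

typedef struct { int x, y; unsigned char level; } scd_event_t;

/* Select the pixel whose current value differs most from the value stored
 * in its analogue memory (the role played in hardware by the WTA circuit),
 * read it out and refresh the stored copy. */
static scd_event_t scd_read_next(const unsigned char current[N][N],
                                 unsigned char memory[N][N])
{
    scd_event_t ev = {0, 0, current[0][0]};
    int best = -1;
    for (int y = 0; y < N; ++y)
        for (int x = 0; x < N; ++x) {
            int diff = abs((int)current[y][x] - (int)memory[y][x]);
            if (diff > best) {
                best = diff;
                ev.x = x; ev.y = y; ev.level = current[y][x];
            }
        }
    memory[ev.y][ev.x] = ev.level;   /* per-pixel analogue memory update */
    return ev;
}
```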
2.1 SCD Algorithms
The design of image processing algorithms within the
SCD formalism requires a change in the way of think-
ing about how the programming instructions are ap-
plied to data. A more detailed explanation of how
to develop SCD algorithms can be found in (Boluda
et al., 2011).
A generic motion analysis algorithm can often be modeled as a pipeline of successive transformations applied to the image flow: filtering, feature extraction, etc. Intermediate values are stored between these processing stages. Most of them can be understood as intermediate images, although some cannot be viewed in a straightforward way; all of them will nevertheless be referred to as intermediate images. Thus, each stage of the image processing pipeline (except the first, whose input is the initial pixel flow) takes full intermediate images as input and produces full intermediate images as output, regardless of whether the initial pixels or the intermediate results have changed. All the instructions of each stage are inevitably applied to the data, even if they do not generate any change.
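As a point of reference, the conventional frame-driven model can be sketched as follows. The two-stage pipeline, the stage names and the trivial stage bodies are hypothetical placeholders; the point is only that every stage runs over every pixel of its full input image on every frame.

```c
/* Standalone sketch of a frame-driven pipeline (conventional model). */
#define N 32
typedef unsigned char image_t[N][N];

static void filter_stage(const image_t in, image_t out)
{
    for (int y = 0; y < N; ++y)
        for (int x = 0; x < N; ++x)
            out[y][x] = in[y][x];        /* placeholder filtering */
}

static void feature_stage(const image_t in, image_t out)
{
    for (int y = 0; y < N; ++y)
        for (int x = 0; x < N; ++x)
            out[y][x] = in[y][x];        /* placeholder feature extraction */
}

void process_frame(const image_t input, image_t features)
{
    static image_t filtered;             /* intermediate image between stages */
    filter_stage(input, filtered);       /* always touches all 32x32 pixels  */
    feature_stage(filtered, features);   /* always touches all 32x32 pixels  */
}
```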
The SCD execution flow is related to data-flow architectures: each new pixel fires all the instructions that depend on this new datum. If there are no data changes, no instructions are fired (and neither time nor energy is consumed). Initially, the SCD camera delivers the pixels that have changed (gray level and coordinates). The first stage then updates the contribution of each new pixel value to its output intermediate images. Following this idea, all the stages do the same: when new input data arrive at any intermediate stage, all the related instructions are fired, updating the output intermediate images.
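The change-driven counterpart of the same hypothetical two-stage pipeline is sketched below. Names and stage bodies are again placeholders; what matters is that a changed pixel delivered by the camera fires only the instructions that depend on it, and each stage forwards its own output change to the next stage, so no event means no executed instructions.

```c
/* Standalone sketch of change-driven (data-flow style) execution. */
#define N 32
typedef unsigned char image_t[N][N];

static image_t filtered, features;       /* persistent intermediate images */

/* Stage 2: update the feature image at the position changed by stage 1. */
static void feature_update(int x, int y, unsigned char value)
{
    features[y][x] = value;              /* placeholder feature computation */
}

/* Stage 1: update the filtered image and propagate the change downstream. */
static void filter_update(int x, int y, unsigned char value)
{
    filtered[y][x] = value;              /* placeholder filtering */
    feature_update(x, y, filtered[y][x]);
}

/* Entry point: one event (gray level + coordinates) from the camera.
 * If the camera delivers nothing, no instruction is executed at all. */
void on_pixel_event(int x, int y, unsigned char level)
{
    filter_update(x, y, level);
}
```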
The SCD sensor, as already mentioned, allows a special 'non-accurate' mode of operation. Since the pixels are read out in order of the magnitude of their change, it is possible to process not all the changing pixels but only the first pixels received, which are those that exhibit the greatest variation. This behavior can be desirable when there are computational time restrictions and less accurate algorithm results are acceptable. Some experiments have been performed following these ideas, simulating an SCD camera and applying this strategy to differential algorithms.
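A minimal sketch of this 'non-accurate' mode is given below, reusing the hypothetical scd_event_t record and on_pixel_event handler from the sketches above. The budget value and the function name are illustrative assumptions, not part of the system described in the text.

```c
/* Since the camera delivers pixels in decreasing order of change magnitude,
 * a time-constrained system may process only the first k events of each
 * integration period and simply drop the rest. */
void process_period(const scd_event_t *events, int n_changed, int budget)
{
    int k = (n_changed < budget) ? n_changed : budget;
    for (int i = 0; i < k; ++i)          /* largest variations come first */
        on_pixel_event(events[i].x, events[i].y, events[i].level);
    /* events[k..n_changed-1] carry the smallest variations and are skipped */
}
```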
2.1.1 Linear Spatial Operators
Linear spatial operators are very common transfor-
mations used for preprocessing or feature extraction.
Spatial operators can be expressed as the systematic
application of a convolution mask to all the pixels of
the image. Let’s say that G is the result image of ap-
plying the M × M convolution mask w
i, j
to the image
I taken at instant t, then each G
x,y
pixel can be calcu-
lated as:
G
x,y
=
M−1
2
∑
i=
1−M
2
M−1
2
∑
j=
1−M
2
w
i, j
I
x+i,y+ j
(1)
where I
x,y
is a pixel of the image I.
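A direct implementation of Eq. (1) recomputes the full M × M sum for every output pixel of G. The sketch below assumes an odd mask size and omits border handling (the mask is only applied where it fits entirely inside the image); array sizes and names are illustrative.

```c
/* Standalone dense convolution per Eq. (1). */
#define N 32
#define M 3                               /* odd mask size, e.g. 3x3 */

void convolve(const float I[N][N], const float w[M][M], float G[N][N])
{
    int r = (M - 1) / 2;
    for (int x = r; x < N - r; ++x)
        for (int y = r; y < N - r; ++y) {
            float acc = 0.0f;
            for (int i = -r; i <= r; ++i)
                for (int j = -r; j <= r; ++j)
                    acc += w[i + r][j + r] * I[x + i][y + j];
            G[x][y] = acc;                /* G_{x,y} = sum w_{i,j} I_{x+i,y+j} */
        }
}
```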
With an SCD sensor, the way of computing the filtered image G is different. Instead of a full input image, there will be a set of n′ changing pixels that have been acquired at the same time, and G must be updated only with the contribution of these n′ pixels. An individual pixel I_{x,y} acquired at instant t+1 contributes to M × M pixels of G, so each of these outputs must be updated by adding the new contribution and removing the old one.
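This change-driven update can be sketched as follows, building on the dense convolution sketch above (same hypothetical N, M and mask w). When a new value arrives for pixel (x, y), only the M × M output pixels of G that use it are corrected, by subtracting the old contribution and adding the new one.

```c
/* Change-driven update of G for a single changed input pixel (x, y). */
void update_convolution(float G[N][N], const float w[M][M],
                        int x, int y, float old_val, float new_val)
{
    int r = (M - 1) / 2;
    float delta = new_val - old_val;      /* replaces old contribution */
    for (int i = -r; i <= r; ++i)
        for (int j = -r; j <= r; ++j) {
            int u = x - i, v = y - j;     /* output pixel that uses I[x][y] */
            if (u >= 0 && u < N && v >= 0 && v < N)
                G[u][v] += w[i + r][j + r] * delta;
        }
}
```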
Figure 1 shows