Neuromorphic Encoding / Decoding of Data-Event Streams Based on
the Poisson Point Process Model
Viacheslav Antsiperov
Kotelnikov Institute of Radioengineering and Electronics of RAS, Mokhovaya 11-7, Moscow, 125009, Russia
https://orcid.org/0000-0002-6770-1317
Keywords: Neuromorphic Computing, Data-Event Streams, Poisson Counts, Sampling Representation, Receptive Fields,
Most Powerful Unbiased Test, Center / Surround Inhibition, Perceptual Coding, Marr’s Image Primal Sketch.
Abstract: The work is devoted to a new approach to neuromorphic encoding of streaming data. An essential starting
point of the proposed approach is a special (sampling) representation of input data in the form of a stream of
discrete events (counts), modeling the firing events of biological neurons. Considering the specifics of the
sampling representation, we have formed a generative model for the primary processing of the count stream.
That model was also motivated by known neurophysiological facts about the structure of receptive fields of the sensory systems of living organisms, which implement universal mechanisms (including center-surround inhibition) of biological neural networks, particularly the brain. To list the main ideas and consolidate the notations used, the article provides a brief overview of the features and most essential provisions of the proposed approach. The new results obtained within this framework, related to the analysis of neuromorphic encoding (with distortions) of streaming data, are discussed. The issues of possible decoding/restoration of the original data are discussed in the context of what Marr called the primal sketch. The results of computer modelling of the developed encoding/decoding procedures are presented, and approximate numerical characteristics of their quality are given.
1 INTRODUCTION
The widespread use of computers in (Big) data
processing tasks has shifted the focus from issues of
fitting data to known statistical models to issues of
developing adequate (generative) models based on
the characteristics of the data themselves. The most
successful here have been artificial neural networks
(ANN), capable of automatically (machine-aided)
learning on data without explicit additional
programming of the systems. Since the effectiveness of machine learning (ML) depends primarily on the volume of data, it places very high demands on the performance, available resources, and data-exchange capacity of computers. The rapidly developing technologies of deep learning (DL) are the most demanding in this respect (Dargan, 2020). It is deep learning that has enabled
the development of more efficient, intelligent and
scalable solutions for many information tasks over the
past decades, including the recognition and synthesis of text, speech and images, as well as such real-world tasks as market segmentation, customer consulting, self-driving cars, etc.
Unfortunately, today we understand that the
progress achieved in the field of information
technology, due to successes in the development of
the element base of computers, will not be able to
continue forever. The main problem here is that
existing computers are oriented towards the von
Neumann architecture. The latter assumes a
continuous, intensive exchange of information
between the memory and the processor via a common
bus. The limited bus bandwidth of modern computers, rooted in fundamental physical (thermodynamic) principles, will eventually lead to a slowdown in the progress observed today. Neither
Moore's doubling law nor Dennard's scaling law will
save us from the inevitable crisis.
A promising direction for solving this problem
seems to be the transition to neuromorphic computing
based on several neurobiological principles of the
human brain (Christensen, 2022).
Antsiperov, V.
Neuromorphic Encoding / Decoding of Data-Event Streams Based on the Poisson Point Process Model.
DOI: 10.5220/0013015500003886
Paper published under CC license (CC BY-NC-ND 4.0)
In Proceedings of the 1st International Conference on Explainable AI for Neural and Symbolic Methods (EXPLAINS 2024), pages 139-146
ISBN: 978-989-758-720-7
Proceedings Copyright © 2024 by SCITEPRESS Science and Technology Publications, Lda.
Typical information technologies that can make maximum use of the advantages of neuromorphic data processing are systems for controlling and monitoring objects based on a stream of recorded images. In this regard,
we note such a new area of information technology as
neuromorphic vision (NV) (Wang, 2023). NV
involves recording images using neuromorphic
cameras and processing them using spiking neural
networks (SNNs). The difference between NV and traditional computer vision lies primarily in the way images are formed from the registered data. Traditional
computer vision involves the accumulation of data
over a certain registration time frame, treating the
result of the accumulation as an image.
Neuromorphic vision, in contrast, is focused on
presenting data in the form of a continuous stream of
discrete events (counts), recorded by neuromorphic
cameras (Al-Obaidi, 2021). Accordingly,
calculations in NV must necessarily be neuromorphic
event-driven, as, for example, in SNN networks.
Thus, neuromorphic technologies open new
horizons that allow us not only to focus on digital
computing, but also to rethink the use of analog,
approximate and mixed data computing, typical for
biological neurons. At the same time, neuromorphic
computing will require a radical change in the
programming paradigm. This may be why neuromorphic computing has yet to find widespread market adoption: to date there are only a few publicly discussed prototypes, the results of initiatives from a few leading universities and academic centres.
With this in mind, we have recently attempted to develop methods for processing data streams based on neuromorphic-like computing procedures (Antsiperov V., 2024).
An essential starting point is a special (sampling)
representation of input data in the form of a stream of
discrete events (counts), like firing events of
biological receptors. Considering the specifics of the
sampling representation, we have formed a generative
model based on known neurophysiological facts
about the system of receptive fields (RF) of the living
sensory systems, which implement universal
mechanisms (including center-surround inhibition) of
the biological neural networks. To recall the main
ideas and to fix the notations used, the next section
provides a short review of the features and most
essential provisions of the approach. The following
sections discuss new results related to the analysis of neuromorphic coding of data and the formation, on its basis, of what Marr called a primal sketch (Marr, 1980), i.e. a procedure of primary reconstruction of neuromorphic data. We note in this regard that Marr's concept of the primal sketch is today considered a first step towards Gestalt synthesis (Zhu, 2023).
2 MAIN FEATURES OF NEUROMORPHIC COMPUTING BASED ON POISSON STREAMS OF DISCRETE EVENTS
Let us start the discussion with the main provisions of the approach to neuromorphic computing that we are developing. Since our approach was largely shaped by modeling the neural structure and functions of the sensory systems of living organisms, the architecture and concepts of the neuromorphic systems used in the approach are discussed in terminology similar to that used in neurobiology. Terms and concepts from the neurophysiology of the most complex and universal sensory system, the human visual system (HVS), are widely used. Due to the known similarity of the neuromechanisms of most biological sensory systems (touch, hearing, vision or smell) (Masland, 2020), the HVS terminology can be well adapted to each of them and can also be successfully used in the case of artificial neuromorphic systems that model biological ones.
As noted above, a feature of the proposed approach is its special form of input data representation. Our approach assumes the input data not in the traditional form of a continuous distribution of stimuli intensity $I(x)$ over a certain parametric space $X$ (in the case of the HVS, the intensity of the radiation incident on the retina), but in the form of a stream of random, discrete events $\{x_1, \ldots, x_n\}$, $x_i \in X$, that result from the process of detecting such intensity (in the case of the HVS, by retinal receptor depolarizations, the so-called (photo)counts). The process of registering random events itself is assumed to be as simple as possible: the probability of registering an event in a small element $\Delta x$ of the parametric space is assumed to be proportional to the power of the recorded data, $P\{\text{event in } \Delta x\} \approx \lambda(x)\,\Delta x$; the probability of registering two or more events in the same element is considered negligible in comparison with $\lambda(x)\,\Delta x$; and events in separated elements are considered statistically independent (dependent only on $I(x)$). It is known that the listed properties (orderliness and independence) are “almost necessary” for the corresponding event stream to be an inhomogeneous Poisson point process (PPP) (Kingman, 1993) with a point-count rate $\lambda(x)$ proportional to the intensity $I(x)$. A detailed discussion of numerous issues, approximations and applications of PPP to event stream modeling can also be found in the books (Streit, 2010) and (Barrett, 2004). A statistical description of such a representation can also be obtained using the concepts of an ideal recording device and an ideal image, proposed in our work (Antsiperov, 2023).
From the statistical point of view, the representation of the event stream $\{x_1, \ldots, x_n\}$, $x_i \in X$, by a PPP implies the identification of the registered event parameters $x_i$ with the PPP random points having the same coordinates in the same parameter space $X$. Accordingly, a complete statistical description of the events could be given by the joint distribution density of the PPP points. Here, however, it should be noted that the number of points in a PPP is potentially infinite, while the number of actually recorded events is always finite. To get around this problem, for a given region $X$, one can specify the (consistent) set of joint finite-dimensional distributions for all $n = 0, 1, 2, \ldots$. Such a description of events is traditionally called the preset-time form (Barrett, 2004). But one can also fix $n$ and consider the representation $\{x_1, \ldots, x_n\}$ as a subsample of size $n$ from the general population of all PPP points. This description is called the preset-counts form (Barrett, 2004). The latter representation was used in most of our works and was defined as a sampling representation (Antsiperov, 2023). Under the assumption of independence of counts, the joint probability distribution density of the sampling representation decomposes into the product of individual count densities, $\rho(x_1, \ldots, x_n) = \prod_{i=1}^{n} \rho(x_i)$, where the density of an individual count $\rho(x)$ coincides with the normalized (to the region $X$) intensity (Antsiperov, 2023):

$$\rho(x) = \frac{I(x)}{\int_X I(x')\,dx'}. \quad (1)$$
Note that the given description of the event stream (1) is very convenient for both theoretical analysis and numerical simulation. Indeed, the factorization of the joint distribution density into the product of individual count densities $\rho(x_i)$ is the basis for many well-developed statistical approaches and is assumed by a few ML methods, including naive Bayes learning (Murphy, 2012). Namely, if the intensity $I(x)$ is known at least approximately, it is possible using (1) to carry out complex calculations with $\{x_1, \ldots, x_n\}$ based on Monte Carlo methods (Robert, 2004).
To illustrate this thesis, Figure 1 shows the result of count stream modelling for the intensity $I(x)$ specified by the pixels of the PNG image “GRAY_OR_400x400_056.png” of size 400 × 400 pixels from the TESTIMAGES database (Asuni, 2014). The set $\{x_1, \ldots, x_n\}$ of $n = 10\,000\,000$ random counts was generated by the Monte Carlo acceptance-rejection sampling method (Robert, 2004) with a uniform auxiliary distribution and an auxiliary constant bounding the intensity; details can be found, for example, in (Antsiperov, 2023).
Figure 1: Illustration of an event stream represented by Poisson counts (sampling representation), generated by Monte Carlo acceptance-rejection sampling. On the left is the approximate intensity $I(x)$ given by the pixels of the grayscale image “GRAY_OR_400x400_056.png” (Asuni, 2014). On the right is its sampling representation of size 10 000 000 counts.
The advantage of the event stream description in the form (1) is also its universal character, allowing a transition from a detailed (ideal) fine-scale consideration at the level of individual events (points) to a coarser, large-scale analysis in terms of the number of events in any area of the parametric space. A similar transformation occurs in the retina of the eye, which contains $\sim 10^8$ receptors (rods and cones), transmitting the registered data to the visual cortex through only $\sim 10^6$ axons of output neurons (RGCs, retinal ganglion cells) constituting the optic nerve (Frisby, 2010). As a result, the average ratio of the number of receptors to nerve fibers is about 100:1, which approximately corresponds to the compression of the recorded data by interneurons in the intermediate layers of the retina. Moreover, it is well known that compression by interneurons (horizontal, amacrine and other cells) is carried out by summing and aggregating the counts of special groups of receptors that make up the receptive fields (RF) of the corresponding RGCs (Masland, 2020). Since we will need this aggregated representation of event streams below, let us briefly look at how it is derived from the sampling representation (1). Let us denote by $\Omega \subset X$ a small region in the parametric space. The probability that some event $x_i$ from $\{x_1, \ldots, x_n\}$ will fall in $\Omega$ can be calculated according to (1) as:

$$p = \int_{\Omega} \rho(x)\,dx. \quad (2)$$
Accordingly, the probability that some $k$ of the $n$ (independent) events from $\{x_1, \ldots, x_n\}$ will fall into $\Omega$ is determined by the binomial distribution $\binom{n}{k} p^k (1-p)^{n-k}$, for which we immediately write out the asymptotic for large $n$:

$$P(k) = \binom{n}{k} p^k (1-p)^{n-k} \;\xrightarrow[n \to \infty]{}\; \frac{(np)^k}{k!}\,e^{-np}, \qquad np = n\,\frac{s\,\bar{I}}{\int_X I(x')\,dx'}, \quad (3)$$
where it is assumed that, together with $n \to \infty$, also $p \to 0$, so that $np \to \mathrm{const}$; note that, by (1) and (2), $p$ is the portion of the total power of the recorded data falling on $\Omega$. The symbol $s$ in (3) denotes the area of the region $\Omega$, and $\bar{I}$ denotes the average intensity of $I(x)$ over $\Omega$, so that $p = s\bar{I} / \int_X I(x')\,dx'$. As a result, the probability distribution (3) of the number of events turns out to be Poisson (similar to the preset-time representation, but on $\Omega$, not on $X$), for which the average (as well as the variance) is equal to $np$. Distribution (3) does not depend on the details of $I(x)$ on $\Omega$, but only on the average integral characteristic $\bar{I}$, which, in fact, implies a coarsening of the description. To simplify the notation, it is advisable to introduce, instead of $\bar{I}$, a proportional measure of the average number of events $\lambda = np$, which completely determines the distribution of the number of events on $\Omega$. Note that $k$, in turn, is an unbiased estimate of $\lambda$ (with the minimum possible variance, equal to the reciprocal of the Fisher information in $k$).
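The binomial-to-Poisson passage in (3) is easy to verify numerically; the sketch below (with illustrative values of the mean and of $n$) compares the two laws as $n$ grows at fixed $np$:

```python
from math import comb, exp, factorial

def binom_pmf(k, n, p):
    # exact binomial probability of k of n events falling into the region
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def poisson_pmf(k, lam):
    # limiting Poisson law of (3) with mean lam = n * p
    return lam ** k / factorial(k) * exp(-lam)

# fix the mean lam = n * p and let n grow: the maximum pointwise
# discrepancy between the two laws shrinks roughly like 1 / n
lam = 4.0
err = {n: max(abs(binom_pmf(k, n, lam / n) - poisson_pmf(k, lam))
              for k in range(20))
       for n in (100, 10_000)}
```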
The derivation of distribution (3) can easily be extended to disjoint regions $\Omega_1, \ldots, \Omega_m$, $\Omega_i \cap \Omega_j = \varnothing$, with the numbers of events $k_1, \ldots, k_m$. As a result, the set $\{k_j\}$ will be a collection of independent Poisson random variables with parameters $\lambda_j = np_j$ and a joint distribution (in the asymptotic $n \to \infty$) of the form:

$$P(k_1, \ldots, k_m) = \prod_{j=1}^{m} \frac{\lambda_j^{k_j}}{k_j!}\,e^{-\lambda_j}. \quad (4)$$

In the case when the union of similar disjoint areas $\{\Omega_j\}$ covers a significant part of the event stream area $X$ (i.e., it partitions the latter), the set $\{k_j\}$ together with its statistical description (4) can be considered a coarsened (to a scale of $\sim s$) stream representation. In (Antsiperov, 2023) it was called the occupation-number representation. The occupation-number representation (4) is related to the sampling representation (1) in the same way as the canonical ensemble is related to the microcanonical one in statistical physics.
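Constructing the occupation-number representation on a rectangular lattice amounts to a count-in-cell histogram; a small sketch (the cell size and the toy uniform stream are illustrative):

```python
import random
from collections import Counter

def occupancy_numbers(counts, cell, width, height):
    """Coarsen a sampling representation {x_i} into occupation numbers
    k_j on a lattice of square cells of side `cell` (the count-in-cell
    analogue of a moving-average window)."""
    hist = Counter()
    for x, y in counts:
        hist[(int(x // cell), int(y // cell))] += 1
    cols, rows = width // cell, height // cell
    return [[hist[(i, j)] for i in range(cols)] for j in range(rows)]

# toy uniform stream over an 8x8 domain: each of the 4x4 cells then
# carries an unbiased Poisson-like estimate k_j of the same mean
rng = random.Random(1)
counts = [(rng.uniform(0, 8), rng.uniform(0, 8)) for _ in range(6400)]
K = occupancy_numbers(counts, 2, 8, 8)   # expected mean: 6400 / 16 = 400
```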
As noted above, the numbers $k_j$ can be interpreted as unbiased estimates of the means $\lambda_j$. If we assume that all regions $\Omega_j$, located in different places of $X$, are similar to each other in shape (the shape of a typical region $\Omega$), then $\{k_j\}$ will be the output of a linear filter with a sliding window applied to the input $\{x_1, \ldots, x_n\}$. Moving-average filters are well known in computer science and are a classic tool for simple denoising (encoding) of signals of various origins. In this sense, the representation $\{k_j\}$ does not contain anything fundamentally new and is widespread, with the only note that its noise is assumed to be not additive Gaussian, but Poissonian (also called quantum noise or shot noise (Barrett, 2004)). To illustrate such a representation (by occupancy numbers), Figure 2 shows the representation $\{k_j\}$ for a rectangular lattice of square regions $\Omega_j$ over $X$.
Figure 2: Image sampling representation coarsened by a rectangular lattice. On the left is a sampling representation of size 10 000 000 counts of the grayscale image from Figure 1, covered with a rectangular lattice. On the right is the smoothed, coarsened representation $\{k_j\}$ obtained on the rectangular lattice of square regions.
3 NEUROMORPHIC ENCODING OF POISSON STREAMS BY THE SYSTEM OF RECEPTIVE FIELDS
Unfortunately, the above simple occupancy-number representation $\{k_j\}$, along with the advantage of simplicity of encoding, has a number of significant disadvantages. This problem is well known in the field of image coding (Zhang, 2021). The problem is that moving-average filters are low-pass filters and, therefore, while eliminating the redundancy associated with uncorrelated noise, they also suppress significant fine details in the data. The latter leads to blurring of contrasts, destruction of boundaries, smoothing of important texture fragments, etc., i.e. all those characteristics that are extremely important for a human analyzing the data (Masland, 2020). Moreover, linear filters are no longer effective at suppressing noise in cases where the latter depends on the signal, as in the case of Poisson noise. To overcome these difficulties, various nonlinear modifications of filters have been proposed, in particular, those based on anisotropic filtering, total variation, the SUSAN filter, the empirical Wiener filter, wavelet-shrinkage thresholding, the bilateral filter, the mean-shift filter, etc.; see (Buades, 2005). Many of these nonlinear filters solve specific problems of linear processing, but none of them has proven to be universal. Thus, the natural guiding principle in the search for universal solutions became the study and modeling of the neuromechanisms of biological sensory systems, which possess the required universality. As a result, a new direction of research and development of coding/filtering methods focused on human perception has emerged (Antsiperov V. E., 2024).
Most coding methods aimed at human perception
are based on the Retinex concept (Land, 1971), which
allows separating the reflective radiation from objects
from the smoothly changing general illumination of
the scene by highlighting or even enhancing the local
contrast of image intensity. Among the algorithms
based on Retinex, the widely used principle is
center/surround inhibition (Jobson, 1997). It
estimates a smoothed version of the image
(illuminance) and subtracts it from the original image
to produce reflectance. Different center/surround algorithms differ in the types of filters used to smooth the image: the SSR (Single Scale Retinex) and MSR (Multi Scale Retinex) algorithms use a Gaussian filter or a set of such filters. Further development of these ideas led to the creation of the bilateral filter, whose weighting coefficients combine both the spatial proximity of the pixels and the similarity of their values (Elad, 2005). Practice has confirmed that, under smooth lighting, the bilateral filter preserves edges well and avoids the appearance of associated halos.
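The bilateral weighting just described is easy to sketch; below is a minimal 1-D version, in which each weight is a product of a Gaussian in index distance and a Gaussian in value difference (all parameter names and values are illustrative):

```python
from math import exp

def bilateral(signal, sigma_s=2.0, sigma_r=0.2, radius=4):
    """1-D bilateral filter sketch: spatial proximity and value
    similarity are combined multiplicatively, so samples across a
    strong edge get negligible weight and the edge is preserved."""
    out = []
    for i, v in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = exp(-(i - j) ** 2 / (2 * sigma_s ** 2)
                    - (v - signal[j]) ** 2 / (2 * sigma_r ** 2))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

# a clean step edge survives almost unchanged, whereas a plain
# moving average of the same radius would smear it into a ramp
step = [0.0] * 8 + [1.0] * 8
smoothed = bilateral(step)
```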
The successes of perceptual coding reflect successful solutions in modeling the internal structure of the disjoint areas $\Omega_j$ covering (partitioning) the event stream area $X$, on the basis of which the occupation-number representation $\{k_j\}$ (4) is constructed. Such centre-antagonistically structured areas are called retinal receptive fields (RF) (Lisani, 2020). Quite unexpected is the fact that the relatively simple organization of the RF in the form of a center/surround structure allows, among other things, the transmission of significant information about intensity contrasts to the brain (Masland, 2001).
The lateral inhibition associated with RFs has today become the canon of ideas about the basic neurophysiological mechanisms of perceptual coding. We owe these discoveries primarily to the famous Harvard school led by Kuffler (Katz, 1982). According to Kuffler, the structure of the RF consists of two concentric parts: a central region that receives data directly from the retinal receptors, called the RF center, and an enclosing (antagonistic) region that receives data from the horizontal cells, called the surround. It is usually believed that the ratio of the center size to the size of the RF (the size of the surround) is on average ~1:2 (Marr, 1980). Based on the listed neurobiological data, a number of formal models of the center/surround RF have recently been proposed, which, with varying degrees of generality, explain the mechanisms of lateral inhibition using cascades of linear/nonlinear procedures (LN, LNLN models) (Zapp, 2022) based on standard elements of ANNs. We have also proposed a centre-lateral threshold filtering approach (of NLN type), which was initially focused on processing event streams in neuromorphic systems (Antsiperov V.E., 2024). The features of our approach that distinguish it from those noted above can be found in the article (Antsiperov V., 2024); here we briefly describe only its main points.
Let us denote for some typical RF  of area
by
the region of its center of area
and,
accordingly, by
the concentric surround of area
. Assuming that the center and surround do not
intersect
 
and
 
, we say, that
and
perform a partition of RF . Note, that in
this case
 
. Let us also denote by ,
and
the count numbers in RF , in its centre
and
surround
:
 
. As discussed above, these
random numbers have Poisson statistical models with
probability distributions following from (3):






,
(5)
Since $k_c$ and $k_s$ correspond to non-intersecting regions, $\Omega_c \cap \Omega_s = \varnothing$, they are statistically independent, and their joint distribution can be written, according to (5), as:

$$P(k_c, k_s \mid \lambda_c, \lambda_s) = \frac{\lambda_c^{k_c}}{k_c!}\,e^{-\lambda_c}\,\frac{\lambda_s^{k_s}}{k_s!}\,e^{-\lambda_s}. \quad (6)$$
In order to obtain some conclusions about the behaviour of the intensity $I(x)$ in the RF region $\Omega$, based only on the recorded numbers $k$, $k_c$ and $k_s$, regardless of the directly unobservable measures $\lambda_c$ and $\lambda_s$, it is necessary to move from the conditional distributions (6) (for given $\lambda_c$ and $\lambda_s$) to unconditional distributions of the observable $k_c$ and $k_s$. Adhering to the Bayesian point of view, this can be done by choosing a certain prior distribution for $\lambda_c$ and $\lambda_s$, forming on this basis a generative model of all the data $(k_c, k_s, \lambda_c, \lambda_s)$ and obtaining from their joint distribution the marginal distributions of the recorded numbers. We carry out this plan for two different hypotheses: hypothesis $H_0$, which assumes a hard dependence of the average intensities in the center and the surround, and the alternative $H_1$, which assumes that they are independent. Obviously, $H_0$ corresponds to the absence of contrast of $I(x)$ on $\Omega$, while $H_1$ makes the contrast expectable.
Let the a priori distribution of the average intensity $\bar{I}$ in any region of $X$ be given by a density $w(\bar{I})$ that does not depend on which RF region, $\Omega_c$ or $\Omega_s$, the averaging occurs over. Then, for hypotheses $H_0$ and $H_1$, we can write the following forms of their joint distributions:

$$W_0(\bar{I}_c, \bar{I}_s) = w(\bar{I}_c)\,\delta(\bar{I}_s - \bar{I}_c), \qquad W_1(\bar{I}_c, \bar{I}_s) = w(\bar{I}_c)\,w(\bar{I}_s), \quad (7)$$

where $\delta(\cdot)$ is Dirac's delta-function.
Multiplying (6) and (7), we obtain the joint distributions of all observable/hidden data (the generative model); integrating them over $\bar{I}_c$, $\bar{I}_s$, we obtain the unconditional distributions of the observables $(k_c, k_s)$ under the assumptions of hypothesis $H_0$ or its alternative $H_1$. Taking the ratio of these distributions, we obtain the classical likelihood ratio $L(k_c, k_s)$ of the hypothesis $H_0$ to the alternative $H_1$. Skipping a number of transformations and simplifications (details can be found in (Antsiperov V., 2024)), we present below only the final, easily interpreted (asymptotic) expression for the likelihood ratio:

$$L(k_c, k_s) \approx C \exp\!\left(-\frac{(k_c - \alpha k)^2}{2\,\alpha(1-\alpha)\,k}\right) = C \exp\!\left(-\frac{\alpha(1-\alpha)\,(\tilde{k}_c - \tilde{k}_s)^2}{2k}\right), \quad (8)$$

where $\alpha = s_c/s$, $\tilde{k}_c = k_c\,s/s_c$ and $\tilde{k}_s = k_s\,s/s_s$ are the center and surround counts renormalized to the whole RF, $\bar{n}$ is the a priori average number of counts in a typical RF, $\bar{I}_0$ is the a priori average intensity in any region, the characteristic scale of the a priori density $w(\bar{I})$, and the constant $C$ depends on $\bar{n}$ and $\bar{I}_0$ but not on the recorded counts.
Using the uniformly most powerful unbiased (UMP) test (Young, 2005), we can now compare the goodness of fit of hypotheses $H_0$ and $H_1$ to the available data $(k_c, k_s)$. Namely, according to the Neyman-Pearson criterion, we should accept the hypothesis of coincidence $H_0$ if $L(k_c, k_s) > c$ and reject $H_0$, implying the hypothesis $H_1$ of an existing difference between the average intensities in the center and the surround, in the opposite case $L(k_c, k_s) \le c$. The positive constant $c$ used here depends on the value of the size of the test. The size of the test, in turn, can be defined as the probability of the data falling into the critical region $G(c) = \{(k_c, k_s) : L(k_c, k_s) \le c\}$ under $H_0$: $\varepsilon = P\{G(c) \mid H_0\}$.
From (8) we can obtain the explicit form of $G(c)$ (here $\alpha = s_c/s$):

$$G(c) = \left\{ (k_c, k_s) : \left| k_c - \alpha k \right| > h \sqrt{\alpha(1-\alpha)\,k} \right\}, \quad (9)$$

from which we can relate the threshold $h$ to the test size $\varepsilon$:

$$\varepsilon = \operatorname{erfc}\!\left( h / \sqrt{2} \right), \quad (10)$$
where $\operatorname{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_x^{\infty} e^{-t^2}\,dt$ is the standard complementary error function. Thus, there is no need to obtain $h$ via the constant $c$ if $\varepsilon$ is given. In accordance with (10), $h$ is equal to the $\varepsilon$-quantile of the error function $\operatorname{erfc}$, which is well tabulated. After $h$ is fixed, the criterion for rejecting the hypothesis of coincidence $H_0$, and for assuming a possible jump in intensity, a contrast on $\Omega$, takes the following final form:

$$\left| k_c - \frac{s_c}{s}\,k \right| > h \sqrt{\frac{s_c}{s}\left(1 - \frac{s_c}{s}\right) k}. \quad (11)$$
An important conclusion follows from the above discussion: if the occupancy-number representation code $\{k_j\}$ is supplemented with the "contrast fields" data, i.e. those RFs for which the residual $k_c - (s_c/s)\,k$ exceeds the threshold on the right side of (11), then the resulting (extended) code will have significantly higher quality, at least in the perceptual sense (for a more detailed analysis see (Antsiperov V., 2024)). Figure 3 demonstrates this code for the same sampling representation, partitioned by a lattice of square RFs, as in Figure 2.
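The decision rule can be sketched in a few lines. Note that this follows the reconstructed form of criterion (11): under $H_0$, given $k$, the center count $k_c$ is binomial with success probability $\alpha = s_c/s$; the function names, the area-ratio value and the test size below are illustrative assumptions:

```python
from math import sqrt, erfc

def h_from_eps(eps, lo=0.0, hi=10.0):
    # invert erfc(h / sqrt(2)) = eps by bisection (erfc is decreasing),
    # i.e. relation (10) between the threshold h and the test size eps
    for _ in range(60):
        mid = (lo + hi) / 2
        if erfc(mid / sqrt(2)) > eps:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def contrast_test(k_c, k_s, alpha=0.25, eps=0.01):
    """Center/surround contrast decision in the spirit of (11): reject
    H0 (no contrast) when the residual k_c - alpha * k exceeds h
    standard deviations of the binomial null model. alpha = s_c / s;
    the value 0.25 loosely reflects the ~1:2 center-to-RF size ratio."""
    k = k_c + k_s
    if k == 0:
        return False, 0.0
    resid = k_c - alpha * k
    h = h_from_eps(eps)
    return abs(resid) > h * sqrt(alpha * (1 - alpha) * k), resid

flat = contrast_test(100, 300)   # residual 0: no contrast flagged
edge = contrast_test(200, 300)   # large center excess: contrast flagged
```

The sign of the returned residual distinguishes ON-type (positive) from OFF-type (negative) responses of the RF.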
Figure 3: Illustration of the encoding results on a rectangular lattice of RFs for a sampling representation of size 10 000 000 counts from Figures 1, 2. On the left is the sampling representation (Figure 1, right); on the right are the RFs with notable residual values: positive residuals in white, negative residuals in black.
4 DECODING POISSON
STREAMS ENCODED BY THE
RECEPTIVE FIELDS SYSTEM
As can be seen from Figure 3, the coding procedure (11) outlines the contrast edges in the image with two chains of non-zero RFs: one chain with positive residual values and the other with negative ones. This fact is not accidental. In reality, there is a very close connection (see (Antsiperov V., 2024)) between these residual values and the output of the Laplacian of Gaussian (LoG) filter, which Marr proposed for detecting edges in digital images (Marr, 1980). Namely, to detect the points of such edges, the filter zero-crossings, Marr proposed to analyze pairs of points with the maximum and minimum LoG output values, which, as he supposed, correspond to pairs of neighboring RFs with positive and negative responses. Moreover, Marr associated such points with ON- and OFF-receptive fields, as was done from the very beginning in our approach.
Figure 4: Results of constructing chains of ON- and OFF-field pairs based on the contrast-fields code for a sampling representation of size 10 000 000 counts from Figures 1, 2. On the left is the sampling representation (Figure 1, right); on the right are the corresponding chains of ON- and OFF-field pairs, obtained by analyzing areas of 5×5 RFs.
Thus, following Marr's concept (Marr, 1980), it is possible to develop a procedure for reconstructing (decoding) Poisson streams by restoring, in addition to the smoothed intensity, also the edges of contrasts, as discussed above. In fact, the difficult part of this problem is to develop a subprocedure that selects from the set of all RFs those chains of pairs of ON- and OFF-fields that actually follow some zero-crossing lines, and rejects those non-zero RFs that are caused by random fluctuations and do not determine zero-crossing lines (see Fig. 3, right). If this problem is solved and, in addition, the order of the fields in the selected chains is found (see Fig. 4, right), then there are many ways to smoothly interpolate such broken, zigzag-shaped sequences with smooth contours, for example, using Bezier curves (De Boor, 1978), B-splines (Grove, 2011), Laplace smoothing of chains (Vollmer, 1999), etc. (see Fig. 5, right).
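Of the interpolation options listed, Laplace smoothing of chains is the simplest to sketch: each interior vertex of the ordered chain is repeatedly pulled toward the midpoint of its neighbours. The zigzag test chain and the iteration parameters below are illustrative:

```python
def laplace_smooth(chain, iterations=200, lam=0.5):
    """Laplace smoothing of an ordered chain of points (in the spirit
    of (Vollmer, 1999), simplest form): endpoints stay fixed, interior
    vertices relax toward the midpoint of their two neighbours."""
    pts = [list(p) for p in chain]
    for _ in range(iterations):
        pts = [pts[0]] + [
            [p[d] + lam * ((a[d] + b[d]) / 2 - p[d]) for d in range(2)]
            for a, p, b in zip(pts, pts[1:], pts[2:])
        ] + [pts[-1]]
    return pts

# a zigzag polyline with both endpoints at y = 1: the interior
# relaxes toward the straight segment joining the fixed endpoints
zigzag = [(x, (-1.0) ** x) for x in range(9)]
smooth = laplace_smooth(zigzag)
```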
Figure 5: Results of chain smoothing for a sampling representation of size 10 000 000 counts from Figures 1, 2. On the left is the sampling representation (Figure 1, right); on the right are the Laplace-smoothed chains. Note: chains shorter than three segments were censored.
5 CONCLUSIONS
As follows from the above, the work proposes a new
approach to the problems of neuromorphic coding of
data-event streams. Within the framework of the
proposed approach, it was possible to carry out
explicit modeling of the mechanisms of primary
neuro-processing of video data in the periphery of the
visual system. As a result, it was possible to develop
a constructive method of neuromorphic type of event
streams coding. Moreover, the experience of
numerical testing and optimization of the developed
procedures (algorithms) has shown that based on the
concept central to the proposed approach sampling
representations it is possible, on the one hand, to
avoid computational problems associated with
processing massive data, and, on the other hand, to
adapt the approach to modern neural network
problems like the one considered.
In terms of technical implementation, a feature of
the proposed method is the widespread use of the
neurobiological concept of receptive fields.
Structuring data based on a system of receptive fields
allows one to effectively circumvent the known
difficulties of many numerical algorithms (for
example, EM) that process mixtures with many
components. This conclusion follows, among other
things, from the existing experience in computer
implementation of the method. All illustrative
materials presented in the work were obtained as part
of computational experiments. Experiments
confirmed the effectiveness of the method in terms of
memory resources/computation time.
In general, based on the results obtained, the
author expresses the hope that the approach proposed
in the work and the procedures developed will find
both their further theoretical development and fruitful
use in applied problems.
REFERENCES
Al-Obaidi, S., Al-Khafaji, H. and Abhayaratne, C. (2021). Making sense of neuromorphic event data for human action recognition. In IEEE Access, V. 9, P. 82686-82700. doi: 10.1109/ACCESS.2021.3085708.
Amri, E., Felk, Y., Stucki, D., Ma, J., Fossum, E. (2016).
Quantum Random Number Generation Using a Quanta
Image Sensor. In Sensors, V. 16(7), P. 1002. doi:
10.3390/s16071002.
Antsiperov, V. (2023). New Centre/Surround Retinex-like
Method for Low-Count Image Reconstruction. In
Proceedings of the 12th International Conference on
Pattern Recognition Applications and Methods
(ICPRAM 2023), SCITEPRESS, P. 517-528. doi: 10.5220/0011792800003411.
Antsiperov, V. (2024). Neuromorphic Encoding /
Reconstruction of Images Represented by Poisson
Counts. In Proceedings of the 13th International
Conference on Pattern Recognition Applications and
Methods (ICPRAM 2024), SCITEPRESS, P. 485-493.
doi: 10.5220/0012574100003654.
Antsiperov, V. E. (2024). Adaptive Filtering of Distributed
Data Based on Modeling the Perception Mechanisms of
Living Sensory Systems. In: Vlachos, D. (ed)
Mathematical Modeling in Physical Sciences.
ICMSQUARE 2023. Springer Proceedings in
Mathematics & Statistics, V. 446, P. 19-31. doi:
10.1007/978-3-031-52965-8_2.
Asuni, N., Giachetti, A. (2014). TESTIMAGES: a large-
scale archive for testing visual devices and basic image
processing algorithms. In: STAG: Smart Tools & Apps
for Graphics, A. Giachetti (Editor).
Barrett, H. H. and Myers, K. J. (2004). Foundations of Image
Science, John Wiley and Sons, Hoboken.
Buades, A., Coll, B., Morel, J. M. (2005). A Review of
Image Denoising Algorithms, with a New One. In
Multiscale Modeling & Simulation, V. 4(2), P. 490-530.
doi: 10.1137/040616024.
Christensen, D. V., et al. (2022). 2022 roadmap on
neuromorphic computing and engineering. In
Neuromorph. Comput. Eng., V. 2, P. 022501. doi:
10.1088/2634-4386/ac4a83.
Dargan, S., Kumar, M., Ayyagari, M. R. et al. (2020). A
Survey of Deep Learning and Its Applications: A New
Paradigm to Machine Learning. In Archives of Comput.
Methods in Eng., V. 27, P. 1071-1092. doi:
10.1007/s11831-019-09344-w.
De Boor, C. (1978) A practical guide to splines. Springer-
Verlag, New York; Berlin.
Elad, M. (2005). Retinex by Two Bilateral Filters. In
Lecture notes in computer science, Springer, Berlin,
Heidelberg, P. 217-229. doi: 10.1007/11408031_19.
Frisby, J. P., Stone, J. V. (2010). Seeing: The computational
approach to biological vision. MIT Press.
Grove, O., Rajab, K., Piegl, L., et al. (2011). From CT to
NURBS: Contour Fitting with B-spline Curves. In
Computer-Aided Design and Applications, V. 8(1), P.
3-21. doi: 10.3722/cadaps.2011.3-21.
Jobson, D. J., Rahman, Z., Woodell, G. A. (1997).
Properties and performance of a center/surround
retinex. In IEEE Transactions on Image Processing, V.
6(3), P. 451-462. doi: 10.1109/83.557356.
Katz, B. (1982). Stephen William Kuffler, 24 August 1913
- 11 October 1980. In Biographical memoirs of fellows
of the Royal Society, V. 28, P. 225-259. doi:
10.1098/rsbm.1982.0011.
Kingman, J. (1993) Poisson processes. Clarendon Press.
Land, E. H., McCann, J. (1971). Lightness and retinex
theory. In Journal of the Optical Society of America, V.
61(1), P. 111. doi: 10.1364/JOSA.61.000001.
Lisani, J.-L., Morel, J.-M., Petro, A.-B., Sbert, C. (2020).
Analyzing center/surround retinex. In Information
sciences, V. 512, P. 741-759. doi:
10.1016/j.ins.2019.10.009.
Marr, D., Hildreth, E. (1980). Theory of edge detection. In
Proceedings of the Royal Society of London. Series B.
Biological Sciences, V. 207(1167), P. 187-217. doi:
10.1098/rspb.1980.0020.
Masland, R. H. (2001). The fundamental plan of the retina.
In Nature Neuroscience, V. 4(9), P. 877-886. doi:
10.1038/nn0901-877.
Masland, R. (2020). We know it when we see it: what the
neurobiology of vision tells us about how we think.
Basic Books, New York.
Murphy, K. P. (2012). Machine Learning: A Probabilistic
Perspective, 1st ed., MIT Press, Cambridge.
Robert, C. P., Casella, G. (2004). Monte Carlo Statistical
Methods, 2nd ed. Springer, New York. doi:
10.1007/978-1-4757-4145-2.
Streit, R. L. (2010). Poisson Point Processes: Imaging,
Tracking, and Sensing. Springer. doi: 10.1007/978-1-
4419-6923-1.
Vollmer, J., Mencl, R. and Müller, H. (1999). Improved
Laplacian Smoothing of Noisy Surface Meshes. In
Computer graphics forum, V. 18(3), P. 131-138. doi:
10.1111/1467-8659.00334.
Wang, Y. K., Wang, S. E. and Wu, P. H. (2023). Spike-Event
Object Detection for Neuromorphic Vision. In IEEE
Access, V. 11, P. 5215-5230. doi:
10.1109/ACCESS.2023.3236800.
Young, G. A., Smith, R. L. (2005). Essentials of statistical
inference. Cambridge University Press, Cambridge.
Zapp, S. J., Nitsche, S., Gollisch, T. (2022). Retinal
receptive-field substructure: scaffolding for coding and
computation. In Trends Neurosci (Regular ed.), V.
45(6), P. 430-445. doi: 10.1016/j.tins.2022.03.005.
Zhang, F., and Bull, D. (2021). Measuring and managing
picture quality. In Intelligent Image and Video
Compression, 2nd ed. Elsevier Science & Technology.
Zhu, S. C. and Wu, Y. N. (2023). Computer Vision:
Statistical Models for Marr's Paradigm, 1st ed. Cham:
Springer.
EXPLAINS 2024 - 1st International Conference on Explainable AI for Neural and Symbolic Methods