THE CONFORMAL CAMERA IN MODELING VISUAL
INFORMATION DURING EYES MOVEMENTS
Jacek Turski
Department of Computer and Mathematical Sciences, University of Houston-Downtown, One Main Street, Houston, U.S.A.
Keywords:
The conformal camera, Projective Fourier transform, Retinotopy, Saccades, Perisaccadic perception, Pursuit.
Abstract:
The conformal camera and its related projective Fourier transform, which provide an image representation well adapted to projective transformations and to the retinotopic mappings of the brain's visual pathways, are reviewed. The effectiveness of the conformal camera's non-Euclidean geometry in intermediate-level vision is discussed, the algorithmic steps in modeling visual information during saccadic eye movements are outlined, and research in progress on modeling perception during pursuit eye movements is described. It is concluded that the conformal camera may provide the computational framework needed for developing tools for processing visual information during the exploratory movements of a camera with a silicon retina, as used in autonomous mobile robots.
1 INTRODUCTION
The light carrying visual information about the external world enters the primate visual system through the eyeball's pupil, strikes the photoreceptors where transduction into electrical impulses takes place, and passes through the multi-layered neuronal circuitry of the retina where it undergoes initial processing. The retinal output is conveyed to numerous downstream brain areas for further processing and, when combined with other sensory information, results in our understanding of the 3D world that guides our actions.

This problem, which the brain must solve in real time, is immensely complex. One of the reasons for the complexity is the fact that primates see clearly only the central two degrees of the visual field, projected onto the central fovea, which consists mainly of high-density cone cells, the color-selective type of photoreceptor for sharp daylight vision. Visual acuity decreases rapidly away from the fovea because the distance between cones increases with eccentricity as they are outnumbered by rod cells, the photoreceptors for low-acuity black-and-white night vision. Moreover, there is an increased convergence of the photoreceptors on the ganglion cells, whose axons send retinal information to the brain areas in precise retinotopic arrangements. To overcome this acuity limitation, the brain executes a scanning eye movement consisting of a sequence of saccades that reposition the fovea on the objects of interest, interlaced with fixations during which the visual information is acquired. Humans usually make about three saccades per second at the eyeball's maximum speed of 700 deg/sec, producing about 200,000 saccades per day. This sequence of saccades, fixations, and sometimes also smooth-pursuit eye movements, which keep the fovea focused on a slowly (up to 100 deg/sec) moving object for detailed analysis, is the most basic feature underpinning primate visual perception.
Although there has been great progress in understanding the neural processes underlying our clear and stable perception in spite of limited acuity and incessant eye movements, see (Klier and Angelaki, 2008; Wurtz, 2008) for reviews, the mechanisms involved are still not fully understood. Converging evidence from psychophysics, functional neuroimaging, and primate neurophysiology supports the current view that the most attractive neural basis of visual stability consists of the mechanisms that partially suppress visual sensitivity during saccades and cause visual and visuo-motor cells in various brain areas to respond to stimuli before the eyes move their receptive fields there, commonly referred to as the shifting receptive fields mechanism (Duhamel et al., 1992; Melcher and Colby, 2007). This shift of receptive fields, starting 50 ms before a saccade's onset and ending 50 ms after the saccade's landing, is hypothesized to update (or remap) the retinotopic maps in anticipation of each upcoming saccade.
Given the above review of visual neuroscience, one should not be surprised that, in spite of the sustained efforts over many decades that have resulted in significant advances in the application of robotics to industry, medicine, the military, and space and underwater
exploration, humanoid robots are still far in the future. On the other hand, with the recently proposed research program on trans-saccadic perception (Melcher and Colby, 2007), it is now becoming important to propose biologically mediated engineering approaches to modeling visual information during exploratory eye movements. This has been the main goal of our recent work (Turski, 2010), which we review in this article.
We model the eyes' imaging functions with the conformal camera we have developed for robotic vision. Remarkably, the conformal camera possesses its own projective Fourier transform (PFT), providing an efficient image representation well adapted to image projective transformations and to the retinotopic mapping of the brain's visual and oculomotor pathways. Thus, the conformal camera integrates the head, eyes, and retinotopy into a single computational system that allows algorithmic modeling of visual information during exploratory eye movements. In particular, we demonstrate that the image representation in terms of the PFT may efficiently model the receptive field shifts that remap cortical retinotopy in anticipation of each saccade, as well as the related phenomenon of perisaccadic compression of perceptual space observed in human subjects in laboratory experiments (Ross et al., 1997). Notably, this system may also model a newly emerging role of the retinal circuitry in computing the anticipatory aspects of eye-tracking movements and in partially suppressing visual sensitivity during saccadic eye motion (Gollisch and Meister, 2010). Finally, we describe our ongoing work on modeling smooth-pursuit eye movements and catch-up saccades. Relations to other work are mentioned in the last section.
2 MATHEMATICAL
BACKGROUND
The conformal camera and the related projective Fourier analysis were first discussed in (Turski, 2000); a full mathematical formulation was later presented in (Turski, 2004; Turski, 2005).
2.1 The Conformal Camera
In the conformal camera, points $(x_1, x_2, x_3)$ of a 3D scene are projected under the mapping $j(x_1, x_2, x_3) = (x_3 + ix_1)/x_2$ into the image plane $x_2 = 1$ with complex coordinates $z = x_3 + ix_1$. The basic image transformations in the conformal camera, shown in Fig 1, are of two types.
Figure 1: (a) The image transformation of a planar object translated relative to a fixed-gaze camera. (b) The image transformation resulting from a gaze change. Only 2D cross-sections are shown.
1. The Camera Maintains a Fixed Gaze. The image transformation resulting from a planar object's translational movement is given by the h-transformation, in which an image is translated out of the image plane by $\mathbf{b} = (b_1, b_2, b_3)$ and then projected by $j$ back to the image plane (Fig 1 (a)),

$$h(b_1, b_2, b_3)\cdot z = \begin{pmatrix} \delta & 0 \\ \gamma\delta^{-1} & \delta^{-1} \end{pmatrix}\cdot z = \frac{\delta^{-1}z + \gamma\delta^{-1}}{\delta}, \qquad (1)$$

where $\delta = (1+b_2)^{1/2}$ and $\gamma = b_3 + ib_1$.
2. The Line of Sight of the Camera is Rotated. The image transformation of a planar stationary object is given by the hk-composition of the h-transformation (1) and the k-transformation, in which an image projected by $j^{-1}$ onto the unit sphere $S^2_{(0,1,0)}$ centered at $(0,1,0)$ is rotated by the Euler angles $(\psi, \phi, \psi')$ and projected by $j$ back to the image plane,

$$k(\psi, \phi, \psi')\cdot z = \begin{pmatrix} \alpha & \beta \\ -\bar{\beta} & \bar{\alpha} \end{pmatrix}\cdot z = \frac{\bar{\alpha}z - \bar{\beta}}{\beta z + \alpha}, \qquad (2)$$

where $\alpha = e^{i(\psi+\psi')/2}\cos(\phi/2)$ and $\beta = -ie^{i(\psi-\psi')/2}\sin(\phi/2)$. The hk-transformation is shown in Fig 1 (b). The finite iterations of the transformations (1) and (2) generate (see (Turski, 2004)) the action

$$z \longmapsto \begin{pmatrix} a & b \\ c & d \end{pmatrix}\cdot z = \frac{dz + c}{bz + a} \qquad (3)$$
of the group

$$\mathrm{SL}(2,\mathbb{C}) = \left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} : a, b, c, d \in \mathbb{C},\ ad - bc = 1 \right\}$$

with an added point at infinity such that $-a/b$ is mapped to $\infty$. Thus, if $f(z)$ is the intensity function of an image, its transformations $f(g^{-1}\cdot z)$ are given by the following mappings: if $g \in \mathrm{PSL}(2,\mathbb{C})$, then

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} = g \longmapsto f(g^{-1}\cdot z) = f\!\left(\frac{az - c}{-bz + d}\right). \qquad (4)$$

We must take the quotient $\mathrm{PSL}(2,\mathbb{C}) = \mathrm{SL}(2,\mathbb{C})/\{\pm\mathrm{Id}\}$, where $\mathrm{Id}$ is the identity, to identify the matrices $\pm g$, because $g\cdot z = (-g)\cdot z$.
The conformal camera combines geometric and analytic (numerical) structures, since $\mathrm{PSL}(2,\mathbb{C})$ is the group of holomorphic automorphisms of the Riemann sphere $\widehat{\mathbb{C}} = \mathbb{C}\cup\{\infty\}$ (Jones and Singerman, 1987) that preserve the projective geometry imposed by the complex structure, known as Möbius geometry (Henle, 1997). Further, there is a fully understood Fourier analysis on the group $\mathrm{PSL}(2,\mathbb{C})$ and its homogeneous spaces (Knapp, 1986).
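To make these formulas concrete, the following minimal sketch (ours, not from the paper; the names mobius_action, h_transform, and k_transform are hypothetical) implements the action (3) and the generators (1) and (2) on complex pixel coordinates, and checks (1) directly against translating a scene point and reprojecting it by j:

    import numpy as np

    def mobius_action(g, z):
        # Action (3): z -> (d*z + c) / (b*z + a) for g = [[a, b], [c, d]].
        (a, b), (c, d) = g
        return (d * z + c) / (b * z + a)

    def h_transform(b1, b2, b3):
        # Matrix of the h-transformation (1); delta = (1+b2)^(1/2), gamma = b3 + i*b1.
        delta = np.sqrt(1.0 + b2)
        gamma = b3 + 1j * b1
        return np.array([[delta, 0.0], [gamma / delta, 1.0 / delta]])

    def k_transform(psi, phi, psi_p):
        # Matrix of the k-transformation (2) for the Euler angles (psi, phi, psi').
        alpha = np.exp(1j * (psi + psi_p) / 2) * np.cos(phi / 2)
        beta = -1j * np.exp(1j * (psi - psi_p) / 2) * np.sin(phi / 2)
        return np.array([[alpha, beta], [-np.conj(beta), np.conj(alpha)]])

    # Check (1): translating the scene point (x1, 1, x3) by b and projecting by j
    # agrees with the h-transformation acting on z = x3 + i*x1.
    z = 0.3 + 0.2j
    b1, b2, b3 = 0.1, 0.5, -0.2
    x1, x2, x3 = z.imag + b1, 1.0 + b2, z.real + b3
    assert np.isclose((x3 + 1j * x1) / x2, mobius_action(h_transform(b1, b2, b3), z))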
2.2 Projective Fourier Analysis
We constructed the projective Fourier analysis by restricting the geometric Fourier analysis of $\mathrm{SL}(2,\mathbb{C})$, a direction in the representation theory of semisimple Lie groups (Knapp, 1986), to the image plane of the conformal camera (see Section 5.1 in (Turski, 2005)). The resulting projective Fourier transform (PFT) of a given image intensity function $f$ is the following:

$$\widehat{f}(s,k) = \frac{i}{2}\int f(z)\,|z|^{-is-1}\left(\frac{z}{|z|}\right)^{-k} dz\,d\bar{z}, \qquad (5)$$

where $(s,k)\in\mathbb{R}\times\mathbb{Z}$ and, if $z = x_3 + ix_1$, then $\frac{i}{2}\,dz\,d\bar{z} = dx_3\,dx_1$. In the log-polar coordinates $(u,\theta)$ given by $\ln(re^{i\theta}) = \ln r + i\theta = u + i\theta$, the area element becomes $e^{2u}\,du\,d\theta$, and (5) takes on the form of the standard Fourier integral

$$\widehat{f}(s,k) = \int\!\!\int f(e^{u+i\theta})\,e^{u}\,e^{-i(us+\theta k)}\,du\,d\theta. \qquad (6)$$
Inverting it, we obtain the representation of the image intensity function in the $(u,\theta)$-coordinates,

$$e^{u} f(u,\theta) = \frac{1}{(2\pi)^2}\sum_{k=-\infty}^{\infty}\int \widehat{f}(s,k)\,e^{i(us+\theta k)}\,ds,$$

where $f(u,\theta) = f(e^{u+i\theta})$. We stress that, although $f(e^{u+i\theta})$ and $f(u,\theta)$ are numerically equal, they are given on different spaces.
We note that, in spite of the logarithmic singularity of the log-polar coordinates, an image $f$ that is integrable on $\mathbb{C}^{*} = \mathbb{C}\setminus\{0\}$ has a finite PFT:

$$\bigl|\widehat{f}(s,k)\bigr| \le \int_0^{2\pi}\!\!\int_{-\infty}^{u_1} f(e^{u+i\theta})\,e^{u}\,du\,d\theta = \int_0^{2\pi}\!\!\int_0^{r_1} f(re^{i\theta})\,dr\,d\theta < \infty. \qquad (7)$$

This observation is crucial in constructing the discrete PFT.
2.3 Discrete Projective Fourier
Transform
It follows from (7) that we can remove a disk $|z| \le r_a$ in order to regularize $f$ such that the support of $f(u,\theta)$ is contained within $(\ln r_a, \ln r_b)\times[0,2\pi)$, and approximate the integral in (6) by a double Riemann sum with equally spaced partition points

$$(u_k, \theta_l) = (\ln r_a + k\delta,\ l\gamma), \qquad (8)$$

where $0\le k\le M-1$, $0\le l\le N-1$, $\delta = T/M$ with $T = \ln(r_b/r_a)$, and $\gamma = 2\pi/N$. We obtain the discrete projective Fourier transform (DPFT),

$$\widehat{f}_{m,n} = \sum_{k=0}^{M-1}\sum_{l=0}^{N-1} f_{k,l}\,e^{u_k}\,e^{-i2\pi mk/M}\,e^{-i2\pi nl/N}, \qquad (9)$$

and its inverse (IDPFT),

$$f_{k,l} = \frac{1}{MN}\sum_{m=0}^{M-1}\sum_{n=0}^{N-1} \widehat{f}_{m,n}\,e^{-u_k}\,e^{i2\pi mk/M}\,e^{i2\pi nl/N}, \qquad (10)$$

where $f_{k,l} = (2\pi T/MN)\, f(e^{u_k}e^{i\theta_l})$ and $f_{k,l} = (2\pi T/MN)\, f(u_k,\theta_l)$. Both expressions (9) and (10) can be computed efficiently by FFT.
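To illustrate this claim, here is a minimal sketch (ours; the helper names dpft and idpft are hypothetical) showing that (9) and (10) reduce to ordinary 2D FFTs once the samples are weighted by $e^{u_k}$:

    import numpy as np

    def dpft(f_samples, u):
        # DPFT (9): weight row k by exp(u_k), then a standard 2D FFT.
        return np.fft.fft2(f_samples * np.exp(u)[:, None])

    def idpft(f_hat, u):
        # IDPFT (10): inverse 2D FFT followed by the exp(-u_k) weighting.
        return np.fft.ifft2(f_hat) * np.exp(-u)[:, None]

    # Round trip on an M x N log-polar grid with u_k = ln(r_a) + k*delta.
    M, N, r_a, r_b = 64, 128, 1.0, 64.0
    T = np.log(r_b / r_a)
    u = np.log(r_a) + (T / M) * np.arange(M)
    f = np.random.rand(M, N)
    assert np.allclose(idpft(dpft(f, u), u), f)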
2.4 Image Projective Transformations
Under a projective transformation by $g\in \mathrm{SL}(2,\mathbb{C})$, the retinal pixels $z_{k,l} = e^{u_k}e^{i\theta_l}$ of an image $f$ are transformed by

$$z'_{k,l} = g^{-1}\cdot z_{k,l} = e^{u'_{k,l}}e^{i\theta'_{k,l}}.$$

For example, if the camera rotates by an angle $\phi$ about the vertical axis,

$$z'_{m,n} = k(0,2\phi,0)\cdot z_{m,n} = \frac{z_{m,n}\cos\phi - i\sin\phi}{-iz_{m,n}\sin\phi + \cos\phi}, \qquad (11)$$

then the log-polar pixels $(u_m,\theta_n)$ (recall (8)) are transformed into the non-uniformly spaced points $(u'_{m,n},\theta'_{m,n})$ with coordinates given by the equations

$$e^{2u'_{m,n}} = \frac{e^{2u_m}\cos^2\phi + \sin^2\phi - e^{u_m}\sin 2\phi\,\sin\theta_n}{e^{2u_m}\sin^2\phi + \cos^2\phi + e^{u_m}\sin 2\phi\,\sin\theta_n} \qquad (12)$$
and

$$\tan\theta'_{m,n} = \frac{\frac{1}{2}\bigl(e^{2u_m}-1\bigr)\sin 2\phi + e^{u_m}\sin\theta_n\cos 2\phi}{e^{u_m}\cos\theta_n}. \qquad (13)$$
Computer simulations of these image projective transformations were presented in (Turski, 2003; Turski, 2005). The projectively adapted characteristics are expressed by the resulting IDPFT (see Section 9 in (Turski, 2004) for details),

$$f'_{m,n} = \frac{1}{MN}\sum_{k=0}^{M-1}\sum_{l=0}^{N-1} \widehat{f}_{k,l}\,e^{-u'_{m,n}}\,e^{i2\pi u'_{m,n}k/T}\,e^{i\theta'_{m,n}l}, \qquad (14)$$

where $f'_{m,n} = (2\pi T/MN)\, f(u'_{m,n},\theta'_{m,n})$. Thus, one can render image projective transformations in terms of the PFT $\widehat{f}_{k,l}$ of the original image.
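The following sketch (ours, not from the paper) applies the rotation (11) to a log-polar grid and checks the closed forms (12) and (13) against a direct evaluation of the Möbius map; N is chosen so that the grid avoids cos θ_n = 0, where (13) degenerates:

    import numpy as np

    M, N, r_a, r_b, phi = 32, 62, 1.0, 32.0, 0.15
    T = np.log(r_b / r_a)
    u = np.log(r_a) + (T / M) * np.arange(M)
    theta = (2 * np.pi / N) * np.arange(N)
    z = np.exp(u)[:, None] * np.exp(1j * theta)[None, :]      # pixels z_{m,n}

    # Direct evaluation of the rotation (11).
    z_p = (z * np.cos(phi) - 1j * np.sin(phi)) / (-1j * z * np.sin(phi) + np.cos(phi))

    # Closed forms (12) and (13) for the transformed log-polar coordinates.
    e_u, s, c = np.exp(u)[:, None], np.sin(theta)[None, :], np.cos(theta)[None, :]
    num = e_u**2 * np.cos(phi)**2 + np.sin(phi)**2 - e_u * np.sin(2 * phi) * s
    den = e_u**2 * np.sin(phi)**2 + np.cos(phi)**2 + e_u * np.sin(2 * phi) * s
    u_p = 0.5 * np.log(num / den)
    tan_theta_p = (0.5 * (e_u**2 - 1) * np.sin(2 * phi) + e_u * s * np.cos(2 * phi)) / (e_u * c)

    assert np.allclose(np.log(np.abs(z_p)), u_p)
    assert np.allclose(np.tan(np.angle(z_p)), tan_theta_p)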
3 IMAGING WITH THE
CONFORMAL CAMERA
The conformal camera provides a mathematical representation of visual information that efficiently supports the retina's hard-wired, eccentricity-dependent visual resolution and the processes of stereoscopic depth perception (Turski, 2006). In this section, we review the effectiveness of the conformal camera's non-Euclidean geometry in the intermediate-level vision problems of grouping local elements of natural scenes into individual objects, and in the front-end modeling of neural mechanisms that may contribute to the continuity and stability of perisaccadic perception (Turski, 2010).
3.1 Intermediate-level Vision,
Retinotopy and Peripheral Vision
Intermediate-level Vision. The image projective transformations (3) are conformal mappings with the fundamental property of mapping circles and lines either to circles or to lines (Jones and Singerman, 1987). In (Turski, 2010), we showed that these image projective transformations are relevant to the psychological and computational aspects of natural scene understanding. In fact, humans effortlessly and unambiguously group the local changes in contrast extracted by the retina, which represent fragmented contours (edges of occluded objects), into coherent, global shapes (intermediate-level vision). Evidence accumulated in psychological and physiological studies suggests that the human visual system utilizes a local grouping process with two simple rules: collinearity (receptive fields aligned along a line) and cocircularity (receptive fields aligned along a circle) (Sigman et al., 2001). Further, it has been suggested that cocircularity is the critical factor in the perception of texture regions (Motoyoshi and Kingdom, 2010). Since circles and lines are preserved under image projective transformations, the grouping process itself is preserved, as illustrated by the sketch below.
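As a quick numeric illustration (ours, with an arbitrarily chosen g): four concyclic points have a real cross-ratio, and this realness, hence cocircularity, survives any transformation of the form (3):

    import numpy as np

    def mobius(g, z):
        # Action (3): z -> (d*z + c) / (b*z + a) for g = [[a, b], [c, d]].
        (a, b), (c, d) = g
        return (d * z + c) / (b * z + a)

    def cross_ratio(z1, z2, z3, z4):
        # Real cross-ratio <=> the four points lie on a common circle or line.
        return ((z1 - z3) * (z2 - z4)) / ((z1 - z4) * (z2 - z3))

    # Four points on the circle |z - (1 + i)| = 2, and some g in SL(2, C).
    pts = 1 + 1j + 2 * np.exp(1j * np.array([0.3, 1.1, 2.7, 4.0]))
    a, b, c = 1.2 + 0j, 0.5 - 0.1j, 0.3j
    g = np.array([[a, b], [c, (1 + b * c) / a]])            # det(g) = ad - bc = 1
    assert abs(cross_ratio(*pts).imag) < 1e-9               # concyclic before ...
    assert abs(cross_ratio(*mobius(g, pts)).imag) < 1e-9    # ... and after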
Peripheral Vision. The conformal camera with its own DPFT provides an image representation computable by FFT in log-polar coordinates, which simultaneously approximate retinotopy. Although this representation reduces the number of pixels more than 100 times (see Example 1 in Section 5 of (Turski, 2010)), the central region corresponding to the high-acuity foveal region has to be removed to regularize the logarithmic singularity. Without the corresponding reduction (recall the discussion in Section 1), the brain would run out of resources when processing all of the incoming visual information. Further, this image representation in terms of the DPFT numerically integrates both the projective image transformations produced by the conformal camera's gaze changes and the dependent log-polar coordinates, or retinotopic maps, providing a computational framework for modeling visual information during eye movements such as saccades. This is supported by the fact that only peripheral vision is important for processing visual information during saccades.
3.2 Processing Visual Information
During Saccades
In (Turski, 2010), we developed a model, first suggested in (VanRullen, 2004), for visual information processing during saccadic eye movements. This model is based on the perisaccadic activities in which the shift of stimuli's presaccadic receptive fields to their future postsaccadic locations is thought to underlie the remapping of the scene from the current foveal frame to the frame at the upcoming saccade target (Duhamel et al., 1992). This remapping uses the motor command of the impending saccade and may help maintain the stability of primate perception in spite of incessant interruptions by saccades. A brief description of the model shown in Fig 2 is as follows. The eye, initially fixated at F, starts a horizontal saccade to the target located at T (Fig 2, The scene). The scene with the fixation at F is projected onto the retina (Fig 2 (a), (b)) and sampled by the photoreceptor/ganglion cells to give the set of samples $f_{m,n} = (2\pi T/MN)\, f(e^{u_m}e^{i\theta_n})$ (Fig 2 (c)). Next, the DPFT $\widehat{f}_{k,l}$ is computed by FFT in the log-polar coordinates $(u_k,\theta_l)$, where $u_k = \ln r_k$. The inverse DPFT, computed again by FFT, renders the image cortical representation $f_{m,n} = (2\pi T/MN)\, f(u_m,\theta_n)$. The log-polar coordinates' singularity is regularized
Figure 2: (a) The projection of four probes flashed around the upcoming saccade's target T in 'The scene'. (b) The probes' 'retinal' images. (c) Shifts of the 'cortical' receptive fields of the probes using the shift property of the Fourier transform. (d) Remapped 'cortical' receptive fields. Also, the resulting illusory perceptual compression of the probes, indicated by arrows from their true positions, is shown in 'The scene'.
by removing the (re-scaled) disk of radius 1 representing the fovea. A short time before the saccade onset and during the saccadic movement redirecting the gaze line from F to T, the log-polar coordinates (retinotopic maps) are remapped by shifting the frame centered at the receptive field of T to its future foveal location. This neural process is modeled by the shift property of the inverse DPFT,

$$f_{m+h,n-j} = \frac{1}{MN}\sum_{k=0}^{M-1}\sum_{l=0}^{N-1} e^{i2\pi hk/M}\,e^{-i2\pi jl/N}\,\widehat{f}_{k,l}\,e^{-(u_m+h\delta)}\,e^{i2\pi mk/M}\,e^{i2\pi nl/N}, \qquad (15)$$

which can be computed by FFT (Fig 2 (b), (d)). We note that if the cortical image pixel $f_{m,n}$ is translated past the fovea, its translation involves both the $u$- and $\theta$-directions; in (15) the image is translated by $h$ pixels in the $u$-direction and by $j$ pixels in the $\theta$-direction. Otherwise, it involves a translation in the $u$-direction only; see Fig 2 (c). The perisaccadic compression observed in laboratory experiments (Ross et al., 1997) is obtained by decoding the cortical image representation to the visual field representation:
$$f_{m+h,n-j} = (2\pi T/MN)\, f(u_m + h\delta,\ \theta_n - j\gamma) = (2\pi T/MN)\, f\bigl(e^{u_m+h\delta}e^{i(\theta_n - j\gamma)}\bigr) = (2\pi T/MN)\, f\bigl(e^{h\delta}r_m e^{i(\theta_n - j\gamma)}\bigr).$$

We see that the original position $r_m e^{i\theta_n}$ is transformed to $e^{h\delta}r_m e^{i(\theta_n - j\gamma)}$, resulting in the compression (Fig 2, The scene).
Importantly, the shift $(h, j)$ in terms of cortical pixels can be taken as a function of time to account for the very tight time course followed by perisaccadic compression, with a duration of about 130 ms and the maximum mislocalization occurring immediately before the saccade.
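A minimal sketch (ours, under the sampling conventions of (9) and (10)) of the remapping (15): the shift by (h, j) cortical pixels is a pure phase modulation of the DPFT coefficients, followed by an inverse FFT and the re-weighting by exp(-(u_m + h*delta)); rows shifted past the grid edge are excluded from the check because u, unlike θ, is not periodic:

    import numpy as np

    def remap(f_cortical, u, h, j, delta):
        # Shift property (15): phases on the DPFT move the image by h pixels
        # in the u-direction and by j pixels in the theta-direction.
        M, N = f_cortical.shape
        f_hat = np.fft.fft2(f_cortical * np.exp(u)[:, None])          # DPFT (9)
        k = np.arange(M)[:, None]
        l = np.arange(N)[None, :]
        phase = np.exp(2j * np.pi * h * k / M) * np.exp(-2j * np.pi * j * l / N)
        return np.fft.ifft2(phase * f_hat) * np.exp(-(u + h * delta))[:, None]

    M, N, h, j = 32, 64, 3, 5
    T = np.log(64.0)
    delta = T / M
    u = delta * np.arange(M)
    f = np.random.rand(M, N)
    shifted = remap(f, u, h, j, delta)
    # shifted[m, n] = f[m + h, n - j] wherever m + h stays on the grid.
    assert np.allclose(shifted[: M - h], np.roll(f, j, axis=1)[h:])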
3.3 Research in Progress: Smooth
Pursuit and Catch-up Saccades
During the tracking of a predictably moving target, the eyes' pursuit is initially driven by the motion of the target's image across the retina during the latency period. Later, when the target is almost perfectly stabilized on the fovea (zero velocity error), extraretinal mechanisms anticipating the sensory outcome of the smooth-pursuit eye movement rapidly take over and provide the neural drive that keeps the eyes moving (Lisberger et al., 1987). Behavioral experiments have shown that during pursuit of unpredictable or fast-moving targets, the saccadic system uses velocity error in addition to position error to generate estimates of the future target position in order to program and trigger catch-up saccades (De Brouwer et al., 2002). Therefore, the orientation of the visual axis in space requires the coordination of smooth pursuit with catch-up saccades (Erkelens, 2006).
We assume a horizontal gaze change, given by $k(0,2\phi,0)$ and $\mathbf{b} = (b_1, b_2, 0)$, as shown in Fig 1 (b). We let $g = k(0,2\phi,0)\,h(k_1,k_2,0)$ be the composition $g = g_1 g_0$, where

$$g_0 = k(0,2\phi_0,0)\,h(b_1,b_2,0)$$

describes an initial catch-up saccade, followed by a tracking movement

$$g_1 = k(0,2\phi_1(t),0)\,h(c_1(t),c_2(t),0).$$
We solve the equation $g = g_1 g_0$ for $h^{-1}(c_1,c_2,0)$,

$$h^{-1}(c_1,c_2,0) = k(0,2\phi_0,0)\,h(b_1,b_2,0)\,h^{-1}(k_1,k_2,0)\,k(0,-2\phi_0,0),$$

which has the matrix form

$$\begin{pmatrix} (1+c_2)^{-1/2} & 0 \\ -ic_1(1+c_2)^{-1/2} & (1+c_2)^{1/2} \end{pmatrix} = \begin{pmatrix} \alpha_1 & \alpha_2 \\ \alpha_3 & \alpha_4 \end{pmatrix}.$$
Then, from $\alpha_2 = 0$ we get

$$(k_2 - b_2)\cos\phi_0 = (k_1 - b_1)\sin\phi_0, \qquad (16)$$

which simplifies the other matrix elements,

$$\alpha_1 = \alpha_4^{-1} = \left(\frac{1+k_2}{1+b_2}\right)^{1/2} = (1+c_2)^{-1/2}$$
and

$$\alpha_3 = i\,\frac{k_1 - b_1}{1+b_2}\left(\frac{1+k_2}{1+b_2}\right)^{1/2} = -ic_1(1+c_2)^{-1/2}.$$
Now $\alpha_1$, $\alpha_3$, and (16) lead to the relations between the vectors $\mathbf{c} = (c_1,c_2,0)$, $\mathbf{b} = (b_1,b_2,0)$, and $\mathbf{k} = (k_1,k_2,0)$ and the angle $\phi_0$ summarized in Table 1.
Table 1: All solutions of the composition of two gaze changes. Here, + or - means that the value of the quantity in the corresponding column is positive or negative, respectively; the other choices of + or - result in contradictions.
Here the saccadic gaze rotation problem is different from the one discussed in Section 3.2, since the target is moving. We show that the vector parameters $\mathbf{c}$ of the smooth pursuit are linked to the saccadic eye rotation $\phi_0$. Since the smooth-pursuit rotation angle $\phi_1(t)$ can be considered known (from efference copy or anticipatory mechanisms), only one internal parameter $c_1(t)$ is needed to describe the retinal image transformation during the smooth-pursuit movement of the conformal camera. Thus, the results we present in this section support the fact that smooth pursuit and saccades are not independent (Erkelens, 2006). We intend to use the compositions of gaze changes presented in this section to model eye movement sequences, as sketched below; see (Quaia et al., 2010) for an example.
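As a numerical illustration (ours; the matrix conventions follow (1) and (2) with the third translation component set to zero), one can verify the decomposition g = g1 g0 and recover the pursuit translation h(c1, c2, 0) from the total gaze change and the catch-up saccade:

    import numpy as np

    def h(b1, b2):
        # h-transformation (1) with b3 = 0, so gamma = i*b1.
        d = np.sqrt(1.0 + b2)
        return np.array([[d, 0.0], [1j * b1 / d, 1.0 / d]])

    def k(phi):
        # k(0, 2*phi, 0) from (2): alpha = cos(phi), beta = -i*sin(phi).
        a, b = np.cos(phi), -1j * np.sin(phi)
        return np.array([[a, b], [-np.conj(b), np.conj(a)]])

    # Catch-up saccade g0 followed by a tracking movement g1.
    phi0, b1, b2 = 0.2, 0.4, 0.1
    phi1, c1, c2 = 0.05, -0.3, 0.25
    g0 = k(phi0) @ h(b1, b2)
    g1 = k(phi1) @ h(c1, c2)
    g = g1 @ g0                                       # total gaze change

    # Recover the pursuit translation part from g, g0 and the known phi1(t).
    h_c = np.linalg.inv(k(phi1)) @ g @ np.linalg.inv(g0)
    assert np.allclose(h_c, h(c1, c2))
    assert np.isclose(np.linalg.det(g), 1.0)          # g stays in SL(2, C)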
4 CONCLUSIONS
The conformal camera provides a computational framework with the unique capability of supporting the development of algorithms for processing visual information during the exploratory motions of a camera with anthropomorphic visual sensors, motions that resemble a sequence of saccades interlaced with fixations. In particular, we presented algorithmic steps based on the front-end neural processes of perisaccadic perception that use the oculomotor command of the impending saccade to shift stimuli's receptive fields in cortical areas to their future postsaccadic locations. This shift is thought to underlie the remapping of the scene from the current foveal frame to the frame at the upcoming saccade target, and it may help acquire visual information without repeating, afresh, the whole process at each fixation, thereby maintaining the stability of primate perception in spite of the about 200,000 saccades produced each day. Because the shift occurs in log-polar coordinates, it explains perisaccadic spatial distortion. Also, the conformal camera seems to be able to model smooth-pursuit eye movements, as our preliminary results suggest. Of course, much more work has to be done before the system could be tested.
In addition to our model of perisaccadic perception (Section 3.2), there is another elaborate computational model, which assumes that the receptive fields of flashed stimuli in cortical areas dynamically change position toward the saccade target's receptive field as the result of gain feedback from the retinotopically organized activity hill of the saccade target in the oculomotor layer of the superior colliculus (Hamker et al., 2008). There, the perceived spatial distortion of stimuli results from the cortical magnification factor of the retinotopic mapping when the position of each stimulus is decoded from the activity of the neural ensemble. What sets our modeling apart from this model is the fact that computational efficiency is built into the modeling process (computations with FFT) and that it accommodates other types of eye movements. In the case of saccadic eye movements, this is especially important because of the occurrence of three saccades per second and the time needed for the oculomotor system to plan and execute each saccade.
REFERENCES
de Brouwer S, Yuksel D, Blohm G, Missal M, and Lefevre
P. (2002). What triggers catch-up saccades during vi-
sual tracking? Journal of Neurophysiology, 87, 1646–
1650.
Duhamel J-R, Colby C. L, and Goldberg M. E. (1992). The updating of the representation of visual space in parietal cortex by intended eye movements, Science, 255, 90-92.
Erkelens C J. (2006). Coordination of smooth pursuit and
saccades, Vision Research, 46, 163-170.
Gollisch T and Meister M. (2010). Eye Smarter than Scien-
tists Believed: Neural Computations in Circuits of the
Retina, Neuron, 65, 150-164.
Hamker F. H, Zirnsak M, Calow D, and Lappe M.
(2008). The Peri-Saccadic Perception of Objects
and Space, PLoS Computational Biology 4(2):e31.
doi:10.1371/journal.pcbi.0040031.
Henle M. (1997). Modern Geometries. The Analytical Ap-
proach, Prentice Hall, Upper Saddle River, NJ.
Jones G and Singerman D. (1987). Complex Functions,
Cambridge University Press, Cambridge.
Klier E. M. and Angelaki D. E. (2008). Spatial Updat-
ing and the Maintenance of Visual Constancy, Neu-
roscience 156, 801-818.
Knapp A. W. (1986). Representation Theory of Semisimple
Groups: An overview Based on Examples, Princeton
University Press, Princeton, NJ.
Lisberger S. G., Morris E. J., and Tychsen L. (1987). Visual motion processing and sensory-motor integration for smooth pursuit eye movements, Annual Review of Neuroscience, 10, 97-129.
Melcher D. and Colby C. L. (2007). Trans-saccadic perception, Trends in Cognitive Sciences, 12, 466-473.
Motoyoshi I. and Kingdom F. A. (2010). The role of cocir-
cularity of local elements in texture perception, Jour-
nal of Vision 10(1):3, 1-8.
Quaia C., Joiner W. M., Fitzgibbon W. J., Optican L. M.,
and Smith M. A. (2010). Eye movement sequences
generation in humans: Motor or goal updating? Jour-
nal of Vision, 10(14):28, 1-31.
Ross J., Morrone M. C., and Burr D. C. (1997). Compression of visual space before saccades, Nature, 386, 598-601.
Sigman M, Cecchi G A, Gilbert C. D., and Magnasco M.
(2001). On a common circle: Natural scenes and
Gestalt rules, Proceedings of the National Academy
of Sciences of the U.S.A., 98, 1935-1940.
Turski J. (2000). Projective Fourier analysis for patterns,
Pattern Recognition, 33, 2033-2043.
Turski J. (2004). Geometric Fourier Analysis of the Con-
formal Camera for Active Vision, SIAM Review, 46,
230-255.
Turski J. (2005). Geometric Fourier Analysis for Computa-
tional Vision, Journal of Fourier Analysis and Appli-
cations, 11, 1-23.
Turski J. (2006). Computational Harmonic Analysis for Hu-
man and Robotic Vision Systems, Neurocomputing,
69, 1277-1280.
Turski J. (2010). Robotic Vision with the Conformal Cam-
era: Modeling Perisaccadic Perception, Journal of
Robotics, doi:10.1155/2010/130285, 1-16.
VanRullen R. (2004). A simple translation in cortical log-
coordinates may account for the pattern of saccadic
localization errors, Biological Cybernetics, 91, 131-
137.
Wurtz R. H. (2008). Neural mechanisms of visual stability, Vision Research, 48, 2070-2089.