surfaces. The horizontal lines of the pyramid can also
be detected, perhaps with a residual disparity compo-
nent, but only if the exact geometry of the projections
in our eyes is taken into account (Read and Cumming,
2007). The spikes and curved parts at the line ends in Fig. 5 arise because horizontal lines also fall within the vertically aligned receptive fields, but only in the left or right half of those fields. These “half lines” produce different responses on the left and right sides of the pyramid, because they lie on the positive and negative halves of the sinusoidal component of the Gabor filters, thereby distorting the phase at and near the corners of the squares. This complication can be explained theoretically, but it cannot be resolved without assuming more advanced processing at a higher level than area V1. Hence, we cannot expect all problems of phase differencing to be solved at a very low level of the visual system.
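To make the phase-differencing step concrete, the following minimal 1-D sketch estimates disparity from the phase difference of quadrature Gabor responses, in the spirit of Sanger (1988) and Fleet et al. (1991). The function names, filter parameters and wrapping step are illustrative assumptions, not the model developed in this paper.

import numpy as np

def gabor_response(signal, x, wavelength, sigma):
    # Complex (quadrature) Gabor response of a 1-D signal at position x.
    n = np.arange(len(signal))
    omega = 2.0 * np.pi / wavelength
    envelope = np.exp(-((n - x) ** 2) / (2.0 * sigma ** 2))
    carrier = np.exp(1j * omega * (n - x))
    return np.sum(signal * envelope * carrier)

def phase_difference_disparity(left, right, x, wavelength=16.0, sigma=8.0):
    # Classic estimate d(x) ~ (phi_L - phi_R) / omega, with omega the
    # filter's centre frequency. Near line ends and corners the phase is
    # distorted by the "half line" effect discussed above, so the
    # estimate there becomes unreliable.
    omega = 2.0 * np.pi / wavelength
    phi_l = np.angle(gabor_response(left, x, wavelength, sigma))
    phi_r = np.angle(gabor_response(right, x, wavelength, sigma))
    dphi = np.angle(np.exp(1j * (phi_l - phi_r)))  # wrap to (-pi, pi]
    return dphi / omega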
The virtual 3D wireframe representation captures
all structural information of the pyramid, but no sur-
faces, unless the surfaces are textured. So how does
our visual system manage to create continuous sur-
faces when they are not textured? This question is
related to local feature and depth integration at a low
level, as addressed here, but also to learned and global
object interpretation at a high level, likely the result
of experience in combining visual with haptic (tactile)
information in early childhood. Furthermore, there
may exist some “filling-in” processes, for example to
“hide” the blind spots of the retinae, but these occur
at a very high level (O’Regan, 1998).
The circuitry necessary for cyclopean vision is very limited. First, there are circuits that detect events on the basis of simple and complex cells,
in both the left and right views (Rodrigues and du Buf,
2009). Second, a level of gating cells with symmet-
ric dendritic fields analyses local neighbourhoods: at
the furthest points (the two longest dendrites with a
length corresponding to the valid disparity range di-
vided by two) they receive excitatory input if iden-
tical events are detected there; the other dendrites in
between receive inhibitory input if asymmetric events
are detected. A gating cell only passes the output of
a third cell complex, which extracts the phases from
the simple cells, their derivatives, and the phase dif-
ference. The gating cell complex also codes the type
of symmetrically detected events at position x = 0 for
obtaining a cyclopean representation. As a result, dis-
parity is attributed to detected lines and edges with
one, “centralised” view, in a way similar to that used for modelling solid objects in computer graphics: the wireframe representation. As mentioned before, it does not make sense to reconstruct 3D objects with all their surfaces at an early stage in vision, because our
visual system applies a hierarchical processing strat-
egy and the goal is to obtain a symbolic, semantic rep-
resentation.
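As one interpretation of the gating circuit described above, the sketch below expresses its logic in plain Python: a hypothetical gating cell is excited when identical events are detected by the two furthest dendrites (at half the valid disparity range on either side of the cyclopean position), inhibited by asymmetric events in between, and only then passes on the phase-based disparity together with the event type. The Event class, the dictionaries and the function signature are illustrative assumptions, not the actual neural model.

from dataclasses import dataclass

@dataclass
class Event:
    kind: str     # e.g. "positive line", "negative edge" (illustrative labels)
    phase: float  # local phase extracted from the simple cells

def gating_cell(events_left, events_right, x, disparity_range, phase_disparity):
    # events_left / events_right: dicts mapping position -> Event for the
    # events detected in the left and right views.
    half = disparity_range // 2
    far_left = events_left.get(x - half)
    far_right = events_right.get(x + half)

    # Excitation: identical events at the two longest (furthest) dendrites.
    excited = (far_left is not None and far_right is not None
               and far_left.kind == far_right.kind)

    # Inhibition: asymmetric events on the dendrites in between,
    # i.e. an event present in only one of the two views.
    inhibited = any(
        (events_left.get(x - d) is None) != (events_right.get(x + d) is None)
        for d in range(1, half)
    )

    # Only a symmetric, uncontradicted match lets the disparity signal
    # through, tagged with the event type at the cyclopean position.
    if excited and not inhibited:
        return {"position": x, "type": far_left.kind, "disparity": phase_disparity}
    return None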
ACKNOWLEDGEMENTS
This work was supported by the Portuguese Founda-
tion for Science and Technology (pluri-annual fund-
ing of LARSyS) and EU project NeuralDynamics
FP7-ICT-2009-6, PN: 270247.
REFERENCES
du Buf, J. (1993). Responses of simple cells: events, inter-
ferences, and ambiguities. Biol. Cybern., 68:321–333.
du Buf, J. (2007). Improved grating and bar cell models
in cortical area V1 and texture coding. Image Vision
Comput., 25(6):873–882.
Fleet, D. and Jepson, A. (1990). Computation of component
image velocity from local phase information. Int. J.
Comput. Vision, 5(1):77–104.
Fleet, D., Jepson, A., and Jenkin, M. (1991). Phase-based
disparity measurement. CVGIP: Image Understand-
ing, 53(2):198–210.
Fröhlinghaus, T. and Buhmann, J. (1996). Regularizing
phase-based stereo. In Proc. of ICPR, pages 451–455.
Jenkin, M. and Jepson, A. (1988). The measurement of
binocular disparity. Computational Processes in Hu-
man Vision: An interdisciplinary perspective, ed. Z.
Pylyshyn, Ablex Press, Norwood, NJ, pages 69–98.
Kruizinga, P. and Petkov, N. (2000). Computational
model of dot-pattern selective cells. Biol. Cybern.,
83(4):313–325.
Monaco, J., Bovik, A., and Cormack, L. (2008). Nonlinear-
ities in stereoscopic phase differencing. IEEE Trans.
on Image Processing, 17(9):1672–1684.
O’Regan, J. K. (1998). No evidence for neural filling-in -
vision as an illusion - pinning down “enaction”. Be-
havioral and Brain Sciences, 21(6):767–768.
Pauwels, K., Tomasi, M., Diaz, J., Ros, E., and Van Hulle, M.
(2012). A comparison of FPGA and GPU for real-
time phase-based optical flow, stereo, and local image
features. IEEE Trans. on Computers, 61:999–1012.
Read, J. and Cumming, B. (2007). Sensors for impossible
stimuli may solve the stereo correspondence problem.
Nature Neuroscience, 10(10):1322–1328.
Rodrigues, J. and du Buf, J. (2009). Multi-scale lines and
edges in V1 and beyond: brightness, object catego-
rization and recognition, and consciousness. BioSys-
tems, 95:206–226.
Sanger, T. (1988). Stereo disparity computation using Gabor filters. Biol. Cybern., 59(6):405–418.
Solari, F., Sabatini, S., and Bisio, G. (2001). Fast technique
for phase-based disparity estimation with no explicit
calculation of phase. Electronics Letters, 37(23):1382–1383.