maneuvers is required. That is why representation of
specific knowledge for ‘maneuvers’ is important
(rounded central red block within the rectangle in
the lower right corner of Figure 1).
3 THREE LEVELS IN PARALLEL
As mentioned previously, the levels discussed
separately above have to be treated in parallel with
continuous feedback between them. Figure 3
sketches the information flow. At the base are
consecutive image evaluation processes independent
of temporal aspects. However, a component for the
generation of object hypotheses has to be available
that interprets collections of features which might
stem from the same real-world object. Initially, such
a hypothesis is kept locally private and tested over
the next few video cycles. Only after the sum of the
squared prediction errors has remained below a
threshold is the hypothesis made public in the
perception system by inserting it into the scene tree,
which represents the relative object states in
homogeneous coordinates. This makes the objects
available to situation level 3. [For more detailed
discussions see (IV’00, 2000), Chap. 13 of
(Dickmanns 2007), and www.dyna-vision.de ].
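The private-testing step described above can be sketched in a few lines. This is a minimal illustration, not the EMS-Vision implementation; the class name, the number of test cycles, and the error threshold are assumptions chosen for the example:

```python
import numpy as np

class ObjectHypothesis:
    """Tentative interpretation of a feature group, kept locally private
    until its prediction errors stay small over several video cycles."""

    def __init__(self, state, n_test_cycles=4, sse_threshold=2.5):
        self.state = np.asarray(state, dtype=float)  # e.g. relative pose
        self.n_test_cycles = n_test_cycles           # probation length
        self.sse_threshold = sse_threshold           # acceptance bound
        self.cycles = 0
        self.confirmed = False                       # ready for scene tree?

    def update(self, predicted_features, measured_features):
        """One video cycle: compare predicted with measured features."""
        err = np.asarray(measured_features) - np.asarray(predicted_features)
        sse = float(err @ err)        # sum of squared prediction errors
        if sse > self.sse_threshold:
            return False              # hypothesis contradicted this cycle
        self.cycles += 1
        if self.cycles >= self.n_test_cycles:
            self.confirmed = True     # may now be made public
        return True
```

Once `confirmed` is set, the hypothesis would be inserted into the scene tree and thereby become visible to situation level 3.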
With the object states given in the scene tree, and
with the actions assumed to be performed by other
subjects, a single-step prediction of the states for
the next measurement instant is computed (text in
red in Figure 3). This allows intelligent control of
top-down feature search (dashed blue arrow). For
objects of special interest, longer-range predictions
may be passed to the top level 3 for extended
situation analysis (green dash-dotted arrow).
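The single-step and longer-range predictions can be illustrated with a linear dynamical model; this is a sketch under assumed values (25 Hz video cycle, a constant-velocity model for one image coordinate), not the models used in the actual system:

```python
import numpy as np

def predict_state(A, B, x, u):
    """Single-step prediction x_{k+1} = A x_k + B u_k for the next
    measurement instant (the prediction step of a recursive estimator,
    covariance omitted for brevity)."""
    return A @ x + B @ u

dt = 0.04                                # 25 Hz video cycle time, assumed
A = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity model
B = np.array([[0.0], [dt]])              # acceleration-like input
x = np.array([100.0, 50.0])              # image position [px], rate [px/s]
u = np.array([0.0])                      # assumed action of the subject

# single-step prediction: center the top-down feature search here
x_pred = predict_state(A, B, x, u)

# longer-range prediction for extended situation analysis at level 3
x_far = x.copy()
for _ in range(10):
    x_far = predict_state(A, B, x_far, u)
```

The predicted state defines where features are searched for in the next image, which is what makes the top-down search "intelligent" rather than exhaustive.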
There may be separate routines for perceiving and
representing environmental conditions that require
evaluation of special features (such as contrast
decreasing with visual range under foggy conditions).
At the situation level (top in Figure 3), all of this
information is evaluated in conjunction, and the
result is communicated to the two sublevels for
control of gaze direction and own locomotion in the
mission context.
4 SUMMARY OF POSITION
Experience in joint use of procedural methods from
‘Control Engineering’ and declarative methods from
‘Artificial Intelligence’ for processing of image
sequences and for scene understanding has led to the
proposal to expand the knowledge base for dynamic
real-time vision and control of actions by a specific
component for ‘maneuvers’: Such a component for
the transition from state S1(t1) to S2(t2) contains,
for each of these mission elements (S1 to S2) in task
domains, the following information:
- the nominal control time histories u(·);
- the dynamical model for generating the nominal
trajectories of the state variables;
- code for generating the coefficients of feedback
control laws for counteracting perturbations;
- conditions under which the maneuver may be
used, with which set of parameters;
- code for evaluating pay-off functions that allow
judging the quality of the maneuver performed.
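A maneuver component of this kind might be bundled as a single data structure. The following sketch only mirrors the list above; all field names, the lane-change example, and the numerical gains are illustrative assumptions, not the authors' actual representation:

```python
import numpy as np
from dataclasses import dataclass
from typing import Callable

@dataclass
class Maneuver:
    """Knowledge component for a maneuver from state S1(t1) to S2(t2).
    Field names are illustrative, not taken from the original system."""
    name: str
    nominal_control: Callable[[float], np.ndarray]            # u(t), feed-forward
    dynamics: Callable[[np.ndarray, np.ndarray], np.ndarray]  # x' = f(x, u)
    feedback_gains: Callable[[np.ndarray], np.ndarray]        # K, perturbation rejection
    applicable: Callable[[dict], bool]                        # conditions of use
    payoff: Callable[[np.ndarray, np.ndarray], float]         # quality of execution

    def control(self, t, x, x_nominal):
        """Feed-forward plus state feedback: u = u_nom(t) - K (x - x_nom)."""
        K = self.feedback_gains(x)
        return self.nominal_control(t) - K @ (x - x_nominal)

# illustrative instance: a lane change with assumed gains and conditions
lane_change = Maneuver(
    name="lane_change",
    nominal_control=lambda t: np.array([0.1]),          # lateral acceleration
    dynamics=lambda x, u: np.array([x[1], u[0]]),       # double integrator
    feedback_gains=lambda x: np.array([[0.5, 0.2]]),    # assumed gains
    applicable=lambda sit: sit.get("adjacent_lane_free", False),
    payoff=lambda x_final, x_target: -float(np.sum((x_final - x_target) ** 2)),
)
```

The `control` method shows how the nominal time history and the feedback law cooperate: the stored u(·) drives the nominal transition, while the gains counteract perturbations around the nominal trajectory.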
This process-oriented approach geared to the control
variables of dynamical systems is more efficient
than centering on state variables.
REFERENCES
Christensen H. I., Nagel H.-H. (eds.), 2006. Cognitive
Vision Systems – Sampling the Spectrum of
Approaches. Springer (367 pages).
Dickmanns, E.D., 2007. Dynamic Vision for Perception
and Control of Motion. Springer (474 pages).
Dickmanns, E.D., 2015. BarvEye: Bifocal active gaze
control for autonomous driving. (this volume).
Dickmanns, E.D., Graefe, V., 1988. a) Dynamic
monocular machine vision. Machine Vision and
Applications, Springer International, Vol. 1, pp 223-240.
b) Applications of dynamic monocular machine
vision. pp 241-261.
Gallese V., Goldman A. 1998. Mirror Neurons and the
Simulation Theory of Mind-reading. Trends in Cogn.
Sci.2, pp 493-501.
IV’00, 2000. Proc. Internat. Symp. on Intelligent Vehicles,
Dearborn (MI), with six contributions to Expectation-
based, Multi-focal, Saccadic (EMS-) vision:
1. Gregor R. et al.: EMS-Vision: A Perceptual System for
Autonomous Vehicles.
2. Gregor R., Dickmanns E.D.: EMS-Vision: Mission
Performance on Road Networks.
3. Hofmann U.; Rieder A., Dickmanns, E.D.: EMS-
Vision: Applic. to ‘Hybrid Adaptive Cruise Control’.
4. Luetzeler M., Dickmanns E.D.: EMS-Vision: Recog-
nition of Intersections on Unmarked Road Networks.
5. Pellkofer M., Dickmanns E.D.: EMS-Vision: Gaze
Control in Autonomous Vehicles.
6. Siedersberger K.-H., Dickmanns E.D.: EMS-Vision:
Enhanced Abilities for Locomotion.
Kalman, R. E. 1960. A new approach to linear filtering
and prediction problems. Trans. ASME, Series D,
Journal of Basic Engineering, pp 35–45.
Kiverstein J.D., 2005. Naturalism and Phenomenology.
Diss. Univ. Edinburgh.
Leontyev A. N. 2009. The Development of Mind.
VISAPP 2015 - International Conference on Computer Vision Theory and Applications