Figure 1: An autostereoscopic display can be realized using a parallax barrier. The barrier is located between the eyes (visualized in blue and red) and the pixel array of the display. It blocks certain pixels for each eye, so that the two eyes see disjoint sets of pixel columns (at least in an optimal setting). If the display is fed with correct image data, the user sees a stereo image.
array is placed in front of a display screen. If the observer's eyes remain fixed at a particular location in space, then one eye can see only the even display pixels through the grating or lens array, and the other eye can see only the odd display pixels (see Figure 1).
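To make the even/odd pixel assignment concrete, here is a minimal Python sketch (added for illustration, not code from the paper; the function name and the convention of which eye receives the even columns are assumptions):

import numpy as np

def interleave_stereo(left, right):
    # Column-interleave a stereo pair for a parallax-barrier display.
    # Assumes `left` and `right` are H x W x 3 arrays of equal shape and
    # that the barrier shows even pixel columns to one eye and odd
    # columns to the other.
    assert left.shape == right.shape
    out = np.empty_like(left)
    out[:, 0::2] = left[:, 0::2]   # even columns: e.g. the left eye
    out[:, 1::2] = right[:, 1::2]  # odd columns: the other eye
    return out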
Being limited to a single user is not a problem in this scenario, since there is only one user (the driver) anyway. The other major drawback of this approach, namely that the observer must remain in a fixed position, can be lifted by virtually adjusting the pixel columns such that the separation remains intact, as presented in (Sandin et al., 2001; Peterka et al., 2007). For this to work, some kind of user tracking is needed, which also limits the number of possible users.
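As a rough sketch of this idea (hypothetical and added here for illustration; the equal-width zone model and all names are assumptions, not the actual method of Sandin et al. or Peterka et al.), the tracked head position can determine which interleaving parity keeps the separation intact, reusing the interleave_stereo helper from above:

def interleaving_parity(head_x, zone_width):
    # Hypothetical model: the lateral viewing zones repeat with a fixed
    # width. Each time the tracked head crosses a zone boundary, the
    # even/odd column assignment must be swapped so that each eye keeps
    # seeing its own set of pixel columns.
    return int(head_x // zone_width) % 2

def render_tracked(left, right, head_x, zone_width):
    # Swap the stereo pair whenever the parity flips.
    if interleaving_parity(head_x, zone_width) == 0:
        return interleave_stereo(left, right)
    return interleave_stereo(right, left)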
Since the user's possible positions are very limited in this scenario, there is no need to adjust for user movement other than head rotation. In particular, there is no need to adjust for large distance variations, for example by using a dynamic parallax barrier (Perlin et al., 2000).
Most consumer products containing autostereoscopic displays, however, simply combine parallax barriers with lenticular lenses, which places no constraints on the number of simultaneous users. On the other hand, this approach is relatively restrictive concerning the possible viewing position(s).
The one remaining drawback of this technique is that the resolution drops to one half, even for a single user. In the presented scenario, this is not a critical point since the available resolution is quite high.
2.2 Numerical Analysis
An optimization problem can be represented in the following way: for a function f mapping elements from a set A to the real numbers, an element x_0 ∈ A is sought such that

∀x ∈ A : f(x_0) ≤ f(x). (1)
Such a formulation is called a minimization problem and the element x_0 is called a global minimum. Depending on the field of application, f is called an objective function, cost function, or energy function. A feasible solution that minimizes the objective function is called the optimal solution.
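As a concrete illustration (an example added here, not taken from the original text), take the quadratic f(x) = x^2 on a feasible set defined by a single inequality constraint:

\[
f(x) = x^2, \qquad A = \{\, x \in \mathbb{R} : x \ge 1 \,\}, \qquad x_0 = 1,
\]

since f(1) = 1 ≤ x^2 = f(x) holds for every feasible x. Note that the unconstrained minimizer x = 0 is not in A; the constraint x ≥ 1 is active at the optimal solution.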
Typically, A is some subset of the Euclidean space R^n, often specified by a set of constraints (equalities or inequalities) that the elements of A have to fulfill. Generally, a function f may have several local minima, where a local minimum x* satisfies the expression f(x*) ≤ f(x) for all x ∈ A in a neighborhood of x*. In other words, in some region around x*, all function values are greater than or equal to the value at x*.
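For example (an illustration added here, not part of the original text), the univariate polynomial

\[
f(x) = x^4 - 2x^2 + \tfrac{1}{2}x, \qquad A = \mathbb{R},
\]

has a local minimum near x* ≈ 0.93 with f(x*) ≈ −0.52 and the global minimum near x_0 ≈ −1.06 with f(x_0) ≈ −1.51. A purely local descent started at a positive x will typically return only the local minimum.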
The occurrence of multiple extrema makes problem solving in (nonlinear) optimization very hard. Usually, the global (best) minimizer is difficult to identify because in most cases our knowledge of the objective function is only local, and global information is not available. Since there is no easy algebraic characterization of global optimality, global optimization is a difficult area, especially in higher dimensions.
Further explanations on global optimization can be found in "Numerical Methods" (Boehm and Prautzsch, 1993), "Numerical Optimization" (Nocedal and Wright, 1999), "Introduction to Applied Optimization" (Diwekar, 2003), "Compact Numerical Methods for Computers: Linear Algebra and Function Minimisation" (Nash, 1990), as well as in "Numerische Methoden der Analysis" (English: Numerical Methods of Analysis) (Höllig et al., 2010).
Besides these introductions and overviews, some books emphasize practical aspects, e.g., "Practical Optimization" (Gill et al., 1982), "Practical Methods of Optimization" (Fletcher, 2000), and "Global Optimization: Software, Test Problems, and Applications" (Pinter, 2002).
All optimization algorithms can be classified into gradient-based methods, which use the objective function's derivatives, and non-gradient-based methods, which do not rely on derivatives. As most gradient-based methods optimize locally, they most likely find local minima of a nonlinear function but not its global minimum. To find a global minimum, local optimization algorithms can be combined with genetic algorithms (Janikow and Michalewicz, 1990; Michalewicz, 1995; Michalewicz and Schoenauer, 1996), which have good global search characteristics (Okamoto et al., 1998).
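To illustrate this combination, the following sketch (added here; not code from the paper) contrasts a local, gradient-based optimizer with a derivative-free evolutionary method on the multimodal Rastrigin test function, using SciPy:

import numpy as np
from scipy.optimize import minimize, differential_evolution

def rastrigin(x):
    # Many regularly spaced local minima; global minimum f(0,...,0) = 0.
    x = np.asarray(x)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

# Local, gradient-based search (BFGS): from a poor starting point it
# typically converges to the nearest local minimum only.
local = minimize(rastrigin, x0=[3.2, -2.8], method="BFGS")

# Global, derivative-free search (differential evolution, an
# evolutionary method in the spirit of the genetic algorithms cited
# above), restricted to the box [-5.12, 5.12]^2.
glob = differential_evolution(rastrigin, [(-5.12, 5.12)] * 2, seed=0)

print("local :", local.x, local.fun)   # stuck near the start point
print("global:", glob.x, glob.fun)     # near the origin, f close to 0

Differential evolution merely stands in for the genetic algorithms cited above; a common pattern is to let such a global method locate a promising region and then refine the result with a local, gradient-based optimizer.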
These combinatorial methods, such as simulated annealing (Ingber, 1993), improve the search strategy through the introduction of two tricks. The first is the so-called "Metropolis algorithm" (Metropolis et al., 1953), in which some iterations that do not lower the objective function are accepted in order to "explore" more of the possible space of solutions. The second trick limits these ex-