strokes. For a recent NPR taxonomy and a review
of stroke-based techniques see (Sousa, 2003; Hertz-
mann, 2003). Most rendering techniques are based
on image analysis algorithms from computer vision,
although there is a tendency towards including as-
pects of human vision, for example in constructing
saliency maps (DeCarlo and Santella, 2002). In con-
trast, our own, recently developed technique (du Buf
et al., 2006) is completely based on human vi-
sion. It employs four models of processes in our vi-
sual cortex: (1) colour constancy, (2) coarse back-
ground level construction in brightness perception,
(3) multi-scale representation of lines and edges, and
(4) saliency maps based on multi-scale keypoint de-
tection. The rendering engine (in OpenGL) is be-
ing complemented with an interface that shows, in
one window, available options without a complicated
menu structure. This interface allows new users with no prior experience to become familiar with the tool in about half an hour and to obtain good results right away.
2 THE RENDERING PROCESS
The method has been described in detail in (du Buf
et al., 2006). Image analysis is separated from the ren-
dering process, because the analysis cannot (yet) be done in real time, that is, within the few seconds necessary for fast interactivity, whereas the rendering is much
faster. Image analysis is therefore done by means of
a pre-processing program that the user must apply to
the images to be rendered, which takes a few min-
utes per image. After that, the image file is comple-
mented with files that contain information about the
local image content: detected lines and edges at dif-
ferent scales (level of detail), their positions, orienta-
tions and local contrast. Basically, the output consists
of coordinate lists, which serve to apply object-related
brush strokes where the size of the brush is coupled
to the scale of image analysis. This is called the fore-
ground process. A background process is normally
necessary, because there are often homogeneous image regions (the sky, for instance) where no lines or edges can
be detected. For the latter process two files are pre-
pared: the local contrast for modulating the pressure
of brush strokes, and the local dominant orientation
for steering the strokes.
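The concrete file formats produced by the pre-processing program are not specified here; purely as an illustration, the following Python sketch shows one way the analysis output could be held in memory, with the coordinate lists grouped per scale for the foreground process and the two maps (local contrast and dominant orientation) for the background process. All names, fields and the scale-to-brush coupling factor are assumptions.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

import numpy as np


@dataclass
class LineEdgeEvent:
    """One detected line/edge point at a given analysis scale (assumed layout)."""
    position: Tuple[float, float]   # pixel coordinates
    orientation: float              # local orientation (radians)
    contrast: float                 # local contrast, used to modulate stroke pressure


@dataclass
class AnalysisResult:
    """Hypothetical in-memory view of the pre-processing output."""
    # Coordinate lists grouped per scale; coarse scales map to big brushes,
    # fine scales to small brushes (foreground process).
    events_per_scale: Dict[float, List[List[LineEdgeEvent]]]
    # Maps used by the background process.
    contrast_map: np.ndarray        # modulates brush-stroke pressure
    orientation_map: np.ndarray     # steers the direction of background strokes


def brush_size_for_scale(scale: float, factor: float = 2.0) -> float:
    """Couple the brush size to the scale of image analysis (factor is an assumption)."""
    return factor * scale
```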
The rendering engine starts with the background
process, applying for example random strokes with
a big brush. For each stroke a colour is picked in
the input image, at the stroke’s centre point. After
completion of the background process, the foreground
process applies brush strokes at positions where lines
and edges have been detected, from coarse scales (big
brushes) to small scales (small brushes). Each co-
ordinate list can be rendered as one stroke, but long
lists can be split into smaller ones to obtain discrete
strokes with a pre-defined length. As for the back-
ground process, for each stroke a colour is picked in
the input image. The rendering of back- and fore-
ground strokes is the same: coordinate lists are used
to create triangle lists, which are rendered with the
picked colour in OpenGL. The size of the triangles is
determined by the selected brush size. The brush type
is defined by opacity maps: in the case of “spray” this
map is approximately elliptic, with a gradual decay towards
the edge; in the case of “oil” the opacity maps are
constructed by random combinations of sets of heads,
bodies and tails of oil-painted brush strokes that have
been digitised.
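As a rough, self-contained sketch of the stroke pipeline just described, the Python/NumPy code below splits a coordinate list into discrete strokes of a pre-defined length, picks the colour of the input image at each stroke's centre, expands the path into triangles whose width follows the brush size, and builds a roughly elliptic "spray" opacity map. The function names, the triangle-strip simplification and the ellipse proportions are assumptions made for illustration; this is not the TAP implementation.

```python
import numpy as np


def split_into_strokes(coords: np.ndarray, stroke_len: int) -> list:
    """Split one coordinate list (N x 2) into discrete strokes of a pre-defined length."""
    return [coords[i:i + stroke_len]
            for i in range(0, len(coords), stroke_len)
            if len(coords[i:i + stroke_len]) > 1]


def pick_colour(image: np.ndarray, stroke: np.ndarray) -> np.ndarray:
    """Pick the colour of the input image at the stroke's centre point."""
    cx, cy = stroke[len(stroke) // 2]
    return image[int(round(cy)), int(round(cx))]


def stroke_to_triangle_strip(stroke: np.ndarray, brush_size: float) -> np.ndarray:
    """Expand a stroke path into triangles whose width is set by the brush size.

    A triangle strip (two vertices per path point, offset along the local
    normal) is one simple way to build the triangle lists mentioned above.
    """
    d = np.gradient(stroke.astype(float), axis=0)              # local tangents
    n = np.stack([-d[:, 1], d[:, 0]], axis=1)                  # local normals
    n /= np.linalg.norm(n, axis=1, keepdims=True) + 1e-9
    half = 0.5 * brush_size
    left, right = stroke + half * n, stroke - half * n
    strip = np.empty((2 * len(stroke), 2))
    strip[0::2], strip[1::2] = left, right                     # interleave for a strip
    return strip


def spray_opacity_map(size: int = 64) -> np.ndarray:
    """Roughly elliptic opacity map with a gradual decay towards the edge."""
    y, x = np.mgrid[0:size, 0:size]
    u = (x - size / 2) / (0.5 * size)    # ellipse proportions are assumptions
    v = (y - size / 2) / (0.3 * size)
    return np.clip(1.0 - (u ** 2 + v ** 2), 0.0, 1.0)


# Example use (given an input image and one coordinate list `coords`):
# for s in split_into_strokes(coords, stroke_len=20):
#     colour = pick_colour(image, s)
#     verts = stroke_to_triangle_strip(s, brush_size=12.0)
#     ... hand verts, colour and spray_opacity_map() to the OpenGL renderer
```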
Figure 1 shows an input image (top-left) and
the background process using completely random oil
strokes and a flat brush (1st and 2nd rows). The three
backgrounds shown in the third row were rendered
with randomised vertical and horizontal strokes and
with diagonal crisscrossing. The bottom row shows
foregrounds rendered with flat and round brushes as
well as with spray. Figure 2 (top row) illustrates final
results obtained with changing saturation and bright-
ness; the bottom row shows an input image and the
effect of reducing the colour gamut. Finally, Fig. 3
shows another input image and the use of mixed me-
dia, in this case pen and ink on top of a watercolour.
For more results see also (du Buf et al., 2006).
Our painting algorithms aim at automatic pro-
duction of paintings, in contrast to other solutions
like Gertrudis and the more common mouse/cursor-
controlled drawing and filling of regions in PSP and
GIMP. The Amateur Painter does not allow for in-
teractive painting of regions. Detected lines and
edges are automatically translated into discrete brush
strokes and applied to the “canvas,” i.e. the foreground
process. In regions where no lines and edges have
been detected, a background must be created by the
background process. The user can decide not to paint all foreground strokes, or even not to cover the entire canvas with background strokes. In this case the sur-
face to be painted can be prepared with any colour.
The user can decide to paint the fore- and background
with different palettes and brushes. TAP offers many
possibilities with many parameters. Although the user can work with pre-defined and customised style files, for example a default style (a sort of impressionist oil)
which is loaded at startup, menus and their parame-
ter options reflect the practice of a real painter with
the following logical steps: (a) Select a surface, can-
vas or paper with a certain texture, and prepare the