IMAGE LIGHTING EFFECT MANIPULATION
FOR AN EFFICIENT STYLIZATION
Catherine Sauvaget and Vincent Boyer
L.I.A.S.D., Paris 8 University, 2 Rue de la Liberté, 93526 Saint-Denis, France
Keywords:
Non-photorealistic Computer Graphics, Stylization, Lighting.
Abstract:
We propose a model to stylize images through the manipulation and stylization of different lighting effects. Given
an input image, we generate semi-automatically a map containing each effect identified by the user. Then, we
propose different distortions and colorings to obtain artistic stylizations. Our model is flexible and well suited to
help users who wish to stylize their images according to light effects. This model is intended for an image editing
tool dedicated to comics stylization.
1 INTRODUCTION
The purpose of images is to communicate a message
that can be emotional or informational. Artists use
lighting effects to enhance the desired information
and to create atmospheres bringing viewers to a desired psychological state.
Light has always been used in art to enhance specific parts of images. Light effects have been described by
J.M. Parramón (Parramón, 1987) and by G.M. Roig (Roig, 2010). They depend on the light source, whose
transmission is direct or diffuse. Shadows can be shadows of objects (shades) or projected shadows representing
the shapes of objects (drop shadows). The light can also produce dazzling effects due to the reflective properties
of the surface. As most viewers focus on the colors used in a pictorial realization, one can imagine that lighting
effects are limited to artistic movement techniques. Classical examples are: Chiaroscuro, consisting of violent
contrasts between light and shadow to attract the viewer's eye to a specific part of the painting; Impressionism,
emphasizing the light: shadows are represented by saturated colors and a smooth diminution of light while
illuminated objects are in pastel colors; Comics, using various styles to depict shadows: complementary colors,
which are opposed on the chromatic hue wheel (Itten, 1961), hatching (Duc, 1983) and black flat areas, which are
very common in American comics (McCloud, 1994). Dazzling effects are often represented in comics as white
areas with edges to enhance the contrast (Duc, 1983). But for all these artistic movements, the position, size,
orientation, and even the presence or absence of each lighting effect, such as drop shadows, are not obvious.
Except for the hyperrealistic movement, where lighting effects are reproduced with high realism, every other
pictorial movement proposes at most a plausible representation. As an example, in "The Last Supper" by Leonardo
da Vinci, the lighting is plausible but physically unrealistic: where are the drop shadows? The feet seem to be lit.
Even more, some comics vignettes contain strong exaggerations. Artists sometimes willingly add shadows to
dramatize the image or, on the contrary, remove some shadows to stylize the image and avoid an overloaded
scene. Before considering any stylization of lighting effects, an artist has to decide on their existence and
representation (shape, position...).
We provide a solution able to stylize lighting ef-
fects but also able to help the user to manipulate
them. Starting from a 2D input image, we pro-
duce semi-automatically a map containing different
kinds of lighting effects. Each effect can be shifted,
turned and distorted. Based on an artistic analysis
and this map, we propose deformations and six styl-
izations involving well-known artistic styles ranging
from chiaroscuro to comics. In the following, we
present previous work, and then our lighting effects
representation map. We detail our different stages:
distortion and stylization. Finally, our results are given and commented on.
2 PREVIOUS WORK
(Ibrahim and Anupama, 2005) and (Cavallaro et al., 2004) detect shadows in videos or image sequences.
Based on retinex theory, (Sun et al., 2008) detect and remove shadows from a single image. The work of
Ortiz (Ortiz, 2007) permits the removal of dazzling effects from photographs. Unfortunately, these detections
have a high computing cost and do not permit distinguishing the different kinds of lighting effects produced
by light.
Some research proposes to distort images or specific objects in a scene. Carroll et al. (Carroll et al.,
2010) have proposed to change the perspective of an image using vanishing points and lines that can be
modified by the user. This work is interesting but does not permit changing the appearance of a specific
object in the image. Tobita (Tobita, 2010) has presented a non-automatic solution dedicated to exaggerations
like the ones we can find in mangas. Two main deformations are introduced: on the background (blur ...)
and on persons, whose position and size can be modified. They are mainly based on fish-eye deformations,
bending sharply anything that does not pass through the center of the circle.
Stylizations of lighting effects have also been studied. Image-based methods have been proposed
to display soft shadows from 3D models (Agrawala
et al., 2000). Praun et al. (Praun et al., 2001) pro-
posed a system that creates hatching strokes in 3D
scenes. This kind of drawing conveys lighting and
properties of the material but we do not have the same
information in 2D. Only Sauvaget et al. (Sauvaget and
Boyer, 2010) have proposed an image-based styliza-
tion model based on a shadow map and in which six
stylizations can be combined. In that paper, they do not propose to shift or distort the lighting effects. We
use these stylizations in our lighting effect manipulation model.
3 OUR MODEL
We present a new approach to manipulate and stylize
the different lighting effects of an image. Our model
permits re-lighting the previous shadow locations that have been manipulated. We extend the model proposed
by Sauvaget et al. We briefly present this model to guide the reader and only detail our contribution
hereafter.
3.1 Map Creation
We represent lighting effects with a map which has the size of the image and where each kind of lighting
effect is represented by a color. Note that we allow the user to refine it manually. The map is created in
two steps: detection and refinement.
3.1.1 Detection
A shadow is a decrease of the light intensity. We use
the HLS model with the L1 norm, which is well suited for shadow detection (Angulo, 2004). As described in
Sauvaget et al., we consider the maximal and median values of lightness. We compute the global threshold G,
with I the number of intensity levels (see formula 1):
G = (max − med) × med / I    (1)
Figure 1: Threshold detection examples (lightness axis from 0 to 255).
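For instance, assuming I = 255 intensity levels (the exact value is not stated in the text), a lightness
distribution with max = 250 and med = 150 yields G = (250 − 150) × 150 / 255 ≈ 58.8, which corresponds to one
of the examples of figure 1.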
With this method, only the pixels whose lightness is smaller than the threshold are considered.
The smaller the distance between max and med (in red in figure 1), the smaller the considered area,
which avoids taking the highest lightness values into account as shadow. This area is weighted by
the median value, which directly influences the size of the new area (in blue in figure 1). The smaller
med is, the more shadow pixels are grouped and the greater the reduction of the threshold needs to be.
For now, these pixels are considered as shadow without distinction of kind; the other pixels are
considered as lit parts of the image. Some results of this detection are shown in section 4. As we
propose to distinguish hard and soft shadows, we choose either to binarize the map for hard shadows
(black/white) or to preserve the lightness values of the original image for soft shadows.
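To make the detection step concrete, the following is a minimal sketch of the thresholding and map creation
described above, assuming an OpenCV/NumPy pipeline in which lightness is read from an HLS conversion; the
function name and the default number of intensity levels are illustrative assumptions, not taken from the paper.

import cv2
import numpy as np

def lighting_map(image_bgr, hard=True, intensity_levels=255):
    """Global-threshold shadow detection sketch (formula 1).

    Pixels whose lightness is below G = (max - med) * med / I are marked
    as shadow; I is the number of intensity levels (assumed here).
    """
    hls = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HLS)
    lightness = hls[:, :, 1].astype(np.float32)      # L channel, 0..255

    l_max = float(lightness.max())
    l_med = float(np.median(lightness))
    G = (l_max - l_med) * l_med / intensity_levels   # global threshold

    shadow = lightness < G
    if hard:
        # hard shadows: binary black/white map
        return np.where(shadow, 0, 255).astype(np.uint8)
    # soft shadows: keep the original lightness inside shadow regions
    soft = np.full(lightness.shape, 255, dtype=np.uint8)
    soft[shadow] = lightness[shadow].astype(np.uint8)
    return soft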
3.1.2 Refinement
As in Sauvaget et al., to distinguish shades and drop
shadows, the user is invited to keep the previous black
color (or grey level shading) for the shades and to put
the drop shadows in blue. If some pixels are black in the original image, our method cannot distinguish
them from shadows.
In such a case, the user can refine the map by adding them to the lit parts (in white). To specify the dazzling
effects in the map, a red color is used.
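For reference, the map conventions above can be written as a handful of color constants (a sketch; the constant
names are illustrative, only the colors come from the text):

# Map color conventions (R, G, B); the constant names are illustrative.
LIT         = (255, 255, 255)  # lit parts: white
SHADE       = (0, 0, 0)        # shades: black (or grey levels for soft shadows)
DROP_SHADOW = (0, 0, 255)      # drop shadows: blue (user refinement)
DAZZLING    = (255, 0, 0)      # dazzling effects: red (user refinement)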
3.2 Manipulation
Once the map is created, the user is allowed to modify
the place and the shape of lighting effects. With such
modifications, the image is changed and ”empty” ar-
eas may appear due to the displacement of an effect.
We present our displacement and deformation stage,
then we explain how we fill these empty areas.
3.2.1 Displacement and Deformation
A bounding box is created during the selection of a
region defining a lighting effect in the map. Its size is (x_max − x_min) × (y_max − y_min). Only the part of the image
corresponding to the selected lighting effect is kept in
the quad. The rest of the quad is transparent (see fig-
ure 2). When placed on the image, the quad may un-
dergo classical transformations in three dimensions.
Moreover, artists sometimes remove shadows to sim-
plify the image. Our model allows this by removing
the selection.
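As an illustration, the following sketch shows how a selected effect could be cut out into a transparent quad
and given a classical transformation, assuming the same OpenCV/NumPy pipeline; only a 2D rotation/scale/translation
is shown, whereas the system also allows 3D transformations of the quad.

import cv2
import numpy as np

def extract_effect_quad(image_bgr, effect_mask):
    """Cut out the pixels of one lighting effect as a transparent quad.

    effect_mask is a boolean map of the selected effect; the quad is its
    bounding box of size (x_max - x_min) x (y_max - y_min).
    """
    ys, xs = np.nonzero(effect_mask)
    x_min, x_max = xs.min(), xs.max() + 1
    y_min, y_max = ys.min(), ys.max() + 1

    quad = np.zeros((y_max - y_min, x_max - x_min, 4), dtype=np.uint8)
    quad[..., :3] = image_bgr[y_min:y_max, x_min:x_max]
    # alpha channel: opaque only where the effect is present
    quad[..., 3] = np.where(effect_mask[y_min:y_max, x_min:x_max], 255, 0)
    return quad, (x_min, y_min)

def transform_quad(quad, angle_deg=15.0, scale=1.0, shift=(0, 0)):
    """Apply a classical rotation/scale/translation to the quad."""
    h, w = quad.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, scale)
    M[:, 2] += shift  # translation in pixels
    return cv2.warpAffine(quad, M, (w, h), flags=cv2.INTER_LINEAR,
                          borderMode=cv2.BORDER_CONSTANT, borderValue=0)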
The second image of figure 2 presents our map with user refinements for drop shadows. The bounding box
(in green) encompassing the drop shadow represents the selection. The previous location of the effect must
be filled as if it were a lit part.
Note that the deformations proposed in figure 3 are indeed possible with our system. Starting with the
left image, automatic transformations can produce the center image. For the image on the right, the user
would have to refine the map since the polygon has to be deformed.
Figure 2: Geopoliticus Child Watching the Birth of the
New Man by Salvador Dali (1934); map with the selected
shadow (quad in red); displacement and rotation on x.
Figure 3: Example of a drop shadow on a wall; possible
transformation; possible transformation with refinement.
3.2.2 Space Filling
We propose two methods to transform the previous location of the effect into a lit part of the image. The
first method consists of a comparison between the pixels surrounding the effect and the pixels of the effect.
We search for the surrounding pixel whose hue and saturation are closest to the mean hue and saturation of
the effect pixels. Once the best pixel is found, we apply its lightness to the selection shape. Since this
lightness is applied everywhere in the selection, a flat region is likely to be created: the volume previously
visible in the shadow no longer exists in this new lit part. This method is well adapted to flat colored
textures, but not when shading exists. To preserve the volume, we propose a second method that computes,
in the input image, the difference between lit parts and shadow parts according to our map. This difference
is added to each pixel of our selection. With this method, the volume of the shape is preserved.
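A minimal sketch of both filling strategies, assuming an HLS working space and an OpenCV/NumPy pipeline, is given
below; the paper does not provide an implementation, and the lit/shadow difference of the second method is
simplified here to a mean lightness difference.

import cv2
import numpy as np

def fill_previous_location(image_bgr, effect_mask, lit_mask):
    """Re-light the vacated shadow area with the two strategies above.

    effect_mask marks the previous location of the effect and lit_mask
    marks lit pixels; both are boolean maps derived from our lighting map.
    """
    hls = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HLS).astype(np.float32)
    h, l, s = hls[..., 0], hls[..., 1], hls[..., 2]

    # Method 1: copy the lightness of the surrounding pixel whose hue and
    # saturation are closest to the mean hue/saturation of the effect.
    kernel = np.ones((3, 3), np.uint8)
    dilated = cv2.dilate(effect_mask.astype(np.uint8), kernel).astype(bool)
    ring = dilated & ~effect_mask                    # surrounding pixels
    mean_h, mean_s = h[effect_mask].mean(), s[effect_mask].mean()
    dist = (h[ring] - mean_h) ** 2 + (s[ring] - mean_s) ** 2
    l_flat = l.copy()
    l_flat[effect_mask] = l[ring][np.argmin(dist)]   # flat re-lit region

    # Method 2: add the lit/shadow lightness difference to every pixel of
    # the selection, so the relief inside the old shadow is preserved.
    delta = l[lit_mask].mean() - l[effect_mask].mean()
    l_volume = l.copy()
    l_volume[effect_mask] = np.clip(l[effect_mask] + delta, 0, 255)

    results = []
    for new_l in (l_flat, l_volume):
        out = np.stack([h, new_l, s], axis=-1).astype(np.uint8)
        results.append(cv2.cvtColor(out, cv2.COLOR_HLS2BGR))
    return results                                   # [method 1, method 2]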
Note that these methods cannot guarantee that an area that is too desaturated or too dark is rebuilt
correctly, because we only re-light and do not change the saturation. Figure 4 illustrates this problem.
On the left, the original image has a yellow background HLS (60, 90, 100). The second image corresponds to
the map, and we select the top-left shadow, which is a dark yellow. The last images show the result of each
method. With the first method (left image), the closest value is the yellow background color (L = 90) and
this value is given to our selection. We obtain a color close to white, HLS (60, 90, 18), because our
selection has a low saturation. With the second method, adding the difference between lit parts and shadow
parts gives the following color: HLS (60, 83, 18). Neither of these methods provides a suitable solution.
The extreme case is the black color HLS (0, 0, 0), where the hue is defined as red (hue 0) but is meaningless.
Figure 4: Original; map; first method result; second method
result.
3.2.3 Style Propositions
We propose six different stylizations following
Sauvaget et al. (Sauvaget and Boyer, 2010):
chiaroscuro, impressionism, complementary color,
hatching, black flat areas and dazzling effects. The
user chooses to apply a stylization to a specific component (effect) of the map.
4 RESULTS
We present some results obtained with our model. All
of these images have been produced in real-time on a
Pentium 2.5 GHz with 3 GB of memory.
Figure 5 presents an original image from Le Pixx,
the map and the lighting effect transformation results
on the drop shadow of the chair. The result of the closest-pixel method (first filling method) used to fill
in the previous location is shown.
Figure 6 shows a result mixing our lighting effect stylization model and the comics stylization model
(Sauvaget and Boyer, 2008).
An assessment protocol was carried out on our results. Ten people (novices, computer graphics experts and
illustrators) evaluated fifty images (interior and exterior scenes, illustrations...).
We wanted to know whether users succeeded in creating the stylization they desired with the existing
possibilities of our tool. All of them found it intuitive to shift, rotate and distort the lighting effects.
However, 90% of them felt disturbed when some of our lighting stylizations were not combined with the comics
stylization model of Sauvaget et al., which provides global coherence in the stylization of the image.
Artists felt limited by the current number of possible shadow stylizations, but they appreciated the mix
between the atmosphere and the light effects (see figure 6).
Figure 5: Original; map; result.
Figure 6: Original; map; result mixing our model and
Sauvaget et al. comics stylization one.
5 CONCLUSIONS
We have proposed a model to manipulate and stylize
lighting effects for 2D images. The principal limitation of the detection method is that dark objects are
detected as shadows. Our model permits a visual and semantic
distinction between the lighting effects. It is flexible
and allows different stylizations on different lighting
effects.
In future work, we will improve our model by
adding more stylizations and colored light effects. We
plan to add existing effects like hatching using gradi-
ents. We also plan to consider coupling our approach
with the depth map produced by (Sauvaget and Boyer,
2008) to enhance the contrast between the different
kinds of lighting.
REFERENCES
Agrawala, M., Ramamoorthi, R., Heirich, A., and Moll, L.
(2000). Efficient image-based methods for rendering
soft shadows.
Angulo, J. and Serra, J. (2004). Traitements des images de couleur en représentation
luminance/saturation/teinte par norme L1. In Traitement du Signal, pages 583–604.
Carroll, R., Agarwala, A., and Agrawala, M. (2010). Im-
age warps for artistic perspective manipulation. ACM
Trans. Graph., 29:127:1–127:9.
Cavallaro, A., Salvador, E., and Ebrahimi, T. (2004). Detecting shadows in image sequences. In European
Conference on Visual Media Production, pages 165–174.
Duc, B. (1983). L'art de la BD. Glénat.
Ibrahim, M. and Anupama, R. (2005). Scene adaptive
shadow detection algorithm. In Proceedings Of World
Academy Of Science, Engineering and Technology,
pages 1307–6884.
Itten, J. (1961). Kunst der Farbe. Ravensburg: Otto Maier
Verlag.
McCloud, S. (1994). Understanding Comics: The Invisible Art. Harper Paperbacks.
Ortiz, F. (2007). Real-time elimination of brightness in
color images by ms diagram and mathematical mor-
phology. In Proceedings of the 12th international
conference on Computer analysis of images and pat-
terns, CAIP’07, pages 458–465, Berlin, Heidelberg.
Springer-Verlag.
Parramón, J. (1987). Ombres et lumières dans le dessin et la peinture. Bordas.
Praun, E., Hoppe, H., Webb, M., and Finkelstein, A. (2001).
Real-time hatching. In Proceedings of the 28th an-
nual conference on Computer graphics and interac-
tive techniques, SIGGRAPH ’01, pages 581–, New
York, NY, USA. ACM.
Roig, G. (2010). Peindre la lumière. Guides Oskar.
Sauvaget, C. and Boyer, V. (2008). Comics stylization from
photographs. In Proceedings of the 4th International
Symposium on Advances in Visual Computing, ISVC
’08, pages 1125–1134, Berlin, Heidelberg. Springer-
Verlag.
Sauvaget, C. and Boyer, V. (2010). Stylization of light-
ing effects for images. Signal-Image Technologies
and Internet-Based System, International IEEE Con-
ference on, 0:43–50.
Sun, J., Du, Y., and Tang, Y. (2008). Shadow detection
and removal from solo natural image based on retinex
theory, volume 5314 of Lecture Notes in Computer
Science. Springer-Verlag, Berlin Heidelberg.
Tobita, H. (2010). Enformanga: interactive comic creation
with deformation. In Proceedings of the International
Conference on Advanced Visual Interfaces, AVI ’10,
pages 397–398, New York, NY, USA. ACM.