This paper proposes a method for rendering the falling trajectories of raindrops that can deal with both the temporal-spatial localities of rainfall and arbitrary camera movements. First, Section 2 gives a brief overview of the proposed method, and Section 3 describes a simulation model of rainfall and the 3-D wrapping-around of raindrops introduced to permit arbitrary camera movements. Section 4 explains the raindrop-generating model for the temporal-spatial localities of rainfall, that is, the density and intensity of rain; this section also describes the method for rendering raindrops considering the locality of sky occlusion. Section 5 demonstrates the usefulness of the method with CG animations rendered by the proposed method. Finally, Section 6 presents conclusions and future work.
2 OVERVIEW
2.1 Prior Conditions
In real-life rain, every raindrop differs in location, velocity, size, and shape. In addition, several factors must be considered in rendering, namely the changes in the direction and velocity of raindrops due to non-steady wind, and the changes in the light intensity passing through a raindrop depending on the illumination condition given by the skylight. In our study, these factors are referred to as the temporal-spatial locality of rainfall. Reflecting these localities improves the realism of the rendering, but practical real-time applications such as walk-through systems or driving simulators require processing that is as simple as possible. To resolve this trade-off, we introduce the following prior conditions:
(1) The viewpoint is located near the ground level. Therefore, the vertical velocity of every raindrop is assumed to be equal to its terminal velocity; only the horizontal velocity is controlled by the temporal-spatial changes in wind.
(2) The trajectory of a raindrop is drawn as a polyline connecting its positions at two neighboring frames.
(3) The shape of a raindrop is represented as a sphere whose diameter is determined by the rain intensity (a code sketch of these conditions follows the list).
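To make the conditions concrete, the following minimal sketch (our own illustration with hypothetical names, not the authors' implementation) shows a per-frame raindrop update under conditions (1)–(3):

```python
import numpy as np

class Raindrop:
    def __init__(self, position, diameter, terminal_velocity):
        self.position = np.asarray(position, dtype=float)  # world (x, y, z), y up
        self.prev_position = self.position.copy()          # position one frame ago
        self.diameter = diameter                    # from rain intensity (condition 3)
        self.terminal_velocity = terminal_velocity  # constant fall speed (condition 1)

    def update(self, wind_xz, dt):
        """Advance one frame: horizontal drift from the local wind,
        vertical descent at terminal velocity (condition 1)."""
        self.prev_position = self.position.copy()
        self.position[0] += wind_xz[0] * dt
        self.position[2] += wind_xz[1] * dt
        self.position[1] -= self.terminal_velocity * dt

    def trajectory_segment(self):
        """Polyline segment between two neighboring frames (condition 2)."""
        return self.prev_position, self.position
```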
2.2 Fundamental Idea
The following discussion, from two viewpoints, illustrates the problems of conventional methods and the characteristics of the proposed method.
(1) Generation and Tracking of Raindrops. The problems of previous methods that employ a particle system for generating and vanishing raindrops are summarized as follows:
• Stationary camera: some studies discussed a dynamic camera, but they composited a stationary-camera rendering of the raindrops with a dynamic-camera background.
• Temporal-spatially uniform generation of raindrops.
• Because the vertical axis of the camera-coordinate system is kept consistent with that of the world-coordinate system, raindrops are generated above the top face of the view volume and fall only in the direction of the gravitational force.
Conventional methods have focused on rendering the raindrop trajectory, and some of them are very realistic in off-line processing or very fast in real-time applications. However, the above-mentioned constraints narrow the range of their applications. To solve these problems, we provide a new raindrop movement model with the following characteristics:
• A raindrop-generating model that can deal with localities in the density and intensity of rainfall.
• A raindrop movement model that can express the apparent movements of raindrops caused by the rotation and/or translation of the camera.
• A raindrop descent-tracking model that causes no rapid fluctuation in the number of raindrops while satisfying the two above-mentioned points (see the wrap-around sketch after this list).
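As a concrete illustration of the last two points, the following minimal sketch (our own, with assumed axis-aligned bounds; the actual model is detailed in Section 3) wraps a raindrop that leaves a camera-centered region back to the opposite face, so no drop is ever destroyed or created:

```python
import numpy as np

def wrap_around(position, camera_position, half_extent):
    """Wrap a raindrop position into an axis-aligned cube of side
    2*half_extent centered on the camera, component-wise periodically."""
    offset = position - camera_position
    wrapped = (offset + half_extent) % (2.0 * half_extent) - half_extent
    return camera_position + wrapped
```

When a drop wraps, its previous-frame position should also be reset; otherwise the polyline of prior condition (2) would draw a spurious segment across the whole region.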
(2) Rendering of Raindrops. As suggested by Wang et al. (Wang et al., 2006), the light intensity coming through a raindrop is determined on the basis of the reflection of the surrounding environment. Wang et al. utilized an environment mapping to obtain this reflection-based light intensity. The luminance of an object surface, however, differs distinctly from that of the sky during the daytime, even in the rain.
For instance, in the light-rain scene shown in Figure 1 (captured by a digital camera in automatic exposure mode), the measured luminance is 6,886 cd/m², 562 cd/m², and 53 cd/m² at points A, B, and C, respectively, whereas the corresponding pixel values in the digital camera picture are (R, G, B) = (254, 254, 254), (182, 171, 181), and (51, 50, 46). As these values show, an 8-bit-color image-based method is entirely inadequate for computing the optical energy received from the surrounding environment, because its lack of dynamic range results in inaccurate light transport.
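To make the gap concrete, the following back-of-the-envelope check (our own illustration, using the measured values above, not code from the paper) compares the real luminance ratio with the ratio that survives 8-bit quantization:

```python
# Values measured at points A, B, and C in Figure 1.
luminance = {"A": 6886.0, "B": 562.0, "C": 53.0}  # measured, in cd/m^2
pixel = {"A": 254, "B": 182, "C": 51}             # dominant 8-bit channel value

print(f"real ratio A:C  = {luminance['A'] / luminance['C']:.0f}:1")  # ~130:1
print(f"8-bit ratio A:C = {pixel['A'] / pixel['C']:.0f}:1")          # ~5:1
```

Point A is already clipped near the sensor maximum of 255, so a low-dynamic-range environment map compresses a roughly 130:1 luminance range into about 5:1.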
To solve this problem, we assume that the light intensity coming through a