particle systems. To achieve this objective, we propose
a method for automatically generating rain environ-
ments with the definition of rain areas and with an
adequate management of the particles created inside
these areas. Furthermore, we include the LOD con-
cept in the GPU in order to adjust the size and the
number of particles to the conditions of the scene. In
Figure 1 we have an example of a rain scenario ob-
tained with our solution.
This paper is organized as follows. Section 2 reviews the state of the art on rain rendering. Section 3 introduces the concept of rain area as well as its interactions. Section 4 describes the multiresolution implementation. Section 5 presents the results. Lastly, Section 6 contains some concluding remarks on our solution.
2 STATE OF THE ART
Rain has been traditionally rendered in two ways, ei-
ther as camera-centered geometry with scrolling tex-
tures or as a particle system.
Scrolling Textures. This approach is based on the
idea of using a texture that covers the whole scene.
Then, the application scrolls it by following the
falling direction of the rain. However, this technique
exhibits certain properties that are stationary in time
(Soatto et al., 2001). In (Wang and Wade, 2004),
the authors present a novel technique for rendering
precipitation in scenes with moving camera positions.
They map textures onto a double cone, and translate
and elongate them using hardware texture transforms.
These methods fail to create a truly convincing
rain impression because of the lack of depth and because the rain does not react accurately to scene illumination. To over-
come this limitation, some authors (Tatarchuk, 2006)
have developed more complex solutions which in-
clude several layers of rain for simulating rainfall at
different distances, although the problem is not com-
pletely solved.
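As a rough illustration of the scrolling idea (and not of the cited authors' implementations), the following C++ sketch advances the texture offset of a full-screen rain layer along an assumed falling direction each frame; the function names, slant and speed are illustrative assumptions.

#include <cmath>
#include <cstdio>

struct Vec2 { float u, v; };

// Advance the texture offset of a full-screen rain layer along the falling
// direction of the rain; the offset would later be applied as a texture-matrix
// translation when the layer is drawn.
Vec2 scrollRainUV(Vec2 offset, Vec2 fallDir, float speed, float dt)
{
    offset.u = std::fmod(offset.u + fallDir.u * speed * dt, 1.0f);
    offset.v = std::fmod(offset.v + fallDir.v * speed * dt, 1.0f);
    return offset;
}

int main()
{
    Vec2 offset{0.0f, 0.0f};
    const Vec2 fallDir{0.1f, -1.0f};          // slightly slanted rain (assumed)
    const float speed = 0.5f, dt = 1.0f / 60.0f;
    for (int frame = 0; frame < 3; ++frame) {
        offset = scrollRainUV(offset, fallDir, speed, dt);
        std::printf("frame %d: offset = (%.4f, %.4f)\n", frame, offset.u, offset.v);
    }
    return 0;
}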
Particle Systems. Traditionally, this has been the
approach chosen for real-time rendering of rain, even
though particle systems tend to be expensive, espe-
cially if we want to render heavy rain. Lately, ren-
dering of rain has become very realistic, although the
management of these systems in real-time applica-
tions still poses severe restrictions.
The work presented in (Kusamoto et al., 2001)
introduced a physical motion model for rain render-
ing with particle systems. Later, the authors of (Feng
et al., 2006) considered the physical properties from
a different point of view. They developed a collision
detection method for raindrops and created a particle
subsystem of raindrops splashing after collision.
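The sketch below illustrates the general idea of such a collision-driven splash subsystem under simplifying assumptions (a flat ground plane at y = 0, a fixed emitter height and an arbitrary splash lifetime); it is not the physical model of the cited papers.

#include <algorithm>
#include <cstdio>
#include <vector>

struct Particle { float x, y, z, vy, life; };

// Integrate gravity for every raindrop; when a drop reaches the ground plane
// (y = 0) a short-lived splash particle is spawned and the drop is recycled
// at the emitter height.
void updateRain(std::vector<Particle>& drops, std::vector<Particle>& splashes, float dt)
{
    const float g = -9.8f;
    const float emitterHeight = 50.0f;
    for (auto& d : drops) {
        d.vy += g * dt;
        d.y  += d.vy * dt;
        if (d.y <= 0.0f) {
            splashes.push_back({d.x, 0.0f, d.z, 0.0f, 0.2f});   // splash lives ~0.2 s
            d.y = emitterHeight;
            d.vy = 0.0f;
        }
    }
    for (auto& s : splashes) s.life -= dt;
    splashes.erase(std::remove_if(splashes.begin(), splashes.end(),
                                  [](const Particle& s) { return s.life <= 0.0f; }),
                   splashes.end());
}

int main()
{
    std::vector<Particle> drops{{0.0f, 1.0f, 0.0f, -5.0f, 0.0f}};
    std::vector<Particle> splashes;
    updateRain(drops, splashes, 1.0f / 60.0f);
    std::printf("drops: %zu, splashes: %zu\n", drops.size(), splashes.size());
    return 0;
}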
Wang et al. (Wang et al., 2006) presented a
system formed by two parts: off-line image analysis
of rain videos and real-time particle-based synthesis
of rain. This solution is cost-effective and capable of
realistic rendering of rain in real time.
Following a similar approach, N. Tatarchuk
(Tatarchuk, 2006) developed a hybrid system of an
image-space approach for the rainfall and particle-
based effects for dripping raindrops and splashes. It
presents a detailed rainy environment and provides
a high degree of artistic control. The main issue is
that it requires 300 unique shaders dedicated to rain
alone. Moreover, the simulation requires the camera
to maintain a fixed viewing direction.
Rousseau et al. (Rousseau et al.) propose a rain
rendering method that simulates the refraction of the
scene inside a raindrop. The scene is captured to a tex-
ture which is distorted according to the optical prop-
erties of raindrops and mapped onto each raindrop by
means of a vertex shader.
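A minimal sketch of this kind of distorted lookup is given below, assuming the scene has already been captured to a texture; the projected drop normal and the refraction strength used here are illustrative placeholders for the optical model derived by the authors.

#include <cstdio>

struct Vec2 { float u, v; };

// Offset a screen-space coordinate by the drop's surface normal (projected to
// 2D) scaled by a refraction strength; the result is used to sample the
// captured scene texture when shading the raindrop.
Vec2 refractedSceneUV(Vec2 screenUV, Vec2 dropNormalXY, float refractionStrength)
{
    return { screenUV.u + dropNormalXY.u * refractionStrength,
             screenUV.v + dropNormalXY.v * refractionStrength };
}

int main()
{
    Vec2 uv = refractedSceneUV({0.5f, 0.5f}, {0.3f, -0.1f}, 0.05f);
    std::printf("distorted lookup at (%.3f, %.3f)\n", uv.u, uv.v);
    return 0;
}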
In (Tariq, 2007), Tariq proposes a realistic
rain application that works entirely on the GPU. Rain
particles are animated over time and in each frame
they are expanded into billboards to be rendered us-
ing the Geometry Shader. The rendering of the rain
particles uses a library of textures, which encodes the
appearance of the raindrops under different viewpoint
and lighting directions (Garg and Nayar, 2006).
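The following CPU-side sketch shows this expansion step conceptually, assuming the camera right and up vectors are available; the actual technique performs it in the Geometry Shader and additionally assigns texture coordinates into the raindrop texture library.

#include <array>
#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec3 scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

// Expand a rain particle (a point) into the four corners of a camera-facing
// quad, mirroring on the CPU what the Geometry Shader emits per particle.
std::array<Vec3, 4> expandToBillboard(Vec3 p, Vec3 camRight, Vec3 camUp,
                                      float width, float height)
{
    const Vec3 r = scale(camRight, width * 0.5f);
    const Vec3 u = scale(camUp, height * 0.5f);
    return { add(add(p, r), u),                               // top-right
             add(add(p, r), scale(u, -1.0f)),                 // bottom-right
             add(add(p, scale(r, -1.0f)), u),                 // top-left
             add(add(p, scale(r, -1.0f)), scale(u, -1.0f)) }; // bottom-left
}

int main()
{
    auto quad = expandToBillboard({0.0f, 5.0f, 0.0f}, {1.0f, 0.0f, 0.0f},
                                  {0.0f, 1.0f, 0.0f}, 0.02f, 0.3f);
    for (const Vec3& v : quad)
        std::printf("(%.2f, %.2f, %.2f)\n", v.x, v.y, v.z);
    return 0;
}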
More recently, the work presented in (Changbo
et al., 2008) introduces a new framework which thor-
oughly addresses the physical properties of rain, visual ap-
pearance, foggy effects, light interactions and scatter-
ing. The main drawback of this approach is that, de-
spite offering very realistic simulations, their method
cannot render a scene with a sufficient framerate to
offer interactive walkthroughs.
Lastly, it is worth mentioning the study made in
(Puig-Centelles et al., 2008), where the authors give
some initial ideas and results about the application
of level-of-detail techniques when rendering realistic
rain. They propose the adaptation of the size of the
particles in the GPU, although the number of particles
and their distribution are initially fixed in the CPU.
Moreover, their exploitation of the Geometry Shader
is quite limited.
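To make the idea of such an adaptation concrete, the sketch below derives a particle size and particle count from the camera distance; the thresholds and formulas are our own illustrative assumptions, not the scheme of (Puig-Centelles et al., 2008) nor the exact one proposed in this paper.

#include <algorithm>
#include <cstdio>

struct RainLOD { float particleSize; unsigned particleCount; };

// Derive both the particle size and the number of emitted particles from the
// distance between the camera and the rain area: distant rain uses fewer but
// larger particles so that the perceived density is preserved.
RainLOD selectRainLOD(float cameraDistance, unsigned maxParticles)
{
    const float nearDist = 10.0f, farDist = 200.0f;           // assumed bounds
    float t = std::clamp((cameraDistance - nearDist) / (farDist - nearDist), 0.0f, 1.0f);
    RainLOD lod;
    lod.particleSize  = 0.02f + 0.08f * t;
    lod.particleCount = static_cast<unsigned>(maxParticles * (1.0f - 0.75f * t));
    return lod;
}

int main()
{
    for (float d : {5.0f, 50.0f, 250.0f}) {
        RainLOD lod = selectRainLOD(d, 100000u);
        std::printf("distance %.0f: size %.3f, particles %u\n",
                    d, lod.particleSize, lod.particleCount);
    }
    return 0;
}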
3 RAIN MANAGEMENT
One of the main objectives of our model is to be able
to generate rainy environments automatically by cre-