Rendering Procedural Textures for Visualization of Thematic Data in 3D
Geovirtual Environments
Matthias Trapp, Frank Schlegel, Sebastian Pasewaldt and Jürgen Döllner
Hasso Plattner Institute, University of Potsdam, Germany
Keywords:
3D Geovirtual Environments, Procedural Texturing, Thematic Data, Real-time Rendering.
Abstract:
3D geovirtual environments, such as virtual 3D city and landscape models, can be used as scenery for visualizing thematic data, which can be communicated using suitable color mappings or hatch patterns. For rendering purposes, these hatch patterns can be represented as image-based or procedural textures. The resulting quality of image-based textures, and thus the effective communication of the respective thematic data, is subject to resolution and filtering artifacts. In contrast, procedural textures are not limited with respect to resolution and can be filtered adaptively to achieve high visual quality. However, challenges in parametrization and design often hinder their application. To counterbalance these drawbacks, this paper presents an interactive rendering technique that facilitates the application and design of procedural hatch patterns for the mapping of thematic data to 3D geovirtual environments.
1 INTRODUCTION
In 1983, Bertin described the idea of disassembling
visual information into seven basic components he
denoted as visual variables, such as position, size,
and color (Bertin, 1983). These visual variables can
be combined to form specific patterns. A pattern can
be defined as a repetitive surface appearance that is
characterized by one or more visual variables.
A hatch pattern is a special kind of pattern that is composed of regular strokes. The significant distinguishing features of hatch patterns are size, orientation, and texture. Most of these hatch patterns (also known as hatches or hachures) have evolved historically due to restrictions of the paper medium. Nevertheless, they can provide a sufficient distinction of nominal data and enable a non-ambiguous mapping. For example, thematic maps make use of hatch patterns that represent certain feature types. In specific application domains, these patterns are often standardized (e.g., DIN ISO 128-50).
Despite their application in 2D cartography, hatch patterns are sparsely used in 3D geovirtual environments (3D GeoVEs), which can serve as scenery for the communication and visualization of geo-referenced data, because of distortion effects caused by perspective projections. Since the application of perspective projections implicitly defines other visual variables (e.g., size and order), hatch patterns can be considered challenging in 3D GeoVEs specifically. Further, they are useful in cases where, for example, color-only mappings are not sufficient or not possible (e.g., on monochromatic displays). Many 3D GeoVEs, such as landscapes and their generalized variants (Glander and Döllner, 2007), feature a number of planar surfaces that are suitable to display hatch patterns. Therefore, they are potentially suited for using hatch textures to visualize data, as shown in Figure 1, which displays a virtual 3D model of the Grand Canyon with hatch textures that provide information about the land usage.

Figure 1: Examples of procedural hatch patterns applied to a virtual 3D landscape model of the Grand Canyon (A). Differently colored hatch patterns are synthesized for different parts of the model (C) and blended with an aerial image of the region (B).
There are basically two approaches to represent hatches for rendering in 3D GeoVEs: image-based textures or procedural textures. While image-based textures are easy to create, manipulate, and render, they also exhibit the major disadvantage of a fixed spatial resolution, which can yield sampling artifacts.
To counterbalance this, three approaches can be applied. First, one can use image-based distance maps (Green, 2007) to improve sampling. This approach requires a single texture per hatch and impacts memory consumption for a high number of different patterns. Second, one can convert an image-based hatch texture into a vector texture representation encoded in raster buffers. However, its preprocessing and implementation are complex. Third, procedural textures can be applied. In contrast to image-based textures, procedural textures are computed at runtime and are not restricted in terms of resolution and sampling. These properties qualify them for implementing hatch patterns in interactive 3D visualizations. Nevertheless, procedural texturing suffers from a trade-off between visual complexity and the complexity required for its description, i.e., the more complex a procedural texture should appear, the longer the code needed to describe it. In terms of performance, this results in higher runtime complexity and thus slower rendering. As a consequence, complex hatch patterns are hard to describe procedurally.
This paper describes a method for composing and rendering hatch patterns in 3D GeoVEs using procedural texturing. It introduces a layering concept for creating complex hatch patterns by combining layers of simpler ones. This also reduces code complexity and facilitates reuse. However, using hatch patterns in 3D GeoVEs is a challenging task because modifications of the position and viewing angle of the virtual camera affect the on-screen appearance of the patterns. This may result in distracting, unpleasant effects, e.g., Moiré patterns (Amidror, 2009). With respect to this, the paper describes different techniques for counterbalancing such effects.
2 RELATED WORK
This section focuses on recent research on and applications of procedural textures in 3D GeoVEs. Rost defines procedural texturing as "the process of computing a texture primarily by synthesizing rather than by relying heavily on precomputed values" (Rost, 2006). In contrast to image textures, procedural textures are computed at runtime using vertex and/or fragment coordinates.
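To make this definition concrete, the following GLSL fragment shader is a minimal sketch of such a synthesis: a binary stripe pattern is computed per fragment from interpolated coordinates instead of being sampled from a precomputed image. The varying name texCoord is an assumption about the preceding vertex stage.

#version 410 core
in vec2 texCoord;   // interpolated coordinates from the vertex stage (assumed)
out vec4 fragColor;

void main()
{
    const float frequency = 8.0;                // stripes per unit interval
    float ramp = fract(texCoord.x * frequency); // repeating ramp in [0,1)
    float stripe = step(0.5, ramp);             // binary stripe mask
    fragColor = vec4(vec3(stripe), 1.0);        // no precomputed values involved
}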
Hatch patterns have manifold applications in the domain of geovisualization and information visualization in general. In cartography, for example, hatches are used for representing 3D topography on a 2D map by displaying quantitative measures of the topography's slope and aspect (Kennelly and Kimerling, 2000). Here, lines are drawn in the direction of the steepest topographic gradient. This creates tonal variations throughout the map, which are a form of analytical hill-shading, creating a 3D impression of the topography. Geological illustrations in textbooks also make use of hatches to illustrate seismic data; in (Patel et al., 2007; Patel et al., 2008), an approach for rendering such illustrations is presented.
are various techniques that can be used and combined
to generate procedural textures. This paper’s concept
is based on propagating a 2D pattern to a 3D space,
and thus generates a so called solid texture (Peachey,
1985). Prominent representatives of solid textures are
wood, granite, or marble textures, that often use noise
to create a natural look (Perlin, 1985; Lewis, 1989).
For mapping a texture to a 3D object, the surface of
that object must be parameterized with 2D texture
coordinates. During the mapping process the color
of a fragment is determined by mapping the fragment
to a texel in the image texture using these coordinates.
In contrast, solid textures use 3D (world) coordinates
of a fragment as input for their color computation.
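As a minimal sketch of this distinction, the following GLSL fragment evaluates a solid texture directly from the 3D world position of a fragment; the ring-like pattern loosely imitates wood grain, and the varying name worldPosition is an assumption.

in vec3 worldPosition; // 3D world-space position of the fragment (assumed varying)

vec3 solidWoodTexture(vec3 p)
{
    float rings = sin(length(p.xz) * 20.0) * 0.5 + 0.5; // concentric rings around the y-axis
    return mix(vec3(0.45, 0.28, 0.13), vec3(0.65, 0.45, 0.25), rings);
}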
3 PROCEDURAL PATTERNS
Based on preliminaries and requirements, this section
introduces a basic concept for hatching 3D objects in
3D GeoVEs.
3.1 Preliminaries
Assumptions & Requirements. To apply hatch patterns to features of a 3D GeoVE, it is necessary to enrich their geometry with additional per-vertex attributes. In a preprocessing step, a unique object identifier ID is computed, which enables the identification of a polygon in the programmable rendering pipeline at runtime. For mappings independent of the geometric representation of features, e.g., the per-pixel mapping of a virtual 3D landscape model, an image-based ID texture is used (Fig. 1.B). In addition, an axis-aligned bounding box (AABB) for each feature geometry is computed and stored as a per-vertex attribute. The AABB enables the computation of texture coordinates during rendering (Sec. 3.3).
Standard texture mapping (Akenine-Möller et al., 2008) that relies on per-vertex texture coordinates is not always suited for creating consistent hatch patterns. The results depend on a consistent texture parametrization of the objects' surfaces. Such a parametrization must be provided in advance and can be hard to compute. Texture coordinates that are not evenly
spaced or that are discontinuous at object edges yield inconsistencies in the rendered pattern. Further, a hatch pattern should appear identical on each object it is applied to, in order to preserve a distinctive mapping between pattern and data, i.e., the pattern must not vary in scale or orientation between different objects. The presented approach uses a variant of projective texturing to address these two requirements. Furthermore, the patterns should be dynamically applicable to objects in interactive applications. Hence, the implementation must support real-time rendering as well as a flexible mapping between feature identifiers and hatch patterns. To achieve the latter, the approach enables the binding of a hatch pattern P to a number of specific identifiers (ID). This mapping is resolved during texturing using the programmable graphics pipeline (Segal et al., 2010).
Terminology. A procedural texture or hatch pattern P ∈ 𝒫 of a feature or object instance with the unique identifier ID is defined as a polymorphic list of pattern layer instances L_i^type:

    P_ID = (L_0^type, ..., L_n^type),   L_i^type ∈ 𝓛,   L_i^type = {p^type}
The definition of layer instances can be shared between different hatch patterns. The set of all hatch definitions is denoted as 𝒫 and the set of all layer instances as 𝓛. The respective layer type can be one of the following: type ∈ {hatch, glyph, noise}. A procedural texture layer is defined using a set of type-specific parameters p^type, which are discussed in the next section. To synthesize complex patterns, different instances of layer types can be combined into a single pattern. Therefore, each layer is attributed with a specific Boolean combination function (e.g., OR, AND, XOR), which enables arbitrary layer combinations.
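A sketch of this combination step in GLSL is given below, assuming each layer has already been evaluated to a binary coverage value in {0.0, 1.0}; the operator encoding is illustrative, not prescribed by the concept.

const int OP_OR = 0, OP_AND = 1, OP_XOR = 2;

float combineLayers(float a, float b, int op)
{
    if (op == OP_OR)  return max(a, b); // set where at least one layer covers
    if (op == OP_AND) return min(a, b); // set where both layers cover
    return abs(a - b);                  // OP_XOR: set where exactly one layer covers
}

A pattern P_ID is then evaluated by folding combineLayers() over its list of layer results.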
3.2 Components of Procedural Patterns
We identified three main components, denoted as layer types, as the major building blocks of complex hatch patterns: linear hatch layers, glyph layers, and noise layers. Their respective parametrizations are presented in the remainder of this section. Each layer type shares a common parameter set that comprises (1) two orthogonal vectors u, v ∈ ℝ³ that define the orientation of a reference plane for the 2D hatch pattern, (2) a combination operator (logical OR, XOR, AND), and (3) a color.
Linear Hatch Layer. A linear hatch layer denotes variants of linear hatch features such as solid or stippled lines. It is one of the most frequently used primitives for representing hatches. Its parametrization comprises the following aspects, L^hatch = (s, w, p, t): a hatch scale factor s defines how many lines occur within an interval, the hatch width w defines the line width in relation to the space between lines, a stipple pattern p represents a bit mask describing the pattern of the stipples, and a stipple scale factor t defines how often the stipple pattern occurs.
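A possible GLSL realization of such a layer is sketched below; uv is assumed to span the layer's reference plane, with uv.x running across the hatch lines and uv.y along them, and the 32-bit encoding of the stipple mask p is an assumption.

float linearHatch(vec2 uv, float s, float w, uint p, float t)
{
    float line = step(fract(uv.x * s), w);           // s lines per interval, relative width w
    uint  bit  = uint(fract(uv.y * t) * 32.0) & 31u; // position within the repeated stipple mask
    float stipple = float((p >> bit) & 1u);          // stipple pattern p as a bit mask
    return line * stipple;                           // 1.0 where a stroke is drawn
}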
Figure 2: Examples of different glyph layers applied to feature types.
Glyph Layer. A glyph layer L^glyph supports the creation of complex patterns that can hardly be represented by linear hatch layers, e.g., rounded shapes or symbols. Glyph layers can be represented directly using image-based textures or by distance fields (Green, 2007) organized in texture atlases (Wloka, 2003). In contrast to image-based textures, distance fields provide significant visual improvements due to the lack of aliasing artifacts during up-sampling and reduce texture memory consumption. Glyph layers organized in texture atlases can be efficiently rendered using the texture bombing or glyph bombing rendering techniques (Glanville, 2004).
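A minimal sketch of the distance-field lookup for a single glyph is given below; the atlas sampler, the distance encoding around 0.5, and the smoothing width are assumptions.

uniform sampler2D glyphAtlas; // single-channel distance field (Green, 2007), assumed

float glyphLayer(vec2 uv)
{
    float d = texture(glyphAtlas, uv).r; // distance to the glyph outline
    return smoothstep(0.45, 0.55, d);    // anti-aliased binary coverage
}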
Noise Layer. To add irregularity to a hatch pattern, a noise layer L^noise can be applied. Conceptually similar to a distance map, it represents gray-scale values g ∈ [0, 1] that can be thresholded using a parameter ε ∈ [0, 1] to convert it into a binary representation. Noise layers can be represented using (hardware-accelerated) noise functions (Lagae et al., 2010) or tileable noise textures (Perlin, 2002; Lewis, 1989). The design space comprises varying noise frequencies, a threshold ε, and a scale factor s for the generated texture coordinates. In 3D virtual environments, anisotropic noise (Goldberg et al., 2008) can be used to minimize perspective artifacts (Sec. 3.4).
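The thresholding can be sketched in GLSL as follows, here assuming a tileable noise texture bound to noiseTexture rather than a procedural noise function:

uniform sampler2D noiseTexture; // tileable gray-scale noise (assumed)

float noiseLayer(vec2 uv, float s, float epsilon)
{
    float g = texture(noiseTexture, uv * s).r; // gray-scale value g in [0,1]
    return step(epsilon, g);                   // binary representation via threshold
}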
3.3 Texture-Coordinate Generation
To support changes in the mapping of object geometry to hatch patterns, the coordinates for texturing and evaluation are computed procedurally based on the AABB of each object. Here, a respective vertex position V_i is first normalized according to its axis-aligned bounding box AABB_id = (LLF, URB), which is defined using the coordinates of its lower-left-front (LLF) and upper-right-back (URB) corner vectors:

    V'_i = (V_i − LLF) / (URB − LLF)

The resulting normalized coordinates are interpolated during rasterization and yield a texture coordinate for each fragment, which is then used to compute the individual hatches and their combination.
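A corresponding GLSL vertex-stage sketch is shown below; the attribute names and the assumption that the AABB corners are stored as per-vertex attributes follow Section 3.1.

in vec3 position;  // vertex position V_i
in vec3 aabbLLF;   // lower-left-front corner of the object's AABB
in vec3 aabbURB;   // upper-right-back corner of the object's AABB
out vec3 texCoord3D;

uniform mat4 mvp;  // model-view-projection matrix (assumed)

void main()
{
    texCoord3D  = (position - aabbLLF) / (aabbURB - aabbLLF); // normalized to [0,1]^3
    gl_Position = mvp * vec4(position, 1.0);
}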
3.4 Counterbalancing 3D Projections
A major issue of applying hatching to interactive 3D GeoVEs is to ensure the perception of patterns regardless of the 3D perspective projection transformation. For example, when zooming in and out, the line width and the in-between line distances vary with increasing distance to the virtual camera. This is due to the single continuous scale encountered in 3D GeoVEs, instead of the discrete scales in 2D visualizations (Jobst and Döllner, 2008). Further, by rotating the virtual camera, the hatch orientation can change. When tilting the virtual camera, the pattern is distorted due to perspective compression. An improper aspect ratio can also distort the slope of a line so that the strokes look curved instead of straight.
All of the above may lead to ambiguity and an inaccurate mapping of patterns to data. On the one hand, this is naturally caused by perspective compression; on the other hand, it hinders the correct and non-ambiguous communication of the information encoded in the patterns, i.e., distinguishing and comparing patterns at different scene depths becomes hard, especially for almost similar patterns. Further, if the in-between line distances of a hatch pattern become too small, they interfere with the screen raster. This causes a vibrant effect known as the Moiré pattern (Amidror, 2009), as shown in Figure 3. This effect is unpleasant, yields temporal incoherence, and makes it hard to identify the original pattern. The remainder of this section describes approaches for minimizing, counterbalancing, or avoiding these effects.

Figure 3: Decreasing distances between linear hatches can result in Moiré patterns.
Fading Distant Hatch-Patterns. One approach for counterbalancing Moiré effects is to omit the rendering of hatch patterns in distant regions (Fig. 4(a)). Here, the distance to the virtual camera can be thresholded (fading distance), and hatch patterns are faded out by smoothing the hatch density. This approach is similar to the rendering of fog (Akenine-Möller et al., 2008). However, it has a number of limitations that can prevent its application. Since the Moiré effect appears stronger for patterns with higher hatch frequency and larger hatch width, the effective fading distance depends on the respective hatch configuration used. This causes patterns of different frequencies to have different fading distances, which creates an inconsistent look-and-feel in the final visualization. Using a suitable threshold, this approach avoids the Moiré effect, but it also adds ambiguity to the hatch mapping. Further, it is challenging to find an adequate threshold that avoids the creation of Moiré effects but does not fade the pattern too early.
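A sketch of this fading in GLSL might look as follows, where fadeStart and fadeEnd are the application-chosen thresholds discussed above:

float fadeHatch(float hatch, float camDistance, float fadeStart, float fadeEnd)
{
    float f = smoothstep(fadeStart, fadeEnd, camDistance); // 0 near the camera, 1 beyond fadeEnd
    return hatch * (1.0 - f);                              // hatch coverage fades out with distance
}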
Depth-dependent Hatching. Another approach against the Moiré effect is to avoid thick lines by scaling the hatch distance depending on the particular depth of a fragment. This can be achieved by computing the normalized depth value of the fragment and scaling the pattern accordingly. A result of this method is depicted in Figure 4(b). The pattern in the foreground remains the same as in Figure 4(a), but in the background the distance between the hatches is increased. By this means, the Moiré effect only occurs for low viewing angles. Another advantage of this technique is shown in Figure 5. While zooming, the distance between the lines in screen space remains the same, i.e., independent of the zoom level. This facilitates pattern recognition at every zoom level, which is otherwise not possible because the hatches become too small to be recognizable at a high zoom level. However, this method also has a disadvantage: the pattern moves with the virtual camera, i.e., when the camera moves toward a fragment, the relative depth value of that fragment changes. Hence, that fragment is either hatched or not, depending on the camera position.
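The scaling itself can be sketched as follows, reusing the linearHatch() sketch from above; approximating the view-space depth via gl_FragCoord.w assumes a perspective projection.

float depthDependentHatch(vec2 uv, float s, float w, uint p, float t)
{
    float viewDepth = 1.0 / gl_FragCoord.w;     // approximate view-space depth
    float scaled    = s / max(viewDepth, 1e-4); // fewer lines per unit with increasing depth
    return linearHatch(uv, scaled, w, p, t);    // on-screen spacing stays roughly constant
}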
(a) Distance-based fading. (b) Depth-dependent hatching. (c) Screen-space hatches.
Figure 4: Three different approaches for compensating perspective distortion: distance-based fading (a), depth-dependent hatching (b), and screen-space hatching (c).
Screen as Frame-of-Reference. Another approach utilizes alternative texture coordinates: instead of generating 3D texture coordinates in world space prior to geometry rasterization, the actual screen-space fragment coordinates can be used to generate the hatch pattern. A visualization using this approach is shown in Figure 4(c). The patterns on the surfaces always look identical, independent of the zoom level, viewing angle, or distance of the fragment. This technique provides sufficient perception and comparability of patterns and enables an unambiguous mapping of patterns to data. However, it also introduces the following texturing artifact: the hatches do not move consistently with the objects they are applied to during camera position changes. This causes the same effect as depth-dependent hatching. Further, the lack of perspective compression makes it hard to create an impression of depth in the scene. Furthermore, an additional shading model is required to visualize the edges and surface characteristics of 3D objects, because the pattern does not adapt to the objects' surfaces.
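In GLSL, this variant reduces to evaluating the pattern on window coordinates, again reusing the linearHatch() sketch from above; the divisor that maps pixels to pattern units is an arbitrary choice.

float screenSpaceHatch(float s, float w, uint p, float t)
{
    vec2 uv = gl_FragCoord.xy / 100.0;  // window coordinates mapped to pattern units
    return linearHatch(uv, s, w, p, t); // identical on screen for any view configuration
}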
Figure 5: By scaling the hatch distance depending on the depth, the pattern stays almost consistent during zooming.
4 RESULTS & DISCUSSION
This section presents application examples and evaluates their rendering performance.
Application Examples. The presented concept and rendering technique can be used in various applications. Besides the texturing of virtual 3D landscapes (Fig. 1), it is suitable for cell-based generalization variants of virtual 3D city models (Glander and Döllner, 2007). The abstraction of complex city geometry by creating generalized shapes yields a number of flat surfaces that are qualified for applying hatches to visualize data (Fig. 6). Another application of procedural hatches is 3D architectural models. Here, hatch patterns can be used to display building materials according to a specific standard to match 2D construction plans. Figure 7(a) shows an exploded view of a virtual 3D model of a reservoir dam with different linear hatch patterns to visualize different materials. It demonstrates the ability of the presented rendering technique to synthesize solid textures (Peachey, 1985), i.e., hatch patterns that spread out consistently over a 3D object's surface. Figure 7(b) shows a geological profile with colored linear hatch patterns for the visualization of soil types.
Performance Evaluation. We evaluated a prototypical implementation based on OpenGL and the OpenGL Shading Language (GLSL) (Kessenich et al., 2010) using test data sets of different geometric complexity (Table 1), i.e., a varying number of scene objects and different numbers of hatch layers. The rendering and compositing of hatch layers is performed using a single fragment shader program.

The performance tests were conducted using an Intel Core i7 620M processor with 2.66 GHz clock rate and 8 GB of DDR3-1066 RAM, using an NVIDIA GT 330M graphics card with 512 MB of video memory.
Figure 6: Applications of linear hatch patterns to a generalized virtual 3D city model.

The test application runs in windowed mode at two different screen resolutions. The complete scene is
visible in the view frustum, and back-face culling is
activated. For each test, a total of 100 consecutive frames is rendered, and the average rendering performance in frames per second is tracked. Table 2 shows the results of the performance evaluation.
The performance of the rendering technique depends on the number of hatch layers defined per object. With up to 10 layers per object, the response time remains within the bounds of real-time interaction. With more layers defined, the response time highly depends on the number of fragments processed by the shader.
(a) Three different styles (left to right) for an exploded-view visualization of a virtual 3D dam model that uses gray-scale, linear hatch patterns.
(b) A geological profile with colored linear hatches visualizing different layers of soil.
Figure 7: Applications of complex linear hatch patterns to virtual 3D architectural and geological models.
Table 1: Geometric complexity of exemplary 3D models.
Model #Obj #Vertices #Primitives
City Model 191 14,500 21,366
Dam 10 441 828
Grand Canyon 9 265,726 524,288
Profile 8 3,114 6,194
Table 2: Performance measurements in frames-per-second.
800×600 1280×960
City Model
5 layers 52.10 21.85
10 layers 41.69 17.66
40 layers 20.03 9.07
Geological Profile
5 layers 61.44 31.68
10 layers 60.59 24.64
40 layers 25.59 14.89
Dam
5 layers 61.06 24.84
10 layers 51.76 19.75
40 layers 22.42 9.11
Grand Canyon
5 layers 40.22 19.75
10 layers 27.96 12.66
40 layers 8.67 4.12
5 CONCLUSIONS
This paper presents a concept and an interactive rendering technique for creating procedural texture patterns for visualization in 2D and 3D geovirtual environments. It is based on an extensible layer concept that can be easily edited and rendered using consumer graphics hardware. We further analyzed shortcomings and visual artifacts of hatches applied in 3D geovirtual environments and presented three different approaches for counterbalancing these effects. Finally, a variety of application examples was presented, followed by a discussion of the performance and limitations of the rendering technique.
ACKNOWLEDGMENTS
This work was funded by the Federal Ministry of Education and Research (BMBF), Germany, within the InnoProfile Transfer research group "4DnDVis".
REFERENCES
Akenine-Möller, T., Haines, E., and Hoffman, N. (2008). Real-Time Rendering. A. K. Peters, Ltd., Natick, MA, USA, 3rd edition.
Amidror, I. (2009). The Theory of the Moiré Phenomenon: Periodic Layers. Springer.
Bertin, J. (1983). Semiology of graphics. University of Wis-
consin Press.
Glander, T. and Döllner, J. (2007). Cell-based generalization of 3D building groups with outlier management. In Proceedings of the 15th Annual ACM International Symposium on Advances in Geographic Information Systems, GIS '07, pages 54:1–54:4, New York, NY, USA. ACM.
Glanville, R. S. (2004). GPU Gems: Programming Techniques, Tips and Tricks for Real-Time Graphics, chapter 20 – Texture Bombing. Addison-Wesley Longman.
Goldberg, A., Zwicker, M., and Durand, F. (2008). Anisotropic noise. ACM Trans. Graph., 27(3):54:1–54:8.
Green, C. (2007). Improved alpha-tested magnification for vector textures and special effects. In ACM SIGGRAPH 2007 Courses, SIGGRAPH '07, pages 9–18, New York, NY, USA. ACM.
Jobst, M. and Döllner, J. (2008). 3D city model visualization with cartography-oriented design. In Schrenk, M., Popovich, V. V., Engelke, D., and Elisei, P., editors, 13th International Conference on Urban Planning, Regional Development and Information Society (REAL CORP), pages 507–516. CORP – Competence Center of Urban and Regional Planning.
Kennelly, P. J. and Kimerling, A. J. (2000). Desktop hachure maps from digital elevation models. Cartographic Perspectives, (37):78–81.
Kessenich, J., Baldwin, D., and Rost, R. (2010). The
OpenGL Shading Language (Version 4.10).
Lagae, A., Lefebvre, S., Cook, R., DeRose, T., Drettakis,
G., Ebert, D., Lewis, J., Perlin, K., and Zwicker, M.
(2010). State of the art in procedural noise functions.
In Hauser, H. and Reinhard, E., editors, EG 2010 -
State of the Art Reports. Eurographics, Eurographics
Association.
Lewis, J. P. (1989). Algorithms for solid noise synthesis,
volume 23. ACM Press, New York, New York, USA.
Patel, D., Giertsen, C., Thurmond, J., Gjelberg, J., and Gröller, M. E. (2008). The seismic analyzer: interpreting and illustrating 2D seismic data. IEEE Transactions on Visualization and Computer Graphics, 14(6):1571–8.
Patel, D., Giertsen, C., Thurmond, J., and Gröller, M. (2007). Illustrative rendering of seismic data. In Proceedings of Vision, Modeling, and Visualization, pages 13–22. Citeseer.
Peachey, D. R. (1985). Solid texturing of complex surfaces.
ACM SIGGRAPH Computer Graphics, 19(3):279–
286.
Perlin, K. (1985). An image synthesizer. ACM SIGGRAPH
Computer Graphics, 19(3):287–296.
Perlin, K. (2002). Improving noise. ACM Trans. Graph.,
21(3):681–682.
Rost, R. J. (2006). OpenGL Shading Language. Addison-Wesley Longman, 2nd edition.
Segal, M., Akeley, K., and Brown, P. (2010). The OpenGL
Graphics System: A Specification (Version 4.1).
Wloka, M. (2003). “Batch, Batch, Batch:” What Does It
Really Mean? In Game Developers Conference.