double-precision matrix and vector APIs) cannot take
as much advantage of SIMD instruction sets (Ericson,
2005), which are generally sized to operate on vectors
with 32-bit wide components. Few physics engines
support double-precision floating point, and game
engines such as Unity, Unreal Engine 4, Cry Engine 3,
and Godot do not natively support it.
An alternative way to increase numerical precision is
the origin-shifting approach. This approach estab-
lishes a center point at run time (e.g., the observer
position) and either 1) makes the physics simulation
treat the center point as the Cartesian origin, or 2)
shifts the objects toward the Cartesian origin, using
the center point as a reference, while preserving the
relative layout of the objects' positions. This approach
efficiently and precisely guarantees the physical sim-
ulation for scenarios of any size. However, physics
objects must remain close to the arbitrary center
point; otherwise, inaccuracy errors remain. For
sparsely distributed simulations, e.g., multiplayer
games, origin shifting does not work.
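As a minimal sketch (in Python, with illustrative names; not the implementation of any particular engine), origin shifting amounts to re-basing world positions onto the center point before the physics step and restoring them afterwards; the `math.ulp` check illustrates why small coordinates are more precise:

```python
import math

def shift_to_origin(positions, center):
    # Re-express world positions relative to the center point (e.g., the
    # observer), so the physics step sees small coordinates near the origin.
    cx, cy = center
    return [(x - cx, y - cy) for x, y in positions]

def shift_back(positions, center):
    # Restore world-space coordinates after the physics step.
    cx, cy = center
    return [(x + cx, y + cy) for x, y in positions]

# Absolute floating-point precision degrades with distance from the origin:
# the gap between adjacent representable doubles grows with magnitude.
assert math.ulp(10_000_000.0) > 100_000 * math.ulp(10.0)
```

Shifting keeps the objects' relative layout, which is why simulating near the origin and shifting back introduces no layout inconsistency as long as all objects stay close to the chosen center.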
We present a solution compatible with real-time
simulations that extends state-of-the-art physics en-
gines, ensuring physics simulations have enough pre-
cision regardless of the world scale and the underlying
numeric representation. Our solution is implemented
as a layer between high-level applications and physics
engines. This layer splits the world into sectors and
adds redundancy to each sector, ensuring each part
has all the information required to be simulated in-
dependently, without inconsistencies, close to the
Cartesian origin. Furthermore, our solution benefits
from the fact that large-scale world applications must
already implement world streaming due to memory
and processing limitations, amortizing the cost of
space partitioning. The main contribution of our
work is a novel solution that provides arbitrary pre-
cision in sparse simulations, enabling floating-point
engines to increase precision and/or scale.
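A minimal sketch of the sector idea (illustrative Python; `SECTOR_SIZE` and the margin are assumed parameters, not values from this work): global positions map to a sector index plus a small local offset, and objects near a boundary get redundant copies in the neighboring sectors so that each sector can be simulated independently near the origin:

```python
import math

SECTOR_SIZE = 1000.0  # assumed sector edge length

def to_sector_space(pos):
    # Map a global position to (sector index, local position near origin).
    sector = tuple(math.floor(c / SECTOR_SIZE) for c in pos)
    local = tuple(c - s * SECTOR_SIZE for c, s in zip(pos, sector))
    return sector, local

def to_global_space(sector, local):
    # Inverse mapping back to world space.
    return tuple(s * SECTOR_SIZE + c for s, c in zip(sector, local))

def sectors_covering(pos, margin):
    # Home sector plus every axis-neighbor whose boundary lies within
    # `margin`: those neighbors hold the redundant copy of the object.
    home, local = to_sector_space(pos)
    covering = {home}
    for axis, c in enumerate(local):
        for delta, edge in ((-1, 0.0), (1, SECTOR_SIZE)):
            if abs(c - edge) < margin:
                neighbor = list(home)
                neighbor[axis] += delta
                covering.add(tuple(neighbor))
    return covering
```

This sketch only replicates across face-adjacent sectors; a complete scheme would also cover diagonal neighbors for objects near sector corners.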
This paper is organized as follows: Section 2 pro-
vides background and related work on the subject.
Section 3 discusses all the steps of the proposed so-
lution in depth. Section 4 discusses the effects of
the choice of sector size. Results are evaluated and
discussed in Section 5. Finally, Section 6 presents
conclusions and directions for future work.
2 RELATED WORK
Many works have explored the distribution and paral-
lelization of simulations to provide simulation at
an individual level, i.e., without relying on group-
generalized properties. Each agent must have the
information required for its correct simulation; this
requirement is termed awareness. Different tech-
niques focus on partitioning the agents into groups
while keeping the overlap between the regions of
interest of objects in different partitions low; this
problem is termed the communication level. Besides
this, the distribution of simulations introduces other
problems, such as balancing the simulation load
across the available processing nodes and operating
efficiently, since the partitioning algorithms them-
selves might not scale well.
(Lozano et al., 2007) explores a complete frame-
work for scalable large-scale crowd simulations.
Their solution successfully provides scalability while
also providing awareness and time-space consistency.
This is achieved by using rectangular grids and divid-
ing the agents according to their positions in the grid.
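In the spirit of that grid-based scheme (a hypothetical sketch, not Lozano et al.'s code), agents can be bucketed by the rectangular cell their position falls into:

```python
from collections import defaultdict

def partition_by_grid(agents, cell_size):
    # agents: {agent_id: (x, y)}; returns {grid cell index: [agent ids]},
    # so each cell's agents can be assigned to one processing node.
    grid = defaultdict(list)
    for agent_id, (x, y) in agents.items():
        cell = (int(x // cell_size), int(y // cell_size))
        grid[cell].append(agent_id)
    return dict(grid)
```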
(Vigueras et al., 2010) improves the efficiency of pre-
vious methods by employing irregularly shaped re-
gions (convex hulls). Their results show that irregular
shapes outperform regular ones, regardless of the
crowd simulation scenario.
(Wang et al., 2009) proposes a technique for parti-
tioning crowds into clusters, minimizing communica-
tion overhead. Their work achieves great scalability
through the application of an adapted K-means clus-
tering algorithm to efficiently partition agent-based
crowd simulations.
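Wang et al.'s adaptation is not detailed here, but plain K-means over agent positions (an illustrative Python sketch with deterministic initialization, not their algorithm) already conveys the idea of clustering nearby agents so that cross-partition communication is minimized:

```python
def kmeans(points, k, iters=10):
    # Deterministic initialization for illustration: the first k points.
    centers = list(points[:k])
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each agent to its nearest cluster center.
            nearest = min(
                range(k),
                key=lambda i: (p[0] - centers[i][0]) ** 2
                              + (p[1] - centers[i][1]) ** 2,
            )
            clusters[nearest].append(p)
        # Move each center to the mean of its assigned agents.
        centers = [
            (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers, clusters
```

Compared to a fixed grid, position-based clustering adapts the partitions to where the agents actually are, which helps when the crowd is distributed non-uniformly.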
Newer techniques explore further performance
improvements, minimizing communication costs,
equalizing load balance, or both, while doing so
efficiently. For example, (Petkova et al., 2016)
uses community structures to group related agents
on the same processors, reducing communication
overhead. (Wang et al., 2018) explores parallelizing
the simulation per agent using the Power Law on the
CUDA architecture to ensure behavior synchroniza-
tion across threads. Their solution explores minute
model details while improving efficiency.
(Brown et al., 2019) proposes a partitioning solu-
tion for physics servers with horizontal scalability.
In their work, the scenario is partitioned into re-
gions using Distributed Virtual Environments mod-
eling, and each partition is assigned to a server for
physics simulation. Objects interacting across par-
tition boundaries are handled by projecting them into
overlapping regions to guarantee a seamless simula-
tion.
These solutions propose and improve the simula-
tion of a massive number of elements through par-
titioning and parallelization techniques while solv-
ing problems such as communication between agents
in different partitions. (Brown et al., 2019) applies
said methods, showing that physics simulations per-
GRAPP 2021 - 16th International Conference on Computer Graphics Theory and Applications