tioned consumer-grade GPU. For replication purposes,
the full code of the method is provided online (The
Authors, 2019).
Limitations. Since 3D rotation axes are computed from 2D silhouette skeletons, rotations are not, strictly speaking, invertible: rotating from a viewpoint v1 by an angle α around a 3D local axis a1 computed from the silhouette Ω1 leads to a viewpoint v2 in which, from the corresponding silhouette Ω2, a different axis a2 ≠ a1 can be computed. This is, however, a problem only if the user releases the pointer (mouse) button to end the rotation; as long as the button is held, no new axis a2 is computed, so moving the pointer back reverses the first rotation. Another limitation concerns the measured effectiveness of our rotation mechanism. While our tests show that one can easily rotate a scene around its parts, it is still unclear which specific tasks are best supported by this rotation, and by how much, compared to other rotation mechanisms such as the trackball. We plan to measure these aspects next through controlled user experiments in which participants complete a specific task with the aid of rotation, so that we can quantitatively compare the effectiveness of our mechanism against established ones such as the trackball.
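The press-and-hold behavior described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `DragRotation`, `compute_axis`, and the use of Rodrigues' rotation formula are all assumptions made for the example.

```python
import numpy as np

def rodrigues(p, axis, angle):
    """Rotate point p about a unit axis through the origin (Rodrigues' formula)."""
    a = np.asarray(axis, dtype=float)
    a = a / np.linalg.norm(a)
    return (p * np.cos(angle)
            + np.cross(a, p) * np.sin(angle)
            + a * np.dot(a, p) * (1.0 - np.cos(angle)))

class DragRotation:
    """Keep one rotation axis for the whole click-and-drag: the axis is
    computed at button press and reused until release, so dragging back
    by -angle reverses the rotation exactly."""

    def __init__(self, compute_axis):
        # compute_axis: silhouette -> 3D axis (e.g. a skeleton-based estimator).
        self.compute_axis = compute_axis
        self.axis = None

    def press(self, silhouette):
        # Computed once per drag; we deliberately do not recompute
        # a new axis while the button is held.
        self.axis = self.compute_axis(silhouette)

    def drag(self, viewpoint, angle):
        # Same axis for the whole drag, so angle -> viewpoint is invertible.
        return rodrigues(viewpoint, self.axis, angle)

    def release(self):
        self.axis = None  # the next press may yield a different axis a2
```

Within one drag, rotating by α and then by −α returns the original viewpoint; only after release does a new press compute a (possibly different) axis.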
6 CONCLUSION
We proposed a novel method for specifying interac-
tive rotations of 3D scenes around local axes using
image skeletons. We compute local 3D rotation axes
out of the 2D image silhouette of the rendered scene,
using heuristics that combine the silhouette’s image
skeleton and depth information from the rendering’s
Z buffer. Specifying such local rotation axes is simple and intuitive, requiring a single click-and-drag gesture, as the axes are automatically computed from the closest scene fragments rendered from the current viewpoint. Our method is simple to implement, using readily available distance and feature transforms provided by modern 2D skeletonization algorithms; can handle 3D scenes consisting of arbitrarily complex polygon meshes (not necessarily watertight, connected, or of good quality) as well as 3D point clouds; can be integrated in any 3D viewing system that allows access to the rendered Z buffer; and works at interactive frame rates even for scenes of hundreds of thousands of primitives. We demonstrated our method on several polygonal and point-cloud 3D scenes of varying complexity.
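As one illustration of the axis-computation step, the following sketch lifts a clicked 2D skeleton location to a 3D axis using the Z buffer. All names here (`local_axis_3d`, `unproject`) are hypothetical, and the actual method uses richer heuristics (feature transforms, depth aggregation along the skeleton) than this two-point tangent estimate.

```python
import numpy as np

def local_axis_3d(click, skel_pixels, zbuffer, unproject):
    """Estimate a local 3D rotation axis near a clicked pixel.

    click       : (x, y) pixel position of the pointer.
    skel_pixels : (N, 2) integer array of 2D silhouette-skeleton pixels.
    zbuffer     : depth image, indexed as zbuffer[y, x].
    unproject   : maps (x, y, depth) to a 3D point in eye space.
    Returns (anchor, axis): a point on the axis and a unit direction.
    """
    # 1. Find the skeleton pixel closest to the click position.
    d = np.linalg.norm(skel_pixels - np.asarray(click, dtype=float), axis=1)
    order = np.argsort(d)
    p0 = skel_pixels[order[0]]
    # 2. Use the next-closest skeleton pixel to estimate the local tangent.
    p1 = skel_pixels[order[1]]
    # 3. Lift both pixels to 3D with the rendered depth (Z buffer).
    q0 = unproject(p0[0], p0[1], zbuffer[p0[1], p0[0]])
    q1 = unproject(p1[0], p1[1], zbuffer[p1[1], p1[0]])
    axis = q1 - q0
    return q0, axis / np.linalg.norm(axis)
```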
Several extension directions are possible. Additional cues, such as shading and depth gradients, can be used to infer more accurate 3D curve skeletons from image data. Separately, we plan to conduct a detailed user study to measure the effectiveness and efficiency of the proposed skeleton-based 3D rotation for specific exploration tasks on spatial datasets such as 3D meshes, point clouds, and volume-rendered data.
REFERENCES
Bade, R., Ritter, F., and Preim, B. (2005). Usability com-
parison of mouse-based interaction techniques for pre-
dictable 3D rotation. In Proc. Smart Graphics (SG),
pages 138–150.
Bian, S., Zheng, A., Chaudhry, E., You, L., and Zhang, J. J.
(2018). Automatic generation of dynamic skin defor-
mation for animated characters. Symmetry, 10(4):89.
Cao, T.-T., Tang, K., Mohamed, A., and Tan, T.-S. (2010).
Parallel banding algorithm to compute exact distance
transform with the GPU. In Proc. ACM SIGGRAPH
Symp. on Interactive 3D Graphics and Games, pages
83–90.
Chaouch, M. and Verroust-Blondet, A. (2009). Alignment
of 3D models. Graphical Models, 71(2):63–76.
Dubinski, J. (2001). When galaxies collide. Astronomy Now,
15(8):56–58.
Duffin, K. L. and Barrett, W. A. (1994). Spiders: A
new user interface for rotation and visualization of
N-dimensional point sets. In Proc. IEEE Visualization,
pages 205–211.
Emory, M. and Iaccarino, G. (2014). Visualizing turbulence
anisotropy in the spatial domain with componental-
ity contours. Center for Turbulence Research Annual
Research Briefs, pages 123–138.
Ersoy, O., Hurter, C., Paulovich, F., Cantareiro, G., and
Telea, A. (2011). Skeleton-based edge bundling for
graph visualization. IEEE TVCG, 17(12):2364–2373.
Guo, J., Wang, Y., Du, P., and Yu, L. (2017). A novel multi-
touch approach for 3D object free manipulation. In
Proc. AniNex, pages 159–172. Springer.
Hesselink, W. H. and Roerdink, J. B. T. M. (2008). Eu-
clidean skeletons of digital image and volume data in
linear time by the integer medial axis transform. IEEE
TPAMI, 30(12):2204–2217.
Dubinski, J. et al. (2006). GRAVITAS: Portraits of a universe in motion. https://www.cita.utoronto.ca/~dubinski/galaxydynamics/gravitas.html.
Jackson, B., Lau, T. Y., Schroeder, D., Toussaint, K. C., and
Keefe, D. F. (2013). A lightweight tangible 3D inter-
face for interactive visualization of thin fiber structures.
IEEE TVCG, 19(12):2802–2809.
Kaye, D. and Ivrissimtzis, I. (2015). Mesh alignment using
grid based PCA. In Proc. CGTA, pages 174–181.
Kustra, J., Jalba, A., and Telea, A. (2013). Probabilistic
view-based curve skeleton computation on the GPU.
In Proc. VISAPP. SCITEPRESS.
Interactive Axis-based 3D Rotation Specification using Image Skeletons