
7 CONCLUSION
We have presented a new measure for selecting the
best viewpoint of a 3D object. Its main advantage is
that it accounts for both aspects, saliency and visibil-
ity: being view-dependent yields a more realistic
saliency score. We then conducted an original and ex-
tensive evaluation to better assess the interest of the
proposal. Compared with reference approaches, our
method selects the viewpoints that are most similar
to those chosen by users. Our visual analysis also
highlights that, when our approach differs from the
user study, it still proposes an interesting view (possi-
bly free from human biases).
In future work, we plan to handle textured models,
in order to find the most relevant views for a textured
object and to close the gap with viewpoints defined
by images.
APPENDIX
Computing Visible Faces and Vertices. To know
which vertices are visible, we first determine which
faces are visible. Given a point of view pov, we
determine which faces are facing the camera using
back-face culling. More precisely, a face F is
oriented towards the camera if the cosine of the angle
α_F between its outgoing normal n_F and the camera
vector pov − c_F, with c_F the center of F, is greater
than an epsilon ε = 10^−5. Some of these faces can
still be occluded. To filter them out, we use the depth
information contained in the depth maps available for
each viewpoint. A face is considered visible if the
depth associated with the 2D projection of its center
(we take the barycenter) is the same as that contained
in the depth map. Often, the 2D coordinates of the
centers are not integers. In the following we will
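The two-stage visibility test described in this appendix (back-face culling by the sign of the cosine, then an occlusion check against the depth map) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `project` callback, the nearest-pixel depth lookup, and the tolerance `tol` are assumptions, and consistent counter-clockwise face winding is assumed so that the cross product gives the outgoing normal.

```python
import numpy as np

def visible_faces(vertices, faces, pov, depth_map, project, eps=1e-5, tol=1e-4):
    """Return indices of faces visible from viewpoint `pov`.

    `project` is an assumed callback mapping a 3D point to (u, v, depth)
    for this viewpoint; `depth_map` is the precomputed depth image.
    """
    visible = []
    for i, f in enumerate(faces):
        tri = vertices[f]                       # 3x3 array: the face's vertices
        c = tri.mean(axis=0)                    # barycenter of the face
        n = np.cross(tri[1] - tri[0], tri[2] - tri[0])
        n /= np.linalg.norm(n)                  # outgoing unit normal (CCW winding assumed)
        view = pov - c
        view /= np.linalg.norm(view)            # unit camera vector pov - c_F
        if np.dot(n, view) <= eps:              # back-face culling: cos(alpha_F) > eps
            continue
        u, v, d = project(c)                    # 2D projection and depth of the center
        # Occlusion test: compare with the stored depth. The nearest pixel is
        # used here for simplicity; the paper notes (u, v) is generally not
        # integer, so an interpolated lookup would be more faithful.
        if abs(depth_map[int(round(v)), int(round(u))] - d) < tol:
            visible.append(i)
    return visible
```

A face culled in the first stage is never tested against the depth map, so the (cheap) dot product filters roughly half the faces before any image lookup.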
VISAPP 2024 - 19th International Conference on Computer Vision Theory and Applications