Table 1: Minimum, maximum, mean and standard deviation of the average f-scores (Chen and Lin, 2006), computed as follows: first, the f-score of each feature is computed for each mesh model (except sphere and torus). Then, the mean of these f-scores is computed over the non-noisy and over the noisy mesh models. Finally, the statistics (min, max, mean and standard deviation) are computed on similar subsets of features (i.e. the estimator θ̂^r_e of the angle between tangent planes, the curvatures, and the first four features in F_e).

Features in F_e   Models      Min     Max        Mean      sdev
θ̂^r_e             non-noisy   9.196   1249.950   277.934   523.248
θ̂^r_e             noisy       2.618   3.202      2.910     0.208
curv.             non-noisy   0.143   0.699      0.360     0.152
curv.             noisy       0.056   0.456      0.238     0.127
4 first           non-noisy   0.533   1428.810   357.705   714.068
4 first           noisy       0.405   2.569      0.969     1.068
6.3 Experimental Settings
Tangent Plane Estimation. A triangle is accepted
into a region during the region growing process if the
angle between its normal and the currently estimated
normal of the tangent plane is less than 23 degrees.
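This acceptance test can be sketched as follows (illustrative Python, not the authors' implementation; the 23-degree threshold is the value used in the experiments):

import numpy as np

MAX_ANGLE_DEG = 23.0  # acceptance threshold used during region growing

def accept_triangle(triangle_normal, plane_normal, max_angle_deg=MAX_ANGLE_DEG):
    """Accept a triangle into the growing region if the angle between its
    normal and the current estimate of the tangent plane normal is below
    the threshold."""
    n1 = triangle_normal / np.linalg.norm(triangle_normal)
    n2 = plane_normal / np.linalg.norm(plane_normal)
    cos_angle = np.clip(np.dot(n1, n2), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle)) < max_angle_deg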
Feature Vector. The estimator θ̂^r_e of the angle between tangent planes has a quite high informative power, as can be seen in Table 1. According to the f-scores (Chen and Lin, 2006), it is much more informative than the curvature measures and, for noisy meshes, than the first four features of the feature vector. However, for non-noisy mesh models, the first four features are slightly more informative. By increasing the size of the support, the angle between tangent plane normals has become more robust to the presence of noise, at the cost of being less sensitive to small features in non-noisy data.
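The f-scores reported in Table 1 follow the definition of Chen and Lin (2006); for reference, the score of a single feature can be computed with the sketch below (a generic Python re-implementation, not the code used for these experiments):

import numpy as np

def f_score(feature, labels):
    """F-score of one feature (Chen and Lin, 2006).

    feature : 1-D array of feature values, one per edge.
    labels  : 1-D array of 0/1 labels (1 = feature edge).
    """
    pos, neg = feature[labels == 1], feature[labels == 0]
    num = (pos.mean() - feature.mean()) ** 2 + (neg.mean() - feature.mean()) ** 2
    den = pos.var(ddof=1) + neg.var(ddof=1)  # unbiased within-class variances
    return num / den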
Learning Feature Edge. For learning with SVMs, the LIBSVM library (Chang and Lin, 2001) has been used with an RBF kernel. The best model hyper-parameters have been selected using grid search with cross-validation, maximizing the AUC, which is able to cope with unbalanced training datasets.
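An equivalent setup can be sketched with scikit-learn's SVC, which wraps LIBSVM (illustrative code; the grid of C and gamma values below is an assumption, not the grid reported here):

from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# RBF-kernel SVM; C and gamma selected by cross-validated grid search
# maximizing the AUC, which copes with unbalanced classes.
param_grid = {"C": [2.0 ** k for k in range(-5, 16, 2)],
              "gamma": [2.0 ** k for k in range(-15, 4, 2)]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, scoring="roc_auc", cv=5)
# search.fit(X_train, y_train)   # X_train: edge feature vectors, y_train: 0/1 labels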
All duplicated entries in the training set have been removed, and a subsampling keeping at most 5000 training samples per mesh model has been applied. The subsampling preserves the class distributions. To compensate for the unbalanced training data, the error weighting factor associated with the feature edge class is set 9 times greater than the one used for the normal edge class.
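The subsampling and weighting steps can be sketched as follows (illustrative Python; X and y stand for one mesh model's feature vectors and 0/1 edge labels):

import numpy as np

def subsample_per_mesh(X, y, max_samples=5000, rng=None):
    """Remove duplicated rows, then subsample to at most `max_samples`
    training samples while keeping the class proportions unchanged."""
    rng = np.random.default_rng() if rng is None else rng
    X, idx = np.unique(X, axis=0, return_index=True)  # drop duplicates
    y = y[idx]
    if len(y) <= max_samples:
        return X, y
    keep = []
    for cls in (0, 1):  # stratified draw per class
        cls_idx = np.flatnonzero(y == cls)
        n_keep = int(round(max_samples * len(cls_idx) / len(y)))
        keep.append(rng.choice(cls_idx, size=n_keep, replace=False))
    keep = np.concatenate(keep)
    return X[keep], y[keep]

# The remaining unbalance is compensated at training time, e.g.
# SVC(kernel="rbf", class_weight={1: 9.0})  # feature-edge errors weighted 9 times more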
Table 2: Statistics for the mesh models used for edge classification (a: non-noisy models; b: models with Gaussian noise, GN): number of edges (#t.e.), number of feature edges (#f.e.), and Area Under Curve (%) for four methods: thresholding (thres.), hysteresis thresholding (Hys.), globally consistent edge detection with the data term based on the dihedral angle (dih+G), and with the data term based on the SVM (SVM+G).

Mesh         #t.e.   #f.e.   thres.   Hys.    dih+G   SVM+G
1232 joint   9024    660     100.0    100.0   100.0   100.0
cone         14850   50      100.0    100.0   99.8    100.0
cup          17010   381     99.7     99.7    99.7    98.2
cut cone     864     144     100.0    100.0   100.0   100.0
cylinder     3540    40      100.0    100.0   100.0   100.0
fandisk      19479   743     99.9     99.9    100.0   97.3
screw        3723    216     100.0    100.0   100.0   100.0
sphere       14700   0       -        -       -       -
torus        7500    0       -        -       -       -
total        90690   2234    99.9     99.9    99.9    99.4
a) Non-noisy mesh models.

Mesh+GN      #t.e.   #f.e.   thres.   Hys.    dih+G   SVM+G
1232 joint   9024    660     93.1     92.5    90.3    96.9
cone         14850   50      93.0     92.8    94.1    93.9
cup          17010   381     98.2     98.8    97.3    95.0
cut cone     864     144     100.0    100.0   99.3    100.0
cylinder     3540    40      99.8     99.8    99.8    100.0
fandisk      19479   743     99.4     99.6    99.1    98.3
screw        3723    216     97.8     97.9    100.0   98.6
sphere       14700   0       -        -       -       -
torus        7500    0       -        -       -       -
total        90690   2234    97.3     97.3    97.1    97.5
b) Noisy mesh models.

Globally Consistent Feature Edge Detection. To evaluate separately the benefits of the global minimization of equation 1 (the Potts model) and those of the learned data term, a data term depending only on the edge dihedral angle is proposed:
\[
E_d(w_e, \theta_e) =
\begin{cases}
(2\theta_{\mathrm{true}} - |\theta_e|)^2 & \text{if } w_e = 1,\\
\theta_e^2 & \text{otherwise.}
\end{cases}
\tag{5}
\]
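For illustration, equation 5 translates directly into the following Python sketch (a transcription of the formula, not the authors' implementation):

def dihedral_data_term(w_e, theta_e, theta_true):
    """Data term of equation 5, depending only on the edge dihedral angle.

    w_e        : edge label (1 = feature edge, 0 = normal edge)
    theta_e    : dihedral angle at edge e (radians)
    theta_true : angle above which an edge is considered a feature edge
    """
    if w_e == 1:
        return (2.0 * theta_true - abs(theta_e)) ** 2
    return theta_e ** 2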
E_d(w_e, θ_e) is an even, positive and smooth function of θ_e. Note that E_d(0, θ_e) = E_d(1, θ_e) when |θ_e| = θ_true (with θ_true ∈ [0, π[). The parameters of the pairwise term have been set by grid search as a good trade-off across all models. For the SVM-based data term (cf. equation 2), we experimentally set µ to 0.1 (cf. equation 1), β to 0 (cf. equation 3), λ to 15 and σ to 10 (cf. equation 4). For the dihedral-angle-based data term (cf. equation 5), we experimentally set µ to 2 (cf. equation 1), β to 10^-3 (cf. equation 3), λ to 15 and σ to 10 (cf. equation 4). To generate the ROC curves of equation 1 with the SVM-based data term, the bias/constant term of the prediction model varies in [−2, 1] (200 samples in total). To generate the ROC curves of equation 1 with the dihedral-angle-based data term (cf. equation 5), θ_true varies in [0, π] (200 samples in total).
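A minimal sketch of such a sweep is given below (illustrative Python; run_global_minimization is a hypothetical placeholder standing for the minimization of equation 1 at a fixed bias value):

import numpy as np

def roc_points(run_global_minimization, true_labels):
    """Sweep the bias of the SVM-based data term over [-2, 1] (200 samples)
    and collect one (FPR, TPR) point per bias value."""
    true_labels = np.asarray(true_labels)
    points = []
    for bias in np.linspace(-2.0, 1.0, 200):
        pred = np.asarray(run_global_minimization(bias))  # 0/1 edge labels
        tp = np.sum((pred == 1) & (true_labels == 1))
        fp = np.sum((pred == 1) & (true_labels == 0))
        fn = np.sum((pred == 0) & (true_labels == 1))
        tn = np.sum((pred == 0) & (true_labels == 0))
        tpr = tp / (tp + fn) if (tp + fn) else 0.0
        fpr = fp / (fp + tn) if (fp + tn) else 0.0
        points.append((fpr, tpr))
    return points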