The algorithm defined above is related to the methods of (Liu et al., 2010) and (Sharan et al., 2013), but contains several key differences. First, the selection procedure is performed for each material separately, rather than for all materials combined. Second, the features are combined here using late fusion (i.e. by averaging the probability distributions), whereas (Liu et al., 2010) and (Sharan et al., 2013) combine features by concatenating the feature distributions and retraining on the whole training set with the concatenated vector. With late fusion, new feature combinations do not need to be retrained, which greatly reduces the amount of training effort (Snoek et al., 2005). Lastly, a discriminative classification method is used, as is also done in (Sharan et al., 2013).
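For concreteness, the following is a minimal sketch of the late-fusion step, assuming each local feature has its own probabilistic classifier whose class-probability output is already available (all names are illustrative, not the paper's implementation):

```python
import numpy as np

def late_fusion(prob_dists):
    """Combine per-feature class-probability distributions by
    averaging (late fusion). Each element of prob_dists is an
    (n_samples, n_classes) array, e.g. the predict_proba output
    of one feature's classifier."""
    return np.mean(prob_dists, axis=0)
```

Because fusion happens on the classifier outputs, adding or removing a feature only changes which distributions are averaged; no classifier has to be retrained, which is the training-effort advantage noted above.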
5.2 Classifying a Test Image
Classification based on class-dependent feature subsets is considerably different from classification based on a single subset. For a single subset, classification can be done by placing the training objects in the respective feature space and making predictions based on inferred decision boundaries. For class-dependent feature subsets, this is not directly applicable.
More conceptually, the difference can be viewed in the context of material recognition as follows. Classification based on a single subset can be interpreted as discriminating materials based on a set of shared properties. Class-dependent feature subsets, however, perform classification from the other end of the spectrum. Classification is done by modeling test images as if they were a specific material, after which the quality of the modeling process is determined. In this context, quality is understood as the probability of an unknown image being a specific material, if it is modeled as such.
In other words, an unknown test image is placed in the feature space of the feature subset of each possible material category. Because of the use of Decision Forests, the quality of the test image can be stated for each feature space $S_m$ by the probability $P_m$ of being material $m$. For $M$ material categories, this results in $M$ probability outputs, $P_1, P_2, \ldots, P_M$. Although it is possible to make a prediction based on these outputs by choosing the material category that yields the highest probability, the result can be biased, since the probabilities originate from different feature spaces. To compensate for this, a weight is added to each probability value, based on the heuristic weighting method of (Wang et al., 2008).
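As a sketch, the per-material probabilities could be gathered as follows, assuming one probabilistic forest per material, each trained on that material's own feature subset; the forest objects and the scikit-learn-style predict_proba call are assumptions for illustration:

```python
import numpy as np

def material_probabilities(forests, test_reprs):
    """Place the test image in each material's feature space S_m and
    read off P_m, the probability of being material m under model m.

    forests[m]    -- classifier trained on material m's feature subset
    test_reprs[m] -- the test image represented in feature space S_m
    """
    return np.array([
        forests[m].predict_proba(test_reprs[m].reshape(1, -1))[0, m]
        for m in range(len(forests))
    ])
```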
The weight for each material $m$ is determined as the probability of the test image being material $m$ in the union set of the feature subsets. More formally, given the feature subsets of the material categories $X_1, X_2, \ldots, X_M$, the union set is defined as $\bigcup_{m=1}^{M} X_m$. The weight for material $m$, denoted here as $W_m$, is then stated as the probability of the test image being material $m$ in this union set. Given the probabilities and weights, the material category for a test image is stated as the maximum weighted category probability, i.e.:

$$m^{*} = \operatorname*{argmax}_{m} P_m W_m . \qquad (5)$$
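A direct transcription of Eq. (5) is then a weighted argmax; here the union-set weights $W_m$ are assumed to come from one additional forest trained on the union of all subsets (a sketch, not the paper's exact implementation):

```python
import numpy as np

def predict_material(P, W):
    """Eq. (5): m* = argmax_m P_m * W_m.

    P -- per-subset probabilities P_1..P_M (one per material model)
    W -- weights W_1..W_M, i.e. the class probabilities of the test
         image in the union set of all feature subsets
    """
    return int(np.argmax(P * W))
```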
6 EXPERIMENTATION
Similar to (Liu et al., 2010), (Hu et al., 2011), and (Sharan et al., 2013), the experimental evaluation focuses on the Flickr Materials Database, where the 100 images of each material category are divided into 50 images for training and 50 images for testing. The experimental results are presented in three parts. First, the choice of Decision Forests in this method is justified by showing the effectiveness of the Decision Forest over the Latent Dirichlet Allocation approach of (Liu et al., 2010) for material classification. Second, the effects of adding spatial information to the four uniformly sampled local features are shown. Third, the results of the method as a whole are shown.
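A minimal sketch of the assumed split protocol (FMD provides 100 images per category; the randomization details are not specified here, so a seeded permutation is used purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)          # fixed seed, illustrative only
perm = rng.permutation(100)             # indices of one category's images
train_idx, test_idx = perm[:50], perm[50:]  # 50 training / 50 test images
```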
6.1 Decision Forests and αLDA
In order to experimentally verify the effect of the method and the spatial feature enhancement, the Latent Dirichlet Allocation (LDA) approach for material recognition of (Liu et al., 2010) could have been used as well, since LDA also yields probabilistic results. The main reason to prefer Decision Forests over LDA is the ability of Decision Forests to yield higher recognition rates, as indicated in Table 2.
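As an illustration of how the per-feature rates in Table 2 would be obtained, the sketch below trains one forest per feature and reports its isolated recognition rate; scikit-learn's RandomForestClassifier stands in for the paper's Decision Forests, and the data arrays are placeholders:

```python
from sklearn.ensemble import RandomForestClassifier

def per_feature_accuracy(X_train, y_train, X_test, y_test):
    """Train a forest on a single feature's descriptors and return its
    recognition rate in isolation, as reported per row of Table 2."""
    forest = RandomForestClassifier(n_estimators=100, random_state=0)
    forest.fit(X_train, y_train)
    return forest.score(X_test, y_test)   # mean accuracy on the test set
```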
Table 2: The performance of the local features in isolation and the performance of the single subset.

Feature         (Liu et al., 2010)   Decision Forest
SIFT                 35.2%                44.2%
Jet                  29.6%                37.8%
HOG                   —                   37.6%
Micro-Jet            21.2%                37.4%
Colour               32.0%                37.0%
Micro-SIFT           28.2%                35.0%
Edge-Ribbon          30.0%                33.6%
Edge-Slice           33.0%                33.2%
Curvature            26.4%                30.2%
Single subset        44.6%                52.6%
The results in Table 2 show that both the performance of the individual features and the performance using a single feature subset are improved when using Decision Forests.