high. On the other hand, our experiments showed that it is computationally feasible on today's computer architectures for up to 20 features. In every case, the framework was faster than manual optimization by an expert. A fully exhaustive search over all feature-selection combinations is clearly infeasible; given the framework's huge parameter search space, optimization strategies such as evolutionary or random search therefore have great potential in this approach. In future work, the framework will be applied to a larger variety of classification tasks to demonstrate its generality. Furthermore, the best classifiers can be combined into an ensemble classifier (Jain et al., 2000) with potentially greater predictive performance. Additionally, the framework can easily be extended with new state-of-the-art classifiers, feature transforms, and selection algorithms as "plugins".
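To make the combinatorial argument concrete: with n features there are 2^n − 1 non-empty feature subsets, so n = 20 already yields over a million candidates before any classifier parameters are considered. The sketch below is illustrative Python only; the function names and the uniform mask-sampling scheme are our assumptions for exposition, not the optimizer actually used in the framework.

```python
import random


def count_subsets(n_features):
    # Number of non-empty feature subsets of an n-feature set: 2^n - 1.
    # This exponential growth is why exhaustive feature selection
    # quickly becomes infeasible.
    return 2 ** n_features - 1


def random_subset_search(n_features, n_trials, evaluate, seed=0):
    # Hypothetical random optimization strategy: sample binary feature
    # masks uniformly and keep the best-scoring subset. `evaluate` maps
    # a mask to a quality score (e.g. cross-validated accuracy).
    rng = random.Random(seed)
    best_mask, best_score = None, float("-inf")
    for _ in range(n_trials):
        mask = [rng.random() < 0.5 for _ in range(n_features)]
        if not any(mask):
            continue  # skip the empty subset
        score = evaluate(mask)
        if score > best_score:
            best_mask, best_score = mask, score
    return best_mask, best_score
```

With a fixed evaluation budget, such a random strategy explores the subset space at constant cost per trial, whereas exhaustive enumeration scales as 2^n (cf. Bergstra and Bengio, 2012).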
REFERENCES
Bergstra, J. and Bengio, Y. (2012). Random search for
hyper-parameter optimization. J. Mach. Learn. Res.,
13(1):281–305.
Beyer, K., Goldstein, J., Ramakrishnan, R., and Shaft, U.
(1999). When is "nearest neighbor" meaningful? In
Beeri, C. and Buneman, P., editors, Database Theory
ICDT'99, volume 1540 of Lecture Notes in Computer
Science, pages 217–235. Springer Berlin Heidelberg.
Buck, C., Bürger, F., Herwig, J., and Thurau, M. (2013).
Rapid inclusion and defect detection system for large
steel volumes. ISIJ International, 53(11). Accepted.
Bürger, F., Herwig, J., Thurau, M., Buck, C., Luther, W.,
and Pauli, J. (2013). An auto-adaptive measurement
system for statistical modeling of non-metallic inclu-
sions through image-based analysis of milled steel
surfaces. In Bosse, H. and Schmitt, R., editors,
ISMTII 2013, 11th International Symposium on Mea-
surement Technology and Intelligent Instruments. Ap-
primus Wissenschaftsverlag.
Chang, C.-C. and Lin, C.-J. (2011). LIBSVM: A library
for support vector machines. ACM Transactions on
Intelligent Systems and Technology, 2:27:1–27:27.
Doshi, N. and Schaefer, G. (2012). A comparative analysis
of local binary pattern texture classification. In Visual
Communications and Image Processing (VCIP), 2012
IEEE, pages 1–6.
Falconer, K. (2003). Fractal geometry: mathematical foun-
dations and applications. Wiley, 2nd edition.
Herwig, J., Buck, C., Thurau, M., Pauli, J., and Luther,
W. (2012). Real-time characterization of non-metallic
inclusions by optical scanning and milling of steel
samples. In Proc. of SPIE, volume 8430, page
843010.
Huang, C.-L. and Wang, C.-J. (2006). A GA-based fea-
ture selection and parameters optimization for support
vector machines. Expert Systems with Applications,
31(2):231–240.
Jain, A., Duin, R. P. W., and Mao, J. (2000). Statistical
pattern recognition: a review. Pattern Analysis and
Machine Intelligence, IEEE Transactions on, 22(1):4–
37.
Juszczak, P., Tax, D., and Duin, R. (2002). Feature scal-
ing in support vector data description. In Proc. ASCI,
pages 95–102. Citeseer.
Kohavi, R. and John, G. H. (1997). Wrappers for feature
subset selection. Artificial Intelligence, 97(1–2):273–
324.
Lemke, C., Budka, M., and Gabrys, B. (2013). Metalearn-
ing: a survey of trends and technologies. Artificial
Intelligence Review, pages 1–14.
Lin, S.-W., Lee, Z.-J., Chen, S.-C., and Tseng, T.-Y.
(2008a). Parameter determination of support vector
machine and feature selection using simulated anneal-
ing approach. Applied Soft Computing, 8(4):1505–
1512. Soft Computing for Dynamic Data Mining.
Lin, S.-W., Ying, K.-C., Chen, S.-C., and Lee, Z.-J. (2008b).
Particle swarm optimization for parameter determina-
tion and feature selection of support vector machines.
Expert Systems with Applications, 35(4):1817–1824.
Ohser, J. and Mücklich, F. (2000). Statistical analysis of
microstructures in materials science. John Wiley, New
York.
Reif, M., Shafait, F., Goldstein, M., Breuel, T., and Den-
gel, A. (2012). Automatic classifier selection for non-
experts. Pattern Analysis and Applications, pages 1–
14.
Somorjai, R. L., Alexander, M. E., Baumgartner, R., Booth,
S., Bowman, C., Demko, A., Dolenko, B., Man-
delzweig, M., Nikulin, A. E., Pizzi, N., Pranckevi-
ciene, E., Summers, A. R., and Zhilkin, P. (2004).
A data-driven, flexible machine learning strategy for
the classification of biomedical data. In Dubitzky, W.
and Azuaje, F., editors, Artificial Intelligence Methods
And Tools For Systems Biology, volume 5 of Compu-
tational Biology, pages 67–85. Springer Netherlands.
Toriwaki, J. and Yoshida, H. (2009). Fundamentals of three-
dimensional digital image processing. Springer.
Van der Maaten, L., Postma, E., and Van Den Herik, H.
(2009). Dimensionality reduction: A comparative re-
view. Journal of Machine Learning Research, 10:1–
41.
Wolpert, D. H. (1996). The lack of a priori distinctions
between learning algorithms. Neural computation,
8(7):1341–1390.
VISAPP 2014 - International Conference on Computer Vision Theory and Applications