Authors: Sebastian Tschiatschek and Franz Pernkopf
Affiliation: Graz University of Technology, Austria
Keyword(s): Bayesian network classifiers, Discriminative learning, Convex relaxation, Maximum margin Bayesian networks, Classifier enhancement, Combining weak classifiers.
Related Ontology Subjects/Areas/Topics: Bayesian Models; Classification; Convex Optimization; Large Margin Methods; Pattern Recognition; Theory and Methods
Abstract: Maximum margin Bayesian networks (MMBN) can be trained by solving a convex optimization problem using, for example, interior point (IP) methods (Guo et al., 2005). For large datasets, however, this training is computationally expensive in terms of both runtime and memory requirements. We therefore propose a less resource-intensive batch method to approximately learn an MMBN classifier: we train a set of (weak) MMBN classifiers on subsets of the training data and then exploit the convexity of the original optimization problem to obtain an approximate solution, i.e., we determine a convex combination of the weak classifiers. In experiments on several datasets, we obtain results similar to those of the optimal MMBN trained on all samples, while the proposed method is faster and requires far less memory. Furthermore, it lends itself to parallel implementation.
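The combination scheme sketched in the abstract can be illustrated with a minimal example. The sketch below is not the paper's MMBN training procedure; it substitutes a simple hinge-loss linear classifier (also a convex objective) trained by subgradient descent as a stand-in for solving the MMBN problem on each data subset, and uses uniform combination weights, whereas the paper determines the weights by optimizing the original convex objective. By convexity, the objective of a convex combination of feasible parameter vectors is no worse than the weighted average of their objectives.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linearly separable binary data (stand-in for a real dataset).
n, d = 3000, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.where(X @ w_true > 0, 1, -1)

def train_weak(X, y, lr=0.1, epochs=50):
    """Train a weak linear classifier by subgradient descent on the
    hinge loss -- a stand-in for solving the convex MMBN objective
    on one subset of the training data."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        margins = y * (X @ w)
        # Subgradient of mean hinge loss over the violated samples.
        grad = -(X * y[:, None])[margins < 1].sum(axis=0) / len(y)
        w -= lr * grad
    return w

# Step 1: train K weak classifiers on disjoint subsets of the data.
K = 5
subsets = np.array_split(rng.permutation(n), K)
weak = [train_weak(X[idx], y[idx]) for idx in subsets]

# Step 2: form a convex combination of the weak parameter vectors
# (uniform weights here; the paper optimizes these weights).
alpha = np.full(K, 1.0 / K)
w_comb = sum(a * w for a, w in zip(alpha, weak))

acc = np.mean(np.sign(X @ w_comb) == y)
print(f"combined-classifier training accuracy: {acc:.3f}")
```

Each of the K training runs is independent, which is why the approach parallelizes naturally: the weak classifiers can be trained concurrently and only the final combination step is sequential.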