Bauer, E. and Kohavi, R. (1999). An empirical comparison of voting classification algorithms: Bagging, boosting, and variants. MLJ.
Dalal, N. and Triggs, B. (2005). Histograms of oriented gradients for human detection. In CVPR.
Dietterich, T. (2000). An experimental comparison of three methods for constructing ensembles of decision trees: Bagging, boosting, and randomization. MLJ.
Fan, R., Chang, K., Hsieh, C., Wang, X., and Lin, C. (2008). LIBLINEAR: A library for large linear classification. JMLR.
Felzenszwalb, P., Girshick, R. B., McAllester, D., and Ramanan, D. (2010). Object detection with discriminatively trained part-based models. PAMI.
Freund, Y. (1995). Boosting a weak learning algorithm by majority. IANDC.
Freund, Y. (1999). An adaptive version of the boost by majority algorithm. In COLT.
Freund, Y. and Schapire, R. (1995). A decision-theoretic generalization of on-line learning and an application to boosting. In COLT.
Freund, Y. (2009). A more robust boosting algorithm. arXiv:0905.2138.
Friedman, J. (2001). Greedy function approximation: a gradient boosting machine. AOS.
Friedman, J., Hastie, T., and Tibshirani, R. (2000). Additive logistic regression: a statistical view of boosting. AOS.
Grove, A. and Schuurmans, D. (1998). Boosting in the limit: Maximizing the margin of learned ensembles. In AAAI.
Hays, J. and Efros, A. (2007). Scene completion using millions of photographs. TOG.
Kumar, M. P., Packer, B., and Koller, D. (2010). Self-paced learning for latent variable models. In NIPS.
Kumar, M. P., Zisserman, A., and Torr, P. H. S. (2009). Efficient discriminative learning of parts-based models. In ICCV.
Laptev, I. (2009). Improving object detection with boosted histograms. IVC.
Leistner, C., Saffari, A., Roth, P. M., and Bischof, H. (2009). On robustness of on-line boosting - a competitive study. In ICCV Workshops.
Long, P. M. and Servedio, R. A. (2008). Random classification noise defeats all convex potential boosters. In ICML.
Masnadi-Shirazi, H., Mahadevan, V., and Vasconcelos, N. (2010). On the design of robust classifiers for computer vision. In CVPR.
Masnadi-Shirazi, H. and Vasconcelos, N. (2008). On the design of loss functions for classification: theory, robustness to outliers, and SavageBoost. In NIPS.
Mason, L., Baxter, J., Bartlett, P., and Frean, M. (1999). Boosting algorithms as gradient descent in function space. In NIPS.
Rätsch, G., Onoda, T., and Müller, K. (2001). Soft margins for AdaBoost. MLJ.
Shotton, J., Fitzgibbon, A., Cook, M., Sharp, T., Finocchio, M., Moore, R., Kipman, A., and Blake, A. (2011). Real-time human pose recognition in parts from single depth images. In CVPR.
Torralba, A., Fergus, R., and Freeman, W. T. (2008). 80 million tiny images: A large data set for nonparametric object and scene recognition. PAMI.
Vezhnevets, A. and Barinova, O. (2007). Avoiding boosting overfitting by removing confusing samples. In ECML.
Vijayanarasimhan, S. and Grauman, K. (2011). Large-scale live active learning: Training object detectors with crawled data and crowds. In CVPR.
Viola, P., Platt, J., and Zhang, C. (2006). Multiple instance boosting for object detection. In NIPS.
Warmuth, M., Glocer, K., and Rätsch, G. (2008). Boosting algorithms for maximizing the soft margin. In NIPS.