Fourth, we aim to analyse our proposed method in combination with ensemble-based classification systems. For instance, one could design an ensemble in which each base classifier/regressor is trained on 1) a different target label interval, and/or 2) a different assignment between training samples and target labels, as sketched below. Moreover, one could include different classification/regression models, or combine our proposed method with approaches from the class imbalance solution categories briefly discussed in Section 1.
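As a purely illustrative sketch of this ensemble idea (not the method proposed in this work), one could proceed along the following lines; the function names, the choice of decision tree regressors, and the concrete label intervals are assumptions made for the example only.

# Hypothetical sketch: one base regressor per target-label interval.
# All names and the concrete label mapping are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_interval_ensemble(X, y, intervals, random_state=0):
    """Train one regressor per (negative_value, positive_value) interval."""
    members = []
    for lo, hi in intervals:
        # Map the binary labels {0, 1} to this member's interval ends.
        targets = np.where(y == 1, hi, lo)
        reg = DecisionTreeRegressor(random_state=random_state).fit(X, targets)
        # Keep the member together with its decision threshold (the interval midpoint).
        members.append((reg, (lo + hi) / 2.0))
    return members

def predict_interval_ensemble(members, X):
    """Majority vote over the thresholded regressor outputs."""
    votes = np.stack([(reg.predict(X) >= thr).astype(int) for reg, thr in members])
    return (votes.mean(axis=0) >= 0.5).astype(int)

# Usage with three example target-label intervals:
#   members = fit_interval_ensemble(X_train, y_train, intervals=[(0, 1), (0, 2), (-1, 1)])
#   y_pred  = predict_interval_ensemble(members, X_test)

Replacing the fixed interval ends by per-sample target values would correspond to the second variant mentioned above, i.e. different assignments between training samples and target labels.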
Finally, after adapting our method according to some or all of the modifications proposed above, we aim to provide a detailed comparison with the latest state-of-the-art techniques on highly imbalanced data sets.
ACKNOWLEDGEMENTS
The work of Friedhelm Schwenker and Peter Bellmann is supported by the project Multimodal recognition of affect over the course of a tutorial learning experiment (SCHW623/7-1) funded by the German Research Foundation (DFG). The work of Daniel Braun and Heinke Hihn is supported by the European Research Council, grant number ERC-StG-2015-ERC, Project ID: 678082, BRISC: Bounded Rationality in Sensorimotor Coordination. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPU used for this research.