Drucker, H., Burges, C. J. C., Kaufman, L., Smola, A. J., and Vapnik, V. (1997). Support vector regression machines. In Mozer, M. C., Jordan, M. I., and Petsche, T., editors, Advances in Neural Information Processing Systems 9, pages 155–161. MIT Press.
Duwe, G. and Kim, K. (2017). Out with the old and in with the new? An empirical comparison of supervised learning algorithms to predict recidivism. Criminal Justice Policy Review, 28(6):570–600.
Elith, J., Leathwick, J. R., and Hastie, T. (2008). A working guide to boosted regression trees. Journal of Animal Ecology, 77(4):802–813.
Elster, C., Klauenberg, K., Walzel, M., Wübbeler, G., Harris, P., Cox, M., Matthews, C., Smith, I., Wright, L., Allard, A., Fischer, N., Cowen, S., Ellison, S., Wilson, P., Pennecchi, F., Kok, G., van der Veen, A., and Pendrill, L. (2015). A guide to Bayesian inference for regression problems.
Freund, Y. and Schapire, R. E. (1997). A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139.
Friedman, J. H. (1991). Multivariate adaptive regression splines. Annals of Statistics, 19(1):1–67.
Griggs, W. (2013). Penalized spline regression and its applications.
Han, J., Pei, J., and Yin, Y. (2000). Mining frequent patterns without candidate generation. In Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data, SIGMOD '00, pages 1–12, New York, NY, USA. ACM.
Hastie, T. (2017). Generalized additive models. In Chambers, J. M. and Hastie, T. J., editors, Statistical Models in S, chapter 7, pages 249–307. Routledge.
Hastie, T., Tibshirani, R., and Friedman, J. (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, 2nd edition.
Hinde, J. (1982). Compound Poisson regression models. In Gilchrist, R., editor, GLIM 82: Proceedings of the International Conference on Generalised Linear Models, pages 109–121, New York, NY. Springer New York.
Houtsma, M. and Swami, A. (1995). Set-oriented mining for association rules in relational databases. In Proceedings of the Eleventh International Conference on Data Engineering, pages 25–33.
Huang, G.-B., Zhu, Q.-Y., and Siew, C.-K. (2006). Extreme learning machine: Theory and applications. Neurocomputing, 70(1):489–501.
Ke, G., Meng, Q., Finley, T., Wang, T., Chen, W., Ma, W., Ye, Q., and Liu, T.-Y. (2017). LightGBM: A highly efficient gradient boosting decision tree. In Guyon, I., von Luxburg, U., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Advances in Neural Information Processing Systems 30, pages 3146–3154. Curran Associates, Inc.
Koenker, R. and Hallock, K. (2001). Quantile regression. Journal of Economic Perspectives, 15(4):143–156.
Koenker, R. W. and Bassett, G. (1978). Regression quantiles. Econometrica, 46(1):33–50.
Kotsiantis, S. B. (2007). Supervised machine learning: A review of classification techniques. In Proceedings of the 2007 Conference on Emerging Artificial Intelligence Applications in Computer Engineering: Real Word AI Systems with Applications in eHealth, HCI, Information Retrieval and Pervasive Technologies, pages 3–24. IOS Press.
Kumbhare, T. A. and Chobe, S. V. (2014). An overview of association rule mining algorithms.
Luo, X., Chang, X., and Ban, X. (2015). Extreme learning machine for regression and classification using L1-norm and L2-norm. In Cao, J., Mao, K., Cambria, E., Man, Z., and Toh, K.-A., editors, Proceedings of ELM-2014 Volume 1, pages 293–300, Cham. Springer International Publishing.
Marsh, L. and Cormier, D. R. (2011). Spline regression models. Journal of Applied Business Research (JABR), 19.
Montgomery, D. C., Peck, E. A., and Vining, G. G. (2015). Introduction to Linear Regression Analysis. John Wiley & Sons, New York.
Murphy, K. P. (2012). Machine Learning: A Probabilistic Perspective. MIT Press, Cambridge.
van Wieringen, W. N. (2015). Lecture notes on ridge regression.
Rodriguez, R. N. and Yao, Y. (2017). Five things you should know about quantile regression. In Proceedings of the SAS Global Forum 2017 Conference.
Stone, C. J. (1985). Additive regression and other nonparametric models. Annals of Statistics, 13(2):689–705.
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58:267–288.
Tiruveedhula, S., Sheela Rani, C., and Narayana, V. (2016). A survey on clustering techniques for big data mining. Indian Journal of Science and Technology, 9:1–12.
Unger, D. A., van den Dool, H., O'Lenic, E., and Collins, D. (2009). Ensemble regression. Monthly Weather Review, 137(7):2365–2379.
Winship, C. and Mare, R. D. (1984). Regression models with ordinal variables. American Sociological Review.
Wulu, J., Singh, K., Famoye, F., Thomas, T., and McGwin, G. (2002). Regression analysis of count data. Journal of the Indian Society of Agricultural Statistics, 55:220–231.
Xu, D. and Tian, Y. (2015). A comprehensive survey of clustering algorithms. Annals of Data Science, 2(2):165–193.
Xu, R. and Wunsch, D. (2005). Survey of clustering algorithms. IEEE Transactions on Neural Networks, 16(3):645–678.
Zaki, M. J. (2000). Scalable algorithms for association mining. IEEE Transactions on Knowledge and Data Engineering, 12(3):372–390.
Zou, H. and Hastie, T. (2005). Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society, Series B, 67:301–320.