paring the biases of new algorithms to the known biases
of existing algorithms, we can provide a point of reference
for evaluating their inductive assumptions. Making biases
explainable and measurable is of growing importance given
the increasing use of complex, overparameterized models
such as deep neural networks. Inductive orientation vectors
provide a quantitative tool for measuring and comparing
inductive biases across algorithms.
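As a minimal sketch of how such a comparison might look in code (the resampling estimator, the two toy learners, and the Euclidean comparison below are illustrative assumptions, not the paper's exact procedure): each algorithm is trained repeatedly on resampled training sets, the labeling it produces on a fixed holdout set is recorded, and the empirical distribution over holdout labelings serves as an estimated orientation vector.

```python
import numpy as np

def estimate_orientation(learner, X, y, X_hold, n_labelings, trials=200, seed=0):
    """Estimate an orientation vector for `learner`: the empirical
    distribution over holdout labelings induced by training on
    datasets resampled (with replacement) from (X, y)."""
    rng = np.random.default_rng(seed)
    vec = np.zeros(n_labelings)
    for _ in range(trials):
        idx = rng.integers(0, len(X), size=len(X))
        predict = learner(X[idx], y[idx])
        labels = predict(X_hold)  # binary labels on the holdout points
        code = int("".join(str(int(l)) for l in labels), 2)  # labeling -> index
        vec[code] += 1.0
    return vec / trials

def majority_learner(Xtr, ytr):
    """Predicts the majority training label everywhere (ties -> 0)."""
    m = 1 if ytr.mean() > 0.5 else 0
    return lambda Xq: [m] * len(Xq)

def nn_learner(Xtr, ytr):
    """1-nearest-neighbor classifier on a one-dimensional feature."""
    def predict(Xq):
        return [int(ytr[int(np.argmin(np.abs(Xtr - q)))]) for q in Xq]
    return predict

# Two well-separated clusters; a 2-point holdout gives 2**2 = 4 labelings.
X = np.array([0.0, 1.0, 2.0, 3.0, 10.0, 11.0, 12.0, 13.0])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
X_hold = np.array([1.5, 11.5])

p_maj = estimate_orientation(majority_learner, X, y, X_hold, 4)
p_nn = estimate_orientation(nn_learner, X, y, X_hold, 4)

# The distance between orientation vectors quantifies how differently
# the two algorithms are biased on this problem.
print(np.linalg.norm(p_maj - p_nn))
```

Each component of the estimated vector is the empirical probability that the algorithm produces a particular labeling of the holdout set; the 1-NN learner concentrates its mass on the cluster-respecting labeling, while the majority learner splits its mass between the two constant labelings, so the vectors sit far apart.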
ACKNOWLEDGEMENTS
This research was supported in part by the National
Science Foundation under Grant No. 1950885. Any
opinions, findings, or conclusions are those of the au-
thors alone, and do not necessarily reflect the views
of the National Science Foundation.
Vectorization of Bias in Machine Learning Algorithms