
lar classification problems in a second. arXiv preprint arXiv:2207.01848.
Katzir, L., Elidan, G., and El-Yaniv, R. (2020). Net-dnf: Effective deep modeling of tabular data. In International conference on learning representations.
Khan, M. S., Nath, T. D., Hossain, M. M., Mukherjee, A., Hasnath, H. B., Meem, T. M., and Khan, U. (2023). Comparison of multiclass classification techniques using dry bean dataset. International Journal of Cognitive Computing in Engineering, 4:6–20.
Li, W., Wang, Z., Yang, X., Dong, C., Tian, P., Qin, T., Huo, J., Shi, Y., Wang, L., Gao, Y., et al. (2023). Libfewshot: A comprehensive library for few-shot learning. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Loukili, M., Messaoudi, F., and El Ghazi, M. (2022). Supervised learning algorithms for predicting customer churn with hyperparameter optimization. International Journal of Advances in Soft Computing & Its Applications, 14(3).
Matloob, I., Khan, S. A., Hussain, F., Butt, W. H., Rukaiya, R., and Khalique, F. (2021). Need-based and optimized health insurance package using clustering algorithm. Applied Sciences, 11(18):8478.
Nam, J., Tack, J., Lee, K., Lee, H., and Shin, J. (2023). Stunt: Few-shot tabular learning with self-generated tasks from unlabeled tables. arXiv preprint arXiv:2303.00918.
Parnami, A. and Lee, M. (2022). Learning from few examples: A summary of approaches to few-shot learning. arXiv preprint arXiv:2203.04291.
Popov, S., Morozov, S., and Babenko, A. (2019). Neural oblivious decision ensembles for deep learning on tabular data. arXiv preprint arXiv:1909.06312.
Şahin, C. (2023). Predicting base station return on investment in the telecommunications industry: Machine-learning approaches. Intelligent Systems in Accounting, Finance and Management, 30(1):29–40.
Shwartz-Ziv, R. and Armon, A. (2022). Tabular data: Deep learning is not all you need. Information Fusion, 81:84–90.
Sikri, A., Jameel, R., Idrees, S. M., and Kaur, H. (2024). Enhancing customer retention in telecom industry with machine learning driven churn prediction. Scientific Reports, 14(1):13097.
sklearn (2024). sklearn Documentation. https://scikit-learn.org/0.15/modules/generated/sklearn.multiclass.OneVsRestClassifier.html. Accessed: April 4, 2024.
Snell, J., Swersky, K., and Zemel, R. (2017). Prototypical networks for few-shot learning. Advances in neural information processing systems, 30.
Sun, B., Yang, L., Zhang, W., Lin, M., Dong, P., Young, C., and Dong, J. (2019). Supertml: Two-dimensional word embedding for the precognition on structured tabular data. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pages 0–0.
Tian, Y., Wang, Y., Krishnan, D., Tenenbaum, J. B., and Isola, P. (2020). Rethinking few-shot image classification: a good embedding is all you need? In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XIV 16, pages 266–282. Springer.
Tunguz, B., Dieter, Heads or Tails, Kapoor, K., Pandey, P., Mooney, P., Culliton, P., Mulla, R., Bhutani, S., and Cukierski, W. (2023). 2023 kaggle ai report.
UCI (2020). Dry Bean. UCI Machine Learning Repository. DOI: https://doi.org/10.24432/C50S4B.
Wang, R., Pontil, M., and Ciliberto, C. (2021). The role of global labels in few-shot classification and how to infer them. Advances in Neural Information Processing Systems, 34:27160–27170.
Wang, Y., Yao, Q., Kwok, J. T., and Ni, L. M. (2020). Generalizing from a few examples: A survey on few-shot learning. ACM computing surveys (csur), 53(3):1–34.
Ye, A. and Wang, Z. (2023). Modern deep learning for tabular data: novel approaches to common modeling problems. Springer.
Yu, Z., Wang, K., Xie, S., Zhong, Y., and Lv, Z. (2022). Prototypical network based on manhattan distance. Computer Modeling in Engineering & Sciences, 131:655–675.
Zhang, R. and Liu, Q. (2023). Learning with few samples in deep learning for image classification, a mini-review. Frontiers in Computational Neuroscience, 16:1075294.
NCTA 2024 - 16th International Conference on Neural Computation Theory and Applications