limits the application of deep learning models in
fields that require interpretability, such as healthcare
and finance. In addition, as model complexity
increases, interpretability becomes more difficult to
achieve (Cheng et al., 2016). Deep learning models
are usually trained on massive amounts of data, yet
they may still perform poorly on new, unseen data,
which directly affects their ability to generalize in
real-world applications. Because real-world data
never fully covers all possible scenarios, improving
the generalization ability of models is a crucial task.
In addition, as deep learning is embedded ever more
deeply in products, explainability becomes important:
users want to know why the model makes a particular
decision and expect the product to provide an
intuitive, easy-to-understand explanation. Future
products will therefore focus more on user experience
and interpretability. We can
provide a visual explanation of a product's decision
basis using interpretable algorithms such as SHapley
Additive exPlanations (SHAP), which calculates the
contribution of each input feature to the model's
output, allowing users to understand the model's
decision process. In addition, we can develop
interactive tools that let users interact directly with
the model and thus gain a clearer understanding of its
decision-making process; for example, a user could
modify the evidence given to the model and check
whether the resulting output reveals a problem with
the model. In the industrial domain, deep learning
models need to be stable and reliable and to
generalize well, and their efficiency and scalability
must also be considered in order to cope with large-
scale data and complex scenarios.
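The SHAP-style attribution mentioned above can be illustrated with a small sketch. The code below computes exact Shapley values for a hypothetical toy scoring model; the linear model, baseline, and feature values are illustrative assumptions rather than anything from this paper, and real SHAP implementations approximate this exponential computation efficiently for large models.

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values: the contribution of each feature of x to
    model(x) relative to model(baseline). Exponential in the number of
    features, so only suitable for tiny toy models."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                # Classic Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                # "Missing" features are replaced by their baseline values.
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (model(with_i) - model(without_i))
    return phi

# Hypothetical toy "recommendation score": a linear model, for which each
# Shapley value reduces to weight * (feature - baseline).
weights = [0.5, -0.2, 1.0]
model = lambda v: sum(w * f for w, f in zip(weights, v))

phi = shapley_values(model, x=[2.0, 1.0, 3.0], baseline=[0.0, 0.0, 0.0])
# By the efficiency property, the contributions sum to
# model(x) - model(baseline).
```

The per-feature contributions in `phi` are exactly the kind of quantities an interactive explanation tool could visualize for the user.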
Transfer learning enables learning models in
industrial domains: knowledge from a source domain
can be transferred to the target domain, reducing the
dependence on data volume and enhancing the
generalization ability of the model. Domain
adaptation can further help the model fit the data
distribution of the target domain and improve its
generalization, and it remains necessary to fine-tune
and optimize the model for the specific data and tasks
of the target domain.
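As a minimal sketch of this transfer-learning recipe, the code below freezes a stand-in "pretrained" feature extractor and fine-tunes only a lightweight head on a small target-domain dataset. The random-projection extractor, the synthetic dataset, and all hyperparameters are illustrative assumptions; in practice the frozen part would be a network pretrained on the source domain.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained source-domain feature extractor.
# Its parameters are frozen: they are never updated below.
W_frozen = rng.normal(size=(4, 8))

def extract_features(x):
    return np.tanh(x @ W_frozen)

# Small target-domain dataset (labels depend on the first input dim),
# mimicking the low-data regime that motivates transfer learning.
X = rng.normal(size=(64, 4))
y = (X[:, 0] > 0).astype(float)

# Fine-tune only a lightweight logistic-regression head on top of the
# frozen features, reducing dependence on target-domain data volume.
w = np.zeros(8)
b = 0.0
lr = 0.5
feats = extract_features(X)
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid predictions
    grad = p - y                                # gradient of log loss
    w -= lr * feats.T @ grad / len(y)
    b -= lr * grad.mean()

acc = float(((p > 0.5) == y).mean())            # training accuracy of the head
```

Fine-tuning the whole network instead of just the head is the other common variant; freezing more layers trades accuracy for robustness when target-domain data is scarce.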
4 CONCLUSIONS
This paper presents a comprehensive review of deep
learning combined with recommender systems. Its
methodology focuses on the algorithmic structures of
CNNs, RNNs, auto-encoders, and Transformers, and
comprehensively analyzes the shortcomings of
existing models, including cold start, over-
specialization, and data sparsity. In short, current
recommender systems still have flaws and limitations
in terms of accuracy and performance and thus
require further research and improvement.
In the future, in order to better improve the
explainability and user experience of products, the
main focus will be on further enhancing model
scalability by drawing on transfer learning, domain
adaptation, and model centralization techniques. In
addition, developing more efficient algorithms with
higher prediction accuracy is also a major direction
for future research.
AUTHORS' CONTRIBUTIONS
All the authors contributed equally, and their names
were listed in alphabetical order.
REFERENCES
Cheng, H.-T., Koc, L., Harmsen, J., Shaked, T., Chandra,
T., Aradhye, H., Anderson, G., Corrado, G., Chai, W.,
Ispir, M., Anil, R., Haque, Z., Hong, L., Jain, V., Liu,
X., Shah, H. 2016. Wide & Deep Learning for
Recommender Systems. In Proceedings of the 1st
Workshop on Deep Learning for Recommender
Systems.
Covington, P., Adams, J., & Sargin, E. 2016. Deep Neural
Networks for YouTube Recommendations. In
Proceedings of the 10th ACM Conference on
Recommender Systems.
Kruse, R., Mostaghim, S., Borgelt, C., Braune, C., &
Steinbrecher, M. 2022. Multi-layer perceptrons.
In Computational intelligence: a methodological
introduction (pp. 53-124). Cham: Springer
International Publishing.
Lambert, B., Forbes, F., Doyle, S., Dehaene, H., & Dojat,
M. 2024. Trustworthy clinical AI solutions: a unified
review of uncertainty quantification in deep learning
models for medical image analysis. Artificial
Intelligence in Medicine, 102830.
Naumov, M., Mudigere, D., Shi, H.-J. M., Huang, J.,
Sundaraman, N., Park, J., Wang, X., Gupta, U., Wu, C.-
J., Azzolini, A. G., Dzhulgakov, D., Mallevich, A.,
Cherniavskii, I., Lu, Y., Krishnamoorthi, R., Yu, A.,
Kondratenko, V., Pereira, S., Chen, X., Chen, W., Rao,
V., Jia, B., Xiong, L., & Smelyanskiy, M. 2019. Deep
Learning Recommendation Model for Personalization
and Recommendation Systems. arXiv preprint
arXiv:1906.00091.