
quence, with each item serving as a time step. The
novelty of this approach is that the model's task is to
generate an image by predicting pixel values, i.e., a
regression. A similarity model is then applied to this
generated image to recommend a real-world image.
To validate the accuracy of this approach, the Universal
Image Quality Index is computed between the recommended
image and the actual image; this metric measures how
closely our recommendation aligns with real outfits.
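As a reference point, the Universal Image Quality Index of Wang and Bovik (2002) combines correlation, luminance distortion, and contrast distortion into a single score in [-1, 1]. The sketch below is a minimal global-statistics version in Python with NumPy; the original metric is usually computed over sliding local windows and averaged, which this simplification omits.

```python
import numpy as np

def uiqi(x: np.ndarray, y: np.ndarray) -> float:
    """Universal Image Quality Index (global-statistics sketch).

    Q = 4 * cov(x, y) * mean(x) * mean(y)
        / ((var(x) + var(y)) * (mean(x)^2 + mean(y)^2))

    Returns 1.0 when the two images are identical.
    """
    x = x.astype(np.float64).ravel()
    y = y.astype(np.float64).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    num = 4.0 * cov * mx * my
    den = (vx + vy) * (mx ** 2 + my ** 2)
    # Degenerate case: both images constant with the same statistics.
    return num / den if den != 0 else 1.0
```

A perfect match yields 1.0, and any distortion (noise, shift, contrast change) pulls the score below 1, which is what makes the index usable as a closeness measure between a recommended image and the ground-truth outfit image.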
The outcomes showcase the effectiveness of this
approach in learning the compatibility of fashion out-
fits. Recognizing that fashion compatibility is subjec-
tive, varying from one individual to another, our future
research will explore modeling user-specific compati-
bility and style preferences. Our goal is to construct a
more personalized system that caters to individual
tastes and preferences, thereby enhancing the overall
user experience.
Expanding our approach to include keywords such
as style, mood, artist, material, texture, and brand adds
depth to our model's decision-making process, enrich-
ing the latent space of extracted features. This approach
holds immense promise in the realm of generative AI,
particularly in image generation. Our aspirations also
go beyond outfits, because compatibility is not exclu-
sive to fashion: just as there are compatible pieces in
clothing, there are compatible combinations in other
industries. The principles guiding our recommendation
system can therefore be applied across various sectors,
whether in home goods, technology, or beyond.
ACKNOWLEDGEMENTS
TexP@CT Mobilizing Pact - Innovation Pact for
the Digitalization of Textiles and Clothing, project
no. 61, to Reinforce the Competitiveness and
Resilience of the National Economy, financed
through Component 5 - Capitalization and Business
Innovation, of the European funds allocated to
Portugal by the Recovery and Resilience Plan (PRR),
under the European Union’s (EU) Recovery and
Resilience Mechanism, as part of Next Generation
EU (https://recuperarportugal.gov.pt/ ), for the period
2021 - 2026. CMAT: partially supported by FCT -
'Fundação para a Ciência e a Tecnologia', within
projects UIDP/00013/2020 and UIDB/00013/2020
(DOI 10.54499/UIDP/00013/2020 and DOI
10.54499/UIDB/00013/2020).
REFERENCES
Bellini, P., Palesi, L. A. I., Nesi, P., and Pantaleo, G.
(2023). Multi clustering recommendation system for
fashion retail. Multimedia Tools and Applications,
82(7):9989–10016.
Dong, J., Ma, Z., Mao, X., Yang, X., He, Y., Hong, R., and
Ji, S. (2021). Fine-grained fashion similarity predic-
tion by attribute-specific embedding learning. IEEE
Transactions on Image Processing, 30:8410–8425.
Gao, Y., Kuang, Z., Li, G., Luo, P., Chen, Y., Lin, L., and
Zhang, W. (2020). Fashion retrieval via graph reason-
ing networks on a similarity pyramid. IEEE Transac-
tions on Pattern Analysis and Machine Intelligence.
Han, X., Wu, Z., Jiang, Y.-G., and Davis, L. S. (2017).
Learning fashion compatibility with bidirectional
LSTMs. In Proceedings of the 25th ACM International
Conference on Multimedia, pages 1078–1086.
He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep resid-
ual learning for image recognition. In Proceedings of
the IEEE Conference on Computer Vision and Pattern
Recognition (CVPR).
Hwangbo, H., Kim, Y. S., and Cha, K. J. (2018). Rec-
ommendation system development for fashion retail
e-commerce. Electronic Commerce Research and Ap-
plications, 28:94–101.
Jing, P., Cui, K., Guan, W., Nie, L., and Su, Y. (2023).
Category-aware multimodal attention network for
fashion compatibility modeling. IEEE Transactions
on Multimedia.
Manandhar, D., Bastan, M., and Yap, K.-H. (2018). Tiered
deep similarity search for fashion. In Proceedings
of the European Conference on Computer Vision
(ECCV) Workshops.
Revanur, A., Kumar, V., and Sharma, D. (2021). Semi-
supervised visual representation learning for fashion
compatibility. In Proceedings of the 15th ACM Con-
ference on Recommender Systems, pages 463–472.
Wang, Z. and Bovik, A. C. (2002). A universal image qual-
ity index. IEEE Signal Processing Letters, 9(3):81–84.
Xiao, J., Wu, Y., Xie, K., and Hu, Q. (2019). Managing
the e-commerce disruption with IT-based innovations:
Insights from strategic renewal perspectives. Informa-
tion & Management, 56(1):122–139.
Yethindra, D. N. and Deepak, G. (2021). A semantic ap-
proach for fashion recommendation using logistic re-
gression and ontologies. In 2021 International Con-
ference on Innovative Computing, Intelligent Commu-
nication and Smart Electrical Systems (ICSES), pages
1–6. IEEE.
Yin, R., Li, K., Lu, J., and Zhang, G. (2019). Enhancing
fashion recommendation with visual compatibility re-
lationship. In The World Wide Web Conference, pages
3434–3440.
Zou, X., Pang, K., Zhang, W., and Wong, W. (2022). How
good is aesthetic ability of a fashion model? In Pro-