transformers are further developed, the context and accuracy of retrieved embeddings will only improve. These improvements will enable better responses, as the chunks most similar to the natural-language prompt will be retrieved from the knowledge base. Research on LLMs has surged in recent years, and natural language processing and generative AI capabilities are expected to advance significantly, offering more precise and human-like responses. As customization and fine-tuning of these models continue to mature, this architecture will be able to integrate with and cater to specialised, domain-specific use cases. Future work in this domain will focus on optimizing retrieval mechanisms, developing more intuitive explainability frameworks, and integrating these systems seamlessly into existing business workflows.
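The retrieval step described above can be illustrated with a minimal sketch. It assumes the sentence-transformers library and the all-MiniLM-L6-v2 embedding model, with placeholder chunk texts and prompt; the knowledge base, prompt, and top-k value are illustrative, not taken from the system described here.

from sentence_transformers import SentenceTransformer, util

# Assumed embedding model; any sentence-transformer model could be substituted.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Placeholder knowledge-base chunks and user prompt (illustrative only).
knowledge_base = [
    "Chunk 1 text ...",
    "Chunk 2 text ...",
    "Chunk 3 text ...",
]
prompt = "Natural language question about the knowledge base ..."

# Embed the chunks and the prompt into the same vector space.
chunk_embeddings = model.encode(knowledge_base, convert_to_tensor=True)
prompt_embedding = model.encode(prompt, convert_to_tensor=True)

# Rank chunks by cosine similarity and keep the most similar ones.
scores = util.cos_sim(prompt_embedding, chunk_embeddings)[0]
top_k = scores.topk(k=2)
retrieved_chunks = [knowledge_base[i] for i in top_k.indices.tolist()]
print(retrieved_chunks)

The retrieved chunks would then be passed to the LLM as context alongside the prompt; improvements to the embedding model directly improve which chunks are ranked highest here.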