and is worth pursuing in the long run.
Recent approaches in text ranking attempt to leverage the in-context learning capability of large language models (LLMs). Soft Prompting (Peng et al., 2023) addresses the challenge of insufficient domain-specific training data for dense retrieval by using soft prompt-tuning to generate weak queries and subsequently training task-specific dense retrievers. Pairwise Ranking Prompting (Qin et al., 2023) aims to enhance the ranking performance of LLMs by reducing prompt complexity; a rough sketch of this idea is given below. Exploring the practical implications and potential challenges of these techniques on real-world data remains a promising avenue for future research. Additionally, further benchmark testing could clarify how large language models compare with conventional methods and is considered an extension of this study.
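As a rough illustration of the pairwise idea, the sketch below compares candidate passages two at a time with an LLM and aggregates the pairwise preferences into a ranking. The prompt wording, the generic llm callable, and the all-pairs aggregation are illustrative assumptions made for this sketch, not the exact protocol of Qin et al. (2023).

from typing import Callable, List

# Illustrative prompt for one pairwise comparison.
PROMPT = (
    'Query: "{query}"\n'
    "Passage A: {a}\n"
    "Passage B: {b}\n"
    "Which passage is more relevant to the query? Answer 'A' or 'B'."
)

def pairwise_rank(query: str, passages: List[str], llm: Callable[[str], str]) -> List[str]:
    # `llm` is a hypothetical stand-in for any model call that maps a prompt
    # string to a short text completion.
    wins = [0] * len(passages)
    for i in range(len(passages)):
        for j in range(i + 1, len(passages)):
            answer = llm(PROMPT.format(query=query, a=passages[i], b=passages[j]))
            # Count a win for whichever passage the model prefers.
            if answer.strip().upper().startswith("A"):
                wins[i] += 1
            else:
                wins[j] += 1
    # Sort passages by descending number of pairwise wins (ties keep original order).
    order = sorted(range(len(passages)), key=lambda k: -wins[k])
    return [passages[k] for k in order]

An all-pairs scheme issues O(n^2) LLM calls for n candidates; cheaper aggregation strategies (e.g., sorting- or sliding-window-style comparisons) can be substituted when that cost is prohibitive.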
REFERENCES
Aljundi, R., Rohrbach, M., and Tuytelaars, T. (2018).
Selfless sequential learning. arXiv preprint
arXiv:1806.05421.
Bajaj, P., Campos, D., Craswell, N., Deng, L., Gao, J., Liu, X., Majumder, R., McNamara, A., Mitra, B., Nguyen, T., et al. (2016). MS MARCO: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268.
Bosch, N., Shalmashi, S., Yaghoubi, F., Holm, H., Gaim, F., and Payberah, A. H. (2022). Fine-tuning BERT-based language models for duplicate trouble report retrieval. In 2022 IEEE International Conference on Big Data (Big Data), pages 4737–4745. IEEE.
Çekiç, T., Manav, Y., Helvacıoğlu, B., Dündar, E. B., Deniz, O., and Eryiğit, G. (2022). Long form question answering dataset creation for business use cases using noise-added Siamese-BERT.
Choi, J., Jung, E., Suh, J., and Rhee, W. (2021). Improv-
ing bi-encoder document ranking models with two
rankers and multi-teacher distillation. In Proceedings
of the 44th International ACM SIGIR Conference on
Research and Development in Information Retrieval,
pages 2192–2196.
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
El-Din, D. M. (2016). Enhancement bag-of-words model
for solving the challenges of sentiment analysis. Inter-
national Journal of Advanced Computer Science and
Applications, 7(1).
Farahani, A., Voghoei, S., Rasheed, K., and Arabnia, H. R.
(2021). A brief review of domain adaptation. Ad-
vances in Data Science and Information Engineering:
Proceedings from ICDATA 2020 and IKE 2020, pages
877–894.
Grimalt, N. M. I., Shalmashi, S., Yaghoubi, F., Jonsson, L.,
and Payberah, A. H. (2022). Berticsson: A recom-
mender system for troubleshooting.
Henderson, M., Al-Rfou, R., Strope, B., Sung, Y.-H., Lukács, L., Guo, R., Kumar, S., Miklos, B., and Kurzweil, R. (2017). Efficient natural language response suggestion for smart reply. arXiv preprint arXiv:1705.00652.
Holm, H. (2021). Bidirectional encoder representations from transformers (BERT) for question answering in the telecom domain: Adapting a BERT-like language model to the telecom domain using the ELECTRA pre-training approach.
Lin, J., Nogueira, R., and Yates, A. (2021). Pretrained transformers for text ranking: BERT and beyond. Synthesis Lectures on Human Language Technologies, 14(4):1–325.
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. (2019). RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Peng, Z., Wu, X., and Fang, Y. (2023). Soft prompt tun-
ing for augmenting dense retrieval with large language
models. arXiv preprint arXiv:2307.08303.
Qin, Z., Jagerman, R., Hui, K., Zhuang, H., Wu, J.,
Shen, J., Liu, T., Liu, J., Metzler, D., Wang, X.,
et al. (2023). Large language models are effective
text rankers with pairwise ranking prompting. arXiv
preprint arXiv:2306.17563.
Reimers, N. and Gurevych, I. (2019). Sentence-BERT: Sentence embeddings using Siamese BERT-networks. arXiv preprint arXiv:1908.10084.
Salton, G. and Buckley, C. (1988). Term-weighting ap-
proaches in automatic text retrieval. Information pro-
cessing & management, 24(5):513–523.
Sanderson, M. (2010). Christopher D. Manning, Prabhakar Raghavan, Hinrich Schütze, Introduction to Information Retrieval, Cambridge University Press, 2008. ISBN-13 978-0-521-86571-5, xxi+482 pages. Natural Language Engineering, 16(1):100–103.
Saneifar, H., Bonniol, S., Laurent, A., Poncelet, P., and
Roche, M. (2009). Mining for relevant terms from
log files. In KDIR’09: International Conference
on Knowledge Discovery and Information Retrieval,
pages 77–84.
Torrey, L. and Shavlik, J. (2010). Transfer learning. In
Handbook of research on machine learning appli-
cations and trends: algorithms, methods, and tech-
niques, pages 242–264. IGI global.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones,
L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I.
(2017). Attention is all you need. Advances in neural
information processing systems, 30.