Learning a unified embedding space of web search
from large-scale query log. Knowledge-Based Sys-
tems, 150:38–48.
Campbell, D. J. (1988). Task complexity: A review
and analysis. The Academy of Management Review,
13(1):40–52.
Chen, J., Mao, J., Liu, Y., Zhang, F., Zhang, M., and Ma, S.
(2021). Towards a Better Understanding of Query Re-
formulation Behavior in Web Search, page 743–755.
Association for Computing Machinery, New York,
NY, USA.
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K.
(2019). BERT: Pre-training of deep bidirectional
transformers for language understanding. In Pro-
ceedings of the 2019 Conference of the North Amer-
ican Chapter of the Association for Computational
Linguistics: Human Language Technologies, Volume
1 (Long and Short Papers), pages 4171–4186, Min-
neapolis, Minnesota. Association for Computational
Linguistics.
Dosso, C., Moreno, J. G., Chevalier, A., and Tamine, L.
(2021). CoST: An Annotated Data Collection for
Complex Search, page 4455–4464. Association for
Computing Machinery, New York, NY, USA.
Ethayarajh, K. (2019). How contextual are contextualized
word representations? Comparing the geometry of
BERT, ELMo, and GPT-2 embeddings. In Proceed-
ings of the 2019 Conference on Empirical Methods
in Natural Language Processing and the 9th Inter-
national Joint Conference on Natural Language Pro-
cessing (EMNLP-IJCNLP), pages 55–65, Hong Kong,
China. Association for Computational Linguistics.
Gomes, P., Martins, B., and Cruz, L. (2019). Segment-
ing user sessions in search engine query logs leverag-
ing word embeddings. In Digital Libraries for Open
Knowledge: 23rd International Conference on The-
ory and Practice of Digital Libraries, TPDL 2019,
Oslo, Norway, September 9-12, 2019, Proceedings,
page 185–199, Berlin, Heidelberg. Springer-Verlag.
Grave, E., Bojanowski, P., Gupta, P., Joulin, A., and
Mikolov, T. (2018). Learning word vectors for
157 languages. In Proceedings of the International
Conference on Language Resources and Evaluation
(LREC 2018).
Huang, J. and Efthimiadis, E. (2009). Analyzing and eval-
uating query reformulation strategies in web search
logs. pages 77–86.
Jawahar, G., Sagot, B., and Seddah, D. (2019). What does
BERT learn about the structure of language? In Pro-
ceedings of the 57th Annual Meeting of the Associa-
tion for Computational Linguistics, pages 3651–3657,
Florence, Italy. Association for Computational Lin-
guistics.
Le, H., Vial, L., Frej, J., Segonne, V., Coavoux, M., Lecou-
teux, B., Allauzen, A., Crabb
´
e, B., Besacier, L., and
Schwab, D. (2020). Flaubert: Unsupervised lan-
guage model pre-training for french. In Proceedings
of The 12th Language Resources and Evaluation Con-
ference, pages 2479–2490, Marseille, France. Euro-
pean Language Resources Association.
Lin, Y., Tan, Y. C., and Frank, R. (2019). Open sesame:
Getting inside BERT’s linguistic knowledge. In Pro-
ceedings of the 2019 ACL Workshop BlackboxNLP:
Analyzing and Interpreting Neural Networks for NLP.
Association for Computational Linguistics.
Liu, J., Mitsui, M., Belkin, N. J., and Shah, C. (2019). Task,
information seeking intentions, and user behavior: To-
ward a multi-level understanding of web search. In
Proceedings of the 2019 Conference on Human In-
formation Interaction and Retrieval, CHIIR ’19, page
123–132, New York, NY, USA. Association for Com-
puting Machinery.
Martin, L., Muller, B., Ortiz Su
´
arez, P. J., Dupont, Y., Ro-
mary, L., de la Clergerie,
´
E., Seddah, D., and Sagot, B.
(2020). CamemBERT: a tasty French language model.
In Proceedings of the 58th Annual Meeting of the As-
sociation for Computational Linguistics, pages 7203–
7219, Online. Association for Computational Linguis-
tics.
Mehrotra, R. and Yilmaz, E. (2017). Task embeddings:
Learning query embeddings using task context. In
Proceedings of the 2017 ACM on Conference on In-
formation and Knowledge Management, CIKM ’17,
page 2199–2202, New York, NY, USA. Association
for Computing Machinery.
Mickus, T., Paperno, D., Constant, M., and van Deemter, K.
(2020). What do you mean, BERT? In Proceedings
of the Society for Computation in Linguistics 2020,
pages 279–290, New York, New York. Association for
Computational Linguistics.
Mitra, B. (2015). Exploring session context using dis-
tributed representations of queries and reformulations.
In Proceedings of the 38th International ACM SIGIR
Conference on Research and Development in Informa-
tion Retrieval, SIGIR ’15, page 3–12, New York, NY,
USA. Association for Computing Machinery.
Rieh, S. Y. and Xie, H. I. (2006). Analysis of multiple
query reformulations on the web: The interactive in-
formation retrieval context. Information Processing &
Management, 42(3):751–768.
Rogers, A., Kovaleva, O., and Rumshisky, A. (2020). A
primer in BERTology: What we know about how
BERT works. Transactions of the Association for
Computational Linguistics, 8:842–866.
Sanchiz, M., Amadieu, F., and Chevalier, A. (2020). An
evolving perspective to capture individual differences
related to fluid and crystallized abilities in information
searching with a search engine. In Fu, W. T. and van
Oostendorp, H., editors, Understanding and Improv-
ing Information Search: A Cognitive Approach, pages
71–96. Springer International Publishing, Cham.
White, R. W., Richardson, M., and Yih, W.-T. (2015).
Questions vs. Queries in Informational Search Tasks.
In Proceedings of the 24th International Conference
on World Wide Web, WWW ’15 Companion, page
135–136, New York, NY, USA. Association for Com-
puting Machinery.
KDIR 2023 - 15th International Conference on Knowledge Discovery and Information Retrieval
280