
vacy. To achieve this, we used a hash function to generate a unique ID for each username, maintaining consistency across the dataset without exposing the actual usernames. Because hashing is a one-way process, the original usernames cannot be recovered from the hashes, so each user remains uniquely identifiable without revealing their identity. Additionally, we deleted the mapping between usernames and their hashed IDs immediately after the hashing process to further protect user privacy. This procedure complies with ethical standards and was approved by the ethics committee under reference number ETK-05/24-25.
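The anonymization step described above can be sketched as follows. This is a minimal illustration using Python's standard hashlib and secrets modules; the salt, the truncation length, and the function name are our assumptions rather than the paper's exact implementation:

```python
import hashlib
import secrets

def anonymize_usernames(usernames):
    """Map each username to a stable, non-reversible hashed ID.

    A random salt makes the hashes resistant to dictionary attacks;
    discarding the salt (and this mapping) after the run removes any
    way to link IDs back to usernames.
    """
    salt = secrets.token_bytes(16)  # discarded once hashing is done
    ids = {}
    for name in usernames:
        digest = hashlib.sha256(salt + name.encode("utf-8")).hexdigest()
        ids[name] = digest[:16]  # truncated hex ID, unique in practice
    return ids

# The same username always maps to the same ID within one run,
# so all posts by one user stay linked after anonymization.
mapping = anonymize_usernames(["alice", "bob"])
```

Because the salt is generated fresh and then discarded, re-running the function produces different IDs, which is consistent with deleting the username-to-ID mapping after processing.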
ACKNOWLEDGEMENTS
We gratefully acknowledge the support of the Ministry of Economy, Industry, and Competitiveness of Spain under Grant No. INCEPTION (PID2021-128969OB-I00).
Detecting Suicidal Ideation on Social Media Using Large Language Models with Zero-Shot Prompting