Authors: Pervaiz Khan 1,2; Muhammad Asim 1; Andreas Dengel 1,2 and Sheraz Ahmed 1
Affiliations: 1 German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany; 2 RPTU Kaiserslautern-Landau, Germany
Keyword(s):
Language Models, Contrastive Learning, Social Media Content Analysis, Health Mention Detection, Meta Predictor.
Abstract:
The ever-increasing volume of social media content requires advanced AI-based systems capable of extracting useful information. Specifically, extracting health-related content from social media supports the development of diverse applications, including disease spread tracking, mortality rate prediction, and assessing the impact of different drugs on different diseases. Language models are competent at capturing the syntax and semantics of text; however, they struggle to extract similar patterns from social media texts. The primary reason for this shortfall is the non-standardized writing style commonly employed by social media users. Following the need for a language model competent at extracting useful patterns from social media text, the key goal of this paper is to train language models in such a way that they learn to derive generalized patterns. This goal is achieved through the incorporation of random weighted perturbation and contrastive learning strategies. On top of this training strategy, a meta predictor is proposed that combines the strengths of 5 different language models to discriminate social media posts into health-related and non-health classes. Comprehensive experimentation across 3 public benchmark datasets reveals that the proposed training strategy improves the performance of the language models by up to 3.87% in terms of F1-score compared to traditional training. Furthermore, the proposed meta predictor outperforms existing health mention classification predictors across all 3 benchmark datasets.
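The abstract pairs random weighted perturbation with a contrastive objective but does not spell out the formulation. The snippet below is a minimal numpy sketch of the general idea only, not the paper's method: a toy one-layer "encoder" stands in for a language model, its weights are perturbed with Gaussian noise to produce a second view of each input, and an NT-Xent-style contrastive loss pulls the clean and perturbed embeddings of the same input together while pushing apart those of different inputs. All names (`encode`, `perturb`, `nt_xent`) and the noise scale are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    # Toy "encoder": one linear layer + tanh, a stand-in for a language model.
    return np.tanh(x @ W)

def perturb(W, sigma=0.01):
    # Random weight perturbation: add small Gaussian noise to the weights,
    # yielding a slightly different encoder for the second view.
    return W + sigma * rng.standard_normal(W.shape)

def nt_xent(z1, z2, tau=0.5):
    # NT-Xent-style contrastive loss: for each example, its perturbed-view
    # embedding is the positive; other examples in the batch are negatives.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                      # (batch, batch) similarities
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))         # positives on the diagonal

x = rng.standard_normal((8, 16))               # batch of 8 toy text embeddings
W = rng.standard_normal((16, 32))
loss = nt_xent(encode(x, W), encode(x, perturb(W)))
print(float(loss))
```

In an actual setup the gradient of this loss would be backpropagated through the encoder, so that representations become invariant to small weight perturbations — one plausible reading of how "generalized patterns" are encouraged.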