Authors:
Harshil Darji
;
Jelena Mitrović
and
Michael Granitzer
Affiliation:
Chair of Data Science, University of Passau, Innstraße 41, 94032 Passau, Germany
Keyword(s):
Language Models, Natural Language Processing, Named Entity Recognition, Legal Entity Recognition, Legal Language Processing.
Abstract:
The use of BERT, one of the most popular language models, has led to improvements in many Natural Language Processing (NLP) tasks. One such task is Named Entity Recognition (NER), i.e., the automatic identification of named entities such as locations, persons, and organizations in a given text. NER is also an important base step for many NLP tasks such as information extraction and argumentation mining. Even though there is much research on NER using BERT and other popular language models, it has not been explored in detail in Legal NLP or Legal Tech. Legal NLP applies various NLP techniques, such as sentence similarity or NER, specifically to legal data. There are only a handful of models for NER tasks based on BERT language models; however, none of these are aimed at legal documents in German. In this paper, we fine-tune a popular BERT language model trained on German data (German BERT) on a Legal Entity Recognition (LER) dataset.
To make sure our model is not overfitting, we perform a stratified 10-fold cross-validation. The results we achieve by fine-tuning German BERT on the LER dataset outperform the BiLSTM-CRF+ model used by the authors of the same LER dataset. Finally, we make the model openly available via HuggingFace.
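The stratified 10-fold cross-validation mentioned above can be sketched as follows. This is a minimal illustration with toy class labels standing in for the LER data, not the authors' actual training code; it only shows how scikit-learn's StratifiedKFold preserves class proportions in every fold:

```python
from sklearn.model_selection import StratifiedKFold

# Toy labels standing in for entity classes in the LER dataset:
# 50% PER, 30% ORG, 20% LOC across 100 examples.
labels = ["PER"] * 50 + ["ORG"] * 30 + ["LOC"] * 20
data = list(range(len(labels)))  # placeholder for the actual examples

# 10 stratified folds: each test fold mirrors the overall label ratio,
# so every fold evaluates the model on a representative class mix.
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
for fold, (train_idx, test_idx) in enumerate(skf.split(data, labels)):
    test_labels = [labels[i] for i in test_idx]
    # With these counts, each 10-item test fold holds exactly
    # 5 PER, 3 ORG, and 2 LOC examples.
    print(fold,
          test_labels.count("PER"),
          test_labels.count("ORG"),
          test_labels.count("LOC"))
```

In the paper's setting, `data` would be the tokenized sentences and each fold's training split would be used to fine-tune German BERT while the held-out fold measures generalization.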