Authors: Ha-Thanh Nguyen¹; Vu Tran²; Minh-Phuong Nguyen³; Le-Minh Nguyen³ and Ken Satoh¹
Affiliations: ¹ National Institute of Informatics, Tokyo, Japan; ² Institute of Statistical Mathematics, Tokyo, Japan; ³ Japan Advanced Institute of Science and Technology, Ishikawa, Japan
Keyword(s):
Contextual Embedding, Fine-Tuning, Impact Explanation.
Abstract:
Recently, deep learning models trained on large amounts of data have achieved impressive results in the field of legal document processing. However, because they are seen as black boxes, these models lack explainability. This paper aims to shed light on the inner behavior of legal learning models by analyzing the effect of fine-tuning on legal contextual embeddings. It provides evidence explaining the relationship between the movement of contextual embeddings and the effectiveness of a model when fine-tuned on legal tasks, which can help further explain the effect of fine-tuning on language models. To this end, we use multilingual transformer models, fine-tune them on the lawfulness classification task, and record the changes in the embeddings. The experimental results reveal interesting phenomena. The method in this paper can be used to confirm whether a deep learning model truly acquires knowledge of a legal problem to make its predictions, simply memorizes the training examples, or, worse, predicts randomly.
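The abstract does not detail how embedding changes are recorded, but a minimal sketch of the general idea is shown below: extract contextual embeddings for the same sentences from a pre-trained multilingual transformer and from its fine-tuned counterpart, then measure how far each embedding has moved. The backbone name, the placeholder fine-tuned checkpoint, mean pooling, and cosine distance are all illustrative assumptions, not the authors' exact procedure.

```python
import torch
from transformers import AutoModel, AutoTokenizer

BASE = "bert-base-multilingual-cased"  # assumed multilingual backbone

def sentence_embeddings(model, tokenizer, sentences):
    """Mean-pooled last-hidden-state embeddings, one vector per sentence."""
    model.eval()
    embs = []
    with torch.no_grad():
        for s in sentences:
            inputs = tokenizer(s, return_tensors="pt", truncation=True)
            hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
            embs.append(hidden.mean(dim=1).squeeze(0))  # mean pooling
    return torch.stack(embs)  # (num_sentences, dim)

tokenizer = AutoTokenizer.from_pretrained(BASE)
pretrained = AutoModel.from_pretrained(BASE)
# "finetuned-legal-model" is a hypothetical placeholder for a checkpoint
# fine-tuned on the lawfulness classification task.
finetuned = AutoModel.from_pretrained("finetuned-legal-model")

sentences = ["The party shall not disclose confidential information."]
before = sentence_embeddings(pretrained, tokenizer, sentences)
after = sentence_embeddings(finetuned, tokenizer, sentences)

# Cosine distance quantifies how far each contextual embedding moved
# during fine-tuning; larger values indicate larger shifts.
drift = 1 - torch.nn.functional.cosine_similarity(before, after)
print(drift)
```

Comparing such drift scores against task performance is one way to relate embedding movement to model effectiveness, in the spirit of the analysis described above.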