tilingual mBERT (except for the one case in Table 7, for which mBERT-FC actually outperforms CamemBERT-FC in terms of F1 score).
For the ABSA task, the FlauBERT and CamemBERT models show comparable results, as they do for other NLP tasks (Le et al., 2019). FlauBERT is therefore of particular interest since, as shown in Table 1, the FlauBERT model was trained on half the amount of text data compared to the CamemBERT model.
The AEN model has the most complex structure but does not bring significant improvements compared to the SPC model.
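For concreteness, the SPC setup reduces to standard sentence-pair classification with a pre-trained encoder. The following is a minimal sketch using the HuggingFace transformers library with the camembert-base checkpoint; the label set and the example inputs are illustrative assumptions, not our exact training configuration:

```python
# Minimal sketch of the SPC (sentence-pair classification) setup with
# CamemBERT via HuggingFace transformers. The checkpoint name, label set,
# and example inputs are illustrative assumptions, not the paper's exact
# training configuration.
import torch
from transformers import CamembertTokenizer, CamembertForSequenceClassification

tokenizer = CamembertTokenizer.from_pretrained("camembert-base")
model = CamembertForSequenceClassification.from_pretrained(
    "camembert-base", num_labels=3)  # e.g., negative / neutral / positive

# The review sentence and the aspect term are encoded as one sequence pair,
# separated by the model's special tokens.
inputs = tokenizer("Le service était très lent.", "service",
                   return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # predicted polarity distribution
```

By contrast, AEN stacks additional attentional encoder layers on top of the contextual embeddings, which increases model complexity without a clear gain in our experiments.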
For English, the BERT PTM has shown great improvements on NLP tasks compared to the previous state of the art. For ABSA, the improvement is smaller.
Data augmentation has been shown to improve performance through additional training on review text. Several authors use data augmentation, such as BERT-PT (BERT Post-Training) with review reading comprehension (Xu et al., 2019), which leads to improved performance. Adding auxiliary questions, as in BERT-AQ (Sun et al., 2019), could also be a complementary way to improve aspect detection.
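To illustrate the auxiliary-question idea, each review sentence can be paired with one generated question per aspect category, turning aspect detection into a set of binary sentence-pair decisions. The sketch below is a hypothetical construction; the category list and the question template are assumptions, not the exact ones of Sun et al. (2019):

```python
# Hypothetical auxiliary-question construction in the spirit of BERT-AQ
# (Sun et al., 2019). The categories and the question template are
# illustrative assumptions only.
ASPECT_CATEGORIES = ["FOOD#QUALITY", "SERVICE#GENERAL", "AMBIENCE#GENERAL"]

def build_auxiliary_pairs(sentence, gold_categories):
    """Yield one (sentence, question, label) triple per aspect category."""
    for category in ASPECT_CATEGORIES:
        entity, attribute = category.split("#")
        question = f"Que pensez-vous de {entity.lower()} ({attribute.lower()}) ?"
        label = 1 if category in gold_categories else 0  # binary aspect detection
        yield sentence, question, label

# Each triple can then be fed to a sentence-pair classifier such as PTM-SPC.
for triple in build_auxiliary_pairs("Le service était très lent.",
                                    {"SERVICE#GENERAL"}):
    print(triple)
```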
5 CONCLUSION
French PTMs improve text representation in many NLP tasks, including sentence-level sentiment analysis. We propose the use of PTMs for ABSA.
In this paper, we give an overview of state-of-the-art methods for ABSA in French. We propose and implement three fine-tuning methods (FC, SPC, and AEN) using the French SemEval-2016 data.
Experimental results show that these methods outperform conventional models based on word-embedding representations. These results also indicate the higher performance of the monolingual French models (FlauBERT and CamemBERT) compared to the multilingual model (mBERT).
The PTM-SPC model generally performs well and is less complex than the PTM-AEN model. Our fine-tuned French PTMs for ABSA are also robust on the out-of-domain (OOD) Museum dataset.
For future work, we plan to explore other fine-tuning approaches and to use data augmentation techniques with French PTMs in order to improve the performance of our models.
REFERENCES
Apidianaki, M., Tannier, X., and Richart, C. (2016).
Datasets for aspect-based sentiment analysis in
French. In Proceedings of the Tenth International
Conference on Language Resources and Evaluation
(LREC’16), pages 1122–1126, Portorož, Slovenia.
European Language Resources Association (ELRA).
Blard, T. (2020). French sentiment analysis with BERT. https://github.com/TheophileBlard/french-sentiment-analysis-with-bert.
Brun, C., Perez, J., and Roux, C. (2016). XRCE at
SemEval-2016 task 5: Feedbacked ensemble mod-
eling on syntactico-semantic knowledge for aspect
based sentiment analysis. In Proceedings of the
10th International Workshop on Semantic Evaluation
(SemEval-2016), pages 277–281, San Diego, Califor-
nia. Association for Computational Linguistics.
Devlin, J., Chang, M., Lee, K., and Toutanova, K. (2018).
BERT: pre-training of deep bidirectional transformers
for language understanding. CoRR, abs/1810.04805.
Gao, Z., Feng, A., Song, X., and Wu, X. (2019). Target-dependent sentiment classification with BERT. IEEE Access (article no. 8864964).
Kooli, N. and Pigeul, E. (2018). Analyse de sentiments à base d’aspects par combinaison de réseaux profonds : application à des avis en français. In Actes de la Conférence TALN, Rennes.
Kumar, A., Kohail, S., Kumar, A., Ekbal, A., and Biemann, C. (2016). Beyond sentiment lexicon: Combining domain dependency and distributional semantics features for aspect based sentiment analysis. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), San Diego, California. Association for Computational Linguistics.
Le, H., Vial, L., Frej, J., Segonne, V., Coavoux, M., Lecouteux, B., Allauzen, A., Crabbé, B., Besacier, L., and Schwab, D. (2019). FlauBERT: Unsupervised language model pre-training for French. CoRR, abs/1912.05372.
Lin, P. and Luo, X. (2020). A survey of sentiment analysis
based on machine learning. In Zhu, X., Zhang, M.,
Hong, Y., and He, R., editors, Natural Language Pro-
cessing and Chinese Computing - 9th CCF Interna-
tional Conference, NLPCC 2020, Zhengzhou, China,
October 14-18, 2020, Proceedings, Part I.
Ma, D., Li, S., Zhang, X., and Wang, H. (2017). Inter-
active attention networks for aspect-level sentiment
classification. In Proceedings of the Twenty-Sixth In-
ternational Joint Conference on Artificial Intelligence,
IJCAI-17, pages 4068–4074.
Macháček, J. (2016). BUTknot at SemEval-2016 task 5: Supervised machine learning with term substitution approach in aspect category detection. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 301–305, San Diego, California. Association for Computational Linguistics.
Martin, L., Muller, B., Ortiz Suárez, P. J., Dupont, Y., Romary, L., de la Clergerie, É., Seddah, D., and Sagot, B. (2020). CamemBERT: a tasty French language model. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7203–7219, Online. Association for Computational Linguistics.