
REFERENCES
Ali, A., Schnake, T., Eberle, O., Montavon, G., Müller, K.-R., and Wolf, L. (2022). XAI for transformers: Better explanations through conservative propagation. In International Conference on Machine Learning, pages 435–451. PMLR.
Angelov, P. P., Soares, E. A., Jiang, R., Arnold, N. I., and
Atkinson, P. M. (2021). Explainable artificial intelli-
gence: an analytical review. Wiley Interdisciplinary
Reviews: Data Mining and Knowledge Discovery,
11(5):e1424.
Bhan, M., Vittaut, J.-N., Chesneau, N., and Lesot, M.-J.
(2023). TIGTEC: Token importance guided text counterfactuals. arXiv preprint arXiv:2304.12425.
Binder, M. (2021). But how does it work? Explaining BERT's star rating predictions of online customer reviews. In PACIS, page 28.
Binder, M., Heinrich, B., Hopf, M., and Schiller, A. (2022).
Global reconstruction of language models with linguistic rules – explainable AI for online consumer reviews. Electronic Markets, 32(4):2123–2138.
Borys, K., Schmitt, Y. A., Nauta, M., Seifert, C., Krämer, N., Friedrich, C. M., and Nensa, F. (2023). Explainable AI in medical imaging: An overview for clinical practitioners – saliency-based XAI approaches. European Journal of Radiology, page 110787.
Devlin, J., Chang, M., Lee, K., and Toutanova, K. (2018).
BERT: pre-training of deep bidirectional transformers
for language understanding. CoRR, abs/1810.04805.
Dieber, J. and Kirrane, S. (2020). Why model why? as-
sessing the strengths and limitations of lime. arXiv
preprint arXiv:2012.00093.
Dikmen, M. and Burns, C. (2022). The effects of domain
knowledge on trust in explainable ai and task perfor-
mance: A case of peer-to-peer lending. International
Journal of Human-Computer Studies, 162:102792.
Ivanovs, M., Kadikis, R., and Ozols, K. (2021).
Perturbation-based methods for explaining deep neu-
ral networks: A survey. Pattern Recognition Letters,
150:228–234.
Kenny, E. M. and Keane, M. T. (2021). Explain-
ing deep learning using examples: Optimal feature
weighting methods for twin systems using post-hoc,
explanation-by-example in xai. Knowledge-Based
Systems, 233:107530.
Kokalj, E., Škrlj, B., Lavrač, N., Pollak, S., and Robnik-Šikonja, M. (2021). BERT meets Shapley: Extending SHAP explanations to transformer-based classifiers. In Proceedings of the EACL Hackashop on News Media Content Analysis and Automated Report Generation, pages 16–21.
Maas, A. L., Daly, R. E., Pham, P. T., Huang, D., Ng, A. Y.,
and Potts, C. (2011). Learning word vectors for sen-
timent analysis. In Proceedings of the 49th Annual
Meeting of the Association for Computational Lin-
guistics: Human Language Technologies, pages 142–
150, Portland, Oregon, USA. Association for Compu-
tational Linguistics.
Minh, D., Wang, H. X., Li, Y. F., and Nguyen, T. N. (2022).
Explainable artificial intelligence: a comprehensive
review. Artificial Intelligence Review, pages 1–66.
Niranjan, K., Kumar, S. S., Vedanth, S., and Chitrakala,
S. (2023). An explainable ai driven decision support
system for covid-19 diagnosis using fused classifica-
tion and segmentation. Procedia computer science,
218:1915–1925.
Rietberg, M. T., Nguyen, V. B., Geerdink, J., Vijlbrief, O.,
and Seifert, C. (2023). Accurate and reliable classifi-
cation of unstructured reports on their diagnostic goal
using bert models. Diagnostics, 13(7):1251.
Salih, A., Raisi-Estabragh, Z., Galazzo, I. B., Radeva,
P., Petersen, S. E., Menegaz, G., and Lekadir, K.
(2023). Commentary on explainable artificial intel-
ligence methods: Shap and lime. arXiv preprint
arXiv:2305.02012.
Szczepański, M., Pawlicki, M., Kozik, R., and Choraś, M. (2021). New explainability method for BERT-based model in fake news detection. Scientific Reports, 11(1):23705.
Van der Velden, B. H., Kuijf, H. J., Gilhuijs, K. G., and
Viergever, M. A. (2022). Explainable artificial in-
telligence (xai) in deep learning-based medical image
analysis. Medical Image Analysis, 79:102470.
Yalçın, O. G. (2020). Sentiment analysis in 10 minutes with BERT and TensorFlow. Towards Data Science.
Hybrid Approach to Explain BERT Model: Sentiment Analysis Case