Authors:
Ondrej Lukas and Sebastian Garcia
Affiliation:
Department of Computer Science, Faculty of Electrical Engineering, Czech Technical University in Prague, Czech Republic
Keyword(s):
Explainable AI, Functional Metrics, Explanation Evaluation, Network Security.
Abstract:
Deciding which XAI technique is best depends not only on the domain, but also on the given task, the dataset used, the model being explained, and the target goal of that model. We argue that the evaluation of XAI methods has not been thoroughly analyzed in the network security domain, which presents a unique type of challenge. While there are XAI methods applied in network security, there is still a large gap between the needs of security stakeholders and the selection of the optimal method. We propose to approach the problem by first defining the stakeholders in security and their prototypical tasks. Each task defines inputs and specific needs for explanations. Based on these explanation needs (e.g., understanding the performance, or stealing a model), we created five XAI evaluation techniques that are used to compare and select which XAI method is best for each task (dataset, model, and goal). Our proposed approach was evaluated by running experiments for different security stakeholders, machine learning models, and XAI methods. Results were compared with the AutoXAI technique and with random selection. Results show that our proposal to evaluate and select XAI methods for network security is well-grounded and that it can help AI security practitioners find better explanations for their given tasks.