Feature Importance for Deep Neural Networks: A Comparison of Predictive Power, Infidelity and Sensitivity

Lars Fluri

2024

Abstract

This paper evaluates the effectiveness of different feature importance algorithms applied to a neural network, focusing on target prediction tasks with varying data complexities. The study reveals that the feature importance algorithms excel with data featuring minimal correlation between attributes. However, their determination decreases considerably with escalating levels of correlation, while the inclusion of irrelevant features has minimal impact on determination. In terms of predictive power, DeepLIFT surpasses the other methods in most data cases but falls short in total infidelity. For more complex cases, Shapley Value Sampling outperforms DeepLIFT. In an empirical application, Integrated Gradients and DeepLIFT demonstrate lower sensitivity and lower infidelity, respectively. This paper highlights interesting dynamics between predictive power and fidelity in feature importance algorithms and offers key insights for their application in complex data scenarios.
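To make the compared quantities concrete, the sketch below shows how attributions from Integrated Gradients, DeepLIFT, and Shapley Value Sampling, along with the infidelity and sensitivity metrics, can be computed with the Captum library for PyTorch. This is an illustrative assumption about tooling, not the paper's actual implementation; the toy network, synthetic data, and perturbation scale are placeholders.

import torch
import torch.nn as nn
from captum.attr import DeepLift, IntegratedGradients, ShapleyValueSampling
from captum.metrics import infidelity, sensitivity_max

# Toy stand-in for the paper's deep neural network (architecture assumed).
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
model.eval()

inputs = torch.randn(8, 10)           # synthetic batch: 8 samples, 10 features
baselines = torch.zeros_like(inputs)  # zero baseline (an assumption)

def perturb_fn(inputs):
    # Small Gaussian perturbations used by the infidelity metric.
    noise = torch.randn_like(inputs) * 0.01
    return noise, inputs - noise

methods = {
    "Integrated Gradients": IntegratedGradients(model),
    "DeepLIFT": DeepLift(model),
    "Shapley Value Sampling": ShapleyValueSampling(model),
}

for name, method in methods.items():
    attributions = method.attribute(inputs, baselines=baselines)
    infid = infidelity(model, perturb_fn, inputs, attributions)   # per-sample infidelity
    sens = sensitivity_max(method.attribute, inputs)              # per-sample max-sensitivity
    print(f"{name}: infidelity={infid.mean():.4f}, sensitivity_max={sens.mean():.4f}")

Lower infidelity indicates attributions that better account for the model's response to input perturbations, while lower sensitivity indicates explanations that remain stable under small changes to the input.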

Paper Citation


in Harvard Style

Fluri L. (2024). Feature Importance for Deep Neural Networks: A Comparison of Predictive Power, Infidelity and Sensitivity. In Proceedings of the 1st International Conference on Explainable AI for Neural and Symbolic Methods - Volume 1: EXPLAINS; ISBN 978-989-758-720-7, SciTePress, pages 15-26. DOI: 10.5220/0012903300003886


in Bibtex Style

@conference{explains24,
author={Lars Fluri},
title={Feature Importance for Deep Neural Networks: A Comparison of Predictive Power, Infidelity and Sensitivity},
booktitle={Proceedings of the 1st International Conference on Explainable AI for Neural and Symbolic Methods - Volume 1: EXPLAINS},
year={2024},
pages={15-26},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0012903300003886},
isbn={978-989-758-720-7},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 1st International Conference on Explainable AI for Neural and Symbolic Methods - Volume 1: EXPLAINS
TI - Feature Importance for Deep Neural Networks: A Comparison of Predictive Power, Infidelity and Sensitivity
SN - 978-989-758-720-7
AU - Fluri L.
PY - 2024
SP - 15
EP - 26
DO - 10.5220/0012903300003886
PB - SciTePress