Invertibility of ReLU-Layers: A Practical Approach

Hannah Eckert, Daniel Haider, Martin Ehler, Peter Balazs

2024

Abstract

Invertibility in machine learning models is a pivotal feature that bridges the gap between model complexity and interpretability. For ReLU layers, practical verification of invertibility has proven to be a difficult task that remains unsolved. Recently, a frame-theoretic condition has been proposed to verify invertibility on an open or convex set; however, verifying this condition is computationally infeasible in high dimensions. As an alternative, we propose an algorithm that stochastically samples the dataset to approximately verify the above condition for invertibility and can be efficiently implemented even in high dimensions. We use the algorithm to monitor invertibility and to enforce it during training in standard classification tasks.
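The sampling idea from the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes the frame-theoretic condition reduces, at each data point, to the rows of the weight matrix that are active there (positive pre-activation) spanning the input space, so that the point is recoverable from its active outputs. The function names are hypothetical.

```python
import numpy as np

def active_rows_span(W, b, x, tol=1e-9):
    """Pointwise proxy for invertibility of x -> ReLU(W x + b):
    check that the rows of W active at x span the input space."""
    pre = W @ x + b
    # Rows with positive pre-activation pass through ReLU unclipped.
    active = W[pre > tol]
    return np.linalg.matrix_rank(active) >= W.shape[1]

def sampled_invertibility_check(W, b, samples):
    """Approximately verify invertibility on a dataset by testing the
    spanning condition at every sampled point (stochastic surrogate
    for checking the condition on the whole set)."""
    return all(active_rows_span(W, b, x) for x in samples)

# Toy usage: W = [I; -I] with positive bias keeps all rows active on a
# small box, so the check passes; a single-row layer cannot span R^2.
rng = np.random.default_rng(0)
samples = rng.uniform(-0.5, 0.5, size=(100, 2))
W_good = np.vstack([np.eye(2), -np.eye(2)])
W_bad = np.array([[1.0, 0.0]])
print(sampled_invertibility_check(W_good, np.ones(4), samples))   # True
print(sampled_invertibility_check(W_bad, np.zeros(1), samples))   # False
```

Since only a rank computation per sample is needed, the check scales to high dimensions, which is the practical point the abstract emphasizes.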

Paper Citation


in Harvard Style

Eckert H., Haider D., Ehler M. and Balazs P. (2024). Invertibility of ReLU-Layers: A Practical Approach. In Proceedings of the 16th International Joint Conference on Computational Intelligence - Volume 1: NCTA; ISBN 978-989-758-721-4, SciTePress, pages 423-429. DOI: 10.5220/0012951300003837


in Bibtex Style

@conference{ncta24,
author={Hannah Eckert and Daniel Haider and Martin Ehler and Peter Balazs},
title={Invertibility of ReLU-Layers: A Practical Approach},
booktitle={Proceedings of the 16th International Joint Conference on Computational Intelligence - Volume 1: NCTA},
year={2024},
pages={423--429},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0012951300003837},
isbn={978-989-758-721-4},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 16th International Joint Conference on Computational Intelligence - Volume 1: NCTA
TI - Invertibility of ReLU-Layers: A Practical Approach
SN - 978-989-758-721-4
AU - Eckert H.
AU - Haider D.
AU - Ehler M.
AU - Balazs P.
PY - 2024
SP - 423
EP - 429
DO - 10.5220/0012951300003837
PB - SciTePress