Authors:
Debarati Bhaumik
and
Diptish Dey
Affiliation:
Amsterdam University of Applied Sciences, The Netherlands
Keyword(s):
Auditable AI, Multilevel Logistic Regression, Random Forest, Explainability, Discrimination, Ethics.
Abstract:
Multilevel logistic regression models (MLogRM) and random forest models (RFM) are increasingly deployed in industry for binary classification. The European Commission’s proposed Artificial Intelligence Act (AIA) requires, under certain conditions, that the application of such models be fair, transparent, and ethical, which in turn implies a technical assessment of these models. This paper proposes and demonstrates an audit framework for the technical assessment of RFMs and MLogRMs, focusing on model-, discrimination-, and transparency & explainability-related aspects. To measure these aspects, 20 KPIs are proposed and paired with a traffic-light risk assessment method. An open-source dataset is used to train an RFM and an MLogRM, the KPIs are computed, and the results are compared against the traffic lights. The performance of popular explainability methods such as Kernel SHAP and Tree SHAP is also assessed. The framework is expected to assist regulatory bodies in performing conformity assessments of binary classifiers, and to benefit providers and users deploying such AI systems in complying with the AIA.
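The abstract's pairing of KPIs with a traffic-light risk rating can be illustrated with a minimal sketch. The KPI chosen here (a demographic parity difference) and the green/amber thresholds are illustrative assumptions only; the paper's 20 KPIs and their actual thresholds are not reproduced here.

```python
# Illustrative sketch of a KPI -> traffic-light mapping, in the spirit of the
# proposed audit framework. KPI choice and thresholds are assumptions, not the
# paper's values.

def demographic_parity_difference(pos_rate_a: float, pos_rate_b: float) -> float:
    """Absolute difference in positive-prediction rates between two groups."""
    return abs(pos_rate_a - pos_rate_b)

def traffic_light(kpi_value: float, green_max: float = 0.05,
                  amber_max: float = 0.10) -> str:
    """Map a KPI value to a traffic-light risk label (hypothetical thresholds)."""
    if kpi_value <= green_max:
        return "green"
    if kpi_value <= amber_max:
        return "amber"
    return "red"

# Example: a classifier flags 62% of group A and 55% of group B as positive.
kpi = demographic_parity_difference(0.62, 0.55)
print(traffic_light(kpi))  # amber
```

In an actual audit, one such mapping would be computed per KPI, giving the assessor an at-a-glance risk profile of the model across the model-, discrimination-, and explainability-related aspects.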