Authors: Nathalie Rzepka 1; Katharina Simbeck 1; Hans-Georg Müller 2 and Niels Pinkwart 3
Affiliations: 1 Hochschule für Technik und Wirtschaft, Treskowallee 8, Berlin, Germany; 2 Department of German Studies, University of Potsdam, Potsdam, Germany; 3 Department of Computer Science, Main University, Berlin, Germany
Keyword(s): Fairness, Dropout Prediction, Algorithmic Bias.
Abstract: The increasing use of machine learning models in education is accompanied by concerns about their fairness. While most research on the fairness of machine learning models in education focuses on discrimination by gender or race, other variables such as parental educational background or the home literacy environment are also known to affect children's literacy skills. This paper therefore evaluates three different implementations of in-session dropout prediction models used in a learning platform that accompanies German school classes, assessing their fairness with four different fairness measures. We evaluate the models for discrimination by gender, migration background, parental education, and home literacy environment. While predictive parity and equal opportunity are rarely above the defined threshold, predictive equality and slicing analysis indicate that model quality is slightly better for boys, users with higher parental education, users with fewer than ten books at home, and users with a migrant background. Furthermore, our analysis of the temporal predictions shows that as model accuracy increases, fairness decreases. In conclusion, we find that the fairness of a model depends on 1) the fairness measure, 2) the evaluated demographic group, and 3) the data with which the model is trained.
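The fairness measures named in the abstract (predictive parity, equal opportunity, predictive equality) are all derived from per-group confusion-matrix rates. As an illustration only, and not taken from the paper, the following minimal Python sketch shows how such between-group fairness gaps are commonly computed; the function names, the two-group assumption, and the example threshold are hypothetical.

```python
# Illustrative sketch: group fairness gaps from confusion-matrix rates.
import numpy as np

def confusion_rates(y_true, y_pred):
    """Return (TPR, FPR, PPV) for binary labels and predictions."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    tpr = tp / (tp + fn) if (tp + fn) else float("nan")  # equal opportunity compares TPR
    fpr = fp / (fp + tn) if (fp + tn) else float("nan")  # predictive equality compares FPR
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")  # predictive parity compares PPV
    return tpr, fpr, ppv

def fairness_gaps(y_true, y_pred, group):
    """Absolute between-group differences in TPR, FPR and PPV (assumes two groups)."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    g0, g1 = np.unique(group)[:2]  # hypothetical: exactly two demographic groups
    r0 = confusion_rates(y_true[group == g0], y_pred[group == g0])
    r1 = confusion_rates(y_true[group == g1], y_pred[group == g1])
    return {
        "equal_opportunity_gap": abs(r0[0] - r1[0]),
        "predictive_equality_gap": abs(r0[1] - r1[1]),
        "predictive_parity_gap": abs(r0[2] - r1[2]),
    }

# Hypothetical usage: compare gaps against a chosen threshold (e.g. 0.1).
gaps = fairness_gaps([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 1, 0], [0, 0, 0, 1, 1, 1])
print({k: round(v, 3) for k, v in gaps.items()})
```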