Authors:
Michael Danner 1; Bakir Hadžić 2; Robert Radloff 2; Xueping Su 3; Leping Peng 4; Thomas Weber 2 and Matthias Rätsch 2
Affiliations:
1 CVSSP, University of Surrey, Guildford, U.K.; 2 ViSiR, Reutlingen University, Germany; 3 School of Electronics and Information, Xi’an Polytechnic University, China; 4 Hunan University of Science and Technology, China
Keyword(s):
Unbiased Machine Learning, Fairness, Trustworthy AI, Acceptance Research, Debiasing Training Data, Facial Data Sets, AI-Acceptance Analysis.
Abstract:
AI-based prediction and recommender systems are widely used in various industry sectors. However, general acceptance of AI-enabled systems is still largely uninvestigated. Therefore, we first conducted a survey with 559 respondents. The findings suggest that AI-enabled systems should be fair, transparent, consider personality traits, and perform tasks efficiently. Second, we developed a system for the Facial Beauty Prediction (FBP) benchmark that automatically evaluates facial attractiveness. As our previous experiments have shown, these results are usually highly correlated with human ratings; consequently, they also reflect the human bias in the annotations. An upcoming challenge for scientists is to provide training data and AI algorithms that can withstand such distorted information. In this work, we introduce AntiDiscriminationNet (ADN), a superior attractiveness prediction network. We propose a new method to generate an unbiased convolutional neural network (CNN) that improves the fairness of machine learning on facial datasets. To train unbiased networks, we generate synthetic images and weight the training data for anti-discrimination assessments across different ethnicities. Additionally, we introduce an approach with entropy penalty terms to reduce the bias of our CNN. Our research provides insights into how to train and build fair machine learning models for facial image analysis by minimising implicit biases. Our AntiDiscriminationNet outperforms all competitors in the FBP benchmark, achieving a Pearson correlation coefficient of PCC = 0.9601.