Authors:
Michael Heider; Helena Stegherr; David Pätzel; Roman Sraj; Jonathan Wurth; Benedikt Volger and Jörg Hähner
Affiliation:
Universität Augsburg, Am Technologiezentrum 8, Augsburg, Germany
Keyword(s):
Rule Set Learning, Rule-based Learning, Learning Classifier Systems, Evolutionary Machine Learning, Interpretable Models, Explainable AI.
Abstract:
To meet the increasing demand for explanations of decisions made by automated prediction systems, machine learning (ML) techniques that produce inherently transparent models are well suited. Learning Classifier Systems (LCSs), a family of rule-based learners, produce transparent models by design. However, the usefulness of such models, both for predictions and analyses, depends heavily on the placement and selection of rules (which together constitute the ML task of model selection). In this paper, we investigate a variety of techniques to efficiently place good rules within the search space based on their local prediction errors as well as their generality. This investigation is done within a specific LCS, named SupRB, where the placement of rules and the selection of good subsets of rules are strictly separated, in contrast to other LCSs where these tasks sometimes blend. We compare a Random Search, a (1,λ)-ES, and three Novelty Search variants. We find that there is a definitive need to guide the search based on sensible criteria, i.e. error and generality, rather than just placing rules randomly and selecting better-performing ones, but we also find that the Novelty Search variants do not beat the easier-to-understand (1,λ)-ES.
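To illustrate the guided rule-discovery idea described in the abstract, the following is a minimal sketch of a (1,λ)-ES that places a single interval-based rule by fitness, where fitness combines local prediction error and generality. The interval representation, the mean-squared local error, the simple additive error/generality trade-off, the Gaussian interval mutation, and all function names (`fitness`, `es_one_comma_lambda`) are assumptions made for this sketch, not SupRB's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)


def matched(rule, X):
    """Boolean mask of examples falling inside the rule's interval bounds."""
    lower, upper = rule
    return np.all((X >= lower) & (X <= upper), axis=1)


def fitness(rule, X, y):
    """Combine local prediction error and generality (hypothetical weighting)."""
    mask = matched(rule, X)
    if not mask.any():
        return -np.inf  # rules matching nothing are useless
    # Local error: MSE of a constant (mean) prediction on the matched examples.
    error = np.mean((y[mask] - y[mask].mean()) ** 2)
    generality = mask.mean()  # fraction of training examples matched
    return generality - error  # simple additive trade-off (assumption)


def es_one_comma_lambda(X, y, lam=20, generations=32, sigma=0.1):
    """(1,lambda)-ES: each generation, the parent is replaced by the best of
    lam mutated offspring; the parent itself is not retained (comma selection)."""
    dim = X.shape[1]
    center = X[rng.integers(len(X))]  # seed the rule around a random example
    parent = (center - 0.1, center + 0.1)
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            lower = parent[0] + rng.normal(0, sigma, dim)
            upper = parent[1] + rng.normal(0, sigma, dim)
            # Keep bounds ordered so the interval stays well-formed.
            offspring.append((np.minimum(lower, upper), np.maximum(lower, upper)))
        parent = max(offspring, key=lambda r: fitness(r, X, y))
    return parent
```

In this sketch, `es_one_comma_lambda(X, y)` returns the bounds of one discovered rule; following the separation described in the abstract, SupRB would repeat such a discovery step to place many rules and then, as a strictly separate task, select a good subset of them to form the final model.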