Authors:
Basile Tousside, Lukas Friedrichsen and Jörg Frochte
Affiliation:
Bochum University of Applied Science, 42579 Heiligenhaus, Germany
Keyword(s):
Tree-CNN, Continual Learning, Deep Learning, Hierarchical Classification, Robust.
Abstract:
The ability to perform continual learning, i.e., to adapt to new tasks without losing the knowledge already acquired, is still a problem that current machine learning models do not address well. This is a drawback that needs to be tackled for several reasons. On the one hand, conserving knowledge without retaining all of the data across all tasks is a growing challenge under laws such as the European General Data Protection Regulation. On the other hand, training models comes with a CO2 footprint; in the spirit of Green AI, the reuse of trained models will become more and more important. In this paper, we discuss a simple but effective approach based on a Tree-CNN architecture. First, it allows knowledge transfer from past tasks when learning a new task, which keeps the model compact despite network expansion. Second, it avoids forgetting, i.e., it learns new tasks without forgetting previous ones. Third, it is cheap to train and to evaluate and requires less memory than a single monolithic model. Experimental results on a subset of the ImageNet dataset comparing different continual learning methods are presented.