Authors:
Christian Witte ¹,²; René Schuster ³; Syed Bukhari ¹; Patrick Trampert ¹; Didier Stricker ²,³ and Georg Schneider ¹
Affiliations:
¹ ZF Friedrichshafen AG, Saarbrücken, Germany
² University of Kaiserslautern - TUK, Kaiserslautern, Germany
³ DFKI - German Research Center for Artificial Intelligence, Kaiserslautern, Germany
Keyword(s):
Continual Learning, Catastrophic Forgetting, Object Detection, Autonomous Driving.
Abstract:
Incorporating unseen data into pre-trained neural networks remains a challenging endeavor, as complete retraining is often impractical. Yet, training networks sequentially on data with different distributions can degrade performance on previously learned data, a phenomenon known as catastrophic forgetting. This sequential training paradigm and the mitigation of catastrophic forgetting are the subject of Continual Learning (CL). Forgetting poses a challenge for applications with changing data distributions and prediction objectives, including Autonomous Driving (AD). Our work illustrates the severity of catastrophic forgetting in object detection under class- and domain-incremental learning. We propose four hypotheses and investigate the impact of the ordering of sequential increments and of the underlying data distribution of AD datasets. Further, we examine the influence of different object detection architectures. The results of our empirical study highlight the major effects of forgetting for class-incremental learning. Moreover, we show that domain-incremental learning suffers less from forgetting but is highly dependent on the design of the experiments and the choice of architecture.
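The phenomenon the abstract describes can be illustrated with a toy example that is not from the paper: a minimal sketch, assuming a 1-D logistic-regression classifier trained with plain gradient descent, first on one input distribution (Task A) and then sequentially on a shifted distribution (Task B) without revisiting Task A data. Accuracy on Task A degrades after the second training phase, mimicking domain-incremental forgetting at a tiny scale.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(w, b, X, y, lr=0.5, epochs=500):
    # Plain gradient descent on the logistic loss.
    for _ in range(epochs):
        p = sigmoid(w * X + b)
        w -= lr * np.mean((p - y) * X)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return float(np.mean((sigmoid(w * X + b) > 0.5) == y))

# Task A: two classes centred around -2 and +2.
X_a = np.array([-2.2, -1.9, -2.1, 1.8, 2.0, 2.2])
y_a = np.array([0, 0, 0, 1, 1, 1])
# Task B: same labels, but the input distribution has shifted (domain increment).
X_b = X_a + 6.0
y_b = y_a

w, b = 0.0, 0.0
w, b = train(w, b, X_a, y_a)
acc_before = accuracy(w, b, X_a, y_a)  # the model fits Task A

w, b = train(w, b, X_b, y_b)           # sequential update, no Task A data
acc_after = accuracy(w, b, X_a, y_a)   # Task A performance degrades
```

The decision boundary moves toward the new distribution's midpoint, so previously separable Task A samples are misclassified. Mitigating exactly this effect (via replay, regularization, or parameter isolation) is the goal of the CL methods the paper evaluates at the scale of object detection.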