Authors:
Bruno Casella 1,2; Alessio Barbaro Chisari 3,4; Sebastiano Battiato 4 and Mario Valerio Giuffrida 5
Affiliations:
1 Department of Computer Science, University of Torino, Torino, Italy
2 Department of Economics and Business, University of Catania, Catania, Italy
3 Department of Civil Engineering and Architecture, University of Catania, Catania, Italy
4 Department of Mathematics and Computer Science, University of Catania, Catania, Italy
5 School of Computing, Edinburgh Napier University, Edinburgh, U.K.
Keyword(s):
Parameter Aggregation, Transfer Learning, Selective Forgetting.
Abstract:
It has been demonstrated that deep neural networks outperform traditional machine learning. However, deep networks lack generalisability: they will not perform as well on a new (testing) set drawn from a different distribution, owing to domain shift. To tackle this known issue, several transfer learning approaches have been proposed, where the knowledge of a trained model is transferred into another to improve performance on different data. However, most of these approaches require additional training steps, or they suffer from catastrophic forgetting, which occurs when a trained model overwrites previously learnt knowledge. We address both problems with a novel transfer learning approach that uses network aggregation. We train dataset-specific networks together with an aggregation network in a unified framework. The loss function includes two main components: a task-specific loss (such as cross-entropy) and an aggregation loss. The proposed aggregation loss allows our model to learn how trained deep network parameters can be aggregated with an aggregation operator. We demonstrate that the proposed approach learns model aggregation at test time without any further training step, reducing the burden of transfer learning to a simple arithmetical operation. The proposed approach achieves performance comparable to the baseline. Moreover, if the aggregation operator has an inverse, we show that our model also inherently allows for selective forgetting, i.e., the aggregated model can forget one of the datasets it was trained on while retaining information on the others.
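To illustrate the idea of test-time aggregation and inverse-based forgetting, here is a minimal sketch. It assumes element-wise summation as the aggregation operator (the abstract only states the operator is a simple arithmetic operation with an inverse); the function names and the toy parameter vectors are hypothetical, not from the paper.

```python
import numpy as np

def aggregate(params_list):
    # Hypothetical aggregation operator: element-wise sum of the
    # dataset-specific network parameters. Aggregation happens at
    # test time, with no further training step.
    return np.sum(params_list, axis=0)

def forget(aggregated, params_k):
    # Because summation has an inverse (subtraction), the aggregated
    # model can selectively "forget" dataset k by removing its
    # contribution while retaining information from the others.
    return aggregated - params_k

# Toy example: parameter vectors of three dataset-specific networks
p1 = np.array([1.0, 2.0])
p2 = np.array([3.0, 4.0])
p3 = np.array([5.0, 6.0])

agg = aggregate([p1, p2, p3])      # aggregated model parameters
agg_minus_2 = forget(agg, p2)      # forget dataset 2: equals p1 + p3
```

Any invertible element-wise operator would fit the same pattern; with a non-invertible operator (e.g. element-wise max), the forgetting step would not be available.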