Authors:
Ik Hwan Jeon and Soo Young Shin
Affiliation:
Dept. of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, South Korea
Keyword(s):
Continual Learning, Generative Models, Representation Learning, Variational Autoencoders.
Related Ontology Subjects/Areas/Topics:
Artificial Intelligence; Artificial Intelligence and Decision Support Systems; Bayesian Networks; Biomedical Engineering; Biomedical Signal Processing; Computational Intelligence; Data Manipulation; Enterprise Information Systems; Health Engineering and Technology Applications; Human-Computer Interaction; Methodologies and Methods; Neural Networks; Neurocomputing; Neurotechnology, Electronics and Informatics; Pattern Recognition; Physiological Computing Systems; Sensor Networks; Signal Processing; Soft Computing; Theory and Methods
Abstract:
We propose a novel architecture for continual representation learning of images, called the variational continual auto-encoder (VCAE). Our approach builds a time-variant parametric model that generates images close to the observations by performing optimized approximate inference over time. When the dataset is observed sequentially, the model efficiently learns underlying representations without forgetting previously acquired knowledge. In our experiments, we track the test log-likelihood over time, which indicates resistance to catastrophic forgetting. The results show that VCAE is more resistant to catastrophic forgetting than the benchmark while requiring much less training time.
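For orientation, the sketch below shows a generic variational auto-encoder training step of the kind the abstract builds on (an encoder producing an approximate posterior, a reparameterized sample, and a decoder trained by maximizing the evidence lower bound). It is a minimal, hypothetical illustration, not the authors' VCAE: the layer sizes, network shapes, optimizer, and the absence of any continual-learning mechanism are assumptions for illustration only.

# Minimal, hypothetical VAE training step (illustrative only; not the VCAE implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=20, h_dim=400):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)      # encoder hidden layer
        self.mu = nn.Linear(h_dim, z_dim)       # posterior mean
        self.logvar = nn.Linear(h_dim, z_dim)   # posterior log-variance
        self.dec1 = nn.Linear(z_dim, h_dim)     # decoder hidden layer
        self.dec2 = nn.Linear(h_dim, x_dim)     # reconstruction logits

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)    # reparameterization trick
        logits = self.dec2(F.relu(self.dec1(z)))
        return logits, mu, logvar

def neg_elbo(logits, x, mu, logvar):
    # Negative ELBO: reconstruction term + KL(q(z|x) || N(0, I))
    recon = F.binary_cross_entropy_with_logits(logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# One gradient step on a placeholder batch of flattened images in [0, 1].
model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 784)
logits, mu, logvar = model(x)
loss = neg_elbo(logits, x, mu, logvar)
opt.zero_grad()
loss.backward()
opt.step()

In a continual setting such as the one the abstract describes, the same objective would be optimized as the data arrive sequentially, with some mechanism to preserve previously acquired knowledge; the specific mechanism used by VCAE is detailed in the paper itself.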