Atkinson, C., McCane, B., Szymanski, L., and Robins, A.
(2018). Pseudo-recursal: Solving the catastrophic
forgetting problem in deep neural networks. arXiv
preprint arXiv:1802.03875.
Bengio, Y., Yao, L., Alain, G., and Vincent, P. (2013). Gen-
eralized denoising auto-encoders as generative mod-
els. In Advances in neural information processing sys-
tems, pages 899–907.
Castro, F. M., Marín-Jiménez, M. J., Guil, N., Schmid, C.,
and Alahari, K. (2018). End-to-end incremental learn-
ing. In Proceedings of the European conference on
computer vision (ECCV), pages 233–248.
Chaudhry, A., Dokania, P. K., Ajanthan, T., and Torr, P. H.
(2018a). Riemannian walk for incremental learning:
Understanding forgetting and intransigence. In Pro-
ceedings of the European Conference on Computer
Vision (ECCV), pages 532–547.
Chaudhry, A., Ranzato, M., Rohrbach, M., and Elhoseiny,
M. (2018b). Efficient lifelong learning with A-GEM.
arXiv preprint arXiv:1812.00420.
Chaudhry, A., Rohrbach, M., Elhoseiny, M., Ajanthan,
T., Dokania, P. K., Torr, P. H., and Ranzato, M.
(2019). On tiny episodic memories in continual learn-
ing. arXiv preprint arXiv:1902.10486.
De Lange, M., Aljundi, R., Masana, M., Parisot, S.,
Jia, X., Leonardis, A., Slabaugh, G., and Tuyte-
laars, T. (2019). A continual learning survey: Defy-
ing forgetting in classification tasks. arXiv preprint
arXiv:1909.08383.
Farquhar, S. and Gal, Y. (2018). Towards robust evalua-
tions of continual learning. Bayesian Deep Learning
Workshop at NeurIPS.
Fernando, C., Banarse, D., Blundell, C., Zwols, Y., Ha, D.,
Rusu, A. A., Pritzel, A., and Wierstra, D. (2017). PathNet:
Evolution channels gradient descent in super neural
networks. arXiv preprint arXiv:1701.08734.
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B.,
Warde-Farley, D., Ozair, S., Courville, A., and Ben-
gio, Y. (2014). Generative adversarial nets. In
Advances in neural information processing systems,
pages 2672–2680.
He, K., Zhang, X., Ren, S., and Sun, J. (2015). Deep resid-
ual learning for image recognition. arXiv preprint
arXiv:1512.03385.
Hinton, G., Vinyals, O., and Dean, J. (2015). Distilling the
knowledge in a neural network. NIPS Deep Learning
and Representation Learning Workshop.
Hocquet, G., Bichler, O., and Querlioz, D. (2020). OvA-INN:
Continual learning with invertible neural networks.
arXiv preprint arXiv:2006.13772.
Jeon, I. and Shin, S. (2019). Continual representa-
tion learning for images with variational continual
auto-encoder. In Proceedings of the 11th Interna-
tional Conference on Agents and Artificial Intelli-
gence - Volume 2: ICAART, pages 367–373. IN-
STICC, SciTePress.
Kemker, R. and Kanan, C. (2018). FearNet: Brain-inspired
model for incremental learning. International Confer-
ence on Learning Representations (ICLR).
Kingma, D. P. and Ba, J. (2014). Adam: A method for
stochastic optimization. 3rd International Conference
on Learning Representations (ICLR).
Kingma, D. P. and Welling, M. (2014). Auto-encoding vari-
ational bayes. International Conference on Learning
Representations (ICLR).
Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J.,
Desjardins, G., Rusu, A. A., Milan, K., Quan, J.,
Ramalho, T., Grabska-Barwinska, A., et al. (2017).
Overcoming catastrophic forgetting in neural net-
works. Proceedings of the national academy of sci-
ences, 114(13):3521–3526.
Krizhevsky, A., Hinton, G., et al. (2009). Learning multiple
layers of features from tiny images. Technical report, University of Toronto.
Lavda, F., Ramapuram, J., Gregorova, M., and Kalousis, A.
(2018). Continual classification learning using gener-
ative models. arXiv preprint arXiv:1810.10612.
LeCun, Y., Cortes, C., and Burges, C. (2010). MNIST hand-
written digit database. ATT Labs [Online]. Available:
http://yann.lecun.com/exdb/mnist, 2.
Lesort, T., Gepperth, A., Stoian, A., and Filliat, D. (2019).
Marginal replay vs conditional replay for continual
learning. In International Conference on Artificial
Neural Networks, pages 466–480. Springer.
Mallya, A. and Lazebnik, S. (2018). PackNet: Adding mul-
tiple tasks to a single network by iterative pruning. In
Proceedings of the IEEE Conference on Computer Vi-
sion and Pattern Recognition, pages 7765–7773.
McCloskey, M. and Cohen, N. J. (1989). Catastrophic in-
terference in connectionist networks: The sequential
learning problem. In Psychology of learning and mo-
tivation, volume 24, pages 109–165. Elsevier.
Misra, D. (2019). Mish: A self regularized non-
monotonic neural activation function. arXiv preprint
arXiv:1908.08681.
Parisi, G. I., Kemker, R., Part, J. L., Kanan, C., and
Wermter, S. (2019). Continual lifelong learning with
neural networks: A review. Neural Networks, 113:54–
71.
Prabhu, A., Torr, P. H., and Dokania, P. K. (2020). GDumb: A
simple approach that questions our progress in contin-
ual learning. In The European Conference on Com-
puter Vision (ECCV).
Rannen, A., Aljundi, R., Blaschko, M. B., and Tuytelaars,
T. (2017). Encoder based lifelong learning. In Pro-
ceedings of the IEEE International Conference on
Computer Vision, pages 1320–1328.
Rebuffi, S.-A., Kolesnikov, A., Sperl, G., and Lampert,
C. H. (2017). iCaRL: Incremental classifier and rep-
resentation learning. In Proceedings of the IEEE con-
ference on Computer Vision and Pattern Recognition,
pages 2001–2010.
Robins, A. (1995). Catastrophic forgetting, rehearsal and
pseudorehearsal. Connection Science, 7(2):123–146.
Rusu, A. A., Rabinowitz, N. C., Desjardins, G., Soyer,
H., Kirkpatrick, J., Kavukcuoglu, K., Pascanu, R.,
and Hadsell, R. (2016). Progressive neural networks.
arXiv preprint arXiv:1606.04671.
Shin, H., Lee, J. K., Kim, J., and Kim, J. (2017). Continual
learning with deep generative replay. In Advances in
Neural Information Processing Systems, pages 2990–
2999.