Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative Adversarial Networks. Neural Information Processing Systems (NIPS).
Hinton, G. E. (1981b). Shape Representation in Parallel Systems. Seventh International Joint Conference on Artificial Intelligence.
Hinton, G. E., Krizhevsky, A., and Wang, S. (2011). Transforming Auto-Encoders. International Conference on Artificial Neural Networks.
Hinton, G. E., Sabour, S., and Frosst, N. (2018). Matrix capsules with EM routing. International Conference on Learning Representations.
Hirata, D. and Takahashi, N. (2020). Ensemble learning in CNN augmented with fully connected subnetworks.
Iesmantas, T. and Alzbutas, R. (2018). Convolutional capsule network for classification of breast cancer histology images.
Jayasundara, V., Jayasekara, S., Jayasekara, H., Rajasegaran, J., Seneviratne, S., and Rodrigo, R. (2019). TextCaps: Handwritten Character Recognition With Very Small Datasets. In 2019 IEEE Winter Conference on Applications of Computer Vision, pages 254–262. IEEE Computer Society, Conference Publishing Services.
Käding, C., Rodner, E., Freytag, A., and Denzler, J. (2017). Fine-Tuning Deep Neural Networks in Continuous Learning Scenarios. Asian Conference on Computer Vision, pages 588–605.
Kamra, N., Gupta, U., and Liu, Y. (2017). Deep generative dual memory network for continual learning.
Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., Hassabis, D., Clopath, C., Kumaran, D., and Hadsell, R. (2017). Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521–3526.
Kumar, A. D. (2018). Novel Deep Learning Model for Traffic Sign Detection Using Capsule Networks. International Journal of Pure and Applied Mathematics, 118(20).
Lake, B. M., Salakhutdinov, R., and Tenenbaum, J. B. (2015). Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338.
LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE.
Liu, J., Chao, F., Yang, L., Lin, C.-M., and Shen, Q. (2019). Decoder Choice Network for Meta-Learning.
Malgieri, G. (2019). Automated decision-making in the EU Member States: The right to explanation and other “suitable safeguards” in the national legislations. Computer Law & Security Review, 35(5):105327.
Mobiny, A. and van Nguyen, H. (2018). Fast CapsNet for Lung Cancer Screening.
Mundhenk, T. N., Chen, B. Y., and Friedland, G. (2019). Efficient Saliency Maps for Explainable AI.
Nøkland, A. and Eidnes, L. H. (2019). Training Neural Networks with Local Error Signals.
Rajasegaran, J., Jayasundara, V., Jayasekara, S., Jayasekara, H., Seneviratne, S., and Rodrigo, R. (2019). DeepCaps: Going Deeper with Capsule Networks. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Renkens, V. and Van hamme, H. (2018). Capsule Networks for Low Resource Spoken Language Understanding. Proc. Interspeech 2018.
Roy, D., Panda, P., and Roy, K. (2018). Tree-CNN: A Hierarchical Deep Convolutional Neural Network for Incremental Learning.
Sabour, S., Frosst, N., and Hinton, G. E. (2017). Dynamic Routing Between Capsules.
Simonyan, K., Vedaldi, A., and Zisserman, A. (2013). Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. International Conference on Learning Representations (ICLR).
Tieleman, T. (2014). Optimizing Neural Networks That Generate Images. Dissertation, University of Toronto, Toronto.
van de Ven, G. M. and Tolias, A. S. (2019). Three scenarios for continual learning. CoRR.
Xi, E., Bing, S., and Jin, Y. (2017). Capsule Network Performance on Complex Data. International Joint Conference on Neural Networks (IJCNN).
Xiao, H., Rasul, K., and Vollgraf, R. (2017). Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms.
Yoon, S. W., Seo, J., and Moon, J. (2019). TapNet: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning. Proceedings of the 36th International Conference on Machine Learning.
Zenke, F., Poole, B., and Ganguli, S. (2017). Improved multitask learning through synaptic intelligence. CoRR.
Zhang, Q., Wang, X., Wu, Y. N., Zhou, H., and Zhu, S.-C. (2019). Interpretable CNNs for Object Classification.