Authors:
Theus H. Aspiras, Ruixu Liu and Vijayan K. Asari
Affiliation:
Electrical and Computer Engineering, University of Dayton, 300 College Park, Dayton, U.S.A.
Keyword(s):
Generative Adversarial Networks, Convolutional Neural Networks, Variational Autoencoders, Unsupervised Learning, Semi-Supervised Learning.
Related Ontology Subjects/Areas/Topics:
Artificial Intelligence; Biomedical Engineering; Biomedical Signal Processing; Computational Intelligence; Health Engineering and Technology Applications; Human-Computer Interaction; Learning Paradigms and Algorithms; Methodologies and Methods; Neural Networks; Neurocomputing; Neurotechnology, Electronics and Informatics; Pattern Recognition; Physiological Computing Systems; Sensor Networks; Signal Processing; Soft Computing; Stability and Instability in Artificial Neural Networks; Theory and Methods
Abstract:
Given the large amounts of unlabeled data available for training neural networks, it is desirable to design a network architecture and training paradigm that maximize the expressive power of the latent space representation. By viewing the latent space from multiple perspectives through adversarial learning and autoencoding, labeled-data requirements can be reduced, which improves learning across domains. The goal of the proposed work is not to train exhaustively, but to train with multiperspectivity. We propose a new neural network architecture, the Active Recall Network (ARN), for learning with less labels by optimizing the latent space. The architecture learns latent space features of unlabeled data through a fusion framework that combines an autoencoder with a generative adversarial network. Variations in the latent space representations are captured and modeled by generation, discrimination, and reconstruction strategies in the network, using both unlabeled and labeled data. Performance evaluations of the proposed ARN architectures on two popular datasets demonstrate promising results in terms of generative capability and latent space effectiveness. Through the multiple perspectives embedded in ARN, we envision that this architecture will be highly versatile in any application that requires learning with less labels.
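To make the abstract's description of the fusion framework concrete, the following is a minimal sketch, not the authors' ARN implementation, of how an autoencoder and a generative adversarial network can be trained jointly on unlabeled data with reconstruction and adversarial objectives. All module sizes, optimizer settings, and loss weights here are illustrative assumptions.

import torch
import torch.nn as nn

LATENT_DIM = 64
IMG_DIM = 28 * 28  # assumed flattened image size (e.g., MNIST-like data)

# Encoder maps images to latent codes; decoder/generator maps codes back to images.
encoder = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.ReLU(), nn.Linear(256, LATENT_DIM))
decoder = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(),
                        nn.Linear(256, IMG_DIM), nn.Sigmoid())
# Discriminator scores whether an image is real or generated.
discriminator = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

recon_loss = nn.MSELoss()
adv_loss = nn.BCEWithLogitsLoss()

opt_g = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(x_unlabeled):
    """One unsupervised step combining reconstruction and adversarial objectives."""
    batch = x_unlabeled.size(0)
    real_label = torch.ones(batch, 1)
    fake_label = torch.zeros(batch, 1)

    # Discriminator update: real images vs. images decoded from random latent codes.
    z_prior = torch.randn(batch, LATENT_DIM)
    x_fake = decoder(z_prior).detach()
    d_loss = adv_loss(discriminator(x_unlabeled), real_label) + \
             adv_loss(discriminator(x_fake), fake_label)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Encoder/decoder update: reconstruct inputs and fool the discriminator.
    z = encoder(x_unlabeled)
    x_rec = decoder(z)
    g_loss = recon_loss(x_rec, x_unlabeled) + \
             adv_loss(discriminator(decoder(torch.randn(batch, LATENT_DIM))), real_label)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

In this hypothetical sketch, the reconstruction term plays the autoencoding role while the adversarial term provides the second perspective on the latent space; a labeled-data branch (e.g., a classifier head on the encoder output) would be added for the semi-supervised setting described in the abstract.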