Authors:
Theus H. Aspiras 1; Vijayan K. Asari 1 and Wesam Sakla 2
Affiliations:
1 University of Dayton, United States; 2 Air Force Research Laboratory and Wright-Patterson Air Force Base, United States
Keyword(s):
Nonlinear Line Attractor, Multidimensional Data, Neural Networks, Machine Learning.
Related Ontology Subjects/Areas/Topics:
Artificial Intelligence; Biomedical Engineering; Biomedical Signal Processing; Computational Intelligence; Data Manipulation; Health Engineering and Technology Applications; Human-Computer Interaction; Image Processing and Artificial Vision Applications; Methodologies and Methods; Neural Networks; Neurocomputing; Neurotechnology, Electronics and Informatics; Pattern Recognition; Physiological Computing Systems; Sensor Networks; Signal Processing; Soft Computing; Theory and Methods
Abstract:
The human brain’s ability to extract information from multidimensional data is modeled by the Nonlinear Line Attractor (NLA) network, in which nodes are connected by polynomial weight sets. In this architecture, each neuron is assumed to be fully connected to all other neurons, creating a huge web of connections. We envision instead that each neuron should be connected to a group of surrounding neurons, with connection strengths that decrease as the distance from the neuron increases. To develop this weighted NLA architecture, we use a Gaussian weighting strategy to model proximity, which also reduces computation time significantly. Once all data has been trained in the NLA network, the weight set can be reduced using a locality-preserving nonlinear dimensionality reduction technique. Reducing the weight sets in this way decreases the number of outputs needed for recognition tasks. An appropriate distance measure can then be used to compare testing data with the trained data when processed through the NLA architecture. It is observed that the proposed GNLA algorithm reduces training time significantly and provides even better recognition using fewer dimensions than the original NLA algorithm. We have tested this algorithm and shown that it works well on different datasets, including the EO Synthetic Vehicle database and the Sheffield face database.
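As a rough illustration of the Gaussian weighting idea described in the abstract, the sketch below (Python/NumPy) builds a proximity-based connection-weight matrix in which strength decays with the distance between neuron indices. The 1-D neuron layout, the function name gaussian_connection_weights, and the sigma parameter are illustrative assumptions only, not the paper's GNLA implementation or its polynomial weight sets.

    import numpy as np

    def gaussian_connection_weights(num_neurons, sigma=2.0):
        # Pairwise distances between neuron indices on a 1-D grid (an assumed
        # layout for illustration; the paper's neurons may be arranged differently).
        idx = np.arange(num_neurons)
        dist = np.abs(idx[:, None] - idx[None, :])
        # Gaussian falloff: nearby neurons get strong connections, distant ones weak.
        return np.exp(-(dist.astype(float) ** 2) / (2.0 * sigma ** 2))

    # Example: an 8-neuron layer; weights far from the diagonal are near zero,
    # so those connections contribute little and computation can be reduced.
    W = gaussian_connection_weights(8, sigma=1.5)
    print(np.round(W, 3))

In this sketch, shrinking sigma confines each neuron's effective neighborhood, which is the mechanism the abstract credits for the reduced training time of the GNLA architecture.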