a high-dimensional space are usually nonlinear, it is better to use a nonlinear line attractor to model the manifold in that space. In this paper, we therefore utilize the nonlinear line attractor (NLA) network as the supporting architecture for our work.
1.2 Biological Implications
Most recurrent associative networks are fully interconnected, meaning that every node is connected to every other node. This architecture allows every node to influence every other node, even when significant input changes occur at distant nodes. In biological structures, however, nodes are densely connected only to their surrounding nodes and have longer, weaker connections to distant nodes. This type of organization is described by Tononi et al. (Tononi et al., 1999), who found that highly interconnected networks contain considerable redundancy. They introduced an optimized degenerative case, which minimizes the number of connections while maintaining cognitive ability and reducing redundancy.
By using only neighborhood connections, each portion of the network is responsible for modeling its own region, while influence from other sections propagates gradually as the network iterates. This local modeling aids faster convergence, since each node relies only on nearby nodes, and devotes more of the network to high-variance areas, which carry more information than low-variance areas of the input such as background regions. It also reduces the redundancy in the network for modeling a specific portion of the data. Guido et al. (Guido et al., 1990) found that functional compensation occurs when the visual cortex, which is highly modular, is damaged.
1.3 Modularity in Neural Networks
By developing neighboring connections, we can create modularity while still maintaining inter-connectivity in the network. Happel and Murre (Happel and Murre, 1994) investigated various interconnection and modularity configurations in networks and found that different configurations aided the recognition of different tasks. Other techniques also use modularity, such as modular principal component analysis (Gottumukkal and Asari, 2004), which divides an image into sub-images and applies PCA to each sub-image to aid the recognition of the whole image. Modularity reduces model complexity by using specific modules to learn only a portion of the data, which in turn aids the overall complex task (Gomi and Kawato, 1993).
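To make the idea concrete, the following is a minimal sketch in the spirit of modular PCA: each image is divided into non-overlapping sub-images, a separate PCA is fitted to each block position, and the per-block projections are concatenated. The block size, image dimensions, and use of NumPy/scikit-learn are assumptions for illustration, not details from (Gottumukkal and Asari, 2004).

```python
import numpy as np
from sklearn.decomposition import PCA

def modular_pca_features(images, block_size=16, n_components=8):
    """Illustrative sketch of modular PCA: fit a separate PCA to each
    sub-image position and concatenate the per-block projections.

    images: array of shape (num_images, height, width); height and width
    are assumed divisible by block_size for this sketch.
    """
    num, h, w = images.shape
    features = []
    for r in range(0, h, block_size):
        for c in range(0, w, block_size):
            # Flatten the same block from every image into (num, block_size**2).
            block = images[:, r:r + block_size, c:c + block_size].reshape(num, -1)
            pca = PCA(n_components=n_components)
            features.append(pca.fit_transform(block))
    # Concatenated block projections describe the whole image.
    return np.hstack(features)

# Example: 20 random 64x64 "images" -> 16 blocks x 8 components per image.
feats = modular_pca_features(np.random.rand(20, 64, 64))
print(feats.shape)  # (20, 128)
```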
Rather than enforcing complete modularity, some algorithms also include overlap between modules to improve recognition capabilities. Auda and Kamel (Auda and Kamel, 1997) used Cooperative Modular Neural Networks with various degrees of overlap to improve several classification applications. Furthermore, since modularity reduces the number of connections between neurons by dividing the processing into smaller subtasks, the amount of computation can be greatly reduced.
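As a rough back-of-the-envelope illustration (not a count taken from the cited work): a fully interconnected network of N nodes has on the order of N^2 connections, whereas dividing it into M non-overlapping modules of N/M nodes each leaves only about M(N/M)^2 = N^2/M connections, an M-fold reduction; for N = 1024 nodes and M = 16 modules this is roughly 10^6 versus 6.5 x 10^4 connections.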
The type of neural network we consider is the nonlinear line attractor, proposed by Seow and Asari (Seow and Asari, 2004), which has been used for skin color association, pattern association (Seow and Asari, 2006), and pose and expression invariant face recognition (Seow et al., 2012). Given that modularity can reduce computational complexity and improve recognition in many cases, we aim to incorporate it into the nonlinear line attractor network. Rather than imposing complete modularity, we propose a Gaussian weighting strategy that creates smooth overlaps between modules. Through this weighting scheme, modularity also reduces the modeling complexity of the network. In (Seow et al., 2012), the weight set is reduced using Nonlinear Dimensionality Reduction. We examine the ability of that algorithm to reduce the weights and propose an improvement to it.
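As a purely illustrative sketch (the GNLA formulation itself is developed in Section 2), the following shows one way a Gaussian weighting over the distance between a node and a module center could produce smooth, overlapping module memberships. The 1-D node indexing, the module centers, and the width sigma are hypothetical choices made only for this example.

```python
import numpy as np

def gaussian_module_weights(num_nodes, module_centers, sigma):
    """Illustrative sketch: smooth module membership weights.

    Each node i receives a weight for module m that decays as a Gaussian
    of the distance between the node index and the module center, so
    neighboring modules overlap smoothly rather than being cut off at
    hard boundaries.
    """
    idx = np.arange(num_nodes)[:, None]            # node indices (column)
    centers = np.asarray(module_centers)[None, :]  # module centers (row)
    return np.exp(-((idx - centers) ** 2) / (2.0 * sigma ** 2))

# Example: 100 nodes, 4 modules; sigma controls the amount of overlap.
weights = gaussian_module_weights(100, module_centers=[12, 37, 62, 87], sigma=10.0)
print(weights.shape)  # (100, 4): one smooth membership value per node and module
```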
The main contributions of this paper are:
• A Gaussian weighting strategy for the nonlinear line attractor network that introduces modularity
• A reduction of the computational complexity that improves the convergence time of the Gaussian nonlinear line attractor (GNLA) architecture
• An improved scheme for using Nonlinear Dimensionality Reduction for object recognition
2 METHODOLOGY
The nonlinear line attractor network is a recurrent associative neural network that aims to converge on a trained pattern given an input. Each trained pattern carries connective information that links one degree of information to another. Trained patterns are usually learned as points in the feature space, so that variations of a pattern are represented as a basin. Convergence means that the input pattern becomes associated with one of the learned patterns. When considering a basin of attraction, a single-point representation of a particular set of patterns may be insufficient to encompass all of the patterns. The NLA architecture instead formulates a nonlinear line representation, which allows patterns to converge towards a line attractor. An example would be using manifold learning on a set of objects