determine a new direction in order to avoid the obstacle. This is a simple task, but the key aim is to demonstrate the meta-network's ability to improve the robot's decision making by modifying the architecture of the network.
Weightless neural networks typically take binary inputs, so the data must first be parsed into a suitable form; this is shown in a later section. For this particular experiment, the network analyses the measured distances and determines a direction from them.
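As an illustration of this parsing step, the sketch below thresholds a set of range readings into a binary pattern. The thermometer-style encoding, the threshold values, and the function name are assumptions made for illustration only; the actual parsing used in this work is described in a later section.

```python
# Hypothetical sketch: converting range-sensor distances into the binary
# inputs a weightless network expects. The thresholds and the thermometer
# encoding are illustrative assumptions, not the scheme used in this work.
def binarise_distances(distances, thresholds=(0.25, 0.5, 1.0, 2.0, 4.0, 8.0)):
    """Encode each distance reading as a 6-bit thermometer code."""
    return [[1 if d > t else 0 for t in thresholds] for d in distances]

# Seven readings give a 7 x 6 binary pattern, one row per sensor.
readings = [0.3, 1.2, 5.0, 0.1, 2.5, 0.8, 3.3]
for row in binarise_distances(readings):
    print(row)
```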
As a comparison, a GCN network using randomly
generated architectures will be employed to see how
effective the meta-network is at improving the
accuracy of the results.
5 META-NETWORK
As stated above, a genetic algorithm was chosen to combat the problem of finding a practically employable architecture for a weightless neural network. The framework used in the present work is a layered process of evolutionary search over ANN architectures, as illustrated in Figure 4.
Figure 4: Basic diagram depicting the various processes involved.
As mentioned in the introduction, architecture design is crucial to the successful application of a weightless ANN because it has a significant impact on the network's information-processing capabilities. With a variable number of layers, a variable number of neurons per layer, and variable neuron placement, the number of possible architectures is very large. For a particular learning scenario, a network with relatively few connections may be unable to perform the task because of its limited capacity. Conversely, a network with a large number of connections may fit noise in the training data and fail to generalise appropriately, so a balance must be struck. To combat the problem of finding a practical architecture for a weightless neural network, the following Genetic Algorithm was employed.
There are several variables that can be modified when using the GCN architecture, including the input size, the number of layers, the number of neurons per layer, and the size of the training set. For these experiments, the size of the training set and the input size were set to seven and 6x7 respectively; the reason for this input size is described in a later section. This paper investigates the use of a Genetic Algorithm to optimise this parameter configuration for an obstacle avoidance task; the fixed settings and the quantities left to the algorithm are summarised in the sketch below.
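The constant names in the following fragment are assumptions introduced here for illustration only.

```python
# Fixed settings for these experiments (names are illustrative assumptions).
INPUT_SHAPE = (6, 7)      # input size fixed at 6x7
TRAINING_SET_SIZE = 7     # training set size fixed at seven

# Quantities the genetic algorithm is free to modify for each individual:
#   - the number of layers,
#   - the number of neurons in each layer,
#   - the placement (relative input coordinates) of each neuron.
```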
There were, however, some inherent problems with using a standard genetic algorithm as a base:
5.1 Inputs
Typically, the inputs to a genetic algorithm are strings or numbers representing the parameters the algorithm can modify. For this experiment, however, plain numbers would not suffice because of the complexity of the problem: three components of information needed to be modified. The first is the number of layers, the second is the number of neurons within each of these layers, and the third is the placement of these neurons. As such, a custom input was defined, as shown in Figure 5 and sketched in code below. On the left, Figure 5 shows three 'layers'; each pair of zeros represents the relative coordinates of the position from which a neuron's inputs will be derived, remembering that the dimensions of the layer and the input pattern are identical. As described in the previous section discussing the weightless neural architecture, the pattern 'wraps' around, meaning that neurons on the right side of the pattern are virtually clamped to those on the left. If a given coordinate exceeds the boundaries of the matrix, it simply wraps around. The layer on the right of Figure 5 therefore translates, for the element in the centre of the layer, into a straight line of neurons for that layer.
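A minimal sketch of this genome encoding, assuming a 6x7 pattern and (row, column) offset pairs, is given below; the layer contents and helper names are illustrative, not the actual individuals used in the experiments.

```python
# Sketch of the custom genome: each layer is a list of (row, col) offsets,
# one pair per neuron, giving the relative position from which that neuron's
# inputs are taken. Offsets falling outside the 6x7 pattern wrap around.
ROWS, COLS = 6, 7

def wrap(row, col):
    """Map a coordinate back onto the 6x7 pattern, which behaves as a torus."""
    return row % ROWS, col % COLS

def input_position(r, c, offset):
    """Resolve where the neuron at (r, c) with the given offset reads from."""
    dr, dc = offset
    return wrap(r + dr, c + dc)

# Three illustrative 'layers'; a pair of zeros means the neuron reads
# from its own position in the pattern.
genome = [
    [(0, 0), (0, 0), (0, 0)],     # layer 1: three neurons, no offset
    [(0, 1), (0, 2), (0, 3)],     # layer 2: neurons reading to the right
    [(2, 8), (-1, 0), (5, -3)],   # layer 3: out-of-range offsets wrap around
]

print(input_position(3, 6, (2, 8)))  # -> (5, 0): wraps on both axes
```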
5.2 Initial Population
The initial population is created using a random generator for both the number of layers and the number of neurons in each individual layer.
Subsequently, tests are carried out on the data and the error rate is returned for each individual. Each error rate is then multiplied by a complexity factor: every additional layer adds 0.05 to the multiplier, so that smaller networks with similar results edge out larger ones, as in the sketch below.
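A minimal sketch of this penalty follows, assuming the multiplier starts at 1.0 for a single-layer network; that base value and the function name are assumptions made for illustration.

```python
# Sketch of the complexity-weighted error: each layer beyond the first adds
# 0.05 to the multiplier applied to the raw error rate (lower is better).
# The 1.0 base for a one-layer network is an assumption for illustration.
def penalised_error(error_rate, num_layers):
    complexity_factor = 1.0 + 0.05 * (num_layers - 1)
    return error_rate * complexity_factor

# Two architectures with the same raw error: the smaller one scores better.
print(penalised_error(0.10, num_layers=3))  # ~0.11
print(penalised_error(0.10, num_layers=5))  # ~0.12
```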
For example, if there are two architectures, A and B, each with an error