using neural networks. The prediction of fuzz fibres
on fabric surfaces was studied using regression
analysis and ANN, and neural networks were found
to give better results than regression analysis
(Ertugrul and Ucar, 2007). Park, Hwang and Kang
(2001) concentrated on the objective evaluation of
total hand value in knitted fabrics using the theory of
neural networks.
In this work, the aim is to predict the bursting
strength of plain knitted fabrics, prior to
manufacturing, from yarn properties and fabric
properties using artificial neural networks.
2 MATERIAL AND METHOD
In this study, in order to predict the bursting strength
of plain knitted fabrics, fabrics were produced in
four different yarn counts (Ne 20, Ne 25, Ne 30 and
Ne 35) with three different twist coefficients
(α_e 3.8, α_e 4.2 and α_e 4.6). In order to
obtain yarns having different tenacity, elongation
and unevenness values, the yarns were produced
from six different cotton types. In total, seventy-two
different plain knitted fabrics were produced. Yarn
tenacity and breaking elongation were tested on an
Uster Tensorapid tensile tester, and yarn unevenness
measurements were performed on an Uster Tester 3.
For fabric testing, the numbers of wales and courses
per cm were counted, and the bursting strength of
each plain knitted fabric was measured with a
James H. Heal TruBurst tester.
2.1 Artificial Neural Network Design
For the prediction of bursting strength of the plain
knitted fabrics, a multi-layer feed-forward network
with one hidden layer was used. The bursting
strength of the plain knitted fabrics was used as the
output, while yarn count (Ne), twist (turns/inch),
yarn tensile strength (cN/tex), yarn elongation (%),
yarn unevenness (CVm%) and the product of wales
per cm and courses per cm were used as inputs to
the model. As the activation function, the hyperbolic
tangent f(x) = (e^x − e^(−x)) / (e^x + e^(−x)) was
used in the hidden layer, and the linear function
f(x) = x was used in the input and output layers.
The training was performed
in one stage using the back-propagation algorithm:
Δω_ij(t) = η δ_j o_i + α Δω_ij(t − 1)          (1)

where η is the learning rate, δ_j is the local error
gradient, α is the momentum coefficient and o_i is
the output of the i-th unit.
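As an illustrative sketch (not the authors' code), a network of the type described above, with a hyperbolic-tangent hidden layer, a linear output and the momentum update of equation (1), can be written as follows. The data are synthetic, and the layer size and hyper-parameter values (9 hidden neurons, η = 0.01, α = 0.3) are assumptions drawn from the levels tried in this study:

```python
import numpy as np

# Sketch of a 6-input, one-hidden-layer feed-forward network trained
# by back-propagation with momentum, as in equation (1):
#   delta_w(t) = eta * delta_j * o_i + alpha * delta_w(t - 1)
# Synthetic data; hyper-parameters taken from the levels in this study.

rng = np.random.default_rng(0)
X = rng.normal(size=(72, 6))              # 72 fabrics, 6 input properties
y = (X @ rng.normal(size=(6, 1))) * 0.1   # synthetic "bursting strength"

n_hidden, eta, alpha = 9, 0.01, 0.3
W1 = rng.normal(scale=0.1, size=(6, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, 1)); b2 = np.zeros(1)
dW1 = np.zeros_like(W1); db1 = np.zeros_like(b1)   # previous weight steps
dW2 = np.zeros_like(W2); db2 = np.zeros_like(b2)

losses = []
for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)     # hidden layer, hyperbolic activation
    out = h @ W2 + b2            # output layer, linear activation
    err = out - y
    losses.append(float(np.mean(err ** 2)))

    # local error gradients (the delta terms), back-propagated
    g_out = 2.0 * err / len(X)
    gW2, gb2 = h.T @ g_out, g_out.sum(axis=0)
    g_h = (g_out @ W2.T) * (1.0 - h ** 2)   # tanh'(x) = 1 - tanh(x)**2
    gW1, gb1 = X.T @ g_h, g_h.sum(axis=0)

    # equation (1): new step = learning-rate term + momentum * old step
    dW1 = -eta * gW1 + alpha * dW1; W1 += dW1
    db1 = -eta * gb1 + alpha * db1; b1 += db1
    dW2 = -eta * gW2 + alpha * dW2; W2 += dW2
    db2 = -eta * gb2 + alpha * db2; b2 += db2
```

With these assumed settings the mean squared error decreases steadily over training, which is the behaviour the momentum term is meant to accelerate.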
As is generally known, the learning rate influences
the speed of the neural network. Setting the learning
rate too high will cause the network either to
oscillate or to diverge from the true solution, while
giving this parameter too low a value will make the
network slow and time consuming to converge on
the final outputs. The other parameter that affects the
performance of the back-propagation algorithm is
the momentum coefficient. High values of the
momentum coefficient ensure fast convergence of
the network; however, choosing too high a
momentum coefficient may sometimes cause the
minimum error to be missed. On the other hand,
setting this parameter to a low value risks trapping
the network in local minima and will slow down the
training stage.
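The learning-rate trade-off described above can be demonstrated on a toy quadratic; this is a hedged illustration, not part of the study:

```python
# Toy example (assumed, not from the study): gradient descent on
# f(w) = w**2, whose gradient is 2*w.  A too-high learning rate makes
# the iterate oscillate with growing magnitude (|1 - 2*lr| > 1), while
# a very low rate is stable but converges only slowly.
def descend(lr, steps=20, w=1.0):
    for _ in range(steps):
        w -= lr * 2.0 * w
    return w

slow = descend(0.01)   # stable: w shrinks by factor 0.98 per step
wild = descend(1.1)    # oscillates and diverges from the minimum at 0
```

After 20 steps the low-rate run is still far from the minimum at 0, while the high-rate run has overshot it with ever-larger swings.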
In constructing the network, the first step was to
determine the number of hidden layers and the
number of neurons in each layer. In our study, a
network with one hidden layer gave satisfactory
results with regard to error standard deviation,
absolute error mean and coefficient of regression.
The second step was to determine the number of
neurons in the hidden layer. For this purpose, three
levels of the number of neurons (3, 6 and 9), three
levels of the number of epochs (5000, 10000 and
20000), three levels of the learning rate (0.001, 0.01
and 0.1) and three levels of the momentum
coefficient (0.1, 0.3 and 0.5) were tried according to
an orthogonal experimental design.
As the neural network has four parameters with
three different levels each, a full factorial
experimental design (3^4 = 81 runs) would be
difficult and time consuming to perform. Thus, an
orthogonal experimental design was used, and as a
result 16 different neural networks were tried.
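The size of the full factorial design that the orthogonal design replaces can be checked by enumerating the grid; the exact 16-run orthogonal array used by the authors is not given in the text, so only the full grid is sketched here:

```python
from itertools import product

# Hyper-parameter levels as reported in the text above.
neurons  = [3, 6, 9]
epochs   = [5000, 10000, 20000]
lr       = [0.001, 0.01, 0.1]
momentum = [0.1, 0.3, 0.5]

# Full factorial design: every combination of the four 3-level factors,
# i.e. 3**4 = 81 runs.  The orthogonal design used in the study cuts
# this down to a small balanced subset (16 networks).
full_factorial = list(product(neurons, epochs, lr, momentum))
```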
3 RESULTS
According to the orthogonal experimental design,
the number of neurons was varied, and it was found
that increasing the number of neurons increased the
regression coefficient of both training and testing
(Figure 1). As a result, 9 neurons in the hidden layer
were chosen.
In the back-propagation, training was started at
5000 epochs and then increased up to 20000
epochs. However, increasing the number of epochs
ICAART 2010 - 2nd International Conference on Agents and Artificial Intelligence