tions, and shows generalities of the approach. Section 3 explains the standard GNG algorithm, as well as the modifications introduced to make it more suitable for incremental modelling. Section 4 presents tests of the algorithm's performance on a toy problem, and Section 5 applies the algorithm to a large, multidimensional real-world dataset. Finally, Section 6 draws conclusions about incremental modelling as an approach to processing huge databases, and about the use of incremental learning algorithms such as GNG for this purpose.
2 INCREMENTAL MODELLING
Having large or growing datasets, while positive and indeed desirable from the point of view of a data acquisition system, constitutes a major drawback if the data are to be used in inductive modelling. In the case of a huge database, two possibilities arise. On the one hand, using huge amounts of data to build models could be prohibitive because of computational or storage constraints. On the other hand, taking only partial information from the complete set and using it to build a model implies the use of special modelling paradigms able to append or insert new information into partial or growing adaptive models.
A similar landscape appears if one thinks of growing databases. It is a fact that, in order to build an accurate model, a large enough amount of data must be available. There are also two difficulties here. On the one hand, it is not trivial to know beforehand how much data is enough for building a model, and this is even more difficult if the data collecting process is still running. On the other hand, suppose that the amount of gathered information is large enough to build a model. If the modelling objective is a static process, those data would suffice to reach some level of accuracy. Conversely, if the model should fit a dynamic process, it would be better to have an adaptive modelling framework that follows the changes of the process, instead of fixed models which have to be rebuilt from scratch each time the process changes. Again, a model structure that changes incrementally with the amount and quality of data might be a good strategy.
A modelling framework which fulfils the aforesaid requirements is incremental modelling, also called incremental learning. In the area of machine learning, the term incremental learning has been used synonymously with pattern learning and online learning to describe the opposite of batch learning (Chalup, 2002). Within this context, it merely distinguishes two policies for modifying the parameters of a model during training: after the presentation of each training example in the online case, or after the accumulation of a certain number of modifications in the batch case. We consider a more specific definition of the concept of incremental modelling, which will be explained after defining the concept of an incremental task in the next subsection.
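The online-vs-batch distinction described above can be sketched as follows. This is an illustrative example only (not from the paper): a simple linear model trained by gradient descent on squared error, with the two update policies side by side.

```python
import numpy as np

def online_update(w, X, y, lr=0.01):
    """Online (pattern) learning: modify w after each presented example."""
    w = w.copy()
    for xi, yi in zip(X, y):
        err = yi - w @ xi
        w += lr * err * xi          # one parameter step per example
    return w

def batch_update(w, X, y, lr=0.01):
    """Batch learning: accumulate the modifications, apply them once."""
    err = y - X @ w
    return w + lr * (X.T @ err) / len(y)   # one step per full dataset pass

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

w0 = np.zeros(3)
w_online = online_update(w0, X, y)   # 100 small steps
w_batch = batch_update(w0, X, y)     # 1 averaged step
```

Both policies move the parameters toward the same solution; they differ only in when the accumulated modifications are applied.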
2.1 Incremental Task
In general, modelling tasks where the examples or ob-
servations become available over time (usually one
at a time) are considered as incremental learning
tasks (Giraud-Carrier, 2000).
Traditional static methods can be employed to build a model from an incremental task since, if it is possible to wait for the data, any incremental learning task can be transformed into a non-incremental one (Giraud-Carrier, 2000). This approach has the drawback of accumulating excessive volumes of data that could render the modelling task infeasible; in such cases it would be preferable to exploit the advantages of incremental learners. In the same way, a large non-incremental modelling task that is intractable for traditional modelling approaches can be transformed into an incremental task by sampling or splitting the data, and an incremental learner can then be used to build a model from the resulting incremental dataset.
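The second transformation, splitting a large static dataset into chunks for an incremental learner, can be sketched as follows. The "learner" here is a deliberately trivial stand-in (a running mean estimator, not any method from the paper): it refines its estimate chunk by chunk without ever holding the full dataset.

```python
import numpy as np

class RunningMean:
    """A trivial incremental model: refines an estimate of the data mean
    with each new chunk, without storing previously seen chunks."""
    def __init__(self, dim):
        self.mean = np.zeros(dim)
        self.n = 0
    def partial_fit(self, X):
        for x in X:
            self.n += 1
            self.mean += (x - self.mean) / self.n   # online mean update
        return self

rng = np.random.default_rng(1)
data = rng.normal(loc=3.0, size=(10_000, 2))   # a "large" static dataset

model = RunningMean(dim=2)
for chunk in np.array_split(data, 20):          # incremental presentation
    model.partial_fit(chunk)

# model.mean now matches data.mean(axis=0) although no chunk was retained
```

Any learner exposing such a chunk-wise update can consume the split dataset the same way, which is exactly how a non-incremental task becomes an incremental one.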
2.2 Incremental Learning
Besides being a synonym for pattern learning or online learning in machine learning terminology, incremental learning is a concept that has been associated with learning processes in which a standard learning mechanism is combined with, or influenced by, stepwise adjustments during the learning process. These adaptations can be changes in the structure of the learning system (e.g., growing and constructive neural networks), changes in its parameters (e.g., stochastic learning), or even changes in the constitution of its input signals (e.g., order, complexity) (Chalup, 2002). Their purpose is to enable the construction of more specialized models by adding new information to the already existing knowledge as it becomes available.
Within this context, incremental learning shares the same meaning as sequential learning. In sequential learning, the learning system is sequentially trained on different datasets, which most of the
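The structural kind of adaptation mentioned above can be illustrated with a deliberately simplified sketch (hypothetical, not the paper's algorithm): a set of prototype units adapts its parameters toward incoming samples, and every so often a new unit is inserted near the unit that accumulated the largest quantization error, in the spirit of growing networks such as GNG.

```python
import numpy as np

rng = np.random.default_rng(2)
units = rng.normal(size=(2, 2))        # start with two units in 2-D
error = np.zeros(len(units))           # accumulated error per unit

def present(sample, units, error, lr=0.1):
    """Parameter adaptation: move the winning unit toward the sample."""
    d = np.linalg.norm(units - sample, axis=1)
    win = int(np.argmin(d))
    error[win] += d[win] ** 2          # bookkeeping for later growth
    units[win] += lr * (sample - units[win])
    return win

data = rng.normal(loc=5.0, size=(200, 2))
for i, x in enumerate(data):
    present(x, units, error)
    if (i + 1) % 50 == 0:              # structural adaptation: grow
        worst = int(np.argmax(error))
        new_unit = units[worst] + rng.normal(scale=0.1, size=2)
        units = np.vstack([units, new_unit])
        error = np.append(error, 0.0)
        error[worst] = 0.0

# the structure grew from 2 to 6 units while the parameters tracked the data
```

The point of the sketch is the interleaving: ordinary parameter updates happen at every presentation, while structural changes happen occasionally, driven by accumulated evidence.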
NCTA 2011 - International Conference on Neural Computation Theory and Applications