We remarked in (Sim et al., 2015) that the natural immune system provides an obvious metaphor for building a system that meets the requirements of an LML as noted in (Silver et al., 2013). It exhibits
memory that enables it to respond rapidly when faced with pathogens to which it has previously been exposed; it can selectively adapt prior knowledge via clonal selection mechanisms that rapidly refine existing antibodies (pathogen-fighting cells) to cope better with new variants of previous pathogens; and finally, it embodies a systemic approach by maintaining a repertoire of antibodies that collectively cover the space of potential pathogenic material.
In the human immune system, immune cells are generated from gene libraries: the DNA encoding the cells is constructed by random sampling from so-called V, D and J gene libraries, which gives rise to a huge diversity of cells due to the combinatorics of the process. A key advantage of this process is that a very large number of cells can be constructed from a
fixed repertoire of DNA. Shifting the focus to opti-
misation, we propose that genetic programming can
provide an analogous function: from a fixed set of
terminals and functions, a very large space of algo-
rithms can be generated, thus providing the diversity
required to achieve lifelong learning.
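The combinatorial claim above can be illustrated with a minimal sketch. The function and terminal names below are purely illustrative assumptions (loosely in the spirit of bin-packing heuristics), not NELLI's actual primitive set: a handful of fixed primitives, sampled at random into expression trees, already yields thousands of distinct candidate algorithms.

```python
import random

# Illustrative, hypothetical primitive sets, playing the role of the
# fixed V, D and J gene libraries.
FUNCTIONS = {"add": 2, "sub": 2, "mul": 2, "max": 2}   # name -> arity
TERMINALS = ["item_size", "bin_capacity", "free_space", "1"]

def random_tree(depth, rng):
    """Grow a random expression tree from the fixed primitive sets."""
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(TERMINALS)
    name = rng.choice(list(FUNCTIONS))
    args = [random_tree(depth - 1, rng) for _ in range(FUNCTIONS[name])]
    return (name, *args)

rng = random.Random(42)
sample = {str(random_tree(3, rng)) for _ in range(10_000)}
# Even at depth <= 3, from only 4 functions and 4 terminals, sampling
# produces a large number of distinct trees.
print(len(sample))
```

The same combinatorics that makes V(D)J recombination so productive applies here: the space of trees grows super-exponentially with depth, while the underlying "library" stays fixed.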
3 NELLI: AN L2O
In previous work (Hart and Sim, 2014; Sim et al., 2015), we combined the immune metaphor with genetic programming in a system dubbed NELLI: Network for Lifelong Learning. The system has been
applied in bin-packing and job-shop scheduling do-
mains. NELLI autonomously generates an ensemble
of optimisation algorithms that are capable of solv-
ing a broad range of problem instances from a given
domain. The size of the ensemble varies over time
depending on the stream of instances that the system
is exposed to: each algorithm generalises over some
region of the instance space defined by the
problems of interest. It has been demonstrated to im-
prove its performance as it is exposed to more and
more instances from a given family of problems, and
generate new algorithms when faced with instances
that exhibit very different characteristics from those
previously seen. Finally, it has also been shown to retain memory: if re-exposed to instances seen in the past, it quickly returns algorithms that exhibit good performance.
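The ensemble dynamics described above can be caricatured in a few lines. This is a highly simplified sketch under illustrative assumptions, not NELLI's actual mechanism: an algorithm survives only if it is the best performer on at least one instance in the current stream, so the ensemble size tracks the diversity of instances seen.

```python
def maintain_ensemble(ensemble, candidates, instances, evaluate):
    """Keep an algorithm only if it is the best performer (highest score)
    on at least one instance -- its 'region' of the instance space."""
    pool = ensemble + candidates
    survivors = set()
    for inst in instances:
        scores = [evaluate(alg, inst) for alg in pool]
        survivors.add(max(range(len(pool)), key=scores.__getitem__))
    return [pool[i] for i in sorted(survivors)]

# Toy usage: "algorithms" are constants, "instances" are targets, and an
# algorithm scores higher the closer it lies to the instance.
algs = [2.0, 5.0, 9.0]
new_algs = [5.1]
instances = [2.1, 5.2, 9.2]
evaluate = lambda alg, inst: -abs(alg - inst)
kept = maintain_ensemble(algs, new_algs, instances, evaluate)
# The newcomer 5.1 displaces 5.0, which is no longer best anywhere.
```

Even this toy version exhibits the behaviour described in the text: newly generated algorithms compete against incumbents, and the surviving set adapts to the stream of instances.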
4 CONCLUSIONS
NELLI represents the first steps towards creating L2O
systems — optimisers that continue to adapt over
time. However, much work remains to be done in improving the system. The human immune system adapts
over two time scales. Over an individual lifetime, new
cells are generated from gene libraries as described
above, while the gene libraries themselves adapt on an
evolutionary timescale across generations, therefore
changing their content. There is no reason why the
same process cannot be applied to genetic programming, with the functions/terminals that make up the
algorithm — or even the operations of the GP process
itself — evolving over time.
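The two-timescale idea can be sketched as nested loops. Everything here is an illustrative assumption (the candidate primitives, the fitness stand-in, the hill-climbing outer loop): an inner loop would generate algorithms from a fixed library (the "lifetime"), while an outer loop slowly mutates the library itself (the "evolutionary" timescale).

```python
import random

# Hypothetical pool of primitives the library could draw from.
CANDIDATE_PRIMITIVES = ["add", "sub", "mul", "max", "min", "mod", "neg"]

def library_fitness(library, rng):
    """Stand-in for running the inner GP loop with this library and
    measuring the quality of the algorithms it produces. Here we simply
    reward diverse libraries, as a placeholder."""
    return len(set(library)) + rng.random()

def evolve_library(generations=20, size=4, seed=0):
    """Outer, 'evolutionary-timescale' loop: hill-climb on the library."""
    rng = random.Random(seed)
    library = rng.sample(CANDIDATE_PRIMITIVES, size)
    for _ in range(generations):
        mutant = list(library)
        mutant[rng.randrange(size)] = rng.choice(CANDIDATE_PRIMITIVES)
        if library_fitness(mutant, rng) >= library_fitness(library, rng):
            library = mutant  # slow, generational change to the library
    return library
```

The point of the sketch is the separation of timescales: the inner loop (here hidden inside `library_fitness`) runs many times per change to the library, mirroring how gene libraries change only across generations.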
Another direction for future work concerns the
manner in which the system reacts to change in in-
stance characteristics. The current approach relies on
trial and error, with newly generated algorithms com-
peting against each other to remain in the system. The
integration of machine-learning approaches to predict
likely changes in instances offers the potential to pre-
generate algorithms in anticipation of future demand,
thereby increasing the efficiency of the system. Some efforts towards this have been described by Ortiz-Bayliss et al. (2015) in relation to solving constraint satisfaction problems.
In conclusion, we argue for a shift in direction
for the optimisation community: rather than focus-
ing effort on developing more and more complex al-
gorithms trained on large but static sets of data, a
move towards developing systems that autonomously
and continually generate specialised algorithms on-
demand may bear considerable fruit.
REFERENCES
Hart, E. and Sim, K. (2014). On the life-long learning capabilities of NELLI*: A hyper-heuristic optimisation system. In International Conference on Parallel Problem Solving from Nature, pages 282–291. Springer.
Ortiz-Bayliss, J., Terashima-Marín, H., and Conant-Pablos, S. (2015). Lifelong learning selection hyper-heuristics for constraint satisfaction problems. In Advances in Artificial Intelligence and Soft Computing.
Silver, D., Yang, Q., and Li, L. (2013). Lifelong machine
learning systems: Beyond learning algorithms. In
AAAI Spring Symposium Series.
Sim, K., Hart, E., and Paechter, B. (2015). A lifelong learning hyper-heuristic method for bin packing. Evolutionary Computation, 23(1):37–67.
Thrun, S. and Pratt, L. (1997). Learning to Learn. Kluwer
Academic Publishers, Boston, MA.