be evoked. Instead of using a classic genetic algorithm to fine-tune the network
weights, new, very fast and powerful black-box optimisation algorithms [11, 12] could
further increase network performance and make it possible to find even smaller networks
with better generalisation. ESNs can be used for direct control tasks (see [13]) and scale
well with a high number of training patterns and motor outputs [14]. A more complex
simulation, for example of a humanoid robot, will show whether direct, attractor-based
storage of parameterized motor patterns is flexible enough for complex behaviour generation.
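To make the proposed fine-tuning step concrete, the following minimal sketch refines an ESN output weight vector against a free-running target pattern using only function evaluations. It is not the method of [11]: SciPy's Powell optimiser stands in for the iterated local search with Powell's strategy cited there, and the reservoir size, feedback setup and toy sine target are illustrative assumptions.

    # Minimal sketch: derivative-free fine-tuning of ESN output weights.
    # SciPy's Powell method is used as a stand-in for the black-box
    # optimisers of [11, 12]; all sizes and the target are illustrative.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    N, T = 20, 200                                   # reservoir size, pattern length
    target = np.sin(np.linspace(0, 8 * np.pi, T))    # toy periodic motor pattern

    W = rng.uniform(-0.5, 0.5, (N, N))               # fixed random reservoir
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1
    w_fb = rng.uniform(-1.0, 1.0, N)                 # output feedback weights

    def run(w_out):
        # free-run the ESN with output feedback; return the generated signal
        x, y, out = np.zeros(N), 0.0, np.empty(T)
        for t in range(T):
            x = np.tanh(W @ x + w_fb * y)
            y = float(w_out @ x)
            out[t] = y
        return out

    def mse(w_out):
        return float(np.mean((run(w_out) - target) ** 2))

    w0 = rng.normal(0.0, 0.1, N)   # in practice, a least-squares readout would seed this
    res = minimize(mse, w0, method="Powell")   # derivative-free local refinement
    print("fine-tuned MSE:", res.fun)

Because the free-running error is obtained purely by simulation, the objective needs no gradients; the same wrapper could equally expose the reservoir or feedback weights, or be handed to a multi-objective method such as NSGA-II [9].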
References
1. Tani, J., Ito, M., Sugita, Y.: Self-organization of distributedly represented multiple behavior
schemata in a mirror system: reviews of robot experiments using RNNPB. Neural Networks
17 (2004) 1273–1289
2. Haruno, M., Wolpert, D. M., Kawato, M.: MOSAIC model for sensorimotor learning and
control. Neural Computation 13(10) (2001) 2201–2220
3. Yamashita, Y., Tani, J.: Emergence of functional hierarchy in a multiple timescale neural
network model: A humanoid robot experiment. PLoS Computational Biology 4(11) (2008)
4. Werbos, P.: Backpropagation through time: what it does and how to do it. Proceedings of
the IEEE 78(10) (1990) 1550–1560
5. Jaeger, H., Haas, H.: Harnessing nonlinearity: Predicting chaotic systems and saving energy
in wireless communication. Science 304 (2004) 78–80
6. Jaeger, H.: Tutorial on training recurrent neural networks, covering BPTT, RTRL, EKF and
the "echo state network" approach. Technical Report GMD Report 159, German National
Research Center for Information Technology (2002)
7. Hochreiter, S., Bengio, Y., Frasconi, P., Schmidhuber, J.: Gradient flow in recurrent nets: the
difficulty of learning long-term dependencies. In Kremer, S. C., Kolen, J. F., eds.: A Field
Guide to Dynamical Recurrent Neural Networks. IEEE Press (2001)
8. Jaeger, H., Lukosevicius, M., Popovici, D., Siewert, U.: Optimization and applications of
echo state networks with leaky integrator neurons. Neural Networks 20(3) (2007) 335–352
9. Deb, K., Pratap, A., Agarwal, S., Meyarivan, T.: A fast and elitist multiobjective genetic
algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 6(2) (2002) 182–197
10. Pearlmutter, B. A.: Learning state space trajectories in recurrent neural networks. Neural
Computation 1 (1989) 263–269
11. Kramer, O.: Fast blackbox optimization: Iterated local search and the strategy of Powell. In:
The 2009 International Conference on Genetic and Evolutionary Methods (GEM'09) (2009)
in press
12. Vrugt, J. A., Robinson, B. A., Hyman, J. M.: Self-adaptive multimethod search for global
optimization in real-parameter spaces. IEEE Transactions on Evolutionary Computation
13(2) (2008) 243–259
13. Krause, A. F., Bläsing, B., Dürr, V., Schack, T.: Direct control of an active tactile sensor
using echo state networks. In: Human Centered Robot Systems. Cognition, Interaction,
Technology. Volume 6 of Cognitive Systems Monographs. Springer, Berlin Heidelberg
(2009) 11–21
14. Jaeger, H.: Generating exponentially many periodic attractors with linearly growing echo
state networks. Technical Report 3, International University Bremen (IUB) (2006)