when the size of the training data is small; otherwise the left pseudo-inverse is more appropriate. Hence, it can be observed from Table 2 and Table 3 that OS-eELM-right runs relatively faster than OS-eELM-left, since the size of the initial training data is chosen to be much smaller than the number of hidden nodes.
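For reference, the two batch solutions being compared can be sketched as follows, using the paper's notation, where $\mathbf{H}$ is the $N \times L$ hidden-layer output matrix, $\mathbf{T}$ the target matrix, and $C$ the regularization parameter; the exact placement of the $\mathbf{I}/C$ term is an assumption following the enhanced ELM:
\[
\boldsymbol{\beta}_{\mathrm{left}} = \left(\frac{\mathbf{I}}{C} + \mathbf{H}^{T}\mathbf{H}\right)^{-1}\mathbf{H}^{T}\mathbf{T},
\qquad
\boldsymbol{\beta}_{\mathrm{right}} = \mathbf{H}^{T}\left(\frac{\mathbf{I}}{C} + \mathbf{H}\mathbf{H}^{T}\right)^{-1}\mathbf{T}.
\]
The left form inverts an $L \times L$ matrix while the right form inverts an $N \times N$ matrix, so the right pseudo-inverse is cheaper whenever the number of (initial) training samples $N$ is much smaller than the number of hidden nodes $L$.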
6 CONCLUSIONS
In this paper, an online sequential learning algorithm for SLFNs and other regularization networks based on the enhanced ELM is proposed, which is capable of learning data on a one-by-one or chunk-by-chunk basis. Simulations on six benchmark datasets have shown that, by adding a positive value to the diagonal of $\mathbf{H}\mathbf{H}^{T}$ and $\mathbf{H}^{T}\mathbf{H}$, the proposed methods achieve better generalization performance than the original OS-ELM. In addition, in the simulations OS-eELM-right is more suitable for sequential learning than OS-eELM-left in terms of training speed, since there are fewer than 1,000 observations during the initial training phase. Other types of hidden nodes, such as RBF nodes, can be investigated in future work.
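As a concrete illustration of the diagonal regularization and the left/right pseudo-inverse choice discussed above, the following is a minimal NumPy sketch, not the authors' implementation; the function name, the default value of C, and the size-based branching rule are illustrative assumptions.

```python
import numpy as np

def eelm_output_weights(H, T, C=2**5):
    """Regularized batch solution for the output weights (sketch).

    H : (N, L) hidden-layer output matrix, T : (N, m) target matrix,
    C : regularization parameter; 1/C is added to the diagonal, as in
        the enhanced ELM.
    """
    N, L = H.shape
    if N >= L:
        # Left pseudo-inverse: inverts an L x L matrix, preferable for large N.
        return np.linalg.solve(np.eye(L) / C + H.T @ H, H.T @ T)
    # Right pseudo-inverse: inverts only an N x N matrix, preferable when the
    # (initial) chunk is much smaller than the number of hidden nodes.
    return H.T @ np.linalg.solve(np.eye(N) / C + H @ H.T, T)

# Example: a 100-sample initial chunk with 1,000 hidden nodes only requires
# a 100 x 100 inverse when the right pseudo-inverse is used.
H0 = np.random.rand(100, 1000)
T0 = np.random.rand(100, 1)
beta0 = eelm_output_weights(H0, T0)   # shape (1000, 1)
```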
REFERENCES
Asirvadam, V. S., McLoone, S. F., and Irwin, G. W. (2002). Parallel and separable recursive Levenberg-Marquardt training algorithm. In 12th IEEE Workshop on Neural Networks for Signal Processing, pages 129–138. IEEE.
Blake, C. L. and Merz, C. J. (1998). UCI repository of machine learning databases.
Boyd, S., Ghaoui, L. E., Feron, E., and Balakrishnan, V. (1994). Linear Matrix Inequalities in System and Control Theory. Society for Industrial and Applied Mathematics.
Deng, W., Zheng, Q., and Chen, L. (2009). Regularized extreme learning machine. In IEEE Symposium on Computational Intelligence and Data Mining (CIDM 2009), pages 389–395.
Feng, G., Huang, G.-B., Lin, Q., and Gay, R. (2009). Error
minimized extreme learning machine with growth of
hidden nodes and incremental learning. IEEE Trans-
actions on Neural Networks, 20(8):1352–1357.
Hoerl, A. E. and Kennard, R. W. (1970). Ridge regression:
Biased estimation for nonorthogonal problems. Tech-
nometrics, 12(1):55–67.
Huang, G.-B. and Chen, L. (2007). Convex incremental ex-
treme learning machine. Neurocomputing, 70:3056–
3062.
Huang, G.-B. and Chen, L. (2008). Enhanced random
search based incremental extreme learning machine.
Neurocomputing, 71:3460–3468.
Huang, G.-B., Chen, L., and Siew, C.-K. (2006a). Universal
approximation using incremental constructive feed-
forward networks with random hidden nodes. IEEE
Transactions on Neural Networks, 17(4):879–892.
Huang, G.-B., Ding, X., and Zhou, H. (2010). Optimization
method based extreme learning machine for classifi-
cation. Neurocomputing, 74:155–163.
Huang, G.-B., Saratchandran, P., and Sundararajan, N. (2004). An efficient sequential learning algorithm for growing and pruning RBF (GAP-RBF) networks. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 34.
Huang, G.-B., Saratchandran, P., and Sundararajan, N. (2005). A generalized growing and pruning RBF (GGAP-RBF) neural network for function approximation. IEEE Transactions on Neural Networks, 16(1):57–67.
Huang, G.-B., Zhou, H., Ding, X., and Zhang, R. (2011). Extreme learning machine for regression and multiclass classification. IEEE Transactions on Systems, Man, and Cybernetics (in press).
Huang, G.-B., Zhu, Q.-Y., and Siew, C.-K. (2006b). Extreme learning machine: Theory and applications. Neurocomputing, 70(1–3):489–501.
Liang, N.-Y., Huang, G.-B., Saratchandran, P., and Sun-
dararajan, N. (2006). A fast and accurate online se-
quential learning algorithm for feedforward networks.
IEEE Transactions on Neural Networks, 17(6):1411–
1423.
Ngia, L. S., Sjöberg, J., and Viberg, M. (1998). Adaptive neural nets filter using a recursive Levenberg-Marquardt search direction. In the 32nd Asilomar Conference on Signals, Systems and Computers, CA, USA.
Rao, C. R. and Mitra, S. K. (1971). Generalized Inverse of Matrices and its Applications. John Wiley & Sons, Inc., New York.
Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323:533–536.
Serre, D. (2002). Matrices: Theory and Applications.
Springer-Verlag New York, Inc.
Toh, K.-A. (2008). Deterministic neural classification. Neu-
ral Computation, 20(6):1565–1595.