of both structure and performance to the SHPNN is SAO-ELM. In the presented algorithm, the initial number of hidden layer nodes is 35 for both the image segmentation and satellite image datasets, and this number grew automatically to an average of 178.3 and 534.6, respectively. In contrast, the initial numbers of hidden nodes for SAO-ELM are 180 and 400 for these datasets, which are the optimal node numbers for OS-ELM, and they grew only to 191 and 413. In other words, SAO-ELM appears to require an initial number of nodes very close to the optimal value, whereas the SHPNN is capable of finding its optimal structure from a very rudimentary initial configuration. This property makes the SHPNN well suited to simulating an anthropomorphic learning process.
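The growth from a rudimentary configuration can be illustrated with a minimal sketch (not the paper's exact update rules): a kernel classifier that adds a Gaussian hidden node whenever no existing kernel of the correct class responds strongly to a new sample. The kernel width `sigma` and novelty threshold `eps` are assumed parameters for illustration.

```python
import numpy as np

class GrowingGaussianClassifier:
    """Illustrative structure-growing kernel classifier: starts empty
    and adds a Gaussian node when a training sample is 'novel'."""

    def __init__(self, sigma=1.0, eps=0.1):
        self.sigma = sigma      # shared kernel width (assumed, for simplicity)
        self.eps = eps          # novelty threshold on kernel activation
        self.centers = []       # kernel centers
        self.labels = []        # class label attached to each kernel

    def _activations(self, x):
        # Gaussian response of every kernel to input x
        d = np.array([np.sum((x - c) ** 2) for c in self.centers])
        return np.exp(-d / (2 * self.sigma ** 2))

    def partial_fit(self, x, y):
        x = np.asarray(x, dtype=float)
        if not self.centers:
            self.centers.append(x)
            self.labels.append(y)
            return
        a = self._activations(x)
        same = [i for i, l in enumerate(self.labels) if l == y]
        # Grow only when no same-class kernel is sufficiently activated
        if not same or a[same].max() < self.eps:
            self.centers.append(x)
            self.labels.append(y)

    def predict(self, x):
        a = self._activations(np.asarray(x, dtype=float))
        return self.labels[int(np.argmax(a))]
```

Starting from zero nodes, the structure expands only where the data demand it, mirroring the growth behaviour described above.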
One important property of the proposed algorithm (which also applies to the RHPNN) is that its hidden node kernels can be used to identify subcategories in the data and to estimate the statistics of the classes. This property of the SHPNN is in line with the recent trend in the machine learning community toward designing interpretable algorithms.
5 CONCLUSIONS
In this paper, a novel algorithm for online supervised classification is presented and its performance is compared with that of similar algorithms. The results show that the algorithm has great potential in terms of both speed and accuracy. It is also worth noting that the algorithm is still in the development phase; as future work, further update rules will be developed to adapt the variances and centers, and the linking weights may also be adjusted.
REFERENCES
Asgary, R., Mohammadi, K., and Zwolinski, M. (2007). Using neural networks as a fault detection mechanism in MEMS devices. Microelectronics Reliability, 47(1):142–149.
de Jesús Rubio, J. (2017). A method with neural networks for the classification of fruits and vegetables. Soft Computing, 21(23):7207–7220.
Dua, D. and Graff, C. (2017). UCI machine learning repos-
itory.
Giap, C. N., Son, L. H., and Chiclana, F. (2018). Dynamic
structural neural network. Journal of Intelligent &
Fuzzy Systems, 34(4):2479–2490.
Hamid, O. H. and Braun, J. (2017). Reinforcement learn-
ing and attractor neural network models of associative
learning. In International Joint Conference on Com-
putational Intelligence, pages 327–349. Springer.
Huang, G.-B., Saratchandran, P., and Sundararajan, N. (2004). An efficient sequential learning algorithm for growing and pruning RBF (GAP-RBF) networks. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 34(6):2284–2292.
LeCun, Y. A., Bottou, L., Orr, G. B., and Müller, K.-R. (2012). Efficient backprop. In Neural Networks: Tricks of the Trade, pages 9–48. Springer.
Li, G., Liu, M., and Dong, M. (2010). A new online learn-
ing algorithm for structure-adjustable extreme learn-
ing machine. Computers & Mathematics with Appli-
cations, 60(3):377–389.
Liang, N.-Y., Huang, G.-B., Saratchandran, P., and Sundararajan, N. (2006). A fast and accurate online sequential learning algorithm for feedforward networks. IEEE Transactions on Neural Networks, 17(6):1411–1423.
Platt, J. (1991). A resource-allocating network for function
interpolation. MIT Press.
Somu, N., MR, G. R., Kalpana, V., Kirthivasan, K., and
VS, S. S. (2018). An improved robust heteroscedastic
probabilistic neural network based trust prediction ap-
proach for cloud service selection. Neural Networks,
108:339–354.
Venkatesan, R. and Er, M. J. (2016). A novel progressive
learning technique for multi-class classification. Neu-
rocomputing, 207:310–321.
Wang, J., Belatreche, A., Maguire, L., and Mcginnity, T. M.
(2014). An online supervised learning method for
spiking neural networks with adaptive structure. Neu-
rocomputing, 144:526–536.
Yang, Z. R. and Chen, S. (1998). Robust maximum like-
lihood training of heteroscedastic probabilistic neural
networks. Neural Networks, 11(4):739–747.
Yang, Z. R., Zwolinski, M., Chalk, C. D., and Williams, A. C. (2000). Applying a robust heteroscedastic probabilistic neural network to analog fault detection and classification. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 19(1):142–151.
Yingwei, L., Sundararajan, N., and Saratchandran, P. (1997). A sequential learning scheme for function approximation using minimal radial basis function neural networks. Neural Computation, 9(2):461–478.
A Sequential Heteroscedastic Probabilistic Neural Network for Online Classification