Moreover, Eq.(11) can be embedded into the
following differential equation:

dz/dt = -K_s Θ(z) + c                  (18)
and hence, for even m (i.e., an even number of
training points), it can be implemented by a lossless
neural network, as shown schematically in Fig. 4.
[Figure 4 block diagram: a lossless neural network with weight matrix W = -K_s, driven by input c, with state z.]
Figure 4: Lossless neural network-based structure for
solution of Eq.(17).
The output of this neural network (at steady state) is:

ζ = Θ(z(∞)) = K_s^{-1} c                  (19)
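As a quick numerical sanity check, the relaxation from Eq.(18) to the steady state of Eq.(19) can be simulated directly. The concrete choices below (Θ = tanh, a random symmetric positive-definite K_s, forward-Euler integration) are illustrative assumptions, not specifications from the paper:

```python
import numpy as np

# Illustrative assumptions: Theta = tanh, K_s symmetric positive definite.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
K_s = A @ A.T + 4.0 * np.eye(4)   # symmetric, eigenvalues >= 4
c = 0.3 * rng.standard_normal(4)  # small enough that Theta(z) = K_s^{-1} c is reachable

z = np.zeros(4)
dt = 0.01
for _ in range(20000):
    # Forward-Euler integration of Eq.(18): dz/dt = -K_s Theta(z) + c
    z = z + dt * (-K_s @ np.tanh(z) + c)

# At equilibrium Theta(z) = K_s^{-1} c, which is Eq.(19)
print(np.allclose(np.tanh(z), np.linalg.solve(K_s, c), atol=1e-8))
```

Because K_s is positive definite, the trajectory converges to the unique equilibrium, which is exactly the solution of the static equation.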
The stability of solution (19) can be achieved by
the damping action of a positive parameter (it can be
regarded as a regularization mechanism (Evgeniou
et al., 2000)). It is easy to see that the lossless neural
network shown in Fig. 4 can be realized using
octonionic modules, similarly to the Hamiltonian neural
network given by Eq.(3). Thus one gets the
following statement: the octonionic module is a
fundamental building block for the realization of AI-compatible
processors. The 8-dimensional structure
from Fig. 3 can be directly scaled up to dimension
N = 2^k, k = 5, 6, 7, … using octonionic modules.
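One way to illustrate such a scale-up is a doubling recursion that turns a skew-symmetric orthogonal weight matrix of size N into one of size 2N, reaching any N = 2^k. This particular construction is only an illustrative sketch, not necessarily the one used in the paper:

```python
import numpy as np

def double(W):
    """Double a skew-symmetric orthogonal matrix (illustrative scheme).

    If W is skew-symmetric with W @ W.T = I, the block matrix below
    (scaled by 1/sqrt(2)) is again skew-symmetric and orthogonal.
    """
    n = W.shape[0]
    I = np.eye(n)
    return np.block([[W, -I], [I, -W]]) / np.sqrt(2)

W = np.array([[0.0, -1.0],
              [1.0,  0.0]])       # N = 2 seed (90-degree rotation)
for _ in range(2):                # N = 2 -> 4 -> 8
    W = double(W)

print(W.shape)                            # (8, 8)
print(np.allclose(W + W.T, 0.0))          # skew-symmetric
print(np.allclose(W @ W.T, np.eye(8)))    # orthogonal
```

The same two properties (skew-symmetry and orthogonality) are preserved at every doubling step, so the recursion can continue to k = 5, 6, 7, …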
4 ON IMPLEMENTATION OF
OCTONIONIC MODULES
It can be seen that the HNN described by Eq.(2) is a
compatible connection of n elementary building
blocks: lossless neurons. A lossless neuron is
described by the differential equation:
d/dt [x_1, x_2]^T = [[0, w_12], [w_21, 0]] [Θ(x_1), Θ(x_2)]^T,   w_12 = -w_21                  (20)
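The losslessness of Eq.(20) can be checked numerically: for the skew-symmetric weight matrix and Θ = tanh (an assumed nonlinearity), the quantity E = Σ_i log cosh(x_i) is a conserved "energy", since dE/dt = Θ(x)^T W Θ(x) = 0. A minimal sketch:

```python
import numpy as np

# Eq.(20) with the assumed nonlinearity Theta = tanh and w21 = 1.0.
w21 = 1.0
W = np.array([[0.0, -w21],
              [w21,  0.0]])       # skew-symmetric => lossless

def f(x):
    return W @ np.tanh(x)

def rk4_step(x, dt):
    # Classical 4th-order Runge-Kutta step
    k1 = f(x)
    k2 = f(x + dt / 2 * k1)
    k3 = f(x + dt / 2 * k2)
    k4 = f(x + dt * k3)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def energy(x):
    # E = sum_i log cosh(x_i); dE/dt = Theta(x)^T W Theta(x) = 0 for skew W
    return np.sum(np.log(np.cosh(x)))

x = np.array([1.0, 0.0])
E0 = energy(x)
for _ in range(10000):
    x = rk4_step(x, 0.01)
print(abs(energy(x) - E0))   # ~0: the neuron oscillates without dissipation
```

The state circulates on a closed orbit; no energy is gained or lost, which is exactly the lossless property exploited by the HNN structure.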
Hence, the octonionic module, with weight matrix
W_8, consists of four lossless neurons, according to
Eq.(6). A practical circuit realization of near-lossless
neurons can be based on nonlinear voltage-controlled
current sources (VCCS), which are
compatible with VLSI technology. A concrete
circuit, however, is beyond the scope of this
presentation.
5 CONCLUSIONS
The main goal of this paper was to prove the
following statement:
An AI-compatible processor should be formulated
as a top-down structure via the following
hierarchy: Hamiltonian neural network (composed
of lossless neurons) – octonionic module (a basic
building block) – nonlinear voltage-controlled
current source (a device compatible with VLSI
technology).
Hence, it has been confirmed in this paper that
by using octonionic-module-based structures, one
obtains regularized and stable networks for learning.
Thus tasks typical for AI, such as the realization of
classifiers, pattern recognizers and memories, could
be physically implemented for any dimension N = 2^k
of the input vectors and any even number m of
training patterns.
It is clear that an octonionic module cannot be
ideally realized as an orthogonal filter (a decoherence-like
phenomenon).
Hence, the problem under consideration now is
as follows: how exactly can an octonionic module be
realized using cheap VLSI technology while
preserving its main properties: orthogonality, power
efficiency and scalability.
It is noteworthy that this static structure can be
directly transformed into a phase-locked loop
(PLL)-based oscillatory structure of octonionic
modules.
REFERENCES
Gea-Banacloche, J. (2010). Quantum Computers: A Status Update. Proc. IEEE, vol. 98, no. 12.
't Hooft, G. (2000). Determinism and Dissipation in Quantum Gravity. arXiv:hep-th/0003005v2.
Basu, A., Hasler, P. (2010). Nullcline-based Design of a Silicon Neuron. IEEE Trans. on Circuits and Systems, vol. 57, no. 11.
Versace, M., Chandler, B. (2010). The Brain of a New Machine. IEEE Spectrum, vol. 47, no. 12.
Sienko, W., Citko, W. (2009). Hamiltonian Neural Networks Based Networks for Learning. In Mellouk, A. and Chebira, A. (Eds.), Machine Learning, pp. 75-92. Vienna, Austria: I-Tech. ISBN 978-953-7619-56-1.
Evgeniou, T., Pontil, M., Poggio, T. (2000). Regularization Networks and Support Vector Machines. In Smola, A., Bartlett, P., Schölkopf, B., Schuurmans, D. (Eds.), Advances in Large Margin Classifiers, pp. 171-203. Cambridge, MA: MIT Press.
HAMILTONIAN NEURAL NETWORK-BASED ORTHOGONAL FILTERS - A Basis for Artificial Intelligence