today, the cost of computation has improved by a factor of about 1 million. We can count on an additional factor of 100 before fundamental limitations are encountered. At that point, a state-of-the-art digital system will still require 10 MW to process information at the rate that it is processed by a single human brain.
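As a rough check on the 10 MW figure, the arithmetic can be sketched as follows; the operation rate (about 10^16 synaptic events per second) and the limiting digital cost (about 10^-9 J per operation) are assumed order-of-magnitude values, not figures taken from this text.

# Back-of-the-envelope check of the 10 MW figure (assumed values).
brain_ops_per_s = 1e16  # ~10^16 synaptic operations per second, assumed
j_per_op_limit = 1e-9   # ~10^-9 J per digital operation at the limit, assumed

power_watts = brain_ops_per_s * j_per_op_limit
print(f"{power_watts:.0e} W")  # 1e+07 W, i.e. 10 MW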
The unavoidable conclusion, which (Carver, 1990) reached about twenty years ago, is that we have something fundamental to learn from the brain about a new and much more effective form of computation. Even the simplest brains of the
simplest animals are awesome computational
instruments. They do computations we do not know
how to do, in ways we do not understand. We might
think that this big disparity in the effectiveness of
computation has to do with the fact that, down at the
device level, the nerve membrane is actually
working with single molecules. Perhaps
manipulating single molecules is fundamentally
more efficient than is using the continuum physics
with which we build transistors. If that conjecture
were true, we would have no hope that our silicon
technology would ever compete with the nervous
system. In fact, however, the conjecture is false.
Nerve membranes use populations of channels,
rather than individual channels, to change their
conductances, in much the same way that transistors
use populations of electrons rather than single
electrons. It is certainly true that a single channel
can exhibit much more complex behaviors than can
a single electron in the active region of a transistor,
but these channels are used in large populations, not
in isolation (Carver, 1990). We can compare the two
technologies by asking how much energy is
dissipated in charging up the gate of a transistor
from a 0 to a 1. We might imagine that a transistor
would compute a function that is loosely comparable
to a synaptic operation. In today's technology, it takes about 10^-13 J to charge up the gate of a single minimum-size transistor. In ten years, the number will be about 10^-15 J, within shooting range of the kind of efficiency realized by nervous systems. A whole computer, however, is about two orders of magnitude less efficient than is a single chip. So the disparity between the efficiency of computation in the nervous system and that in a computer is primarily attributable not to the individual device requirements, but rather to the way the devices are used in the system.
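The two energy figures quoted above are consistent with the usual estimate E = (1/2) C V^2 for charging a gate; the capacitance and supply-voltage values in the sketch below are illustrative assumptions, not numbers from the text.

# Energy to charge a transistor gate: E = (1/2) * C * V^2.
def gate_energy(c_farads, v_volts):
    return 0.5 * c_farads * v_volts ** 2

# Assumed ~10 fF gate at a 5 V supply: on the order of 10^-13 J.
print(f"{gate_energy(10e-15, 5.0):.1e} J")  # ~1.2e-13 J
# Assumed ~1 fF gate at a ~1.4 V supply: on the order of 10^-15 J.
print(f"{gate_energy(1e-15, 1.4):.1e} J")   # ~9.8e-16 J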
Where did all the energy go? There is a factor of 1
million unaccounted for between what it costs to
make a transistor work and what is required to do an
operation the way we do it in a digital computer.
There are two primary causes of energy waste in the
digital systems we build today.
1) We lose a factor of about 100 because, the way we build digital hardware, the capacitance of the gate is only a very small fraction of the capacitance of the node. The node is mostly wire, so we spend most of our energy charging up the wires and not the gate.
2) We use far more than one transistor to do an operation; in a typical implementation, we switch about 10,000 transistors to do one operation. So altogether it costs about 1 million times as much energy to make what we call an operation in a digital machine as it costs to operate a single transistor.
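The factor of 1 million is simply the product of these two causes; the following sketch restates the arithmetic, taking the per-gate energy from the earlier estimate.

# Energy per digital operation = per-gate energy times the two waste factors.
gate_energy_j = 1e-13        # charging one minimum-size gate (today's technology)
wire_factor = 100            # node capacitance is mostly wire, not gate
transistors_per_op = 10_000  # transistors switched per digital operation

energy_per_op_j = gate_energy_j * wire_factor * transistors_per_op
print(f"{energy_per_op_j:.0e} J per operation")               # 1e-07 J
print(f"overhead: {wire_factor * transistors_per_op:.0e} x")  # 1e+06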
(Carver, 1990) does not believe that there is any magic in the nervous system: no mysterious, undefined fluid, no phenomenon that is orders of magnitude more effective than anything we can ever imagine.
There is nothing done in the nervous system that we cannot emulate with electronics, provided that we understand the principles of neural information processing and can express them, through suitable conceptual or software transformations, in a general reference frame (geometry).
We can start by letting the device physics define the elementary operations. These functions provide a rich set of computational primitives, each a direct result of fundamental physical principles. They are not the operations out of which we are accustomed to building computers, but in many ways they are much more interesting: more interesting than AND and OR, more interesting than multiplication and addition. But they are very different.
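One concrete illustration of such a primitive (our example, with assumed parameter values, not one spelled out in the text): in subthreshold operation, a MOS transistor's channel current is exponential in its gate voltage, and a differential pair built from two such devices splits its bias current as a smooth tanh of the input difference, a useful nonlinearity obtained directly from device physics.

import math

I0 = 1e-15   # pre-exponential current scale (A), assumed
KAPPA = 0.7  # gate coupling coefficient, assumed
UT = 0.025   # thermal voltage kT/q at room temperature (V)

def subthreshold_current(v_gate):
    """Exponentiation as a device-physics primitive."""
    return I0 * math.exp(KAPPA * v_gate / UT)

def diff_pair_output(v1, v2, i_bias=1e-9):
    """Differential pair: bias current split as a tanh of the input difference."""
    return i_bias * math.tanh(KAPPA * (v1 - v2) / (2 * UT))

print(f"{diff_pair_output(0.51, 0.50):.2e} A")  # near-linear regime
print(f"{diff_pair_output(0.70, 0.50):.2e} A")  # saturated near the bias current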
different. (Carver,1990) tries to fight them, to turn
them into something with which we are familiar, he
thinks to end up making a mess. We show in this
paper that this is not true. In fact (Carver,1990)
forgot that the new operations must be oriented to a
specific goal or intension. Now we are in agreement
with and his neuromorphic network but we add a
new dimension to the electrical system by the
geometry in multidimensional space of charges to
mimic the wanted transformation in the
multidimensional space of the states.
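A minimal numerical sketch of this idea (all names, dimensions, and matrices below are hypothetical choices for illustration): represent the states by an encoding into a higher-dimensional vector of charges, and pick the geometry acting on the charges so that it mimics a wanted transformation T of the states.

import numpy as np

rng = np.random.default_rng(0)

T = np.array([[0.0, -1.0],   # wanted transformation in state space:
              [1.0,  0.0]])  # a 90-degree rotation, chosen arbitrarily

E = rng.normal(size=(4, 2))  # encode: states -> charges (4-dimensional)
D = np.linalg.pinv(E)        # decode: charges -> states (D @ E = identity)
G = E @ T @ D                # "geometry" acting on the charge space

s = np.array([1.0, 2.0])     # a state vector
q = E @ s                    # its representation in the space of charges
s_out = D @ (G @ q)          # transform among the charges, then read out

print(np.allclose(s_out, T @ s))  # True: the charge geometry mimics T

The encoding and readout here are arbitrary; the point is only that a fixed linear geometry on the charges suffices to reproduce a wanted linear transformation on the states.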
So the real trick is to invent a vector
representation of the electrical charges that
takes advantage of the inherent capabilities of
the medium, such as the ability to mimic the
wanted transformation. These are powerful
primitives. In conclusion we use the nervous