
serves as a multiplier of feature connections, or a
feature connection list expander. Such multipliers
increase the association capabilities of a cortical area.
Importantly, the length of a connection list should not
grow excessively; otherwise, the information value
of the feature column declines: a vote cast for all
candidates conveys no information.
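This dilution effect can be quantified with an inverse-document-frequency style weight, standard in inverted-index retrieval: a feature whose connection list covers every candidate contributes nothing. The weighting scheme below is the usual IDF, used here as an illustration rather than taken from the paper:

```python
import math

def vote_weight(num_candidates: int, list_length: int) -> float:
    """IDF-style weight of a vote cast by a feature whose
    connection list covers `list_length` of `num_candidates` targets."""
    return math.log(num_candidates / list_length)

# A feature linked to only a few targets casts an informative vote...
print(vote_weight(1000, 5))     # ~5.3
# ...while a feature linked to every target carries no information.
print(vote_weight(1000, 1000))  # 0.0
```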
Inverted indexing structures were invented long
before the emergence of machine learning. Indeed, it is
not known who invented the back-of-the-book index,
whose rows or columns of page numbers instantly
point to the locations of keywords. The inverted index
was re-invented in (Harris, D., 1954) and christened
Bag-of-Words. Computer science textbooks
describe the use of inverted (and fully inverted) files. In
computer vision, the bag-of-visual-words model,
where image features are treated as words, was re-
invented in (Csurska, G., et al., 2004), though it was
first discussed in (Bledsoe, W., Browning, I., 1959).
Partly borrowing ideas from text search engines
(Brin, S., Page, L., 1998), numerical data indexing
was discussed in (Sivic, J., Zisserman, A., 2009). In
the latter paper, image local features are first
converted into words and then processed using an
inverted text index.
This paper’s method, however, does not convert
noisy numeric features into words, but treats the
features directly using a numeric inverted index. Whereas
artificial neural networks rely on iterative
training, which results in slow learning, the numeric
index delivers practically instant learning. For
instance (Mikhailov, A. et al., 2023), training on a
dataset of 800 patients, each represented
by 20531 gene profiles, took only 0.075 seconds.
Likewise, learning half of the 581012 patterns, 52 features
each, of the famous CoverType dataset took only
0.00046 seconds. Both training sessions were
followed by pattern recognition sessions that
produced 99.75% and 90% accuracy, respectively.
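The cited timings suggest that training is a single insertion pass over the data rather than an iterative optimization. A minimal sketch of such one-pass numeric inverted-index learning follows; the quantization by bin width and the majority-vote rule are illustrative assumptions, not details taken from the cited work:

```python
from collections import defaultdict
from typing import Sequence

class NumericInvertedIndex:
    """One-pass learner: each (feature_id, quantized value) key
    holds a posting list of the class labels that exhibited it."""

    def __init__(self, bin_width: float = 1.0):
        self.bin_width = bin_width
        self.postings = defaultdict(list)  # (feature, bin) -> [labels]

    def train(self, pattern: Sequence[float], label) -> None:
        # Training is a single insertion pass -- no iteration.
        for i, value in enumerate(pattern):
            self.postings[(i, round(value / self.bin_width))].append(label)

    def recognize(self, pattern: Sequence[float]):
        # Each feature votes for every label on its posting list;
        # the most-voted label wins.
        votes = defaultdict(int)
        for i, value in enumerate(pattern):
            key = (i, round(value / self.bin_width))
            for label in self.postings.get(key, []):
                votes[label] += 1
        return max(votes, key=votes.get) if votes else None

idx = NumericInvertedIndex()
idx.train([1.0, 5.2, 3.1], "A")
idx.train([1.1, 8.0, 0.2], "B")
print(idx.recognize([0.9, 5.0, 3.0]))  # "A"
```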
The novelty of this paper, however, lies in
applying the inverted index technique to elucidate the
function of the neocortical column in the context of
pattern recognition. To that end, a pattern recognition
model was built; its performance is discussed in
Section 5, and its mathematics is presented in
Section 6.
2 INTRODUCTORY EXAMPLE
A seemingly chaotic network can be mathematically
represented by perfectly ordered columns. In Figure
1, all connections depicted with thin lines between
feature nodes (blank squares) and target nodes
(black dots) were chosen randomly.
Upon arrival of the feature pattern {b, c, e, g, i}, the
connections depicted with bold lines become active:
feature "b" talks to nodes (1,2,3,4), and feature
"g" talks to nodes (5,3,6). If a combination of features
were to spawn connections that never intersect, such
a network would be a waste of effort, because no
node would receive a sufficiently strong input. Hence,
a subset of connections must converge on a few nodes.
Here, nodes 3 and 8 become the most excited, as can be
seen from the 1st-level node histogram, which is obtained
from the network's columnar representation. On the 2nd
level, the winner is the node alpha.
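The column-wise bookkeeping behind Figure 1 can be sketched as follows. The connection lists of "b" and "g" are those given above; the lists for "c", "e", and "i" are illustrative stand-ins for the random choices in the figure:

```python
from collections import Counter

# Columnar representation: each feature column lists its target nodes.
# "b" and "g" are from the text; the remaining lists are illustrative.
columns = {
    "b": [1, 2, 3, 4],
    "c": [3, 8, 6],
    "e": [8, 3, 7],
    "g": [5, 3, 6],
    "i": [8, 2, 9],
}

def first_level_histogram(pattern):
    """Merge the connection lists of the active feature columns
    into a node histogram; the peaks mark the most excited nodes."""
    votes = Counter()
    for feature in pattern:
        votes.update(columns[feature])
    return votes

hist = first_level_histogram({"b", "c", "e", "g", "i"})
print(hist.most_common(2))  # nodes 3 and 8 lead the histogram
```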
The results of the paper are based on the
neurobiological evidence presented in the next section.
3 NEUROBIOLOGICAL
EVIDENCE
(a) Patterns are represented by combinations of
feature columns or sensory neurons (Tsunoda, K. et
al., 2001, Wilson, D., 2008).
(b) Branching of neuronal axons allows for
simultaneous transmission of messages to a number
of target neurons (Horton, J., Adams, D., 2005,
LeDoux, J., 2002, Squire, L., 2013) (excluding
internal connections within each minicolumn).
(c) Neurons in a minicolumn have the same receptive
field and respond to the same stimulus
(Buxhoeveden, D., Casanova, M., 2002).
(d) There exist hypercolumns in the neocortex. The
term hypercolumn "denotes a unit containing a full set
of values for any given set of receptive field
parameters" (Mountcastle, V., 1997, Horton, J.,
Adams, D., 2005).
What is a feasible number of feature patterns a
cortical area can memorize? Firstly, "Complex
objects are represented in macaque inferotemporal
cortex by the combination of feature columns"
(Tsunoda, K. et al., 2001). Secondly, "Any given
sensory neuron will respond to many different odors
as long as they share a common feature. The brain's
olfactory cortex then looks at the combination of
sensory neurons activated at any given time and
interprets that pattern" (Wilson, D., 2008). Now,
let us suppose that active feature columns transmit
their messages through axon terminals to distinct
destinations that never intersect. Such a network
would be a waste of effort and energy, like
sprinkling water on sand: there would be no
beneficiaries, as no target neuron would ever receive
more than one input.
ICPRAM 2024 - 13th International Conference on Pattern Recognition Applications and Methods