overall input space. If they do not, for instance if the centering or rotation of MNIST digits were randomized, SFC's results would be substantially degraded. Another
process would be needed to identify bounding boxes
and rotation. Alternatively, SFC could be used to con-
struct multiple layers of fields which, similar to parts
of the visual cortex, move from simple, invariant pat-
terns (lines) into more complex ones (digits).
Another challenge is separating patterns from irrelevant (or inactive) features. All experiments except the one involving iBeacon signals used random masks and a negative error signal to create this separation. Without a separating force, inactive features remain at their initial positions: clusters do not separate on their own, and cluster orientation and position across the spherical field are relative. Irrelevant features could therefore, purely by their starting position, insinuate themselves into a cluster. Our goal is to find a method that makes masks unnecessary, at least for this purpose. One possible solution is the inclusion of polarity among points. A class of negative points would attract positive co-active points but repel each other.
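A minimal sketch of how such signed polarity could supply the separating force follows. All names and the exact force rule here are our assumptions for illustration, not part of SFC as implemented:

```python
import numpy as np

def unit(v):
    """Project a vector back onto the unit sphere."""
    return v / np.linalg.norm(v)

def polar_update(points, polarity, coactivity, step=0.05):
    """One hypothetical update step with signed points.

    points:     (n, 3) positions on the unit sphere
    polarity:   (n,) entries +1 or -1
    coactivity: (n, n) symmetric co-activity scores in [0, 1]

    Opposite-polarity, co-active pairs attract; negative points
    repel each other, supplying the separating force that random
    masks provided in the experiments.
    """
    new = points.copy()
    n = len(points)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if polarity[i] != polarity[j]:
                force = step * coactivity[i, j]   # attraction scaled by co-activity
            elif polarity[i] < 0:
                force = -step                     # negative points repel each other
            else:
                continue                          # no direct positive-positive force
            new[i] = unit(new[i] + force * (points[j] - points[i]))
    return new
```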
Next, the order in which points move toward other points matters. In Figure 1, a movement first toward C and then B would produce a different new A. To combat this, we randomize the update order between iterations. The full impact of update order is unknown; however, without randomization, points tend to orbit the field while maintaining their relative distances.
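The randomization can be sketched as a small outer loop; the `update_point` callback here is a placeholder for SFC's actual movement rule, not part of the paper:

```python
import random

def train(points, iterations, update_point):
    """Hypothetical outer loop: reshuffle the update order every
    iteration so that no fixed sequence (e.g. always moving A
    toward C before B) biases where points settle."""
    order = list(range(len(points)))
    for _ in range(iterations):
        random.shuffle(order)          # fresh order each iteration
        for i in order:
            points[i] = update_point(points, i)
    return points
```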
Co-activity depends on the length of the learning window. If the window is too short, co-activity over longer intervals is missed; if it is too long, all points tend to collapse into a single cluster. The latter effect also depends on the shape of the Hebbian function. One possibility is to use multiple discrete windows across different networks to capture both long- and short-term relationships. This trade-off is a challenge for neural plasticity algorithms, and for credit assignment in general.
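One simple way to make the window dependence concrete is to count spike pairs that fall within the window. This is a sketch under our own assumptions; SFC's actual Hebbian function may weight pairs by time difference rather than count them:

```python
def coactivity(spikes_a, spikes_b, window):
    """Hypothetical co-activity score: count pairs of spikes from
    two sorted spike-time trains that fall within `window` of each
    other. A short window misses slow relationships; a long one
    makes nearly everything look co-active."""
    count = 0
    j = 0
    for t in spikes_a:
        # skip spikes in b that are too early to pair with t
        while j < len(spikes_b) and spikes_b[j] < t - window:
            j += 1
        k = j
        while k < len(spikes_b) and spikes_b[k] <= t + window:
            count += 1
            k += 1
    return count
```

With spike trains `[0, 10]` and `[1, 20]`, a window of 2 finds one co-active pair, while a window of 10 finds three, illustrating how the score inflates as the window grows.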
A significant problem in data analysis concerns
the identification of the number of classes within a
data set. In the MNIST experiment, we presupposed
the number of classes by training ten different sets
of points and selecting one set of features from each
field. In an unlabeled context, this is not possible.
One possibility for future research is to use SFC as a
wrapper where results from each iteration are used to
partition the data set for subsequent iterations.
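Such a wrapper might take the following recursive shape, where `cluster_once` stands in for a single SFC run returning index partitions. This is entirely hypothetical and not part of the current implementation:

```python
def sfc_wrapper(data, cluster_once, min_size=10):
    """Recursively re-partition a data set: run one clustering
    pass, then recurse into each partition until partitions stop
    splitting or become too small to trust."""
    parts = cluster_once(data)
    if len(parts) <= 1 or any(len(p) < min_size for p in parts):
        return [data]                  # stop: treat this subset as one class
    result = []
    for p in parts:
        result.extend(sfc_wrapper([data[i] for i in p],
                                  cluster_once, min_size))
    return result
```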
In conclusion, we note that SFC is part of a larger
project to create artificial neurons with fully train-
able dendrites. Neuroscience evidence suggests that
dendritic neurons are computationally attractive (Mel,
1999). Such dendritic neurons would have expansive
dendritic trees in which synaptic (i.e., point) position,
tree shape, branch compartmentalization and synaptic
weights are all trainable parameters. Spike-time de-
pendent feature clustering describes the first stage of
this work: synaptic position on a spherical plane. SFC also has a second half, not detailed in this paper, which uses co-activity of input and output neurons to adjust the radius of a point (or synapse). In other words, this paper describes clustering due to input–input co-activity, while the second half describes clustering due to input–output co-activity. The second half was omitted because the goal of this work was to show that input co-activity alone can encode small-world information in the positions of synapses in a dendritic field.
REFERENCES
Chandrashekar, G. and Sahin, F. (2014). A survey on feature selection methods. Computers & Electrical Engineering, 40(1):16–28.
Dua, D. and Graff, C. (2017). UCI Machine Learning Repository.
Hebb, D. O. (2005). The Organization of Behavior: A Neuropsychological Theory. Psychology Press.
Izhikevich, E. M. (2007). Dynamical Systems in Neuroscience. MIT Press.
Khalid, S., Khalil, T., and Nasreen, S. (2014). A survey of feature selection and feature extraction techniques in machine learning. In 2014 Science and Information Conference, pages 372–378. IEEE.
LeCun, Y., Cortes, C., and Burges, C. (2010). MNIST handwritten digit database. AT&T Labs [Online]. Available: http://yann.lecun.com/exdb/mnist, 2.
Maass, W. (1997). Networks of spiking neurons: The third generation of neural network models. Neural Networks, 10(9):1659–1671.
Mel, B. W. (1999). Why have dendrites? A computational perspective.
Mohammadi, M., Al-Fuqaha, A., Guizani, M., and Oh, J. S. (2017). Semi-supervised deep reinforcement learning in support of IoT and smart city services. IEEE Internet of Things Journal, pages 1–12.
ICAART 2022 - 14th International Conference on Agents and Artificial Intelligence