Experiments show (Fig. 3) that as the dimensionality N grows, the speed advantage of SNN-tree over exhaustive search increases. For example, at N = M = 2 000 and b = 0.2, SNN-tree is 12 times faster than exhaustive search, and at N = M = 10 000 the acceleration reaches 26 times. Note that b_max is a fixed parameter of the algorithm, while b is the actual fraction of distorted components in the input vector. Using the experimental results, we estimated the average number of scalar product operations θ needed for an SNN-tree search:
θ ≅ exp[1.3 + (0.4 + 0.44·b + 0.4·b²)·ln N].    (9)
The solid lines in Figure 3 were plotted using equation (9). This equation makes it possible to estimate the average speed advantage of SNN-tree over exhaustive search and to predict the advantage at large values of the parameters (Table 1).
Table 1: The speed advantage of SNN-tree over exhaustive search for M = N = 10^5 and b_max = 0.3, obtained using equation (9).

b      M/θ
0.1    189
0.2     88
0.3     41
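For convenience, the sketch below evaluates equation (9) numerically. It is a minimal illustration assuming the reconstructed form of (9) given above; the function names are ours, and small discrepancies with the rounded values in Table 1 are to be expected.

import math

def snn_tree_ops(N: float, b: float) -> float:
    """Average number of scalar products for one SNN-tree search,
    as estimated by equation (9)."""
    return math.exp(1.3 + (0.4 + 0.44 * b + 0.4 * b * b) * math.log(N))

def speed_advantage(M: float, N: float, b: float) -> float:
    """Exhaustive search needs M scalar products per query, so the
    speed advantage over SNN-tree is M / theta."""
    return M / snn_tree_ops(N, b)

# Predictions in the spirit of Table 1 (M = N = 10**5, b_max = 0.3).
for b in (0.1, 0.2, 0.3):
    print(f"b = {b:.1f}: M/theta ~ {speed_advantage(1e5, 1e5, b):.0f}")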
9 CONCLUSIONS
The paper considers the problem of nearest-neighbor search in a high-dimensional configuration space. The most popular methods (k-d tree, spill-tree, BBF-tree, LSH) proved to be inefficient in this case. We proposed a tree-like algorithm (SNN-tree) that solves the given problem.
In this work, a theoretical estimate of the upper bound on the error probability of the SNN-tree algorithm was obtained. This estimate shows that the error probability decreases as the dimensionality of the problem grows. Since even at N > 500 the error is less than 10^-15, it does not seem possible to measure it experimentally. Therefore, it is safe to say that SNN-tree is an exact algorithm. Investigation of the computational complexity of the algorithm shows that the speed advantage of SNN-tree over exhaustive search increases as the dimensionality N grows.
We can therefore conclude that the SNN-tree algorithm represents an efficient alternative to exhaustive search.
ACKNOWLEDGEMENTS
The research is supported by the Russian Foundation
for Basic Research (grant 12-07-00295a).
REFERENCES
Friedman, J. H., Bentley, J. L. and Finkel, R. A., 1977. An algorithm for finding best matches in logarithmic expected time. ACM Transactions on Mathematical Software, vol. 3, pp. 209–226.
Liu, T., Moore, A. W., Gray, A. and Yang, K., 2004. An investigation of practical approximate nearest neighbor algorithms. In Proceedings of Neural Information Processing Systems (NIPS).
Indyk, P. and Motwani, R., 1998. Approximate nearest neighbors: Towards removing the curse of dimensionality. In Proceedings of the 30th STOC, pp. 604–613.
Beis, J. S. and Lowe, D. G., 1997. Shape indexing using approximate nearest-neighbor search in high-dimensional spaces. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1000–1006.
Kryzhanovsky, B., Kryzhanovskiy, V. and Litinskii, L., 2010. Machine learning in vector models of neural networks. In Advances in Machine Learning II. Dedicated to the Memory of Professor Ryszard S. Michalski, Koronacki, J., Ras, Z. W., Wierzchon, S. T. et al. (Eds.), Studies in Computational Intelligence, SCI 263. Springer, pp. 427–443.
Kryzhanovsky, V., Malsagov, M. and Tomas, J. A. C., 2013. Hierarchical classifier: based on neural networks searching tree with iterative traversal and stop criterion. Optical Memory and Neural Networks (Information Optics), vol. 22, no. 4, pp. 217–223.
Kryzhanovsky, V., Malsagov, M., Zelavskaya, I. and Tomas, J. A. C., 2014. High-dimensional binary pattern classification by scalar neural network tree. In Proceedings of the International Conference on Artificial Neural Networks. (in print).
APPENDIX A
It is necessary to calculate the following probability:
P = 1 − Pr[ ⋂_{m=1}^{M−1} { (X_m·X*) < (1 − 2·b_max)·N } ].    (A1)
Let the scalar products (X_m·X*) and (X_{m′}·X*) be independent random quantities for m ≠ m′. Then:
P = 1 − ∏_{m=1}^{M−1} Pr[ (X_m·X*) < (1 − 2·b_max)·N ].    (A2)
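To make (A1) concrete, the following Monte Carlo sketch estimates this probability for small parameters. The bipolar (±1) pattern model, the distortion procedure, and all names here are assumptions introduced for illustration; for realistic N the estimate is essentially always zero, in line with the error bound discussed in the conclusions.

import numpy as np

def estimate_error_probability(N=64, M=100, b=0.1, b_max=0.3,
                               trials=2000, seed=0):
    """Monte Carlo estimate of P in (A1): the chance that some
    non-target pattern X_m reaches the threshold (1 - 2*b_max)*N."""
    rng = np.random.default_rng(seed)
    threshold = (1.0 - 2.0 * b_max) * N
    errors = 0
    for _ in range(trials):
        patterns = rng.choice([-1, 1], size=(M, N))   # stored patterns
        x_star = patterns[0].copy()                   # query = distorted pattern 0
        flip = rng.choice(N, size=int(b * N), replace=False)
        x_star[flip] *= -1                            # distort b*N components
        # scalar products of the M-1 "wrong" patterns with the query
        products = patterns[1:] @ x_star
        if np.any(products >= threshold):
            errors += 1
    return errors / trials

print(estimate_error_probability())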