At the same time, Figure 5 suggests that the value of K has little influence on the performance, although this should still be confirmed on more examples. So far it has been shown empirically that K can be set to about 4% of the minimum number of samples in a single class.
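The empirical rule above can be sketched as a small helper; the function name and the floor of K = 1 are our own assumptions, not part of the paper:

```python
from collections import Counter

def select_k(labels, fraction=0.04):
    """Heuristic sketch: choose K as ~4% of the smallest class size.

    `labels` is any iterable of class labels for the training samples.
    The floor of 1 (for very small classes) is an added assumption.
    """
    counts = Counter(labels)                # samples per class
    min_class = min(counts.values())        # size of the smallest class
    return max(1, round(fraction * min_class))
```

For example, with a smallest class of 50 samples this rule gives K = 2.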
4 HARDWARE IMPLEMENTATION ISSUES
As described in the algorithm section, the computation of the proposed algorithm is nearly the same as that of K-means clustering. Dedicated custom VLSI chips for large-scale K-means clustering have already been developed (Shikano et al., 2007; Ma and Shibata, 2010). By adding only a series of margin processing units for calculating the product of margin and distance, the algorithm can easily be implemented on VLSI.
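The extra operation relative to plain K-means is a single multiply per center. A minimal software sketch of that step follows; the function name, the `margins` weights, and the squared-distance choice are illustrative assumptions, not the chip's actual datapath:

```python
import numpy as np

def margin_weighted_distances(x, centers, margins):
    """Sketch of the per-sample computation: K-means-style distances to
    all centers, followed by one extra margin multiplication per center
    (the role of the margin processing unit described in the text)."""
    d2 = np.sum((centers - x) ** 2, axis=1)  # standard K-means squared distances
    return margins * d2                      # added margin * distance product
```

In hardware terms, this keeps the existing distance-computation datapath intact and appends one multiplier stage per output.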
5 CONCLUSIONS
A template reduction algorithm for the nearest neighbor classifier, using K-means centers based on critical boundary vectors, has been proposed. Experiments have shown that this algorithm achieves classification performance superior to the NN classifier and the linear-kernel SVM, and comparable to the RBF-kernel SVM. Effective values of the parameters have also been determined empirically. In addition, the algorithm is highly computationally efficient and well suited to hardware implementation. Our further work will focus on self-adaptation of the K value.
REFERENCES
Bajramovic, F., Mattern, F., Butko, N., (2006). A comparison of nearest neighbor search algorithms for generic object recognition. In ACIVS'06, Advanced Concepts for Intelligent Vision Systems.
Bovolo, F., Bruzzone, L., Carlin, L., (2010). A novel technique for subpixel image classification based on support vector machine. IEEE Transactions on Image Processing, 19, 2983-2999.
Chapelle, O., Haffner, P., Vapnik, V. N., (1999). Support vector machines for histogram-based image classification. IEEE Transactions on Neural Networks, 10, 1055-1064.
Eick, C. F., Zeidat, N., Vilalta, R., (2004). Using representative-based clustering for nearest neighbor dataset editing. In ICDM'04, IEEE International Conference on Data Mining.
Fayed, H., Atiya, A., (2009). A novel template reduction approach for the k-nearest neighbor method. IEEE Transactions on Neural Networks, 20, 890-896.
Hsu, C. W., Lin, C. J., (2002). A comparison of methods for multiclass support vector machines. IEEE Transactions on Neural Networks, 13, 415-425.
Ma, Y., Shibata, T., (2010). A binary-tree hierarchical multiple-chip architecture of real-time large-scale learning processor systems. Japanese Journal of Applied Physics, 49, 04DE08.
Nikolaidis, K., Goulermas, J. Y., Wu, Q. H., (2011). A class boundary preserving algorithm for data condensation. Pattern Recognition, 44, 704-715.
Shikano, H., Ito, K., Fujita, K., Shibata, T., (2007). A real-time learning processor based on k-means algorithm with automatic seeds generation. In SoC'07, the 2007 International Symposium on System-on-Chip.
Suzuki, Y., Shibata, T., (2004). Multiple-clue face detection algorithm using edge-based feature vectors. In ICASSP'04, IEEE International Conference on Acoustics, Speech, and Signal Processing.
Wu, Y. Q., Ianakiev, K., Govindaraju, V., (2002). Improved k-nearest neighbor classification. Pattern Recognition, 35, 2311-2318.
Zhou, Y., Li, Y. W., Xia, S. X., (2009). An improved KNN text classification algorithm based on clustering. Journal of Computers, 4, 230-237.
NCTA 2011 - International Conference on Neural Computation Theory and Applications