International Conference on Computational Statistics
COMPSTAT 2010, pages 177–187. Springer.
Cao, L. J., Keerthi, S. S., Ong, C. J., Zhang, J. Q., Periy-
athamby, U., Fu, X. J., and Lee, H. P. (2006). Paral-
lel sequential minimal optimization for the training of
support vector machines. IEEE Transactions on Neu-
ral Networks, 17(4):1039–1049.
Carpenter, A. (2009). cuSVM: A CUDA implementation of support vector classification and regression. Technical report.
Catanzaro, B., Sundaram, N., and Keutzer, K. (2008). Fast
support vector machine training and classification on
graphics processors. In Proceedings of the 25th International Conference on Machine Learning, ICML '08,
pages 104–111, New York, NY, USA. ACM.
Chang, C.-C. and Lin, C.-J. (2011). LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1–27:27.
Chang, E. Y., Zhu, K., Wang, H., Bai, H., Li, J., Qiu, Z., and Cui, H. (2007). PSVM: Parallelizing support vector machines on distributed computers. In NIPS.
Chapelle, O., Haffner, P., and Vapnik, V. N. (1999). Support vector machines for histogram-based image classification. IEEE Transactions on Neural Networks, pages 1055–1064.
Cortes, C. and Vapnik, V. (1995). Support-vector networks.
Mach. Learn., 20(3):273–297.
Cotter, A., Srebro, N., and Keshet, J. (2011). A GPU-tailored approach for training kernelized SVMs. In Proceedings of the 17th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD '11, pages 805–813.
Fan, R.-E., Chen, P.-H., and Lin, C.-J. (2005). Working set selection using the second order information for training SVM. Journal of Machine Learning Research, 6:1889–1918.
Gorecki, P., Artiemjew, P., Drozda, P., and Sopyla, K.
(2012). Categorization of similar objects using bag
of visual words and support vector machines. In Fil-
ipe, J. and Fred, A. L. N., editors, ICAART (1), pages
231–236. SciTePress.
Graf, H. P., Cosatto, E., Bottou, L., Durdanovic, I., and Vap-
nik, V. (2005). Parallel support vector machines: The cascade SVM. In Advances in Neural Information Processing Systems, pages 521–528. MIT Press.
Harris, M. (2008). Optimizing Parallel Reduction in
CUDA. Technical report, NVIDIA.
Herrero-Lopez, S., Williams, J. R., and Sanchez, A. (2010).
Parallel multiclass classification using SVMs on GPUs.
In Proceedings of the 3rd Workshop on General-
Purpose Computation on Graphics Processing Units,
GPGPU ’10, pages 2–11, New York, NY, USA. ACM.
Joachims, T. (1998). Text categorization with support vec-
tor machines: learning with many relevant features. In
Nédellec, C. and Rouveirol, C., editors, Proceedings
of ECML-98, 10th European Conference on Machine
Learning, number 1398, pages 137–142. Springer
Verlag, Heidelberg, DE.
Joachims, T. (1999). Advances in Kernel Methods, chapter Making large-scale support vector machine learning practical, pages 169–184. MIT Press, Cambridge, MA, USA.
Joachims, T., Finley, T., and Yu, C. J. (2009). Cutting-plane
training of structural svms. Mach. Learn., 77(1):27–
59.
Keerthi, S., Shevade, S., Bhattacharyya, C., and Murthy, K. (2001). Improvements to Platt's SMO algorithm for SVM classifier design. Neural Computation, 13(3):637–649.
Lazebnik, S., Schmid, C., and Ponce, J. (2006). Beyond
bags of features: Spatial pyramid matching for rec-
ognizing natural scene categories. In Proceedings
of the 2006 IEEE Computer Society Conference on
Computer Vision and Pattern Recognition - Volume 2,
CVPR ’06, pages 2169–2178, Washington, DC, USA.
IEEE Computer Society.
Lin, T.-K. and Chien, S.-Y. (2010). Support vector machines on GPU with sparse matrix format. In Fourth International Conference on Machine Learning and Applications, pages 313–318.
Platt, J. (1998). Fast training of support vector machines
using sequential minimal optimization. In Advances
in Kernel Methods - Support Vector Learning. MIT
Press.
Sopyla, K., Drozda, P., and Gorecki, P. (2012). SVM with CUDA accelerated kernels for big sparse problems. In
Proceedings of the ICAISC, volume 7267 of Lecture
Notes in Computer Science, pages 439–447. Springer.
Vapnik, V. N. (1995). The Nature of Statistical Learning Theory. Springer-Verlag New York, Inc., New York, NY, USA.
Vázquez, F., Garzón, E. M., Martinez, J. A., and Fernández, J. J. (2009). The sparse matrix vector product on GPUs.
Technical report, University of Almeria.
Volkov, V. and Demmel, J. W. (2008). Benchmarking GPUs
to tune dense linear algebra. In Proceedings of the
2008 ACM/IEEE conference on Supercomputing, SC
’08, pages 31:1–31:11, Piscataway, NJ, USA. IEEE
Press.
Zanni, L., Serafini, T., and Zanghirati, G. (2006). Parallel
software for training large scale support vector ma-
chines on multiprocessor systems. J. Mach. Learn.
Res., 7:1467–1492.
Zhao, H. X. and Magoules, F. (2011). Parallel support vec-
tor machines on multi-core and multiprocessor sys-
tems. In Proceedings of the 11th International Confer-
ence on Artificial Intelligence and Applications (AIA
2011).
ICPRAM 2014 - International Conference on Pattern Recognition Applications and Methods