7 CONCLUSIONS
In this paper, a recognition and classification method based on CNN-SVM has been proposed. In the training process, deep image features are extracted by a CNN in the YCbCr color space, and an SVM connected to the last layer of the CNN performs the final classification, which contributes to better training results.
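As a rough illustration of this pipeline, a minimal sketch is given below. It is not the paper's implementation: the random-projection feature extractor merely stands in for the trained CNN, and the OpenCV/scikit-learn calls as well as the SVM kernel and C value are assumptions made for the sketch.

```python
import numpy as np
import cv2                      # assumed: OpenCV for the YCbCr conversion
from sklearn.svm import SVC     # assumed: scikit-learn SVM in place of the softmax layer

def to_ycbcr(bgr_image):
    """Convert an OpenCV BGR image to YCbCr (OpenCV names the layout YCrCb)."""
    return cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)

def cnn_features(ycbcr_image, dim=128):
    """Placeholder feature extractor standing in for the trained CNN's last hidden layer."""
    flat = cv2.resize(ycbcr_image, (32, 32)).astype(np.float32).ravel() / 255.0
    rng = np.random.default_rng(0)              # fixed projection so the sketch is deterministic
    w = rng.standard_normal((flat.size, dim))
    return np.maximum(flat @ w, 0.0)            # ReLU-like nonlinearity

def train_svm(train_images_bgr, labels):
    """Fit an SVM on top of the (placeholder) deep features, as in the CNN-SVM scheme."""
    feats = np.stack([cnn_features(to_ycbcr(img)) for img in train_images_bgr])
    clf = SVC(kernel="rbf", C=1.0)              # kernel and C are assumptions, not from the paper
    clf.fit(feats, labels)
    return clf

def predict(clf, bgr_image):
    """Classify one image with the trained SVM head."""
    return clf.predict(cnn_features(to_ycbcr(bgr_image))[None, :])[0]
```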
On the other hand, several image preprocessing procedures are applied in the testing process in order to mitigate negative impacts such as insufficient illumination, partial occlusion, and severe deformation.
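The exact preprocessing steps are not detailed in this section; the sketch below shows two common operations that address insufficient illumination, assuming OpenCV is available. The gamma value and the choice of luma-only equalization are illustrative, not taken from the paper.

```python
import cv2
import numpy as np

def gamma_correct(bgr_image, gamma=1.8):
    """Brighten an under-exposed image with a fixed gamma curve (gamma value is illustrative)."""
    lut = np.array([((i / 255.0) ** (1.0 / gamma)) * 255 for i in range(256)],
                   dtype=np.uint8)
    return cv2.LUT(bgr_image, lut)

def equalize_luma(bgr_image):
    """Equalize only the luma (Y) channel so chrominance, and hence sign color, is preserved."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```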
Experiment-based comparisons with other state-of-the-art methods verify that our model outperforms them in both training accuracy and speed. Furthermore, we found that some traffic signs are misrecognized when we apply this method on the unmanned ground vehicle. In the near future, we plan to expand our data set by collecting more traffic sign images, especially images captured at night. We will also accelerate the method by optimizing the algorithm for real-time application in vehicles.
ACKNOWLEDGEMENTS
This work was supported by the National Natural Science Foundation of China (NSFC), Grant No. 61373106. The authors gratefully acknowledge everyone who helped in the work. Corresponding author: Lan Lin.