sidered only in relation to selecting the locations for pasting the road signs. In the future, we will also consider the context when setting the parameters assigned to the degradation model used to generate the signs.
5 CONCLUSION
In this paper, we propose a method for training end-to-end traffic sign detectors without using actual images of the traffic signs. The proposed method generates scene images that preserve the context information surrounding the traffic signs. It achieves an mAP approximately 8% higher than that of the conventional method, in which the signs are pasted at random locations. This result demonstrates that training with scene images that preserve the context information is effective for improving the precision. However, the mAP of the proposed method is approximately 7% lower than that of sign detectors trained using actual images, and this precision gap is largest for signs that appear relatively small in the scenes. It should be possible to narrow the gap by also considering the context information when determining the values of the degradation parameters used to generate the synthetic traffic signs.
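To make this direction concrete, the following minimal Python sketch (using OpenCV and NumPy) illustrates one way the context could drive the degradation parameters: the apparent height of the sign at the chosen paste location determines the blur and brightness applied to the sign template before it is composited into the scene. The function names (context_params, degrade_sign, paste_sign) and the specific heuristic are hypothetical illustrations, not the degradation model evaluated in this paper.

    import numpy as np
    import cv2

    def context_params(target_height):
        # Hypothetical heuristic: signs that appear smaller in the scene
        # (e.g., farther from the camera) receive stronger blur, mimicking
        # the resolution loss observed for distant signs.
        blur_sigma = max(0.0, 2.0 * (1.0 - target_height / 64.0))
        brightness = 0.9  # fixed dimming; could also depend on local luminance
        return blur_sigma, brightness

    def degrade_sign(sign_rgba, target_height):
        # Scale the RGBA sign template to its apparent size in the scene,
        # then blur and dim the colour channels.
        blur_sigma, brightness = context_params(target_height)
        scale = target_height / sign_rgba.shape[0]
        sign = cv2.resize(sign_rgba, None, fx=scale, fy=scale,
                          interpolation=cv2.INTER_AREA)
        rgb = sign[..., :3].astype(np.float32)
        if blur_sigma > 0:
            rgb = cv2.GaussianBlur(rgb, (0, 0), blur_sigma)
        sign[..., :3] = np.clip(rgb * brightness, 0, 255).astype(np.uint8)
        return sign

    def paste_sign(scene, sign_rgba, x, y):
        # Alpha-composite the degraded sign into the scene at (x, y);
        # assumes the sign lies fully inside the scene bounds.
        h, w = sign_rgba.shape[:2]
        alpha = sign_rgba[..., 3:4].astype(np.float32) / 255.0
        roi = scene[y:y + h, x:x + w].astype(np.float32)
        blended = alpha * sign_rgba[..., :3].astype(np.float32) \
            + (1.0 - alpha) * roi
        scene[y:y + h, x:x + w] = blended.astype(np.uint8)
        return scene

A generated training image would then be obtained by sampling a context-consistent paste location, deriving the target height from it, and calling paste_sign(scene, degrade_sign(template, target_height), x, y).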