Environment Adaptive Pedestrian Detection using In-vehicle Camera and GPS

Daichi Suzuo, Daisuke Deguchi, Ichiro Ide, Hiroshi Murase, Hiroyuki Ishida, Yoshiko Kojima

Abstract

In recent years, accurate pedestrian detection from in-vehicle camera images has attracted attention as a key component of safe driving assistance systems. Currently, the most successful methods are based on statistical learning, which requires a large amount of training images; a shortage of training images degrades detection accuracy. That is, in driving environments with few or no training images, it is difficult to detect pedestrians accurately. Therefore, we propose an approach that automatically collects training images to build classifiers for various driving environments. This is expected to realize highly accurate pedestrian detection by selecting the classifier appropriate for the current location. The proposed method consists of three steps: classification of driving scenes, collection of non-pedestrian images and training of a classifier for each scene class, and association of each scene-class-specific classifier with GPS location information. Through experiments, we confirmed the effectiveness of the proposed method compared to baseline methods.
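The final step of the pipeline, selecting a scene-class-specific classifier based on the vehicle's current GPS position, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class name, the nearest-location selection rule, and all coordinates and scene labels are assumptions for demonstration purposes.

```python
import math

class SceneAdaptiveDetector:
    """Hypothetical sketch: pick the classifier whose scene class is
    associated with the GPS location closest to the current position."""

    def __init__(self):
        # (latitude, longitude) -> scene class, learned offline (steps 1-2)
        self.location_to_scene = {}
        # scene class -> trained classifier (a plain label here, for brevity)
        self.scene_classifiers = {}

    def register(self, lat, lon, scene_class, classifier):
        self.location_to_scene[(lat, lon)] = scene_class
        self.scene_classifiers[scene_class] = classifier

    def select_classifier(self, lat, lon):
        # Nearest registered location decides the scene class
        # (one plausible association rule, not necessarily the paper's).
        nearest = min(self.location_to_scene,
                      key=lambda p: math.hypot(p[0] - lat, p[1] - lon))
        return self.scene_classifiers[self.location_to_scene[nearest]]

detector = SceneAdaptiveDetector()
detector.register(35.15, 136.96, "urban", "urban-classifier")
detector.register(35.30, 137.10, "suburban", "suburban-classifier")
print(detector.select_classifier(35.16, 136.97))  # nearest to the urban point
```

In a full system, the registered classifiers would be the per-scene-class detectors trained on the automatically collected images, and the lookup would run once per GPS update before detection.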



Paper Citation


in Harvard Style

Suzuo D., Deguchi D., Ide I., Murase H., Ishida H. and Kojima Y. (2014). Environment Adaptive Pedestrian Detection using In-vehicle Camera and GPS. In Proceedings of the 9th International Conference on Computer Vision Theory and Applications - Volume 2: VISAPP, (VISIGRAPP 2014), ISBN 978-989-758-004-8, pages 354-361. DOI: 10.5220/0004677003540361


in Bibtex Style

@conference{visapp14,
author={Daichi Suzuo and Daisuke Deguchi and Ichiro Ide and Hiroshi Murase and Hiroyuki Ishida and Yoshiko Kojima},
title={Environment Adaptive Pedestrian Detection using In-vehicle Camera and GPS},
booktitle={Proceedings of the 9th International Conference on Computer Vision Theory and Applications - Volume 2: VISAPP, (VISIGRAPP 2014)},
year={2014},
pages={354-361},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0004677003540361},
isbn={978-989-758-004-8},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 9th International Conference on Computer Vision Theory and Applications - Volume 2: VISAPP, (VISIGRAPP 2014)
TI - Environment Adaptive Pedestrian Detection using In-vehicle Camera and GPS
SN - 978-989-758-004-8
AU - Suzuo D.
AU - Deguchi D.
AU - Ide I.
AU - Murase H.
AU - Ishida H.
AU - Kojima Y.
PY - 2014
SP - 354
EP - 361
DO - 10.5220/0004677003540361
ER -