Table 7: Rank of greedy algorithm (rearranged according to the fundamental ideas).

Idea     Feature's name                                                      L = 3  L = 4  L = 5  L = 6   Ave.
Density  Maximum value of reflection intensity voxel                             3      4      1      1    2.3
         Approximated volume of a point-cloud                                    2      2      4      4    3.0
         Maximum value of normalized reflection intensity voxel                  1      3      6      5    3.8
         Mean of normalized reflection intensity voxel                           6      5      7      6    6.0
Change   Weighted mean of relative slice position                                4      7      3      2    4.0
         Weighted mean of slice feature                                          7      6      2      3    4.5
         Weighted mean of maximum value of normalized reflection intensity       8      1      5      8    5.5
         Weighted mean of variance of normalized reflection intensity            5      8      8      7    7.0
suggests that Idea 1 (increasing the density of the point-cloud) was more effective than Idea 2 (capturing its temporal change). This is likely because the movement of pedestrians and vehicles within the three frames used for multi-frame feature extraction was very small.
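The "Ave." column of Table 7 is simply the per-feature rank averaged over L = 3 to 6. The following minimal Python sketch (our own illustration, not the authors' code; the rank values are copied from the table) reproduces that column and the resulting ordering.

```python
# Reproduce the "Ave." column of Table 7 by averaging each feature's
# greedy-selection rank over the numbers of scan hits L = 3, 4, 5, 6.

ranks = {
    # feature name: ranks for L = 3, 4, 5, 6 (copied from Table 7)
    "Maximum value of reflection intensity voxel":                  [3, 4, 1, 1],
    "Approximated volume of a point-cloud":                         [2, 2, 4, 4],
    "Maximum value of normalized reflection intensity voxel":       [1, 3, 6, 5],
    "Mean of normalized reflection intensity voxel":                [6, 5, 7, 6],
    "Weighted mean of relative slice position":                     [4, 7, 3, 2],
    "Weighted mean of slice feature":                               [7, 6, 2, 3],
    "Weighted mean of max. of normalized reflection intensity":     [8, 1, 5, 8],
    "Weighted mean of variance of normalized reflection intensity": [5, 8, 8, 7],
}

# Sort features by average rank (lower is better), as in the table.
for name, r in sorted(ranks.items(), key=lambda kv: sum(kv[1]) / len(kv[1])):
    print(f"{sum(r) / len(r):.1f}  {name}")
```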
5 CONCLUSIONS
In this paper, we proposed a pedestrian detection method using multi-frame features extracted from low-resolution LIDAR data. We introduced multi-frame features that are extracted by combining point-clouds over multiple frames, both to increase their resolution and to capture their temporal changes. The proposed method detects pedestrians using classifiers trained separately on the LIDAR data divided according to the number of scan hits L.
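As a rough illustration of the two steps summarized above, the sketch below accumulates aligned point-clouds from consecutive frames and groups segments by their number of scan hits L. The data layout (an (N, 4) array of x, y, z and a scan-line index per frame) and all function names are our assumptions, not the authors' implementation.

```python
import numpy as np

def accumulate_frames(frames):
    """Combine the point-clouds of several consecutive frames (assumed to be
    aligned into a common coordinate frame) into one denser point-cloud."""
    return np.vstack(frames)

def group_by_scan_hits(segments, bins=(3, 4, 5, 6)):
    """Group segmented point-clouds by the number of scan lines L hitting
    them, so that a separate classifier can be trained for each L."""
    grouped = {L: [] for L in bins}
    for seg in segments:
        L = len(np.unique(seg[:, 3]))  # distinct scan lines in the segment
        if L in grouped:
            grouped[L].append(seg)
    return grouped
```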
In experiments using data collected in real-world environments, the proposed method with the combination of the proposed multi-frame features detected pedestrians more accurately than methods using conventional single-frame features. We also analyzed the contribution of each feature to the performance improvement. The results showed that the idea of integrating point-clouds to increase their density was effective for pedestrian detection from low-resolution LIDAR data.
Future work includes improving the proposed method by combining single-frame and multi-frame features simultaneously, constructing the classifier using partial AUC (Narasimhan and Agarwal, 2013), and comparing with features learned by deep learning.
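For reference, a partial AUC restricted to a low false-positive-rate range can already be evaluated (though not optimized, as in Narasimhan and Agarwal (2013)) with scikit-learn's roc_auc_score via its max_fpr argument; the toy data below is purely illustrative.

```python
from sklearn.metrics import roc_auc_score

# Toy example: scikit-learn computes a (standardized) partial AUC over the
# false-positive-rate range [0, max_fpr]. This only evaluates the metric; it
# is not the pAUC-optimizing SVM of Narasimhan and Agarwal (2013).
y_true  = [0, 0, 1, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.2, 0.8, 0.7]

print(roc_auc_score(y_true, y_score, max_fpr=0.1))
```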
ACKNOWLEDGEMENTS
Parts of this research were supported by a MEXT Grant-in-Aid for Scientific Research.
REFERENCES

World Health Organization (2015). Global status report on road safety 2015.

Arras, K. O., Mozos, O. M., and Burgard, W. (Apr. 2007). Using boosted features for the detection of people in 2D range data. In Proc. 2007 IEEE Int. Conf. on Robotics and Automation, pages 3402–3407.

Kidono, K., Miyasaka, T., Watanabe, A., Naito, T., and Miura, J. (June 2011). Pedestrian recognition using high-definition LIDAR. In Proc. 2011 IEEE Intelligent Vehicles Symposium, pages 405–410.

Maturana, D. and Scherer, S. (Sept. 2015). VoxNet: A 3D convolutional neural network for real-time object recognition. In Proc. 2015 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pages 922–928.

Narasimhan, H. and Agarwal, S. (Aug. 2013). SVMpAUCtight: A new support vector method for optimizing partial AUC based on a tight convex upper bound. In Proc. 19th ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining, pages 167–175.

Navarro-Serment, L. E., Mertz, C., and Hebert, M. (Oct. 2010). Pedestrian detection and tracking using three-dimensional LADAR data. Int. J. of Robotics Research, vol. 29, no. 12, pages 1516–1528.

Ogawa, T., Sakai, H., Suzuki, Y., Takagi, K., and Morikawa, K. (June 2011). Pedestrian detection and tracking using in-vehicle LIDAR for automotive application. In Proc. 2011 IEEE Intelligent Vehicles Symposium, pages 734–739.

Premebida, C., Ludwig, O., and Nunes, U. (Oct. 2009). Exploiting LIDAR-based features on pedestrian detection in urban scenarios. In Proc. 2009 IEEE Int. Conf. on Intelligent Transportation Systems, pages 1–6.

Shroff, D., Nangalia, H., Metawala, A., Parulekar, M., and Padte, V. (Jan. 2013). Dynamic matrix and model predictive control for a semi-auto pilot car. In Proc. 2013 IEEE Int. Conf. on Advances in Technology and Engineering, pages 1–5.

Spinello, L., Luber, M., and Arras, K. O. (May 2011). Tracking people in 3D using a bottom-up top-down detector. In Proc. 2011 IEEE Int. Conf. on Robotics and Automation, pages 1304–1310.