Depth Sensor Placement for Human Robot Cooperation

Max Stähr, Andrew M. Wallace, Neil M. Robertson


Continuous sensing of the environment from a mobile robot perspective can prevent harmful collisions between humans and mobile service robots. However, the overall collision avoidance performance depends strongly on the placement of the depth sensors on the mobile robot, which must also preserve the flexibility of the working area. In this paper, we present a novel approach to optimal sensor placement based on the visibility of the human in the robot environment combined with a quantified risk of collision. Human visibility is determined by ray tracing from all possible camera positions on the robot surface, and safety is quantified from the speed and direction of the robot throughout a pre-determined task. A cost function based on discrete cells is formulated and solved numerically for two scenarios of increasing complexity, using a CUDA implementation to reduce computation time.
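The visibility test in the abstract hinges on tracing rays through a discretized workspace, for which the cited Amanatides-Woo voxel traversal (reference 1) is the standard tool. The sketch below is an illustrative 2-D, CPU-only Python reduction of that idea; the paper itself works in 3-D with a CUDA implementation, and all function names and the risk-map shape here are our own assumptions, not the authors' code.

```python
import math

def traverse_cells(start, end):
    """Enumerate the unit-grid cells crossed by the segment start -> end,
    in order, using 2-D Amanatides-Woo voxel traversal."""
    x, y = math.floor(start[0]), math.floor(start[1])
    x_end, y_end = math.floor(end[0]), math.floor(end[1])
    dx, dy = end[0] - start[0], end[1] - start[1]
    step_x, step_y = (1 if dx > 0 else -1), (1 if dy > 0 else -1)
    # Parametric distance to the first vertical/horizontal cell boundary,
    # and the distance between successive boundaries along each axis.
    t_max_x = ((x + (step_x > 0)) - start[0]) / dx if dx else math.inf
    t_max_y = ((y + (step_y > 0)) - start[1]) / dy if dy else math.inf
    t_dx = abs(1.0 / dx) if dx else math.inf
    t_dy = abs(1.0 / dy) if dy else math.inf
    cells = [(x, y)]
    while (x, y) != (x_end, y_end):
        if t_max_x < t_max_y:                # next boundary crossed is vertical
            t_max_x, x = t_max_x + t_dx, x + step_x
        else:                                # horizontal (on ties: step in y)
            t_max_y, y = t_max_y + t_dy, y + step_y
        cells.append((x, y))
    return cells

def visibility_cost(sensor, target_cells, occupied, risk):
    """Risk-weighted occlusion cost of one candidate sensor position:
    sum the risk of every target cell whose line of sight to the sensor
    passes through an occupied (occluding) cell."""
    cost = 0.0
    for cell in target_cells:
        centre = (cell[0] + 0.5, cell[1] + 0.5)
        ray = traverse_cells(sensor, centre)
        if any(c in occupied for c in ray[:-1]):  # ignore the target cell itself
            cost += risk.get(cell, 1.0)
    return cost
```

A placement optimizer would evaluate such a cost for every candidate sensor position on the robot surface and keep the minimizer; since each ray is independent, this per-ray work is exactly the kind of computation the paper offloads to the GPU.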


  1. Amanatides, J. and Woo, A. (1987). A fast voxel traversal algorithm for ray tracing. In Proceedings of EUROGRAPHICS.
  2. Angerer, S., Strassmair, C., Staehr, M., Roettenbacher, M., and Robertson, N. (2012). Give me a hand. In Technologies for Practical Robot Applications (TePRA), 2012 IEEE International Conference on.
  3. Bodor, R., Drenner, A., Schrater, P., and Papanikolopoulos, N. (2007). Optimal camera placement for automated surveillance tasks. Journal of Intelligent & Robotic Systems.
  4. Brogan, D. C. and Johnson, N. L. (2003). Realistic human walking paths. In Computer Animation and Social Agents, 2003. 16th International Conference on.
  5. Chen, X. and Davis, J. (2008). An occlusion metric for selecting robust camera configurations. Machine Vision and Applications.
  6. De Luca, A., Albu-Schaffer, A., Haddadin, S., and Hirzinger, G. (2006). Collision detection and safe reaction with the DLR-III lightweight manipulator arm. In Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on. IEEE.
  7. Dhillon, S. S., Chakrabarty, K., and Iyengar, S. S. (2002). Sensor placement for grid coverage under imprecise detections. Information Fusion, 2002. Proceedings of the Fifth International Conference on.
  8. Dunn, E., Olague, G., and Lutton, E. (2006). Parisian camera placement for vision metrology. Pattern Recognition Letters.
  9. Ebert, D. M. and Henrich, D. (2002). Safe human-robot-cooperation: Image-based collision detection for industrial robots. In Intelligent Robots and Systems, 2002. IEEE/RSJ International Conference on. IEEE.
  10. Fischer, M. and Henrich, D. (2009). 3D collision detection for industrial robots and unknown obstacles using multiple depth images. Advances in Robotics Research.
  11. Flacco, F. and De Luca, A. (2010). Multiple depth/presence sensors: Integration and optimal placement for human/robot coexistence. In Robotics and Automation (ICRA), 2010 IEEE International Conference on. IEEE.
  12. Graf, J., Puls, S., and Wörn, H. (2010). Recognition and understanding situations and activities with description logics for safe human-robot cooperation. In COGNITIVE 2010, The Second International Conference on Advanced Cognitive Technologies and Applications.
  13. Haddadin, S. (2013). Towards Safe Robots: Approaching Asimov's 1st Law. Springer Publishing Company, Incorporated.
  14. Henrich, D. and Gecks, T. (2008). Multi-camera collision detection between known and unknown objects. In Distributed Smart Cameras, 2008. ICDSC 2008. Second ACM/IEEE International Conference on. IEEE.
  15. Kulic, D. and Croft, E. (2007). Affective state estimation for human-robot interaction. Robotics, IEEE Transactions on.
  16. Lacevic, B. and Rocco, P. (2010). Kinetostatic danger field - a novel safety assessment for human-robot interaction. In Intelligent Robots and Systems (IROS), 2010.
  17. Lenz, C. (2012). Fusing multiple kinects to survey shared human-robot-workspaces. Technical Report TUM-I1214, Technische Universität München, Munich, Germany.
  18. Mittal, A. and Davis, L. (2008). A General Method for Sensor Planning in Multi-Sensor Systems: Extension to Random Occlusion. International journal of computer vision, 76(1):31-52.
  19. Nikolaidis, S., Ueda, R., Hayashi, A., and Arai, T. (2009). Optimal camera placement considering mobile robot trajectory. In Robotics and Biomimetics, 2008. IEEE International Conference on. IEEE.
  20. O'Rourke, J. (1987). Art gallery theorems and algorithms. Oxford University Press, Oxford.
  21. Siegwart, R., Nourbakhsh, I. R., and Scaramuzza, D. (2011). Introduction to autonomous mobile robots. MIT Press.
  22. Sisbot, E., Marin-Urias, L., Broquere, X., Sidobre, D., and Alami, R. (2010). Synthesizing robot motions adapted to human presence. International Journal of Social Robotics.
  23. Yao, Y., Chen, C.-H., Abidi, B., Page, D., Koschan, A., and Abidi, M. (2008). Sensor planning for automated and persistent object tracking with multiple cameras. In Computer Vision and Pattern Recognition, 2008. IEEE Conference on. IEEE.

Paper Citation

in Harvard Style

Stähr M., Wallace A. M. and Robertson N. M. (2014). Depth Sensor Placement for Human Robot Cooperation. In Proceedings of the 11th International Conference on Informatics in Control, Automation and Robotics - Volume 2: ICINCO, ISBN 978-989-758-040-6, pages 311-318. DOI: 10.5220/0005017103110318

in Bibtex Style

@conference{stahr2014depth,
author={Max Stähr and Andrew M. Wallace and Neil M. Robertson},
title={Depth Sensor Placement for Human Robot Cooperation},
booktitle={Proceedings of the 11th International Conference on Informatics in Control, Automation and Robotics - Volume 2: ICINCO},
year={2014},
pages={311-318},
doi={10.5220/0005017103110318},
isbn={978-989-758-040-6},
}

in EndNote Style

JO - Proceedings of the 11th International Conference on Informatics in Control, Automation and Robotics - Volume 2: ICINCO
TI - Depth Sensor Placement for Human Robot Cooperation
SN - 978-989-758-040-6
AU - Stähr M.
AU - Wallace A. M.
AU - Robertson N. M.
PY - 2014
SP - 311
EP - 318
DO - 10.5220/0005017103110318