Detecting Fine-grained Sitting Affordances with Fuzzy Sets

Viktor Seib, Malte Knauf, Dietrich Paulus

2016

Abstract

Recently, object affordances have moved into the focus of researchers in computer vision. Affordances describe how an object can be used by a specific agent. This additional information on the purpose of an object is used to augment the classification process. With the approach proposed here, we aim to bring affordances and object classification closer together by proposing fine-grained affordances. We present an algorithm that detects fine-grained sitting affordances in point clouds by iteratively transforming a human model into the scene. This approach enables us to distinguish object functionality on a finer-grained scale, thus more closely resembling the different purposes of similar objects. For instance, traditional methods suggest that a stool, a chair, and an armchair all afford sitting. Our approach confirms this, but additionally distinguishes sitting without a backrest, with a backrest, and with armrests. This fine-grained affordance definition closely resembles individual types of sitting and better reflects the purposes of different chairs. We experimentally evaluate our approach and provide fine-grained affordance annotations in a dataset from our lab.
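The abstract combines fuzzy sets (Zadeh, 1965) with geometric checks against a human model to grade how well a candidate surface supports each sitting type. The following is a minimal sketch of that idea only: the feature names (seat height, backrest height, armrest count) and all thresholds are illustrative assumptions, not the paper's actual parameters or pipeline.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: rises on [a, b], equals 1 on [b, c], falls on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def sitting_affordances(seat_height_m, backrest_height_m, armrest_count):
    """Return fuzzy membership degrees for three fine-grained sitting affordances.

    All ranges below are hypothetical placeholders for values a system might
    derive from fitting a human model into the scene.
    """
    seat_ok = trapezoid(seat_height_m, 0.25, 0.35, 0.55, 0.70)     # plausible seat height
    backrest = trapezoid(backrest_height_m, 0.10, 0.25, 1.00, 1.20)  # plausible backrest height
    has_arms = min(armrest_count / 2.0, 1.0)                         # graded by number of armrests
    # Fuzzy AND is modeled as min(), fuzzy NOT as (1 - membership).
    return {
        "sit_without_backrest": min(seat_ok, 1.0 - backrest),
        "sit_with_backrest": min(seat_ok, backrest, 1.0 - has_arms),
        "sit_with_armrests": min(seat_ok, backrest, has_arms),
    }

# A stool-like candidate: valid seat height, no backrest, no armrests.
print(sitting_affordances(0.45, 0.0, 0))
```

With these toy thresholds, a stool-like input yields full membership for sitting without a backrest and zero for the other two types, while adding a backrest and two armrests shifts the maximal membership to the armchair-style affordance.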

References

  1. Bar-Aviv, E. and Rivlin, E. (2006). Functional 3D object classification using simulation of embodied agent. In BMVC, pages 307-316.
  2. Castellini, C., Tommasi, T., Noceti, N., Odone, F., and Caputo, B. (2011). Using object affordances to improve object recognition. Autonomous Mental Development, IEEE Transactions on, 3(3):207-215.
  3. Chemero, A. and Turvey, M. T. (2007). Gibsonian affordances for roboticists. Adaptive Behavior, 15(4):473-480.
  4. Gibson, J. J. (1986). The ecological approach to visual perception. Routledge.
  5. Grabner, H., Gall, J., and Van Gool, L. (2011). What makes a chair a chair? In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 1529-1536.
  6. Hermans, T., Rehg, J. M., and Bobick, A. (2011). Affordance prediction via learned object attributes. In International Conference on Robotics and Automation: Workshop on Semantic Perception, Mapping, and Exploration.
  7. Hinkle, L. and Olson, E. (2013). Predicting object functionality using physical simulations. In Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on, pages 2784-2790. IEEE.
  8. Hornung, A., Wurm, K. M., Bennewitz, M., Stachniss, C., and Burgard, W. (2013). OctoMap: An efficient probabilistic 3D mapping framework based on octrees. Autonomous Robots. Software available at http://octomap.github.com.
  9. Jiang, Y. and Saxena, A. (2013). Hallucinating humans for learning robotic placement of objects. In Experimental Robotics, pages 921-937. Springer.
  10. Kjellström, H., Romero, J., and Kragic, D. (2011). Visual object-action recognition: Inferring object affordances from human demonstration. Computer Vision and Image Understanding, 115(1):81-90.
  11. Lopes, M., Melo, F. S., and Montesano, L. (2007). Affordance-based imitation learning in robots. In Intelligent Robots and Systems, 2007. IROS 2007. IEEE/RSJ International Conference on, pages 1015-1021. IEEE.
  12. Maier, J. R., Ezhilan, T., and Fadel, G. M. (2007). The affordance structure matrix: a concept exploration and attention directing tool for affordance based design. In ASME 2007 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, pages 277-287. American Society of Mechanical Engineers.
  13. Maier, J. R., Mocko, G., Fadel, G. M., et al. (2009). Hierarchical affordance modeling. In DS 58-5: Proceedings of ICED 09, the 17th International Conference on Engineering Design, Vol. 5, Design Methods and Tools (pt. 1), Palo Alto, CA, USA, 24.-27.08. 2009.
  14. Montesano, L., Lopes, M., Bernardino, A., and Santos-Victor, J. (2008). Learning object affordances: from sensory-motor coordination to imitation. Robotics, IEEE Transactions on, 24(1):15-26.
  15. Pan, J., Chitta, S., and Manocha, D. (2012). Fcl: A general purpose library for collision and proximity queries. In Robotics and Automation (ICRA), 2012 IEEE International Conference on, pages 3859-3866.
  16. Ridge, B., Skocaj, D., and Leonardis, A. (2009). Unsupervised learning of basic object affordances from object properties. In Computer Vision Winter Workshop, pages 21-28.
  17. Şahin, E., Çakmak, M., Doğar, M. R., Uğur, E., and Üçoluk, G. (2007). To afford or not to afford: A new formalization of affordances toward affordance-based robot control. Adaptive Behavior, 15(4):447-472.
  18. Seib, V., Wojke, N., Knauf, M., and Paulus, D. (2015). Detecting fine-grained affordances with an anthropomorphic agent model. In Fleet, D., Pajdla, T., Schiele, B., and Tuytelaars, T., editors, Computer Vision - ECCV 2014 Workshops, volume II of LNCS, pages 413-419. Springer International Publishing Switzerland.
  19. Stark, M., Lies, P., Zillich, M., Wyatt, J., and Schiele, B. (2008). Functional object class detection based on learned affordance cues. In Computer Vision Systems, pages 435-444. Springer.
  20. Sun, J., Moore, J. L., Bobick, A., and Rehg, J. M. (2010). Learning visual object categories for robot affordance prediction. The International Journal of Robotics Research, 29(2-3):174-197.
  21. Wünstel, M. and Moratz, R. (2004). Automatic object recognition within an office environment. In CRV, volume 4, pages 104-109. Citeseer.
  22. Zadeh, L. A. (1965). Fuzzy sets. Information and Control, 8(3):338-353.


Paper Citation


in Harvard Style

Seib, V., Knauf, M. and Paulus, D. (2016). Detecting Fine-grained Sitting Affordances with Fuzzy Sets. In Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, (VISIGRAPP 2016), ISBN 978-989-758-175-5, pages 289-298. DOI: 10.5220/0005638802890298


in Bibtex Style

@conference{visapp16,
author={Viktor Seib and Malte Knauf and Dietrich Paulus},
title={Detecting Fine-grained Sitting Affordances with Fuzzy Sets},
booktitle={Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, (VISIGRAPP 2016)},
year={2016},
pages={289-298},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0005638802890298},
isbn={978-989-758-175-5},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, (VISIGRAPP 2016)
TI - Detecting Fine-grained Sitting Affordances with Fuzzy Sets
SN - 978-989-758-175-5
AU - Seib V.
AU - Knauf M.
AU - Paulus D.
PY - 2016
SP - 289
EP - 298
DO - 10.5220/0005638802890298