
Affordance Origami: Unfolding Agent Models for Hierarchical Affordance Prediction

  • Conference paper
Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2016)

Abstract

Object affordances have come into the focus of computer vision research and have been shown to improve the performance of object recognition approaches. In this work we address the problem of visual affordance detection in home environments using an explicitly defined agent model; in our case, the agent is modeled as an anthropomorphic body. We model affordances hierarchically to allow for discrimination on a fine-grained scale. The anthropomorphic agent model is unfolded into the environment and iteratively transformed according to the defined affordance hierarchy, and a scoring function evaluates the quality of each predicted affordance. This approach enables us to distinguish object functionality on a finer-grained scale and thus to capture the different purposes of similar objects more closely. For instance, traditional methods suggest that a stool, a chair, and an armchair all afford sitting. We additionally distinguish sitting without a backrest, with a backrest, and with armrests; this fine-grained affordance definition reflects individual types of sitting and better captures the distinct purposes of different chairs. We report evaluation results of our approach on publicly available data as well as on real sensor data.
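The hierarchical evaluation described above — refine an affordance into its more specific variants only when the coarser affordance is itself supported by the scene — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the names (`Affordance`, `evaluate`) and the toy coverage-based scoring functions are assumptions standing in for the paper's geometric tests of the transformed anthropomorphic agent model against 3D sensor data.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Scene = Dict[str, float]  # toy scene description (stand-in for 3D geometry)

@dataclass
class Affordance:
    name: str
    score_fn: Callable[[Scene], float]       # affordance quality in [0, 1]
    children: List["Affordance"] = field(default_factory=list)

def evaluate(node: Affordance, scene: Scene, threshold: float = 0.5) -> Dict[str, float]:
    """Depth-first hierarchical evaluation: descend into finer-grained
    child affordances only when the parent affordance scores well enough."""
    score = node.score_fn(scene)
    results = {node.name: score}
    if score >= threshold:
        for child in node.children:
            results.update(evaluate(child, scene, threshold))
    return results

# Placeholder scoring functions (the paper instead transforms the agent
# model into the scene and scores the resulting geometric fit).
seat = lambda s: 1.0 if 0.3 <= s.get("support_height_m", 0.0) <= 0.6 else 0.0
backrest = lambda s: s.get("backrest_coverage", 0.0)
armrests = lambda s: s.get("armrest_coverage", 0.0)

# Hierarchy mirroring the sitting example: no backrest -> backrest -> armrests.
hierarchy = Affordance("sit", seat, [
    Affordance("sit_with_backrest", backrest, [
        Affordance("sit_with_armrests", armrests),
    ]),
])

chair = {"support_height_m": 0.45, "backrest_coverage": 0.9, "armrest_coverage": 0.0}
print(evaluate(hierarchy, chair))
```

A stool (a supporting surface at sitting height but no backrest) would stop refinement at the root `sit` node, which is exactly the fine-grained distinction between stool, chair, and armchair that the paper targets.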


Notes

  1. Real-world dataset available at http://agas.uni-koblenz.de/data/datasets/furniture_affordances/uni-koblenz_kinect_v1.tar.gz.


Author information


Corresponding author

Correspondence to Viktor Seib.



Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Seib, V., Knauf, M., Paulus, D. (2017). Affordance Origami: Unfolding Agent Models for Hierarchical Affordance Prediction. In: Braz, J., et al. Computer Vision, Imaging and Computer Graphics Theory and Applications. VISIGRAPP 2016. Communications in Computer and Information Science, vol 693. Springer, Cham. https://doi.org/10.1007/978-3-319-64870-5_27


  • DOI: https://doi.org/10.1007/978-3-319-64870-5_27

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-64869-9

  • Online ISBN: 978-3-319-64870-5

  • eBook Packages: Computer Science, Computer Science (R0)
