
Hierarchical Decision Theoretic Planning for Navigation Among Movable Obstacles

  • Conference paper
Algorithmic Foundations of Robotics X

Part of the book series: Springer Tracts in Advanced Robotics (STAR, volume 86)

Abstract

In this paper we present the first decision theoretic planner for the problem of Navigation Among Movable Obstacles (NAMO). While efficient planners for NAMO exist, they are challenging to implement in practice due to the inherent uncertainty in both perception and control of real robots. Generalizing existing NAMO planners to nondeterministic domains is particularly difficult due to the sensitivity of MDP methods to task dimensionality. Our work addresses this challenge by combining ideas from Hierarchical Reinforcement Learning with Monte Carlo Tree Search, yielding an algorithm that can be used for fast online planning in uncertain environments. We evaluate our algorithm in simulation, and provide a theoretical argument for our results, which suggest linear time complexity in the number of obstacles for typical environments.
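The Monte Carlo Tree Search ingredient the abstract refers to can be illustrated with a minimal UCT-style (upper-confidence) search. The sketch below is a generic MCTS example on a toy one-dimensional corridor task, not the paper's hierarchical NAMO planner; the corridor environment, the function `uct_search`, and all parameter values are our own illustrative assumptions.

```python
import math
import random

# Toy 1-D corridor: the agent starts at some cell and must reach GOAL.
# Reward 1 is given on entering the goal cell; transitions are deterministic.
GOAL, ACTIONS, GAMMA = 5, (-1, +1), 0.95

def step(state, action):
    """Deterministic toy transition, clamped to [0, GOAL]."""
    nxt = max(0, min(GOAL, state + action))
    return nxt, (1.0 if nxt == GOAL else 0.0)

class Node:
    def __init__(self, state):
        self.state = state
        self.visits = 0
        self.children = {}                  # action -> child Node
        self.q = {a: 0.0 for a in ACTIONS}  # summed returns per action
        self.n = {a: 0 for a in ACTIONS}    # visit count per action

def rollout(state, depth, rng):
    """Uniform-random playout used to estimate a leaf's value."""
    ret, disc = 0.0, 1.0
    for _ in range(depth):
        if state == GOAL:
            break
        state, r = step(state, rng.choice(ACTIONS))
        ret += disc * r
        disc *= GAMMA
    return ret

def uct_search(root_state, iters=3000, depth=20, c=1.4, seed=0):
    rng = random.Random(seed)
    root = Node(root_state)
    for _ in range(iters):
        node, path = root, []
        # Selection: follow UCB1 while the node is fully expanded.
        while node.state != GOAL and all(a in node.children for a in ACTIONS):
            a = max(ACTIONS, key=lambda a: node.q[a] / node.n[a]
                    + c * math.sqrt(math.log(node.visits) / node.n[a]))
            _, r = step(node.state, a)
            path.append((node, a, r))
            node = node.children[a]
        # Expansion: try one untried action, then estimate with a rollout.
        ret = 0.0
        if node.state != GOAL:
            a = rng.choice([a for a in ACTIONS if a not in node.children])
            nxt, r = step(node.state, a)
            node.children[a] = Node(nxt)
            path.append((node, a, r))
            ret = rollout(nxt, depth, rng)
        # Backpropagation of the discounted return along the path.
        for parent, a, r in reversed(path):
            ret = r + GAMMA * ret
            parent.visits += 1
            parent.n[a] += 1
            parent.q[a] += ret
    # Act greedily on root visit counts (the standard UCT recommendation).
    return max(root.n, key=root.n.get)
```

From any cell left of the goal, the search recommends moving right, e.g. `uct_search(0)` returns `+1`. Choosing the final action by visit count rather than mean value is the usual UCT convention, since counts are more robust estimates after many simulations.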




Author information

Corresponding author: Martin Levihn.


Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Levihn, M., Scholz, J., Stilman, M. (2013). Hierarchical Decision Theoretic Planning for Navigation Among Movable Obstacles. In: Frazzoli, E., Lozano-Perez, T., Roy, N., Rus, D. (eds) Algorithmic Foundations of Robotics X. Springer Tracts in Advanced Robotics, vol 86. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-36279-8_2


  • DOI: https://doi.org/10.1007/978-3-642-36279-8_2

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-36278-1

  • Online ISBN: 978-3-642-36279-8

  • eBook Packages: Engineering
