Linearization in Motion Planning under Uncertainty

Chapter in: Algorithmic Foundations of Robotics XII

Part of the book series: Springer Proceedings in Advanced Robotics (SPAR, volume 13)

Abstract

Motion planning under uncertainty is essential to autonomous robots. Over the past decade, the scalability of such planners has advanced substantially. Despite these advances, the problem remains difficult for systems with non-linear dynamics. Most successful planning methods perform forward search that relies heavily on a large number of simulation runs, and each simulation run generally requires costlier integration for systems with non-linear dynamics. Therefore, for such problems, the entire planning process remains relatively slow. Not surprisingly, linearization-based methods for planning under uncertainty have been proposed. However, it is not clear how linearization affects the quality of the generated motion strategy, and more importantly, where such a simplification should and should not be used. This paper presents our preliminary work towards answering these questions. In particular, we propose a measure, called the Statistical-distance-based Non-linearity Measure (SNM), to identify where linearization can and where it should not be performed. The measure is based on the distance between the distributions that represent the original motion-sensing models and their linearized versions. We show that when the planning problem is framed as a Partially Observable Markov Decision Process (POMDP), the difference between the value of the optimal strategy generated when planning with the original model and when planning with the linearized model can be upper bounded by a function linear in SNM. We test the applicability of this measure in simulation via two avenues. First, we compare SNM with a negentropy-based Measure of Non-Gaussianity (MoNG), a measure that has recently been shown to be a suitable measure of non-linearity for stochastic systems [1]. We compare their performance in measuring the difference between a general POMDP solver [2] that computes motion strategies using the original model and a solver that uses the linearized model (adapted from [3]) on various scenarios. Our results indicate that SNM is more suitable for taking into account the effect that obstacles have on the effectiveness of linearization. In the second set of tests, we use a local estimate of SNM to develop a simple on-line planner that switches between the original and the linearized model. Simulation results on a car-like robot with second-order dynamics and on 4-DOF and 6-DOF manipulators with torque control indicate that our simple planner appropriately decides if and when linearization should be used.
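
To make the two ingredients described above more concrete, the sketch below gives a rough, sample-based illustration: a local non-linearity estimate obtained by comparing the next-state distributions of a non-linear motion model and its linearization, and a naive rule that switches between the two models based on that estimate. The unicycle-like model, the use of total variation distance over a shared-noise histogram, the Gaussian belief approximation, and the names `local_snm_estimate` and `choose_model` (including the 0.1 threshold) are illustrative assumptions, not the paper's exact definitions.

```python
# Minimal, illustrative sketch (assumptions noted above): a Monte-Carlo
# estimate of a statistical-distance-based non-linearity measure for a toy
# unicycle-like motion model, and a naive original-vs-linearized switching rule.
import numpy as np

DT = 0.1
NOISE_STD = np.array([0.02, 0.02, 0.01])   # assumed additive motion noise

def f(x, u):
    """Deterministic part of the (non-linear) motion model."""
    px, py, theta = x
    v, w = u
    return np.array([px + DT * v * np.cos(theta),
                     py + DT * v * np.sin(theta),
                     theta + DT * w])

def f_lin(x, u, x0, u0):
    """First-order Taylor expansion of f around the nominal point (x0, u0)."""
    eps = 1e-6
    A = np.stack([(f(x0 + eps * e, u0) - f(x0, u0)) / eps for e in np.eye(3)], axis=1)
    B = np.stack([(f(x0, u0 + eps * e) - f(x0, u0)) / eps for e in np.eye(2)], axis=1)
    return f(x0, u0) + A @ (x - x0) + B @ (u - u0)

def local_snm_estimate(x0, u0, belief_std, n=5000, bins=15, rng=None):
    """Estimate the total variation distance between the next-state
    distributions of the original and linearized models, with the current
    belief approximated as a Gaussian around x0. Sharing the state and noise
    samples between the two models keeps the Monte-Carlo variance low."""
    rng = np.random.default_rng(0) if rng is None else rng
    xs = x0 + rng.normal(scale=belief_std, size=(n, 3))
    ws = rng.normal(scale=NOISE_STD, size=(n, 3))
    nxt_orig = np.array([f(x, u0) for x in xs]) + ws
    nxt_lin = np.array([f_lin(x, u0, x0, u0) for x in xs]) + ws
    # Histogram both sample sets on a common grid and take half the L1 difference.
    lo = np.minimum(nxt_orig.min(0), nxt_lin.min(0))
    hi = np.maximum(nxt_orig.max(0), nxt_lin.max(0))
    edges = [np.linspace(lo[d], hi[d], bins + 1) for d in range(3)]
    h_orig, _ = np.histogramdd(nxt_orig, bins=edges)
    h_lin, _ = np.histogramdd(nxt_lin, bins=edges)
    return 0.5 * np.abs(h_orig - h_lin).sum() / n

def choose_model(x0, u0, belief_std, threshold=0.1):
    """Toy switching rule: use the cheaper linearized model only where the
    local non-linearity estimate is small (threshold is an assumption)."""
    return "linearized" if local_snm_estimate(x0, u0, belief_std) < threshold else "original"

if __name__ == "__main__":
    x0, u0 = np.array([0.0, 0.0, 0.1]), np.array([1.0, 0.2])
    # Tight heading belief: linearization approximates the model well here.
    print(choose_model(x0, u0, belief_std=np.array([0.05, 0.05, 0.05])))
    # Large heading uncertainty: the linearized model diverges noticeably.
    print(choose_model(x0, u0, belief_std=np.array([0.05, 0.05, 1.0])))
```

In this toy setting, growing heading uncertainty increases the estimated distance between the two predictive distributions, so the rule falls back to the original model exactly where linearization becomes unreliable; the paper's planner uses the same intuition, but with SNM as defined there rather than this ad-hoc estimator.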


References

  1. Duník, J., Straka, O., Šimandl, M.: Nonlinearity and non-Gaussianity measures for stochastic dynamic systems. In: Information Fusion (FUSION), IEEE (2013) 204–211

  2. Kurniawati, H., Yadav, V.: An online POMDP solver for uncertainty planning in dynamic environment. In: ISRR (2013)

  3. Sun, W., Patil, S., Alterovitz, R.: High-frequency replanning under uncertainty using parallel sampling-based motion planning. IEEE Transactions on Robotics 31(1) (2015) 104–116

  4. Canny, J., Reif, J.: New lower bound techniques for robot motion planning problems. In: 28th Annual Symposium on Foundations of Computer Science, IEEE (1987) 49–60

  5. Natarajan, B.: The complexity of fine motion planning. The International Journal of Robotics Research 7(2) (1988) 36–42

  6. Kaelbling, L., Littman, M., Cassandra, A.: Planning and acting in partially observable stochastic domains. Artificial Intelligence 101 (1998) 99–134

  7. Drake, A.W.: Observation of a Markov process through a noisy channel. PhD thesis, Massachusetts Institute of Technology (1962)

  8. Horowitz, M., Burdick, J.: Interactive non-prehensile manipulation for grasping via POMDPs. In: ICRA (2013)

  9. Temizer, S., Kochenderfer, M., Kaelbling, L., Lozano-Pérez, T., Kuchar, J.: Unmanned aircraft collision avoidance using partially observable Markov decision processes. Project Report ATC-356, MIT Lincoln Laboratory, Advanced Concepts Program, Lexington, Massachusetts, USA (September 2009)

  10. Silver, D., Veness, J.: Monte-Carlo planning in large POMDPs. In: NIPS (2010)

  11. Somani, A., Ye, N., Hsu, D., Lee, W.S.: DESPOT: Online POMDP planning with regularization. In: NIPS (2013) 1772–1780

  12. Seiler, K., Kurniawati, H., Singh, S.: An online and approximate solver for POMDPs with continuous action space. In: ICRA (2015)

  13. Agha-Mohammadi, A.A., Chakravorty, S., Amato, N.M.: FIRM: Sampling-based feedback motion planning under motion uncertainty and imperfect measurements. IJRR (2013)

  14. Berg, J., Abbeel, P., Goldberg, K.: LQG-MP: Optimized path planning for robots with motion uncertainty and imperfect state information. In: RSS (2010)

  15. Berg, J., Wilkie, D., Guy, S., Niethammer, M., Manocha, D.: LQG-Obstacles: Feedback control with collision avoidance for mobile robots with motion and sensing uncertainty. In: ICRA (2012)

  16. Prentice, S., Roy, N.: The belief roadmap: Efficient planning in linear POMDPs by factoring the covariance. In: Robotics Research. Springer (2010) 293–305

  17. Li, X.R.: Measure of nonlinearity for stochastic systems. In: 15th International Conference on Information Fusion (FUSION), IEEE (2012) 1073–1080

  18. Bates, D.M., Watts, D.G.: Relative curvature measures of nonlinearity. Journal of the Royal Statistical Society, Series B (Methodological) (1980) 1–25

  19. Beale, E.: Confidence regions in non-linear estimation. Journal of the Royal Statistical Society, Series B (Methodological) (1960) 41–88

  20. Emancipator, K., Kroll, M.H.: A quantitative measure of nonlinearity. Clinical Chemistry 39(5) (1993) 766–772

  21. Mastin, A., Jaillet, P.: Loss bounds for uncertain transition probabilities in Markov decision processes. In: CDC, IEEE (2012) 6708–6715

  22. Müller, A.: How does the value function of a Markov decision process depend on the transition probabilities? Mathematics of Operations Research 22(4) (1997) 872–885

  23. Arulampalam, M.S., Maskell, S., Gordon, N., Clapp, T.: A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Transactions on Signal Processing 50(2) (2002) 174–188

  24. Spong, M.W., Hutchinson, S., Vidyasagar, M.: Robot Modeling and Control. Volume 3. Wiley, New York (2006)

  25. Kurniawati, H., Patrikalakis, N.: Point-based policy transformation: Adapting policy to changing POMDP models. In: WAFR (2012)

  26. Gibbs, A.L., Su, F.E.: On choosing and bounding probability metrics. International Statistical Review 70(3) (2002) 419–435

  27. LaValle, S.M., Kuffner Jr, J.J.: Rapidly-exploring random trees: Progress and prospects. In: Algorithmic and Computational Robotics: New Directions (2000)

Author information

Corresponding author: Marcus Hoerger

Copyright information

© 2020 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Hoerger, M., Kurniawati, H., Bandyopadhyay, T., Elfes, A. (2020). Linearization in Motion Planning under Uncertainty. In: Goldberg, K., Abbeel, P., Bekris, K., Miller, L. (eds) Algorithmic Foundations of Robotics XII. Springer Proceedings in Advanced Robotics, vol 13. Springer, Cham. https://doi.org/10.1007/978-3-030-43089-4_18
