Abstract
Motion planning under uncertainty is essential to autonomous robots. Over the past decade, the scalability of such planners has advanced substantially. Despite these advances, the problem remains difficult for systems with non-linear dynamics: most successful methods perform forward search that relies heavily on a large number of simulation runs, and each run generally requires costly numerical integration when the dynamics are non-linear, so the entire planning process remains relatively slow. Not surprisingly, linearization-based methods for planning under uncertainty have been proposed. However, it is not clear how linearization affects the quality of the generated motion strategy, and, more importantly, where such a simplification should and should not be used. This paper presents our preliminary work towards answering these questions. In particular, we propose a measure, called the Statistical-distance-based Non-linearity Measure (SNM), to identify where linearization can and where it should not be performed. The measure is based on the distance between the distributions that represent the original motion and sensing models and their linearized versions. We show that when the planning problem is framed as a Partially Observable Markov Decision Process (POMDP), the difference between the value of the optimal strategy computed using the original model and that computed using the linearized model can be upper-bounded by a function linear in SNM. We test the applicability of this measure in simulation via two sets of tests. First, we compare SNM with a negentropy-based Measure of Non-Gaussianity (MoNG), a measure that has recently been shown to be suitable for quantifying the non-linearity of stochastic systems [1]. We compare how well each measure predicts the difference in performance between a general POMDP solver [2] that computes motion strategies using the original model and a solver that uses the linearized model (adapted from [3]) on various scenarios. Our results indicate that SNM is more suitable for capturing the effect that obstacles have on the effectiveness of linearization. In the second set of tests, we use a local estimate of SNM to develop a simple on-line planner that switches between the original and the linearized model. Simulation results on a car-like robot with second-order dynamics and on 4-DOF and 6-DOF manipulators with torque control indicate that our simple planner appropriately decides if and when linearization should be used.
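To make the idea of a locally estimated, statistical-distance-based non-linearity measure concrete, the following Python sketch propagates samples of a Gaussian belief through a non-linear transition model and through its first-order linearization at the belief mean, then compares the two next-state distributions with an empirical total variation distance. The unicycle-like dynamics, the total-variation estimator over a histogram, and all names and constants are illustrative assumptions for exposition, not the paper's exact definitions.

    import numpy as np

    DT = 0.1          # integration step (illustrative)
    NOISE_STD = 0.05  # additive Gaussian process noise (illustrative)

    def true_step(x, u, rng):
        # Non-linear unicycle-like dynamics: state (px, py, heading), control (v, w).
        dx = DT * np.array([u[0] * np.cos(x[2]), u[0] * np.sin(x[2]), u[1]])
        return x + dx + rng.normal(0.0, NOISE_STD, size=3)

    def linearized_step(x, u, x0, u0, rng):
        # First-order Taylor expansion of the dynamics around (x0, u0).
        f0 = x0 + DT * np.array([u0[0] * np.cos(x0[2]),
                                 u0[0] * np.sin(x0[2]),
                                 u0[1]])
        A = np.eye(3)
        A[0, 2] = -DT * u0[0] * np.sin(x0[2])
        A[1, 2] = DT * u0[0] * np.cos(x0[2])
        B = DT * np.array([[np.cos(x0[2]), 0.0],
                           [np.sin(x0[2]), 0.0],
                           [0.0, 1.0]])
        return f0 + A @ (x - x0) + B @ (u - u0) + rng.normal(0.0, NOISE_STD, size=3)

    def local_snm_estimate(mean, cov, u, n=5000, bins=15, seed=0):
        # Sample the Gaussian belief, push each sample through both models
        # (linearizing at the belief mean and nominal control), and return the
        # empirical total variation distance between the two sample clouds.
        rng = np.random.default_rng(seed)
        xs = rng.multivariate_normal(mean, cov, size=n)
        s_true = np.array([true_step(x, u, rng) for x in xs])
        s_lin = np.array([linearized_step(x, u, mean, u, rng) for x in xs])
        lo = np.minimum(s_true.min(0), s_lin.min(0))
        hi = np.maximum(s_true.max(0), s_lin.max(0))
        edges = [np.linspace(lo[i], hi[i], bins + 1) for i in range(3)]
        h_true, _ = np.histogramdd(s_true, bins=edges)
        h_lin, _ = np.histogramdd(s_lin, bins=edges)
        return 0.5 * np.abs(h_true / n - h_lin / n).sum()

    # With a wide heading uncertainty the true and linearized predictions
    # diverge and the estimate grows; with a tight belief it shrinks towards
    # sampling noise.
    belief_cov = np.diag([0.1, 0.1, 0.3])
    print(local_snm_estimate(np.zeros(3), belief_cov, np.array([1.0, 0.5])))

A switching planner in the spirit of the second set of tests could fall back to the general (non-linearized) solver whenever this local estimate exceeds a threshold and use the faster linearized solver otherwise; the threshold itself would have to be tuned per domain.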
References
Duník, J., Straka, O., Šimandl, M.: Nonlinearity and non-Gaussianity measures for stochastic dynamic systems. In: Information Fusion (FUSION), IEEE (2013) 204–211
Kurniawati, H., Yadav, V.: An online POMDP solver for uncertainty planning in dynamic environment. In: ISRR. (2013)
Sun, W., Patil, S., Alterovitz, R.: High-frequency replanning under uncertainty using parallel sampling-based motion planning. IEEE Transactions on Robotics 31(1) (2015) 104–116
Canny, J., Reif, J.: New lower bound techniques for robot motion planning problems. In: 28th Annual Symposium on Foundations of Computer Science, IEEE (1987) 49–60
Natarajan, B.: The complexity of fine motion planning. The International Journal of Robotics Research 7(2) (1988) 36–42
Kaelbling, L., Littman, M., Cassandra, A.: Planning and acting in partially observable stochastic domains. AI 101 (1998) 99–134
Drake, A.W.: Observation of a Markov process through a noisy channel. PhD thesis, Massachusetts Institute of Technology (1962)
Horowitz, M., Burdick, J.: Interactive Non-Prehensile Manipulation for Grasping Via POMDPs. In: ICRA. (2013)
Temizer, S., Kochenderfer, M., Kaelbling, L., Lozano-Pérez, T., Kuchar, J.: Unmanned aircraft collision avoidance using partially observable Markov decision processes. Project Report ATC-356, MIT Lincoln Laboratory, Advanced Concepts Program, Lexington, Massachusetts, USA (September 2009)
Silver, D., Veness, J.: Monte-Carlo Planning in Large POMDPs. In: NIPS. (2010)
Somani, A., Ye, N., Hsu, D., Lee, W.S.: DESPOT: Online POMDP planning with regularization. In: NIPS. (2013) 1772–1780
Seiler, K., Kurniawati, H., Singh, S.: An online and approximate solver for POMDPs with continuous action space. In: ICRA. (2015)
Agha-Mohammadi, A.A., Chakravorty, S., Amato, N.M.: FIRM: Sampling-based feedback motion planning under motion uncertainty and imperfect measurements. IJRR (2013)
van den Berg, J., Abbeel, P., Goldberg, K.: LQG-MP: Optimized Path Planning for Robots with Motion Uncertainty and Imperfect State Information. In: RSS. (2010)
van den Berg, J., Wilkie, D., Guy, S., Niethammer, M., Manocha, D.: LQG-Obstacles: Feedback Control with Collision Avoidance for Mobile Robots with Motion and Sensing Uncertainty. In: ICRA. (2012)
Prentice, S., Roy, N.: The belief roadmap: Efficient planning in linear POMDPs by factoring the covariance. In: Robotics Research. Springer (2010) 293–305
Li, X.R.: Measure of nonlinearity for stochastic systems. In: 15th International Conference on Information Fusion (FUSION), IEEE (2012) 1073–1080
Bates, D.M., Watts, D.G.: Relative curvature measures of nonlinearity. Journal of the Royal Statistical Society. Series B (Methodological) (1980) 1–25
Beale, E.: Confidence regions in non-linear estimation. Journal of the Royal Statistical Society. Series B (Methodological) (1960) 41–88
Emancipator, K., Kroll, M.H.: A quantitative measure of nonlinearity. Clinical Chemistry 39(5) (1993) 766–772
Mastin, A., Jaillet, P.: Loss bounds for uncertain transition probabilities in Markov decision processes. In: CDC, IEEE (2012) 6708–6715
Müller, A.: How does the value function of a Markov decision process depend on the transition probabilities? Mathematics of Operations Research 22(4) (1997) 872–885
Arulampalam, M.S., Maskell, S., Gordon, N., Clapp, T.: A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Transactions on Signal Processing 50(2) (2002) 174–188
Spong, M.W., Hutchinson, S., Vidyasagar, M.: Robot Modeling and Control. Volume 3. Wiley New York (2006)
Kurniawati, H., Patrikalakis, N.: Point-Based Policy Transformation: Adapting Policy to Changing POMDP Models. In: WAFR. (2012)
Gibbs, A.L., Su, F.E.: On choosing and bounding probability metrics. International Statistical Review 70(3) (2002) 419–435
LaValle, S.M., Kuffner Jr, J.J.: Rapidly-exploring random trees: Progress and prospects. In: Algorithmic and Computational Robotics: New Directions (2000)