
Autonomous Robots, Volume 43, Issue 1, pp 37–62

AMPLE: an anytime planning and execution framework for dynamic and uncertain problems in robotics

  • Caroline Ponzoni Carvalho Chanel
  • Alexandre Albore
  • Jorrit T’Hooft
  • Charles Lesire
  • Florent Teichteil-Königsbuch

Abstract

Acting in robotics is driven by both reactive and deliberative reasoning, which compete through the interplay of execution and planning processes. Properly balancing reactivity and deliberation remains an open question for the harmonious execution of deliberative plans in complex robotic applications. We propose a flexible algorithmic framework that allows continuous real-time planning of complex tasks in parallel with their execution. Our framework, named AMPLE, is oriented towards modular robotic architectures in the sense that it turns planning algorithms into services that must be generic, reactive, and valuable. Services are optimized actions delivered at precise time points in response to requests from other modules, each request specifying the states and dates at which actions are needed. To this end, our framework is divided into two concurrent processes: a planning thread, which receives planning requests and delegates action selection to embedded planners according to the queue of internal requests, and an execution thread, which orchestrates these planning requests as well as action execution and state monitoring. We show how the behavior of the execution thread can be parameterized to achieve various strategies, which can differ, for instance, in how internal planning requests are distributed over possible future execution states in anticipation of the uncertain evolution of the system, or over different underlying planners to take several levels into account. We demonstrate the flexibility and relevance of our framework on various robotic benchmarks and real experiments involving complex planning problems of different natures that could not be properly tackled by existing dedicated planning approaches relying on the standard plan-then-execute loop.
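
The abstract describes AMPLE as two concurrent processes: an execution thread that posts timestamped planning requests while acting and monitoring the state, and a planning thread that serves those requests with actions. The sketch below is a minimal illustration of that producer/consumer pattern, not the authors' implementation (which embeds real planners and anticipates uncertain future states); the names PlanningRequest, plan_action, and the 0.2 s execution period are assumptions made for the example.

```python
# Minimal sketch of the two-thread planning/execution pattern described in the
# abstract. All identifiers are illustrative, not AMPLE's actual API.
import queue
import threading
import time

class PlanningRequest:
    def __init__(self, state, deadline):
        self.state = state          # state for which an action is needed
        self.deadline = deadline    # wall-clock time by which the action is needed

def plan_action(state):
    """Stand-in for an embedded planner; returns an action for the given state."""
    time.sleep(0.05)                # pretend to deliberate
    return f"action_for_{state}"

def planning_thread(requests, policy, stop):
    """Serve queued planning requests, caching the selected action per state."""
    while not stop.is_set():
        try:
            req = requests.get(timeout=0.1)
        except queue.Empty:
            continue
        if req.deadline - time.time() > 0:   # only plan if the deadline allows it
            policy[req.state] = plan_action(req.state)

def execution_thread(requests, policy, stop, horizon=5):
    """Request actions ahead of time, then execute whatever is available on time."""
    state = 0
    for _ in range(horizon):
        requests.put(PlanningRequest(state, deadline=time.time() + 0.2))
        time.sleep(0.2)             # fixed execution period: act at a given rate
        action = policy.get(state, "default_safe_action")  # fallback if planner was late
        print(f"state {state}: executing {action}")
        state += 1                  # monitored next state (deterministic in this sketch)
    stop.set()

if __name__ == "__main__":
    requests, policy, stop = queue.Queue(), {}, threading.Event()
    threading.Thread(target=planning_thread, args=(requests, policy, stop)).start()
    execution_thread(requests, policy, stop)
```

In the paper's framework the distribution of such requests over anticipated future states, and over different underlying planners, is what the execution strategies parameterize; the fallback action above merely hints at how execution can stay reactive when deliberation has not yet converged.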

Keywords

Automated planning · Planning and execution · Anytime framework · Autonomous robots

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. Université de Toulouse, ISAE-SUPAERO, Toulouse, France
  2. ONERA, Toulouse, France
  3. IRT Saint-Exupéry, Toulouse, France
  4. Airbus Central Research and Technology, Toulouse, France
