Learn-and-Optimize: A Parameter Tuning Framework for Evolutionary AI Planning

  • Mátyás Brendel
  • Marc Schoenauer
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7401)

Abstract

Learn-and-Optimize (LaO) is a generic surrogate-based method for parameter tuning that combines learning and optimization. In this paper, LaO is used to tune Divide-and-Evolve (DaE), an evolutionary algorithm for AI planning. The LaO framework learns the relation between features describing a given instance and the optimal parameters for that instance, making it possible to extrapolate this relation to unknown instances in the same domain. Moreover, the learned knowledge is used as a surrogate model to accelerate the search for optimal parameters. The proposed implementation of LaO uses an Artificial Neural Network to learn the mapping between features and optimal parameters, and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) for optimization. Results demonstrate that LaO improves the quality of DaE's results even with only a few iterations. The main limitation of the DaE case study is the small number of meaningful features available to describe the instances. Nevertheless, the learned model reaches almost the same performance on the test instances, which shows that it is capable of generalization.
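The LaO pipeline described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: a toy scalar "planner cost" stands in for running DaE on an instance, a simple (1+1)-ES stands in for CMA-ES, and a linear least-squares fit stands in for the Artificial Neural Network. All names (`planner_cost`, `optimize`, the assumed feature-to-optimum relation) are hypothetical; the sketch only shows the two phases: learn a features-to-parameters model from tuned training instances, then use its prediction to seed optimization on a new instance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumption: each instance is described by one feature x in
# [0, 1], and the (unknown) optimal parameter for it is 2*x + 0.5.
# planner_cost plays the role of evaluating DaE with parameter theta.
def planner_cost(theta, x):
    return (theta - (2.0 * x + 0.5)) ** 2

def optimize(x, theta0, iters=60, sigma=0.3):
    """Minimal (1+1)-ES, a stand-in for CMA-ES in this sketch."""
    best, best_cost = theta0, planner_cost(theta0, x)
    for _ in range(iters):
        cand = best + sigma * rng.standard_normal()
        cost = planner_cost(cand, x)
        if cost < best_cost:
            best, best_cost = cand, cost
            sigma *= 1.1  # crude step-size adaptation on success
        else:
            sigma *= 0.9  # shrink on failure
    return best

# Learning phase: tune parameters on training instances, then fit a
# feature -> parameter model (linear here, an ANN in the paper).
train_x = np.linspace(0.0, 1.0, 8)
tuned = np.array([optimize(x, theta0=0.0) for x in train_x])
A = np.vstack([train_x, np.ones_like(train_x)]).T
w, b = np.linalg.lstsq(A, tuned, rcond=None)[0]

# Surrogate use: the model's prediction seeds optimization on an
# unseen instance, so only a few extra iterations are needed.
x_new = 0.75
theta_seed = w * x_new + b
theta_final = optimize(x_new, theta_seed, iters=20)
```

On the training instances the fitted model recovers the assumed linear relation, so `theta_seed` already lands near the optimum of the new instance and the second `optimize` call only refines it, mirroring how LaO's learned model accelerates the search.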

Keywords

  • Mean Square Error
  • Covariance Matrix Adaptation Evolution Strategy
  • General Optimization Problem
  • Unknown Instance
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Mátyás Brendel (1)
  • Marc Schoenauer (1)
  1. Projet TAO, INRIA Saclay & LRI, France