Optimisation and Generalisation: Footprints in Instance Space

  • Conference paper
Parallel Problem Solving from Nature, PPSN XI (PPSN 2010)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 6238)

Abstract

The chief purpose of research in optimisation is to understand how to design (or choose) the most suitable algorithm for a given distribution of problem instances. Ideally, when an algorithm is developed for specific problems, the boundaries of its performance should be clear, and we expect reasonably good performance within, and at least modestly outside, its ‘seen’ instance distribution. However, we show that these ideals are highly over-optimistic, and suggest that standard algorithm-choice scenarios will rarely lead to the best algorithm for individual instances in the space of interest. We do this by examining algorithm ‘footprints’, which indicate how performance generalises in instance space. We find much evidence that typical ways of choosing the ‘best’ algorithm, via tests over a distribution of instances, are seriously flawed. Further, understanding how footprints vary between algorithms and across instance space dimensions may lead to a future platform for wiser algorithm-choice decisions.
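To make the abstract's central claim concrete, here is a minimal sketch (not from the paper; the performance surfaces, sampling choices, and all numbers are invented for illustration) of how an algorithm that wins on average over a 'seen' distribution of instances can still be the wrong choice on a large fraction of individual instances elsewhere in instance space:

```python
# Illustrative sketch only (not the paper's method): synthetic performance
# surfaces stand in for two algorithms' "footprints" over a 2D instance space.
import numpy as np

rng = np.random.default_rng(0)

def perf_A(x, y):
    # Hypothetical algorithm A: strong near the centre of instance space.
    return np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.1)

def perf_B(x, y):
    # Hypothetical algorithm B: strong towards one corner of the space.
    return np.exp(-((x - 0.8) ** 2 + (y - 0.2) ** 2) / 0.3)

# 'Seen' distribution: test instances sampled near the centre of the space.
seen_x = rng.normal(0.5, 0.1, 200)
seen_y = rng.normal(0.5, 0.1, 200)
mean_A = perf_A(seen_x, seen_y).mean()
mean_B = perf_B(seen_x, seen_y).mean()
winner = "A" if mean_A > mean_B else "B"
print(f"Winner over the seen distribution: {winner}")

# Instances of actual interest: a grid over the whole instance space.
gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
a, b = perf_A(gx, gy), perf_B(gx, gy)
chosen = a if winner == "A" else b
per_instance_best = np.maximum(a, b)
# Fraction of instances where the distribution-level winner is suboptimal.
frac = (chosen < per_instance_best).mean()
print(f"Distribution-level choice is suboptimal on {frac:.0%} of instances")
```

Under these invented surfaces, the algorithm selected by averaging over the seen distribution loses to the per-instance best on a substantial share of the wider space, which is the flaw in distribution-level algorithm choice that the footprint analysis is intended to expose.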





Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Corne, D.W., Reynolds, A.P. (2010). Optimisation and Generalisation: Footprints in Instance Space. In: Schaefer, R., Cotta, C., Kołodziej, J., Rudolph, G. (eds) Parallel Problem Solving from Nature, PPSN XI. PPSN 2010. Lecture Notes in Computer Science, vol 6238. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-15844-5_3


  • DOI: https://doi.org/10.1007/978-3-642-15844-5_3

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-15843-8

  • Online ISBN: 978-3-642-15844-5

  • eBook Packages: Computer Science (R0)
