RASH: A Self-adaptive Random Search Method

  • Mauro Brunato
  • Roberto Battiti
Part of the Studies in Computational Intelligence book series (SCI, volume 136)

Summary

A variation of an adaptive random search algorithm for the optimization of functions of continuous variables is presented. The scheme makes no assumptions about the function to be optimized beyond the availability of evaluations at selected test points. The main design criterion of the Reactive Affine Shaker (RASH) scheme is the adaptation of a search region through an affine transformation, which takes into account the local knowledge derived from trial points generated with uniform probability in the search region. The aim is to scout for local minima in the attraction basin where the initial point falls, adapting the step size and direction to heuristically maintain the largest possible movement per function evaluation. The design is complemented by an analysis of some strategic choices, such as the double-shot strategy and the initialization, and by experimental results showing that, in spite of its simplicity, RASH is a promising building block for the development of more complex optimization algorithms.
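The box adaptation and double-shot strategy summarized above can be read as a simple sampling loop. The Python fragment below is an illustrative sketch of that reading, not the authors' implementation: the full RASH scheme adapts the search region with a general affine transformation, whereas this sketch only rescales the box coordinate-wise; the function name rash_sketch and the parameters expand, shrink and box_half_widths are assumptions introduced here for illustration.

```python
import numpy as np

def rash_sketch(f, x0, box_half_widths, expand=2.0, shrink=0.5, max_evals=10_000):
    """Sketch of a Reactive-Affine-Shaker-style local search (illustrative only).

    Ideas taken from the abstract:
      * trial displacements drawn uniformly in a box around the current point,
      * a "double-shot" second trial at the mirrored point when the first fails,
      * the box grows along a successful displacement and shrinks after a
        double failure (a diagonal stand-in for the full affine adaptation).
    """
    x = np.asarray(x0, dtype=float)
    b = np.asarray(box_half_widths, dtype=float)  # per-coordinate box half-widths
    fx = f(x)
    evals = 1

    while evals < max_evals:
        # Displacement drawn uniformly in the current box [-b, +b].
        delta = np.random.uniform(-b, b)

        for trial in (x + delta, x - delta):  # second entry is the double-shot mirror
            f_trial = f(trial)
            evals += 1
            if f_trial < fx:                  # improvement: accept and enlarge the box
                x, fx = trial, f_trial
                b *= 1.0 + (expand - 1.0) * np.abs(delta) / np.maximum(b, 1e-12)
                break
        else:                                 # both shots failed: shrink the box
            b *= shrink

    return x, fx

# Example: scout for a local minimum of a simple quadratic from a fixed start.
x_best, f_best = rash_sketch(lambda v: np.sum(v**2),
                             x0=np.ones(5),
                             box_half_widths=0.5 * np.ones(5),
                             max_evals=2000)
```

The mirrored second trial reuses the sampled displacement at no extra sampling cost, which matches the stated goal of keeping the movement per function evaluation as large as possible; the specific growth and shrink factors here are arbitrary placeholders.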

Keywords

Stochastic search · Adaptive random search · Mathematical programming

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Mauro Brunato (1)
  • Roberto Battiti (1)
  1. Dipartimento di Ingegneria e Scienza dell’Informazione, Università di Trento, Trento, Italy