
Globalization and Parallelization of Nelder-Mead and Powell Optimization Methods

  • A. Koscianski
  • M.A. Luersen
Conference paper

Abstract

Optimization problems in engineering very often involve nonlinear functions with multiple minima or discontinuities, or require simulating a system in order to determine its parameters. Global search methods can compute a set of points and provide alternative design answers to a problem, but they are computationally expensive. Parallelization reduces this computational burden, but it raises the need to control how the search space is sampled. This paper presents a parallel implementation of two derivative-free optimization methods (Nelder-Mead and Powell), combined with two restart strategies that globalize the search: the first is based on a probability density function, while the second uses a fast algorithm to sample the space uniformly. The implementation is suited to a faculty computer network, avoiding special hardware requirements, complex installation, or coding details.
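To make the probability-density-based restart strategy more concrete, a minimal sequential sketch in Python is shown below: previous start points feed a Parzen (Gaussian kernel) density estimate, and each restart begins at the candidate with the lowest estimated density, i.e. in the least-explored region. The helper names (parzen_density, restart_point), the bandwidth, the candidate count and the use of SciPy's Nelder-Mead are illustrative assumptions; the paper's actual restart rule and its parallel distribution over a network may differ.

```python
import numpy as np
from scipy.optimize import minimize

def parzen_density(x, samples, h=0.3):
    """Gaussian (Parzen) kernel density estimate at x, built from past start points."""
    if len(samples) == 0:
        return 0.0
    d = np.array(samples) - x                       # offsets to previous start points
    return np.mean(np.exp(-np.sum(d * d, axis=1) / (2.0 * h * h)))

def restart_point(samples, bounds, rng, n_candidates=64):
    """Draw uniform candidates and keep the one least covered by previous restarts."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    cand = rng.uniform(lo, hi, size=(n_candidates, len(lo)))
    dens = [parzen_density(c, samples) for c in cand]
    return cand[int(np.argmin(dens))]

def globalized_nelder_mead(f, bounds, n_restarts=20, seed=0):
    """Repeated local Nelder-Mead searches started from density-guided points."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    starts, best = [], None
    for _ in range(n_restarts):
        x0 = restart_point(starts, bounds, rng)
        starts.append(x0)
        res = minimize(f, x0, method="Nelder-Mead")
        if best is None or res.fun < best.fun:
            best = res
    return best

if __name__ == "__main__":
    # Rastrigin function: many local minima, global minimum at the origin.
    rastrigin = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
    result = globalized_nelder_mead(rastrigin, [(-5.12, 5.12)] * 2)
    print(result.x, result.fun)
```

A Powell counterpart of the same loop would only change the local-search call (SciPy also provides method="Powell"); in a parallel setting the independent restarts are the natural units of work to distribute across machines.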



Copyright information

© Springer Science+Business Media B.V. 2008

Authors and Affiliations

  • A. Koscianski (1)
  • M.A. Luersen (2)

  1. UTFPR, Av. Monteiro Lobato s/n, Ponta Grossa, Brazil
  2. UTFPR, Av. Sete de Setembro, Brazil
