Oscillation, heuristic ordering and pruning in neighborhood search

  • Jean-Marc Labat
  • Laurent Mynard
Session 7b
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1330)


This paper describes a new algorithm for combinatorial optimization problems and presents the results of our experiments. HOLSA (Heuristic Oscillating Local Search Algorithm) is a neighborhood search algorithm that uses an evaluation function f inspired by A*, a best-first strategy, state pruning as in Branch & Bound, and operators performing variable steps. Together, these characteristics lead to an oscillation principle whereby the search alternates between improving the economic function and satisfying the constraints. We specify how to compute the start state, the evaluation function, and the variable steps in order to implement the general outline of HOLSA. Its performance is tested on the multidimensional knapsack problem, using both randomly generated problems and classical test problems from the literature. The experiments show that HOLSA is very efficient, in terms of both solution quality and search speed, at least on the class of problems studied in this paper. Moreover, on large problems with a limited number of generated nodes, we show that it outperforms Branch & Bound, simulated annealing, tabu search, and GRASP, in both solution quality and computational time.
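The oscillation principle described in the abstract can be illustrated with a minimal sketch on the 0-1 multidimensional knapsack: the search alternates between a constructive phase that adds items (improving the objective, possibly violating the capacity constraints) and a destructive phase that drops items (restoring feasibility). This is only an illustrative assumption-laden sketch, not the authors' HOLSA; the A*-style evaluation function, best-first ordering, pruning, and variable-step operators of the paper are all omitted, and the add/drop choices here are deliberately simple.

```python
import random

def oscillating_search(values, weights, capacities, iters=1000, seed=0):
    """Illustrative oscillating local search for the 0-1 multidimensional
    knapsack (NOT the paper's HOLSA): alternately add items until a
    constraint is violated, then drop items until feasibility returns,
    recording the best feasible solution seen."""
    rng = random.Random(seed)
    n, m = len(values), len(capacities)
    x = [0] * n                      # start from the empty, feasible state
    load = [0] * m                   # current consumption per dimension
    best_val, best_x = 0, x[:]

    def feasible():
        return all(load[j] <= capacities[j] for j in range(m))

    for _ in range(iters):
        if feasible():
            # constructive phase: add a random unselected item
            outs = [i for i in range(n) if x[i] == 0]
            if not outs:
                break                # all items packed and feasible
            i = rng.choice(outs)
            x[i] = 1
            for j in range(m):
                load[j] += weights[i][j]
        else:
            # destructive phase: drop the selected item with the worst
            # value-to-total-weight ratio (a naive surrogate heuristic)
            ins = [i for i in range(n) if x[i] == 1]
            i = min(ins, key=lambda k: values[k] / (1 + sum(weights[k])))
            x[i] = 0
            for j in range(m):
                load[j] -= weights[i][j]
        if feasible():
            val = sum(v for v, xi in zip(values, x) if xi)
            if val > best_val:
                best_val, best_x = val, x[:]
    return best_val, best_x
```

On a tiny instance such as `oscillating_search([10, 6, 8], [[3, 2], [2, 3], [2, 2]], [5, 5])`, the search repeatedly overshoots into infeasibility and retreats, which is exactly the alternation between objective improvement and constraint satisfaction that the paper's oscillation principle formalizes.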


Keywords: Simulated Annealing · Tabu Search · Feasible State · Stop Condition · Outer Operator
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


References

  1. Drexl, A.: A Simulated Annealing Approach to the Multiconstraint Zero-One Knapsack Problem. Computing 40 (1987) 1–8.
  2. Feo, T. A., Resende, M. G. C.: Greedy Randomized Adaptive Search Procedures. Journal of Global Optimization 6 (1995) 109–133.
  3. Glover, F.: Tabu Search - Part I. ORSA Journal on Computing 1(3) (1989) 190–206.
  4. Glover, F., Kochenberger, G. A.: Critical Event Tabu Search for Multidimensional Knapsack Problems. In: Osman, I. H., Kelly, J. P. (eds.): Meta-Heuristics: Theory and Applications. Kluwer Academic Publishers (1996) 407–427.
  5. Kirkpatrick, S., Gelatt, C. D., Vecchi, M. P.: Optimization by Simulated Annealing. Science 220 (1983) 671–680.
  6. Korf, R.: Depth-First Iterative Deepening: An Optimal Admissible Tree Search. Artificial Intelligence 27 (1985).
  7. Nau, D., Kumar, V., Kanal, L.: General Branch and Bound and its Relation to A* and AO*. Artificial Intelligence 23 (1984) 29–59.
  8. Nilsson, N.: Problem-Solving Methods in Artificial Intelligence. McGraw-Hill (1971).
  9. Papadimitriou, C. H., Steiglitz, K.: Combinatorial Optimization: Algorithms and Complexity. Prentice-Hall (1982).
  10. Pearl, J.: Heuristics: Intelligent Search Strategies for Computer Problem Solving. Addison-Wesley (1984).
  11. Pomerol, J.-Ch., Labat, J.-M., Futtersack, M.: Les algorithmes Branch and Bound et A* sont-ils identiques? In: Premier congrès biennal de l'AFCET 1 (1993) 163–173.
  12. Reeves, C. R.: Modern Heuristic Techniques for Combinatorial Problems. Blackwell Scientific Publications (1993).

Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Jean-Marc Labat
    • 1
  • Laurent Mynard
    • 1
  1. LIP6, UPMC, Paris Cedex 05, France