Improving the Performance of the Hopfield Network by Using a Relaxation Rate

  • Xinchuan Zeng
  • Tony R. Martinez

Abstract

In the Hopfield network, a solution to an optimization problem is obtained after the network relaxes to an equilibrium state. This paper shows that the performance of the Hopfield network can be improved by using a relaxation rate to control the relaxation process. Analysis suggests that the relaxation process has an important impact on solution quality. A relaxation rate is therefore introduced to control the relaxation process in order to achieve better-quality solutions. Two types of relaxation rate (constant and dynamic) are proposed and evaluated through simulations on 200 randomly generated city distributions of the 10-city traveling salesman problem. The results show that using a relaxation rate decreases the error rate by 9.87% and increases the percentage of valid tours by 14.0% compared to not using a relaxation rate. Using a dynamic relaxation rate further decreases the error rate by 4.2% and increases the percentage of valid tours by 0.4% compared to using a constant relaxation rate.
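
The abstract states only that the relaxation rate controls the relaxation process; the full paper gives the exact formulation. The sketch below is a minimal illustration of the idea, assuming standard continuous Hopfield-Tank dynamics and reading the relaxation rate as the step size applied to each state update. The function `relax_hopfield`, the `rate_schedule` interface, the linear ramp used for the dynamic rate, and all numeric constants are illustrative assumptions, not the authors' method.

```python
import numpy as np

def relax_hopfield(T, I, rate_schedule, n_steps=2000, gain=50.0, seed=0):
    """Relax a continuous Hopfield network to an equilibrium state.

    T             -- (n, n) symmetric weight matrix encoding the problem's energy
    I             -- (n,) bias vector
    rate_schedule -- callable mapping the step index to a relaxation rate,
                     which scales each state update (constant or dynamic)
    Returns the neuron outputs v in [0, 1]^n after relaxation.
    """
    n = len(I)
    u = np.random.default_rng(seed).uniform(-0.1, 0.1, n)   # internal states
    for step in range(n_steps):
        v = 0.5 * (1.0 + np.tanh(gain * u))                  # sigmoid neuron outputs
        du = -u + T @ v + I                                   # standard Hopfield dynamics
        u = u + rate_schedule(step) * du                      # relaxation rate scales the update
    return 0.5 * (1.0 + np.tanh(gain * u))

# Constant relaxation rate.
constant_rate = lambda step: 1e-3

# One possible dynamic schedule (illustrative only): small steps early,
# ramping up as the network settles; the authors' schedule may differ.
dynamic_rate = lambda step: 1e-4 + (1e-3 - 1e-4) * min(step / 1000.0, 1.0)
```

With a weight matrix and bias encoding the traveling-salesman energy function, `relax_hopfield(T, I, constant_rate)` and `relax_hopfield(T, I, dynamic_rate)` would correspond to the two variants compared in the abstract.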

Keywords

Relaxation Process · Relaxation Rate · Traveling Salesman Problem · Tour Length
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Copyright information

© Springer-Verlag Wien 1999

Authors and Affiliations

  • Xinchuan Zeng (1)
  • Tony R. Martinez (1)

  1. Computer Science Department, Brigham Young University, Provo, USA
