Numerical Analysis and Applications

Volume 3, Issue 4, pp 381–388

Construction of Hamiltonian cycles by recurrent neural networks in graphs of distributed computer systems

Article

Abstract

An algorithm based on Wang's recurrent neural network and the WTA (“winner takes all”) principle is applied to the construction of Hamiltonian cycles in graphs of distributed computer systems (CSs). The algorithm is applied to 1) regular graphs (2D- and 3D-tori and hypercubes) of distributed CSs and 2) 2D-tori perturbed by the removal of an arbitrary edge. The neural network parameters for constructing Hamiltonian cycles, as well as suboptimal cycles with lengths close to those of Hamiltonian cycles, are determined. Our experiments show that the iterative method (Jacobi, Gauss-Seidel, or SOR) used to solve the system of differential equations describing the neural network strongly affects the cycle construction process, and that the appropriate choice of method depends on the number of torus nodes.
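The abstract does not reproduce the network equations or any code. Purely as a rough, hypothetical illustration of the kind of dynamics involved, the sketch below implements a Wang-type recurrent network with a winner-takes-all (WTA) readout and integrates its differential equations with a synchronous (Jacobi-style) Euler scheme. The function name, parameter values, sigmoid activation, and exponential cost decay are assumptions made for this sketch, not the authors' implementation, and the row-wise WTA readout does not by itself guarantee a single Hamiltonian cycle.

```python
import numpy as np

def wang_wta_sketch(cost, eta=1.0, lam=1.0, tau=50.0, dt=0.05, steps=2000, seed=0):
    """Hypothetical sketch: Wang-type recurrent network with a WTA readout.

    cost[i, j] is the weight of edge (i, j); np.inf marks a missing edge.
    Returns a 0/1 decision matrix chosen row-wise by winner-takes-all.
    """
    rng = np.random.default_rng(seed)
    n = cost.shape[0]
    c = np.where(np.isfinite(cost), cost, 1e3)   # penalize missing edges
    u = 0.01 * rng.standard_normal((n, n))       # neuron states u_ij
    for k in range(steps):
        x = 1.0 / (1.0 + np.exp(-np.clip(u, -50.0, 50.0)))  # sigmoid activations
        row = x.sum(axis=1, keepdims=True) - 1.0  # row-sum constraint violation
        col = x.sum(axis=0, keepdims=True) - 1.0  # column-sum constraint violation
        # Synchronous (Jacobi-style) Euler step of the assumed network ODEs:
        #   du_ij/dt = -eta * (row_i + col_j) - lam * c_ij * exp(-t / tau)
        du = -eta * (row + col) - lam * c * np.exp(-k * dt / tau)
        u = u + dt * du
    winners = np.argmax(u, axis=1)               # WTA: strongest neuron per row
    decision = np.zeros((n, n))
    decision[np.arange(n), winners] = 1.0
    return decision
```

For a CS graph given by an adjacency matrix A, a cost matrix could be formed as np.where(A > 0, 1.0, np.inf), so that selected edges are drawn from the graph; replacing the synchronous update with in-place (Gauss-Seidel) or over-relaxed (SOR) sweeps is the kind of variation whose effect on cycle construction the paper studies.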

Key words

Recurrent neural networks, distributed computer systems, parallel algorithms, Hamiltonian cycle, graphs, torus, hypercube, Jacobi, Gauss-Seidel, and SOR methods



Copyright information

© Pleiades Publishing, Ltd. 2010

Authors and Affiliations

  1. Institute of Semiconductor Physics, Siberian Branch, Russian Academy of Sciences, Novosibirsk, Russia
