Construction of Hamiltonian cycles by recurrent neural networks in graphs of distributed computer systems
An algorithm based on Wang’s recurrent neural network and the “winner takes all” (WTA) principle is applied to constructing Hamiltonian cycles in graphs of distributed computer systems (CSs). The algorithm is applied to (1) regular graphs (2D and 3D tori and hypercubes) of distributed CSs and (2) 2D tori disturbed by removing an arbitrary edge. The neural network parameters for constructing Hamiltonian cycles, as well as suboptimal cycles whose length is close to Hamiltonian, are determined. Our experiments show that the iterative method (Jacobi, Gauss-Seidel, or SOR) used to solve the system of differential equations describing the neural network strongly affects the cycle-construction process, with the effect depending on the number of torus nodes.
Key words: recurrent neural networks, distributed computer systems, parallel algorithms, Hamiltonian cycle, graphs, torus, hypercube, Jacobi, Gauss-Seidel, and SOR methods
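The three iterative schemes named in the abstract differ only in how component updates are propagated within a sweep. The paper's own network equations are not reproduced here, but a minimal sketch on a generic diagonally dominant linear system (matrix, right-hand side, and parameter values are illustrative assumptions, not taken from the paper) shows the distinction: Jacobi updates every component from the previous iterate, Gauss-Seidel uses each new value immediately, and SOR adds a relaxation factor.

```python
import numpy as np

def jacobi(A, b, x0, iters=100):
    # Jacobi: all components updated simultaneously from the previous
    # iterate -- fully parallel, as on a distributed system
    D = np.diag(A)
    R = A - np.diagflat(D)
    x = x0.copy()
    for _ in range(iters):
        x = (b - R @ x) / D
    return x

def sor(A, b, x0, omega=1.0, iters=100):
    # SOR: sequential in-place sweep; each freshly computed component
    # is used immediately. omega = 1 reduces to Gauss-Seidel.
    x = x0.copy()
    for _ in range(iters):
        for i in range(len(b)):
            s = A[i] @ x - A[i, i] * x[i]  # off-diagonal contribution
            x[i] = (1 - omega) * x[i] + omega * (b[i] - s) / A[i, i]
    return x

# Diagonally dominant system for which all three schemes converge
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x0 = np.zeros(3)

print(jacobi(A, b, x0))           # converges to the exact solution
print(sor(A, b, x0, omega=1.0))   # Gauss-Seidel
print(sor(A, b, x0, omega=1.2))   # over-relaxed sweep
```

In the paper's setting, the same choice of sweep applies to the time-discretized network ODEs, which is why the scheme can change which cycle the network settles into.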