Abstract
Among the many possible approaches to parallelizing self-organizing networks, and growing self-organizing networks in particular, perhaps the most common is to produce an optimized, parallel implementation of the standard sequential algorithms reported in the literature. In this chapter we explore an alternative approach, based on a new algorithm variant specifically designed to match the large-scale, fine-grained parallelism of GPUs, in which multiple input signals are processed at once. Comparative tests have been performed using both parallel and sequential implementations of the new variant, in particular for a growing self-organizing network that reconstructs surfaces from point clouds. The experimental results show that this approach harnesses more effectively the intrinsic parallelism that self-organizing network algorithms intuitively suggest, achieving better performance even with networks of smaller size.
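As a rough illustration of the multi-signal idea (a hypothetical sketch, not the authors' implementation), the following NumPy fragment finds, for a whole batch of input signals at once, the two nearest network units that growing neural gas-style algorithms adapt at each step. The function name and data layout are assumptions; the point is that the distance computation becomes one dense, data-parallel operation, the kind of workload that maps naturally onto GPU hardware:

```python
import numpy as np

def batch_two_nearest(units, signals):
    """For each signal in a batch, return the indices of the two
    nearest network units (the 'winner' and the 'runner-up'), as
    required by GNG-style growing self-organizing networks.

    units:   (n_units, dim) array of unit reference vectors
    signals: (n_signals, dim) array of input signals
    """
    # Pairwise squared Euclidean distances, shape (n_signals, n_units),
    # computed in a single vectorized step for the whole batch.
    diff = signals[:, None, :] - units[None, :, :]
    d2 = np.einsum('sud,sud->su', diff, diff)
    # Indices of the smallest and second-smallest distance per signal.
    order = np.argsort(d2, axis=1)
    return order[:, 0], order[:, 1]
```

In a sequential algorithm this winner search is repeated once per signal; processing the batch in one shot is what exposes the fine-grained parallelism the chapter targets.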
Notes
1. An aging mechanism is also applied to connections (see for instance [9]).
References
Piastra, M.: Self-organizing adaptive map: autonomous learning of curves and surfaces from point samples. Neural Netw. 41, 96–112 (2012)
Lawrence, R., Almasi, G., Rushmeier, H.: A scalable parallel algorithm for self-organizing maps with applications to sparse data mining problems. Data Min. Knowl. Disc. 3, 171–195 (1999)
Kohonen, T.: Self-Organizing Maps. Springer series in information sciences, vol. 30. Springer, Berlin (2001)
Orts, S., García-Rodríguez, J., Viejo, D., Cazorla, M., Morell, V.: GPGPU implementation of growing neural gas: application to 3D scene reconstruction. J. Parallel Distrib. Comput. 72(10), 1361–1372 (2012)
García-Rodríguez, J., Angelopoulou, A., Morell, V., Orts, S., Psarrou, A., García-Chamizo, J.: Fast image representation with GPU-based growing neural gas. Adv. Comput. Intell., Lect. Notes Comput. Sci. 6692, 58–65 (2011)
Luo, Z., Liu, H., Wu, X.: Artificial neural network computation on graphic process unit. In: Proceedings of the IEEE International Joint Conference on Neural Networks, IJCNN'05, vol. 1, pp. 622–626. IEEE (2005)
Campbell, A., Berglund, E., Streit, A.: Graphics hardware implementation of the parameter-less self-organising map. In: Intelligent Data Engineering and Automated Learning-IDEAL 2005, pp. 5–14 (2005)
Owens, J., Luebke, D., Govindaraju, N., Harris, M., Krüger, J., Lefohn, A., Purcell, T.: A survey of general-purpose computation on graphics hardware. Comput. Graphics Forum 26, 80–113 (2007)
Fritzke, B.: A growing neural gas network learns topologies. In: Advances in Neural Information Processing Systems 7, MIT Press (1995)
Marsland, S., Shapiro, J., Nehmzow, U.: A self-organising network that grows when required. Neural Netw. 15, 1041–1058 (2002)
Martinetz, T., Schulten, K.: Topology representing networks. Neural Netw. 7, 507–522 (1994)
Owens, J., Houston, M., Luebke, D., Green, S., Stone, J., Phillips, J.: GPU computing. Proc. IEEE 96, 879–899 (2008)
McCool, M.: Data-parallel programming on the Cell BE and the GPU using the RapidMind development platform. In: GSPx Multicore Applications Conference, vol. 9 (2006)
Papakipos, M.: The Peakstream Platform: High-Productivity Software Development for Multi-Core Processors. PeakStream Inc., Redwood City (2007)
NVIDIA Corporation: CUDA C programming guide, version 4.0. Santa Clara (2011)
Hensley, J.: AMD CTM overview. In: ACM SIGGRAPH 2007 courses, p. 7. ACM, New York (2007)
Stone, J., Gohara, D., Shi, G.: OpenCL: a parallel programming standard for heterogeneous computing systems. Comput. Sci. Eng. 12, 66 (2010)
Buck, I., Foley, T., Horn, D., Sugerman, J., Fatahalian, K., Houston, M., Hanrahan, P.: Brook for GPUs: stream computing on graphics hardware. ACM Trans. Graph. 23, 777–786 (2004)
Harris, M.: Optimizing parallel reduction in CUDA. CUDA SDK Whitepaper (2007)
Liu, S., Flach, P., Cristianini, N.: Generic multiplicative methods for implementing machine learning algorithms on MapReduce. arXiv preprint arXiv:1111.2111 (2011)
Zhang, C., Li, F., Jestes, J.: Efficient parallel kNN joins for large data in MapReduce. In: Proceedings of the 15th International Conference on Extending Database Technology (EDBT 2012) (2012)
Edelsbrunner, H.: Geometry and Topology for Mesh Generation. Cambridge Monographs on Applied and Computational Mathematics, vol. 7. Cambridge University Press, Cambridge (2001)
Amenta, N., Bern, M.: Surface reconstruction by voronoi filtering. Discrete Comput. Geom. 22, 481–504 (1999)
Hockney, R.W., Eastwood, J.W.: Computer Simulation Using Particles. Taylor & Francis Inc., Bristol (1988)
Copyright information
© 2014 Springer International Publishing Switzerland
Cite this chapter
Parigi, G., Stramieri, A., Pau, D., Piastra, M. (2014). A Multi-Signal Variant for the GPU-Based Parallelization of Growing Self-Organizing Networks. In: Ferrier, JL., Bernard, A., Gusikhin, O., Madani, K. (eds) Informatics in Control, Automation and Robotics. Lecture Notes in Electrical Engineering, vol 283. Springer, Cham. https://doi.org/10.1007/978-3-319-03500-0_6
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-03499-7
Online ISBN: 978-3-319-03500-0
eBook Packages: Engineering (R0)