Abstract
In this chapter we present an implementation of neural networks using the dataflow paradigm. The dataflow paradigm offers a new approach to Big Data applications. Big Data processing is one of the central challenges in many fields: financial engineering, geophysics, medical analysis, airflow simulation, data mining, and many others. Many of these applications are based on neural networks, so the way a network is implemented is crucial to application performance. Such applications pay more attention to the data than to the process itself. Before a neural network can make accurate predictions, it must be trained, and in many cases the training process consumes most of the execution time. The main challenge is finding a way to process such large quantities of data: regardless of the level of parallelism achieved, the execution process remains essentially slow. In this chapter, the dataflow paradigm is presented as an alternative paradigm for solving this problem.
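To make the training cost mentioned above concrete, the sketch below shows a minimal single-neuron (perceptron) training loop in Python. This is purely illustrative and not the chapter's dataflow implementation: the function names, the AND-function training set, and the learning-rate and epoch values are this sketch's own choices. The point is the nested loop over epochs and samples, whose repeated forward-pass/update cycle is exactly the kind of data-intensive computation that dataflow hardware aims to accelerate.

```python
# Illustrative sketch (not the chapter's method): a perceptron trained on
# the logical AND function with the classic error-driven update rule.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Iterative training loop: its cost grows with data size and epochs."""
    w = [0.0, 0.0]  # weights
    b = 0.0         # bias
    for _ in range(epochs):
        for x, target in samples:
            # forward pass with a step activation
            y = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            # update weights in proportion to the prediction error
            err = target - y
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    """Apply the trained perceptron to one input pair."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Training data: the AND function, a linearly separable toy problem.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
```

In a control-flow implementation each sample is processed sequentially through this loop; a dataflow implementation instead lays the forward-pass arithmetic out spatially so that data streams through it, which is the perspective developed in the rest of the chapter.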
Acknowledgements
This research was supported by the School of Electrical Engineering, Belgrade, Serbia, and by Maxeler Technologies. I want to thank my family and colleagues, who provided the insight and expertise that greatly assisted this research.
Copyright information
© 2017 Springer International Publishing AG
Cite this chapter
Milutinovic, V., Kotlar, M., Stojanovic, M., Dundic, I., Trifunovic, N., Babovic, Z. (2017). Implementing Neural Networks by Using the DataFlow Paradigm. In: DataFlow Supercomputing Essentials. Computer Communications and Networks. Springer, Cham. https://doi.org/10.1007/978-3-319-66125-4_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-66124-7
Online ISBN: 978-3-319-66125-4