
Implementing Neural Networks by Using the DataFlow Paradigm

Chapter in: DataFlow Supercomputing Essentials

Abstract

In this chapter we present an implementation of neural networks using the dataflow paradigm. The dataflow paradigm offers a new approach to Big Data applications. Big Data is one of the biggest application challenges in many fields: financial engineering, geophysics, medical analysis, airflow simulation, data mining, and many others. Many of these applications are based on neural networks, so the way a network is implemented is crucial for application performance. Such applications pay more attention to the data than to the process itself. To make correct predictions, a neural network must first be trained, and in some cases training consumes a large share of the total execution time. The main challenge is finding a way to process such large quantities of data: regardless of the level of parallelism achieved, the execution process remains essentially slow. In this chapter, the dataflow paradigm is presented as an alternative approach to solving this problem.
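To make the training bottleneck concrete, the sketch below shows a single neuron trained by stochastic gradient descent in plain control-flow Java. It is an illustrative example only, not the chapter's dataflow implementation; the class name, learning rate, and toy AND data set are assumptions made for the sketch. The nested loops over epochs and samples are exactly the work that grows with the size of the data set, and in a dataflow implementation this per-sample arithmetic would instead be laid out spatially as a pipeline through which the training data streams.

// Minimal illustrative sketch (not the chapter's dataflow code): a single
// neuron trained by stochastic gradient descent. The inner loops show why
// training cost scales with both the data set size and the number of
// epochs, which is the bottleneck a dataflow implementation targets.
public class NeuronTrainingSketch {

    // Logistic activation, a common choice in backpropagation networks.
    static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    public static void main(String[] args) {
        // Hypothetical toy data set: inputs and target outputs (logical AND).
        double[][] inputs  = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        double[]   targets = {0, 0, 0, 1};

        double[] weights = {0.1, -0.2};
        double bias = 0.0;
        double learningRate = 0.5;

        // Every epoch touches every sample: with Big Data, these two loops
        // dominate the execution time of the whole application.
        for (int epoch = 0; epoch < 5000; epoch++) {
            for (int i = 0; i < inputs.length; i++) {
                // Forward pass: weighted sum plus bias, then activation.
                double sum = bias;
                for (int j = 0; j < weights.length; j++) {
                    sum += weights[j] * inputs[i][j];
                }
                double output = sigmoid(sum);

                // Backward pass: gradient of the squared error for one sample.
                double error = targets[i] - output;
                double delta = error * output * (1.0 - output);

                // Weight update (stochastic gradient descent).
                for (int j = 0; j < weights.length; j++) {
                    weights[j] += learningRate * delta * inputs[i][j];
                }
                bias += learningRate * delta;
            }
        }

        // After training, the neuron approximates the AND function.
        for (int i = 0; i < inputs.length; i++) {
            double sum = bias;
            for (int j = 0; j < weights.length; j++) {
                sum += weights[j] * inputs[i][j];
            }
            System.out.printf("AND(%.0f, %.0f) ~ %.3f%n",
                    inputs[i][0], inputs[i][1], sigmoid(sum));
        }
    }
}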



Acknowledgements

This research was supported by the School of Electrical Engineering and Maxeler Technologies, Belgrade, Serbia. I want to thank my family and colleagues, who provided the insight and expertise that greatly assisted this research.


Copyright information

© 2017 Springer International Publishing AG

About this chapter


Cite this chapter

Milutinovic, V., Kotlar, M., Stojanovic, M., Dundic, I., Trifunovic, N., Babovic, Z. (2017). Implementing Neural Networks by Using the DataFlow Paradigm. In: DataFlow Supercomputing Essentials. Computer Communications and Networks. Springer, Cham. https://doi.org/10.1007/978-3-319-66125-4_1


  • DOI: https://doi.org/10.1007/978-3-319-66125-4_1


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-66124-7

  • Online ISBN: 978-3-319-66125-4

  • eBook Packages: Computer Science (R0)
