Abstract

Neural networks are usually considered naturally parallel computing models, but the number of operators and the complex connection graphs of standard neural models cannot be directly handled by digital hardware devices. Although programmable digital hardware now offers a real opportunity for flexible hardware implementations of neural networks, many area and topology problems arise when standard neural models are mapped onto programmable circuits such as FPGAs, so that the rapid improvements of FPGA technology cannot be fully exploited. The theoretical and practical framework first introduced in [21] reconciles simple hardware topologies with complex neural architectures by applying configurable-hardware principles to neural computation: Field Programmable Neural Arrays (FPNAs) lead to powerful neural architectures that are easy to map onto FPGAs, thanks to a simplified topology and an original data exchange scheme. This two-chapter study gathers the results that have been published about the FPNA concept, together with some unpublished ones. This first part focuses on definitions and theoretical aspects: starting from a general two-level definition of FPNAs, the proposed computation schemes are described and compared, and their correctness and partial equivalence are justified. The computational power of FPNA-based neural networks is then characterized through the concept of underparameterized convolutions.
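
As a concrete illustration of the simplified-topology principle, the Python sketch below simulates a toy FPNA-like resource split. It is a minimal sketch of the general idea, not the chapter's formal model, and the names (`Link`, `Activator`, `virtual_weight`) are hypothetical. Neural resources are divided into configurable communication links and activators laid out on a sparse grid; a virtual connection between two neurons is realized as a path of links, so its effective weight is the product of the traversed link weights, and links shared among several paths constrain the reachable weight matrices.

```python
import math

class Link:
    """Configurable communication link: scales every value it forwards by w."""
    def __init__(self, w: float):
        self.w = w

class Activator:
    """Neuron body: accumulates incoming weighted values, applies a sigmoid."""
    def __init__(self):
        self.acc = 0.0

    def receive(self, x: float) -> None:
        self.acc += x

    def output(self) -> float:
        return 1.0 / (1.0 + math.exp(-self.acc))

def virtual_weight(path):
    """Effective weight of a virtual connection realized as a path of links."""
    w = 1.0
    for link in path:
        w *= link.w
    return w

# Two inputs reach one activator through paths sharing the link `trunk`.
# The virtual weights are 0.5*2.0 = 1.0 and 0.5*(-1.0) = -0.5: they cannot
# be tuned independently, since changing `trunk` rescales both at once.
trunk = Link(0.5)
branch_a, branch_b = Link(2.0), Link(-1.0)

neuron = Activator()
neuron.receive(virtual_weight([trunk, branch_a]) * 1.0)  # input x1 = 1.0
neuron.receive(virtual_weight([trunk, branch_b]) * 3.0)  # input x2 = 3.0
print(neuron.output())  # sigmoid(1.0 - 1.5) = sigmoid(-0.5), about 0.378
```

Under these assumptions, the hardware only needs the local wiring of the grid while the emulated network may be far more densely connected; the price is that virtual weights are products of shared link weights, which is precisely the constraint the chapter studies through underparameterized convolutions.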


References

  1. D. Abramson, K. Smith, P. Logothetis, and D. Duke. FPGA based implementation of a Hopfield neural network for solving constraint satisfaction problems. In Proc. EuroMicro, 1998.
  2. D. Anguita, S. Bencetti, A. De Gloria, G. Parodi, D. Ricci, and S. Ridella. FPGA implementation of high precision feedforward networks. In Proc. MicroNeuro, pages 240–243, 1997.
  3. N. Avellana, A. Strey, R. Holgado, A. Fernandez, R. Capillas, and E. Valderrama. Design of a low-cost and high-speed neurocomputer system. In Proc. MicroNeuro, pages 221–226, 1996.
  4. S.L. Bade and B.L. Hutchings. FPGA-based stochastic neural networks: implementation. In Proc. IEEE Workshop on FPGAs for Custom Computing Machines, pages 189–198, 1994.
  5. R. Baron and B. Girau. Parameterized normalization: application to wavelet networks. In Proc. IJCNN, volume 2, pages 1433–1437. IEEE, 1998.
  6. J.-L. Beuchat. Conception d'un neuroprocesseur reconfigurable proposant des algorithmes d'apprentissage et d'élagage: une première étude. In Proc. NSI Neurosciences et Sciences de l'Ingénieur, 1998.
  7. Y. Boniface. A parallel simulator to build distributed neural algorithms. In Proc. IJCNN, Washington, DC, USA, 2001.
  8. N.M. Botros and M. Abdul-Aziz. Hardware implementation of an artificial neural network. In Proc. ICNN, volume 3, pages 1252–1257, 1993.
  9. Y.K. Choi, K.H. Ahn, and S.-Y. Lee. Effects of multiplier output offsets on on-chip learning for analog neuro-chips. Neural Processing Letters, 4:1–8, 1996.
  10. V.F. Cimpu. Hardware FPGA implementation of a neural network. In Proc. Int. Conf. Technical Informatics, volume 2, pages 57–68, 1996.
  11. J.G. Eldredge and B.L. Hutchings. RRANN: a hardware implementation of the backpropagation algorithm using reconfigurable FPGAs. In Proc. IEEE World Conference on Computational Intelligence, 1994.
  12. A. Elisseeff and H. Paugam-Moisy. Size of multilayer networks for exact learning: analytic approach. Technical Report 96-16, LIP-ENSL, 1996.
  13. W. Eppler, T. Fisher, H. Gemmeke, T. Becher, and G. Kock. High speed neural network chip on PCI-board. In Proc. MicroNeuro, pages 9–17, 1997.
  14. S.K. Foo, P. Saratchandran, and N. Sundararajan. Parallel implementation of backpropagation neural networks on a heterogeneous array of transputers. IEEE Trans. on Systems, Man, and Cybernetics—Part B: Cybernetics, 27(1):118–126, 1997.
  15. D. Franco and L. Carro. FPGA architecture comparison for nonconventional signal processing. In Proc. IJCNN, 2000.
  16. K.-I. Funahashi. On the approximate realization of continuous mappings by neural networks. Neural Networks, 2:183–192, 1989.
  17. R. Gadea, J. Cerda, F. Ballester, and A. Mocholi. Artificial neural network implementation on a single FPGA of a pipelined on-line backpropagation. In Proc. ISSS, pages 225–230, 2000.
  18. C. Gegout, B. Girau, and F. Rossi. A general feedforward neural network model. Technical Report NC-TR-95-041, NeuroCOLT, Royal Holloway, University of London, 1995.
  19. C. Gegout, B. Girau, and F. Rossi. Generic back-propagation in arbitrary feedforward neural networks. In Artificial Neural Nets and Genetic Algorithms — Proc. of ICANNGA, pages 168–171. Springer-Verlag, 1995.
  20. B. Girau. Dependencies of composite connections in Field Programmable Neural Arrays. Research Report NC-TR-99-047, NeuroCOLT, Royal Holloway, University of London, 1999.
  21. B. Girau. Du parallélisme des modèles connexionnistes à leur implantation parallèle. PhD thesis no. 99ENSL0116, ENS Lyon, 1999.
  22. B. Girau. Building a 2D-compatible multilayer neural network. In Proc. IJCNN. IEEE, 2000.
  23. B. Girau. Conciliating connectionism and parallel digital hardware. Parallel and Distributed Computing Practices, special issue on Unconventional Parallel Architectures, 3(2):291–307, 2000.
  24. B. Girau. Digital hardware implementation of 2D compatible neural networks. In Proc. IJCNN. IEEE, 2000.
  25. B. Girau. FPNA: interaction between FPGA and neural computation. Int. Journal on Neural Systems, 10(3):243–259, 2000.
  26. B. Girau. Neural networks on FPGAs: a survey. In Proc. Neural Computation, 2000.
  27. B. Girau. Simplified neural architectures for symmetric boolean functions. In Proc. ESANN European Symposium on Artificial Neural Networks, pages 383–388, 2000.
  28. B. Girau. On-chip learning of FPGA-inspired neural nets. In Proc. IJCNN. IEEE, 2001.
  29. B. Girau and A. Tisserand. MLP computing and learning on FPGA using on-line arithmetic. Int. Journal on System Research and Information Science, special issue on Parallel and Distributed Systems for Neural Computing, 9(2–4), 2000.
  30. C. Grassmann and J.K. Anlauf. Fast digital simulation of spiking neural networks and neuromorphic integration with SPIKELAB. International Journal of Neural Systems, 9(5):473–478, 1999.
  31. M. Gschwind, V. Salapura, and O. Maischberger. A generic building block for Hopfield neural networks with on-chip learning. In Proc. ISCAS, 1996.
  32. H. Hikawa. Frequency-based multilayer neural network with on-chip learning and enhanced neuron characteristics. IEEE Trans. on Neural Networks, 10(3):545–553, 1999.
  33. R. Hoffmann, H.F. Restrepo, A. Perez-Uribe, C. Teuscher, and E. Sanchez. Implémentation d'un réseau de neurones sur un réseau de FPGA. In Proc. Sympa'6, 2000.
  34. P.W. Hollis, J.S. Harper, and J.J. Paulos. The effects of precision constraints in a backpropagation learning algorithm. Neural Computation, 2:363–373, 1990.
  35. K. Hornik. Approximation capabilities of multilayer feedforward networks. Neural Networks, 4:251–257, 1991.
  36. K. Hornik, M. Stinchcombe, and H. White. Multilayer feedforward networks are universal approximators. Neural Networks, 2:359–366, 1989.
  37. N. Izeboudjen, A. Farah, S. Titri, and H. Boumeridja. Digital implementation of artificial neural networks: from VHDL description to FPGA implementation. In Proc. IWANN, 1999.
  38. A. Johannet, L. Personnaz, G. Dreyfus, J.D. Gascuel, and M. Weinfeld. Specification and implementation of a digital Hopfield-type associative memory with on-chip training. IEEE Trans. on Neural Networks, 3, 1992.
  39. J. Kennedy and J. Austin. A parallel architecture for binary neural networks. In Proc. MicroNeuro, pages 225–231, 1997.
  40. K. Kollmann, K. Riemschneider, and H.C. Zeidler. On-chip backpropagation training using parallel stochastic bit streams. In Proc. MicroNeuro, pages 149–156, 1996.
  41. A. Kramer. Array-based analog computation: principles, advantages and limitations. In Proc. MicroNeuro, pages 68–79, 1996.
  42. V. Kumar, S. Shekhar, and M.B. Amin. A scalable parallel formulation of the back-propagation algorithm for hypercubes and related architectures. IEEE Transactions on Parallel and Distributed Systems, 5(10):1073–1090, October 1994.
  43. P. Lysaght, J. Stockwood, J. Law, and D. Girma. Artificial neural network implementation on a fine-grained FPGA. In Proc. FPL, pages 421–432, 1994.
  44. Y. Maeda and T. Tada. FPGA implementation of a pulse density neural network using simultaneous perturbation. In Proc. IJCNN, 2000.
  45. S. McLoone and G.W. Irwin. Fast parallel off-line training of multilayer perceptrons. IEEE Trans. on Neural Networks, 8(3):646–653, 1997.
  46. I. Milosavlevich, B. Flower, and M. Jabri. PANNE: a parallel computing engine for connectionist simulation. In Proc. MicroNeuro, pages 363–368, 1996.
  47. P.D. Moerland and E. Fiesler. Hardware-friendly learning algorithms for neural networks: an overview. In Proc. MicroNeuro, 1996.
  48. A. Montalvo, R. Gyurcsik, and J. Paulos. Towards a general-purpose analog VLSI neural network with on-chip learning. IEEE Trans. on Neural Networks, 8(2):413–423, 1997.
  49. U.A. Müller, A. Gunzinger, and W. Guggenbühl. Fast neural net simulation with a DSP processor array. IEEE Trans. on Neural Networks, 6(1):203–213, 1995.
  50. T. Nordström and B. Svensson. Using and designing massively parallel computers for artificial neural networks. Journal of Parallel and Distributed Computing, 14(3):260–285, 1992.
  51. R. Östermark. A flexible multicomputer algorithm for artificial neural networks. Neural Networks, 9(1):169–178, 1996.
  52. J. Park and I.W. Sandberg. Universal approximation using radial-basis-function networks. Neural Computation, 3:246–257, 1991.
  53. H. Paugam-Moisy. Optimal speedup conditions for a parallel back-propagation algorithm. In Proc. CONPAR, pages 719–724, 1992.
  54. A. Perez-Uribe and E. Sanchez. FPGA implementation of an adaptable-size neural network. In Proc. ICANN. Springer-Verlag, 1996.
  55. A. Petrowski. Choosing among several parallel implementations of the backpropagation algorithm. In Proc. ICNN, pages 1981–1986, 1994.
  56. M. Rossmann, A. Bühlmeier, G. Manteuffel, and K. Goser. Short- and long-term dynamics in a stochastic pulse stream neuron implemented in FPGA. In Proc. ICANN, LNCS, 1997.
  57. M. Rossmann, T. Jost, K. Goser, A. Bühlmeier, and G. Manteuffel. Exponential Hebbian on-line learning implemented in FPGAs. In Proc. ICANN, 1996.
  58. S. Sakaue, T. Kohda, H. Yamamoto, S. Maruno, and Y. Shimeki. Reduction of required precision bits for back-propagation applied to pattern recognition. IEEE Trans. on Neural Networks, 4(2):270–275, 1993.
  59. V. Salapura. Neural networks using bit-stream arithmetic: a space efficient implementation. In Proc. IEEE Int. Conf. on Circuits and Systems, 1994.
  60. V. Salapura, M. Gschwind, and O. Maischberger. A fast FPGA implementation of a general purpose neuron. In Proc. FPL, 1994.
  61. K.M. Sammut and S.R. Jones. Arithmetic unit design for neural accelerators: cost performance issues. IEEE Trans. on Computers, 44(10), 1995.
  62. M. Schaefer, T. Schoenauer, C. Wolff, G. Hartmann, H. Klar, and U. Rückert. Simulation of spiking neural networks: architectures and implementations. Neurocomputing, 48:647–679, 2002.
  63. S. Shams and J.-L. Gaudiot. Parallel implementations of neural networks. Int. J. on Artificial Intelligence, 2(4):557–581, 1993.
  64. S. Shams and J.-L. Gaudiot. Implementing regularly structured neural networks on the DREAM machine. IEEE Trans. on Neural Networks, 6(2):407–421, 1995.
  65. K. Siu, V. Roychowdhury, and T. Kailath. Depth-size tradeoffs for neural computation. IEEE Trans. on Computers, 40(12):1402–1412, 1991.
  66. T. Szabo, L. Antoni, G. Horvath, and B. Feher. A full-parallel digital implementation for pre-trained NNs. In Proc. IJCNN, 2000.
  67. M. van Daalen, P. Jeavons, and J. Shawe-Taylor. A stochastic neural architecture that exploits dynamically reconfigurable FPGAs. In Proc. IEEE Workshop on FPGAs for Custom Computing Machines, pages 202–211, 1993.
  68. M. Viredaz, C. Lehmann, F. Blayo, and P. Ienne. MANTRA: a multi-model neural network computer. In VLSI for Neural Networks and Artificial Intelligence, pages 93–102. Plenum Press, 1994.
  69. J. Wawrzynek, K. Asanovic, and N. Morgan. The design of a neuro-microprocessor. IEEE Trans. on Neural Networks, 4(3):394–399, 1993.
  70. Xilinx, editor. The Programmable Logic Data Book. Xilinx, 2002.
  71. Q. Zhang and A. Benveniste. Wavelet networks. IEEE Trans. on Neural Networks, 3(6):889–898, Nov. 1992.
  72. X. Zhang, M. McKenna, J.J. Mesirov, and D.L. Waltz. The backpropagation algorithm on grid and hypercube architectures. Parallel Computing, 14:317–327, 1990.


Copyright information

© 2006 Springer

About this chapter

Cite this chapter

Girau, B. (2006). FPNA: Concepts and Properties. In: Omondi, A.R., Rajapakse, J.C. (eds) FPGA Implementations of Neural Networks. Springer, Boston, MA. https://doi.org/10.1007/0-387-28487-7_3

  • DOI: https://doi.org/10.1007/0-387-28487-7_3

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-0-387-28485-9

  • Online ISBN: 978-0-387-28487-3

  • eBook Packages: Engineering (R0)
