Abstract
Floating point operations, which are used in many areas such as mathematical optimization methods, digital signal and image processing algorithms, and Artificial Neural Networks (ANNs), require considerable area and time when implemented directly on Field Programmable Gate Arrays (FPGAs). Implementing meaningful floating point arithmetic on FPGAs with low-level design specifications is difficult due to mapping difficulties and the complexity of floating point arithmetic. The design of floating point arithmetic and its mapping onto an FPGA have become easier with the emergence of new-generation FPGAs and the development of high-level hardware description languages such as VHDL. This paper presents implementation methodologies for floating point operations such as addition, subtraction, multiplication, and division using the 32-bit IEEE 754 floating point format. The implementation is performed on Xilinx Spartan-3 FPGAs. The algorithms and implementation steps for the different operations are discussed in detail, and an ANN application built on these algorithms is presented as an example.
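The 32-bit IEEE 754 format referenced in the abstract partitions a word into a sign bit, an 8-bit exponent biased by 127, and a 23-bit fraction with an implicit leading 1 for normal numbers. As an illustrative sketch only (not the paper's VHDL implementation), the field layout can be checked in Python; the function names here are hypothetical:

```python
import struct

def decompose_ieee754(x: float):
    """Split a value into IEEE 754 single-precision (binary32) fields.

    Returns (sign, biased_exponent, fraction) as integers.
    """
    # Pack as a big-endian 32-bit float, then reinterpret the same
    # bytes as an unsigned 32-bit integer to access the raw bits.
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31                # 1 bit
    exponent = (bits >> 23) & 0xFF   # 8 bits, biased by 127
    fraction = bits & 0x7FFFFF       # 23 bits (implicit leading 1)
    return sign, exponent, fraction

def recompose(sign: int, exponent: int, fraction: int) -> float:
    """Rebuild the value from its fields (normal numbers only)."""
    return (-1) ** sign * (1 + fraction / 2**23) * 2.0 ** (exponent - 127)
```

For example, -2.5 is -1.25 x 2^1, so it decomposes to sign 1, biased exponent 128, and a fraction encoding 0.25 of the 23-bit field. A hardware adder or multiplier in the paper's sense operates on these three fields separately (exponent alignment, significand arithmetic, normalization, rounding).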
Copyright information
© 2007 Springer
Cite this paper
Sahin, S., Kavak, A., Becerikli, Y., Demiray, H.E. (2007). Implementation of floating point arithmetics using an FPGA. In: Taş, K., Tenreiro Machado, J.A., Baleanu, D. (eds) Mathematical Methods in Engineering. Springer, Dordrecht. https://doi.org/10.1007/978-1-4020-5678-9_39
DOI: https://doi.org/10.1007/978-1-4020-5678-9_39
Publisher Name: Springer, Dordrecht
Print ISBN: 978-1-4020-5677-2
Online ISBN: 978-1-4020-5678-9
eBook Packages: Engineering (R0)