Abstract
Fixed-point numbers represent fractional values using ordinary integers. This is accomplished through a clever use of scaling, and it allows systems without floating-point hardware to work with real-valued quantities effectively. In this chapter we explore how to define and store fixed-point numbers, how to perform signed fixed-point arithmetic, and how to implement common trigonometric and transcendental functions with fixed-point numbers. Lastly, we discuss when one might wish to use fixed-point numbers in place of floating-point numbers, including an emerging use case involving modern neural networks.
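The scaling idea mentioned above can be sketched briefly. In a minimal Q16.16 scheme (a common but here illustrative choice; the function names below are assumptions, not the chapter's), a real value is stored as an integer multiplied by 2^16, so addition works directly on the integers while multiplication and division need one compensating shift:

```python
# A minimal Q16.16 fixed-point sketch (illustrative names, not from the chapter).
FRAC_BITS = 16
ONE = 1 << FRAC_BITS          # the integer 65536 represents the value 1.0

def to_fixed(x: float) -> int:
    """Scale a float up by 2**16 and round to the nearest integer."""
    return int(round(x * ONE))

def to_float(f: int) -> float:
    """Recover the real value by dividing out the scale factor."""
    return f / ONE

def fx_mul(a: int, b: int) -> int:
    # The raw product carries 32 fractional bits; shift back down to 16.
    return (a * b) >> FRAC_BITS

def fx_div(a: int, b: int) -> int:
    # Pre-shift the dividend so the quotient keeps 16 fractional bits.
    return (a << FRAC_BITS) // b

a = to_fixed(3.25)            # stored as 212992
b = to_fixed(0.5)             # stored as 32768
print(to_float(a + b))        # addition needs no rescaling -> 3.75
print(to_float(fx_mul(a, b))) # -> 1.625
```

Note that values sharing one scale factor is what makes plain integer addition and subtraction work unchanged; only multiplication and division must correct for the doubled or cancelled scale.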
Copyright information
© 2017 Springer International Publishing AG
Cite this chapter
Kneusel, R.T. (2017). Fixed-Point Numbers. In: Numbers and Computers. Springer, Cham. https://doi.org/10.1007/978-3-319-50508-4_6
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-50507-7
Online ISBN: 978-3-319-50508-4
eBook Packages: Computer Science, Computer Science (R0)