Abstract
The derivative of a function f(x) is defined in theory by

f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}

Therefore we can obtain a numerical approximation using

f'(x) \approx \frac{f(x+h) - f(x)}{h}
for small values of h. The accuracy will clearly depend on the size of h. We can improve the approximation by letting h become small, but we cannot let it become zero, since we are dealing with numerical values: if h became zero we would be dividing by zero. Before this becomes a problem, however, the computed values of f(x) and f(x+h) would become indistinguishable on the computer, and this would lead to gross inaccuracies. Therefore, by reducing h in the classical theoretical manner, we can only achieve an accuracy within the limits allowed by the computer we are using.
Copyright information
© 1987 E. J. Redfern
About this chapter
Cite this chapter
Redfern, E.J. (1987). Numerical Calculus. In: Introduction to Pascal for Computational Mathematics. Macmillan Computer Science Series. Palgrave, London. https://doi.org/10.1007/978-1-349-18977-9_9
DOI: https://doi.org/10.1007/978-1-349-18977-9_9
Publisher Name: Palgrave, London
Print ISBN: 978-0-333-44431-3
Online ISBN: 978-1-349-18977-9
eBook Packages: Mathematics and Statistics (R0)