Finite Differences
Resources
- Numerical Analysis Notes
- https://tobydriscoll.net/fnc-julia/localapprox/fd-converge.html
- Burden, Richard L., and J. Douglas Faires. 2010. Numerical Analysis. 9th ed. Cengage Learning.
Related
Main Idea
Finite differences approximate derivatives from function values on a discrete grid. They can be interpreted as secant line approximations, as fitting a polynomial and analytically differentiating it, or even as a special case of The Finite Element Method (at least in 1D).
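For a concrete feel for the secant line interpretation, here is a minimal NumPy sketch (the test function $\sin$ and the point $x = 1$ are arbitrary choices for illustration):

```python
import numpy as np

def forward_diff(f, x, h):
    # Slope of the secant line through (x, f(x)) and (x + h, f(x + h)); first-order accurate.
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    # Slope of the secant line through (x - h, f(x - h)) and (x + h, f(x + h)); second-order accurate.
    return (f(x + h) - f(x - h)) / (2 * h)

x, exact = 1.0, np.cos(1.0)  # d/dx sin(x) = cos(x)
for h in (1e-1, 1e-2, 1e-3):
    print(f"h={h:.0e}  forward error={abs(forward_diff(np.sin, x, h) - exact):.2e}"
          f"  central error={abs(central_diff(np.sin, x, h) - exact):.2e}")
```

Shrinking $h$ by 10 cuts the forward-difference error by about 10 and the central-difference error by about 100, matching their first- and second-order accuracy.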
Error Analysis
For a finite difference method of truncation order $p$ approximating a derivative of order $d$, the total error on a mesh of size $h$ behaves like
$$E(h) \approx C_1 h^p + C_2 \frac{\epsilon_{\text{mach}}}{h^d}.$$
Or equivalently, up to constants,
$$E(h) \sim h^p + \epsilon_{\text{mach}}\, h^{-d},$$
where the first term is truncation error and the second is floating point (round-off) error. E.g., for a first-order derivative, the optimal mesh is
$$h^* \sim \epsilon_{\text{mach}}^{1/(p+1)},$$
and in the more general case,
$$h^* \sim \epsilon_{\text{mach}}^{1/(p+d)}.$$
Changing from 32-bit to 64-bit Floating Point representations is like multiplying $\epsilon_{\text{mach}}$ in this expression by the factor $\epsilon_{64}/\epsilon_{32}$ (the constants should cancel out). See the table below for a break-down of the resulting change to the optimal mesh size.
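A small sketch of this trade-off, sweeping $h$ for the second-order central difference (again with $\sin$ at $x = 1$ as an assumed test case), shows the truncation-dominated and roundoff-dominated regimes and the predicted optimum near $\epsilon_{\text{mach}}^{1/3}$:

```python
import numpy as np

# Sweep h for the second-order central difference of sin at x = 1 (illustrative choice)
# to expose the two regimes: truncation-dominated (large h) and roundoff-dominated (small h).
x, exact = 1.0, np.cos(1.0)
for h in np.logspace(-1, -12, 12):
    approx = (np.sin(x + h) - np.sin(x - h)) / (2 * h)
    print(f"h = {h:.0e}   error = {abs(approx - exact):.2e}")

eps = np.finfo(np.float64).eps
print(f"predicted optimal h ~ eps**(1/3) = {eps ** (1 / 3):.1e}")
```

The error falls like $h^2$ until $h$ approaches $\epsilon_{\text{mach}}^{1/3} \approx 6\times 10^{-6}$, then grows again as round-off takes over.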
Floating Point Error Details
We wish to estimate
$$f'(x) \approx \frac{f(x+h) - f(x)}{h},$$
where each floating point evaluation of $f$ carries a relative error bounded by machine epsilon, $\tilde{f}(x) = f(x)\,(1 + \epsilon_x)$ with $|\epsilon_x| \le \epsilon_{\text{mach}}$. Thus, the numerator is computed with an absolute error of roughly $2\,\epsilon_{\text{mach}}\,|f(x)|$, which the division by $h$ amplifies to $2\,\epsilon_{\text{mach}}\,|f(x)|/h$.
Recalling the relative Condition Number,
$$\kappa_f(x) = \frac{|x\,f'(x)|}{|f(x)|},$$
and using a finite difference to approximate the numerator,
$$\kappa_f(x) \approx \frac{|x|\,|f(x+h) - f(x)|}{h\,|f(x)|},$$
we see the same structure: a cancellation-prone difference of nearly equal values divided by a small $h$.
The resulting error is a combination of truncation and floating point error:
$$E(h) \approx C_1 h^p + C_2\,\frac{\epsilon_{\text{mach}}\,|f(x)|}{h^d},$$
where for instance the second-order central difference for $f'$ has $d = 1$, $p = 2$, and $C_1 = |f'''(x)|/6$.
Taking $\frac{dE}{dh} = 0$ and solving for $h$ gives
$$h^* = \left(\frac{d\,C_2\,\epsilon_{\text{mach}}\,|f(x)|}{p\,C_1}\right)^{1/(p+d)} \sim \epsilon_{\text{mach}}^{1/(p+d)}.$$
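A short sketch of this minimization, using the model error $E(h) = C_1 h^p + C_2\,\epsilon_{\text{mach}}/h^d$ with assumed $O(1)$ constants $C_1 = C_2 = 1$, compares the closed-form $h^*$ to a grid search:

```python
import numpy as np

# Model total error E(h) = C1*h**p + C2*eps/h**d for a formula of truncation order p
# approximating a d-th derivative; C1 = C2 = 1 are assumed O(1) constants for illustration.
C1, C2 = 1.0, 1.0
eps = np.finfo(np.float64).eps

for d, p in [(1, 2), (2, 2), (1, 4)]:
    # Closed-form minimizer from setting p*C1*h**(p-1) = d*C2*eps/h**(d+1).
    h_star = (d * C2 * eps / (p * C1)) ** (1 / (p + d))
    # Numerical check on a log-spaced grid.
    hs = np.logspace(-12, -1, 2000)
    E = C1 * hs**p + C2 * eps / hs**d
    print(f"d={d}, p={p}: h* = {h_star:.1e} (grid search: {hs[np.argmin(E)]:.1e}), "
          f"min error ~ eps**(p/(p+d)) = {eps ** (p / (p + d)):.1e}")
```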
Practical Floating Point Error
How much does the best-case error grow when switching from Float64 to Float32? Deep Learning commonly relies on Float32 (or even Automatic Mixed Precision) to reduce memory requirements or allow more parameters. This decision often trickles into scientific machine learning, but at what accuracy cost? And for a given precision, what is the optimal mesh size $h$?
We can continue, and rearrange to compare the optimal mesh sizes of two precisions:
$$\frac{h^*_{32}}{h^*_{64}} = \left(\frac{\epsilon_{32}}{\epsilon_{64}}\right)^{1/(p+d)}.$$
For single and double precision, $\epsilon_{32} \approx 10^{-7}$ and $\epsilon_{64} \approx 10^{-15}$, so the ratio is roughly $10^{8}$. Let's suppose the constants $C_1$ and $C_2$ are comparable in both precisions; then the factor becomes about $10^{8/(p+d)}$. In other words, by using double precision, we can use a mesh roughly $10^{8/(p+d)}$ times finer before floating point error dominates.
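The predicted factors in the table below follow from this expression; a quick sketch, assuming the rough ratio $\epsilon_{32}/\epsilon_{64} \approx 10^{8}$ (the first entry lands near 460 rather than the rounded 500):

```python
import numpy as np

# Predicted shrink factor for the optimal mesh when moving from Float32 to Float64,
# h32/h64 ~ (eps32/eps64)**(1/(p+d)), using the rough epsilon ratio of 1e8 assumed above.
eps_ratio = 1e8
for d, p in [(1, 2), (2, 2), (3, 2), (1, 4), (2, 4)]:
    print(f"d={d}, p={p}: predicted factor ~ {eps_ratio ** (1 / (p + d)):.0f}")
```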
Example results for several derivative and truncation orders:
| Derivative Order $d$ | Truncation Order $p$ | Predicted Factor | Observed Factor | 32-bit |
|---|---|---|---|---|
| 1 | 2 | 500 | 924 | 727 |
| 2 | 2 | 100 | 150 | 126 |
| 3 | 2 | 40 | 42 | 92 |
| 1 | 4 | 40 | 41 | 92 |
| 2 | 4 | 22 | 29 | 41 |
Here, these "Factors" are the ratio of the optimal mesh size in single precision to the optimal mesh size in double precision, $h^*_{32}/h^*_{64}$, predicted as $(\epsilon_{32}/\epsilon_{64})^{1/(p+d)} \approx 10^{8/(p+d)}$.
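One way such an observed factor can be measured is to find the error-minimizing $h$ in each precision and take the ratio. A sketch, assuming the second-order central difference of $\sin$ at $x = 1$ as the test case:

```python
import numpy as np

def best_h(dtype, hs):
    # Optimal step for the 2nd-order central difference of sin at x = 1 (assumed test case),
    # with all arithmetic carried out in the given precision.
    x = dtype(1.0)
    errors = []
    for h in hs:
        h = dtype(h)
        approx = (np.sin(x + h) - np.sin(x - h)) / (dtype(2) * h)
        errors.append(abs(float(approx) - np.cos(1.0)))
    return hs[int(np.argmin(errors))]

hs = np.logspace(-9, -1, 400)
h32, h64 = best_h(np.float32, hs), best_h(np.float64, hs)
print(f"optimal h: Float32 ~ {h32:.1e}, Float64 ~ {h64:.1e}, observed factor ~ {h32 / h64:.0f}")
```

The measured ratio for this case comes out within an order of magnitude of the predicted $10^{8/3} \approx 500$; the exact value depends on the test function, the point, and how finely $h$ is sampled.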