Numerical Differentiation
Approximate derivatives from discrete data using finite difference formulas. Understand the trade-off between truncation and rounding errors, and learn higher-order methods for improved accuracy.
- Derive finite difference formulas using Taylor series
- Understand forward, backward, and central differences
- Analyze truncation error and accuracy order
- Balance truncation and rounding errors for optimal step size
- Apply higher-order formulas for improved accuracy
- Use interpolation polynomials for derivative computation
1. Basic Finite Difference Formulas
The derivative is defined as a limit: $f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$. For numerical computation, we replace the limit with a finite step size $h$, giving finite difference formulas.
Definition 5.1: Forward Difference
$$f'(x) \approx \frac{f(x+h) - f(x)}{h}$$
Using Taylor expansion: $f(x+h) = f(x) + h f'(x) + \frac{h^2}{2} f''(\xi)$, so the quotient equals $f'(x) + \frac{h}{2} f''(\xi)$.
Error: $O(h)$ (first-order accurate).
Definition 5.2: Backward Difference
$$f'(x) \approx \frac{f(x) - f(x-h)}{h}$$
Error: $O(h)$ (first-order accurate).
Definition 5.3: Central Difference
$$f'(x) \approx \frac{f(x+h) - f(x-h)}{2h}$$
Error: $O(h^2)$ (second-order accurate).
Theorem 5.1: Central Difference Derivation
From Taylor series:
$$f(x+h) = f(x) + h f'(x) + \frac{h^2}{2} f''(x) + \frac{h^3}{6} f'''(x) + O(h^4)$$
$$f(x-h) = f(x) - h f'(x) + \frac{h^2}{2} f''(x) - \frac{h^3}{6} f'''(x) + O(h^4)$$
Subtracting the two expansions (the even-order terms cancel) and dividing by $2h$ gives the central difference with error $\frac{h^2}{6} f'''(\xi) = O(h^2)$.
Example: Comparing Difference Formulas
For $f(x) = e^x$ at $x = 0$ with $h = 0.1$:
| Method | Formula | Approximation | Error |
|---|---|---|---|
| Forward | $\frac{e^{0.1} - e^{0}}{0.1}$ | 1.0517 | 0.0517 |
| Backward | $\frac{e^{0} - e^{-0.1}}{0.1}$ | 0.9516 | 0.0484 |
| Central | $\frac{e^{0.1} - e^{-0.1}}{0.2}$ | 1.0017 | 0.0017 |
Exact value: $f'(0) = e^0 = 1$. The central difference is most accurate.
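As a quick sanity check, here is a small Python sketch (assuming NumPy is available) that reproduces the table above for $f(x) = e^x$ at $x = 0$ with $h = 0.1$:

```python
import numpy as np

def forward_diff(f, x, h):
    """Forward difference: first-order accurate, O(h)."""
    return (f(x + h) - f(x)) / h

def backward_diff(f, x, h):
    """Backward difference: first-order accurate, O(h)."""
    return (f(x) - f(x - h)) / h

def central_diff(f, x, h):
    """Central difference: second-order accurate, O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

f, x, h = np.exp, 0.0, 0.1
exact = 1.0  # d/dx e^x at x = 0
for name, rule in [("Forward", forward_diff),
                   ("Backward", backward_diff),
                   ("Central", central_diff)]:
    approx = rule(f, x, h)
    print(f"{name:8s} {approx:.4f}  error = {abs(approx - exact):.4f}")
```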
2. Second Derivative Formula
Definition 5.4: Central Difference for Second Derivative
$$f''(x) \approx \frac{f(x+h) - 2f(x) + f(x-h)}{h^2}$$
Error: $O(h^2)$.
Theorem 5.2: Derivation
Adding the Taylor expansions for $f(x+h)$ and $f(x-h)$:
$$f(x+h) + f(x-h) = 2f(x) + h^2 f''(x) + \frac{h^4}{12} f^{(4)}(x) + O(h^6)$$
Solving for $f''(x)$ gives the formula, with truncation error $\frac{h^2}{12} f^{(4)}(\xi) = O(h^2)$.
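A minimal sketch of the second-derivative formula, again assuming NumPy and testing against $f(x) = e^x$ at $x = 0$, where the exact value is 1:

```python
import numpy as np

def second_central_diff(f, x, h):
    """Central difference for f''(x), O(h^2) truncation error."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

approx = second_central_diff(np.exp, 0.0, 0.1)
print(approx, abs(approx - 1.0))  # ~1.000833, error ~8.3e-4 (about h^2/12)
```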
Remark:
Higher derivatives can be approximated similarly, but numerical differentiation becomes increasingly unstable. Whenever possible, use analytical derivatives or automatic differentiation instead.
3. Error Analysis and Optimal Step Size
Numerical differentiation involves two error sources: truncation error (from the approximation) and rounding error (from finite-precision arithmetic).
Theorem 5.3: Total Error
For the forward difference with rounding error $\epsilon$ in each function value:
$$E(h) \approx \frac{M h}{2} + \frac{2\epsilon}{h}, \qquad M = \max |f''|$$
The first term is the truncation error; the second is the amplified rounding error.
Theorem 5.4: Optimal Step Size
Minimizing the total error by differentiating with respect to $h$ and setting $E'(h) = \frac{M}{2} - \frac{2\epsilon}{h^2} = 0$:
$$h_{\text{opt}} = 2\sqrt{\frac{\epsilon}{M}} \approx \sqrt{\epsilon}$$
For double precision ($\epsilon \approx 10^{-16}$), the optimal step is $h \approx 10^{-8}$.
Example: Optimal Step Size
For the central difference (truncation error $O(h^2)$):
$$E(h) \approx \frac{M h^2}{6} + \frac{\epsilon}{h}, \qquad M = \max|f'''| \quad \Longrightarrow \quad h_{\text{opt}} = \left(\frac{3\epsilon}{M}\right)^{1/3} \approx \epsilon^{1/3} \approx 10^{-5}$$
Note:
Key insight: Making $h$ too small actually increases the error! This is why numerical differentiation is called ill-conditioned.
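The trade-off is easy to observe numerically. The sketch below (NumPy assumed) sweeps $h$ over several decades for the forward difference of $e^x$ at $x = 0$; the error shrinks until $h$ reaches roughly $10^{-8}$, then grows again as rounding dominates:

```python
import numpy as np

f, x, exact = np.exp, 0.0, 1.0
for k in range(1, 16):
    h = 10.0**(-k)
    err = abs((f(x + h) - f(x)) / h - exact)
    print(f"h = 1e-{k:02d}   error = {err:.2e}")
# The error is smallest near h ~ 1e-8 (about sqrt(machine epsilon))
# and grows again for smaller h as rounding error takes over.
```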
4. Higher-Order Formulas
Definition 5.5: Five-Point Central Difference
$$f'(x) \approx \frac{-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)}{12h}$$
Error: $O(h^4)$.
Definition 5.6: Five-Point Second Derivative
$$f''(x) \approx \frac{-f(x+2h) + 16f(x+h) - 30f(x) + 16f(x-h) - f(x-2h)}{12h^2}$$
Error: $O(h^4)$.
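A sketch of both five-point formulas (NumPy assumed), checked on $f(x) = e^x$ at $x = 0$, where the first and second derivatives both equal 1:

```python
import numpy as np

def five_point_first(f, x, h):
    """Five-point central difference for f'(x), O(h^4)."""
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

def five_point_second(f, x, h):
    """Five-point central difference for f''(x), O(h^4)."""
    return (-f(x + 2*h) + 16*f(x + h) - 30*f(x)
            + 16*f(x - h) - f(x - 2*h)) / (12 * h**2)

h = 0.1
print(abs(five_point_first(np.exp, 0.0, h) - 1.0))   # on the order of 1e-6
print(abs(five_point_second(np.exp, 0.0, h) - 1.0))  # on the order of 1e-6
```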
Theorem 5.5: Richardson Extrapolation
If $D(h)$ approximates $f'(x)$ with error $O(h^2)$, then:
$$D_{\text{new}} = \frac{4\,D(h/2) - D(h)}{3}$$
approximates $f'(x)$ with error $O(h^4)$.
Example: Richardson Extrapolation Applied
For the central difference applied to $f(x) = e^x$ at $x = 0$ with $h = 0.1$:
$$D(0.1) = 1.0016675, \qquad D(0.05) = 1.0004167, \qquad \frac{4\,D(0.05) - D(0.1)}{3} \approx 0.9999998$$
The extrapolated value is much closer to the exact value of 1.
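A short sketch of Richardson extrapolation layered on the central difference (NumPy assumed), reproducing the numbers above:

```python
import numpy as np

def central_diff(f, x, h):
    """Central difference, O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(f, x, h):
    """Combine D(h) and D(h/2) to cancel the O(h^2) term, giving O(h^4)."""
    return (4 * central_diff(f, x, h / 2) - central_diff(f, x, h)) / 3

print(central_diff(np.exp, 0.0, 0.1))  # ~1.0016675
print(richardson(np.exp, 0.0, 0.1))    # ~0.9999998
```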
5. Interpolation-Based Differentiation
Given data at unequally-spaced points, construct an interpolating polynomial and differentiate it.
Definition 5.7: Differentiation via Interpolation
Given data $(x_0, f_0), (x_1, f_1), \ldots, (x_n, f_n)$, let $p_n(x)$ be the interpolating polynomial. Then:
$$f'(x) \approx p_n'(x)$$
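In practice this can be done by fitting a polynomial through the data and differentiating it. The sketch below uses NumPy's Polynomial class on unequally-spaced samples of $e^x$; the sample points are an illustrative choice, not from the text:

```python
import numpy as np
from numpy.polynomial import Polynomial

# Unequally spaced sample points of f(x) = e^x (illustrative data).
xs = np.array([0.0, 0.07, 0.15, 0.30])
ys = np.exp(xs)

# With n+1 points and degree n, the least-squares fit is the interpolant.
p = Polynomial.fit(xs, ys, deg=3)
dp = p.deriv()        # differentiate the interpolating polynomial
print(dp(0.0))        # close to exp(0) = 1
```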
Example: Three-Point Formula
For equally-spaced points $x_0$, $x_1 = x_0 + h$, $x_2 = x_0 + 2h$:
$$f'(x_0) \approx \frac{-3f(x_0) + 4f(x_1) - f(x_2)}{2h}, \qquad f'(x_1) \approx \frac{f(x_2) - f(x_0)}{2h}, \qquad f'(x_2) \approx \frac{f(x_0) - 4f(x_1) + 3f(x_2)}{2h}$$
All three formulas have error $O(h^2)$.
Remark:
Endpoint formulas: At boundaries where central differences aren't available, use one-sided formulas. The three-point forward difference achieves $O(h^2)$ accuracy at the left endpoint.
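A sketch of the one-sided three-point formulas at the two endpoints of a uniform grid (NumPy assumed), tested on $e^x$:

```python
import numpy as np

def three_point_left(f, x0, h):
    """One-sided formula at the left endpoint, O(h^2)."""
    return (-3*f(x0) + 4*f(x0 + h) - f(x0 + 2*h)) / (2 * h)

def three_point_right(f, x2, h):
    """One-sided formula at the right endpoint, O(h^2)."""
    return (f(x2 - 2*h) - 4*f(x2 - h) + 3*f(x2)) / (2 * h)

h = 0.1
print(abs(three_point_left(np.exp, 0.0, h) - 1.0))           # error is O(h^2)
print(abs(three_point_right(np.exp, 1.0, h) - np.exp(1.0)))  # error is O(h^2)
```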
Frequently Asked Questions
Why is numerical differentiation considered unstable?
Division by a small $h$ amplifies any error in the function values. If $f(x+h)$ and $f(x)$ each have rounding error $\epsilon$, the error in $\frac{f(x+h) - f(x)}{h}$ is roughly $\frac{2\epsilon}{h}$.
As $h \to 0$, this rounding error grows without bound, while the truncation error decreases. There's an optimal $h$ that minimizes the total error.
When should I use forward vs. central differences?
Central difference: Default choice for interior points (higher accuracy).
Forward/backward difference: Use at boundaries or in real-time applications where future data isn't available.
How do I choose the step size h?
For double precision: use $h \approx 10^{-8}$ for $O(h)$ methods and $h \approx 10^{-5}$ for $O(h^2)$ methods. These are rough guidelines; the optimal $h$ depends on the function's derivatives.
How does automatic differentiation compare?
Automatic differentiation (AD) computes exact derivatives (to machine precision) by applying the chain rule systematically. It avoids both truncation error (unlike finite differences) and symbolic complexity (unlike symbolic differentiation). AD is preferred when available.
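As an illustration (the library choice is mine, not the text's), JAX provides automatic differentiation in Python via jax.grad:

```python
import jax
import jax.numpy as jnp

def f(x):
    return jnp.exp(jnp.sin(x)) * x**2  # an arbitrary smooth test function

df = jax.grad(f)   # derivative computed by AD, exact to machine precision
print(df(1.0))     # no step size h to choose, no truncation error
```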
Can I use complex step differentiation?
Yes! The complex-step method uses $f'(x) \approx \frac{\operatorname{Im}[f(x + ih)]}{h}$, which has $O(h^2)$ accuracy without subtractive cancellation. You can use a very small $h$ (e.g., $10^{-20}$) without rounding-error issues.
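A minimal sketch of the complex-step method using NumPy's complex arithmetic; the test function is an illustrative choice:

```python
import numpy as np

def complex_step(f, x, h=1e-20):
    """Complex-step derivative: Im[f(x + ih)] / h, no subtractive cancellation."""
    return np.imag(f(x + 1j * h)) / h

# Works for functions built from operations that accept complex arguments.
f = lambda x: np.exp(x) * np.sin(x)
exact = np.exp(1.0) * (np.sin(1.0) + np.cos(1.0))  # product rule at x = 1
print(complex_step(f, 1.0), exact)  # agree to machine precision
```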