MathIsimple
SC-05
Course 5

Numerical Differentiation

Approximate derivatives from discrete data using finite difference formulas. Understand the trade-off between truncation and rounding errors, and learn higher-order methods for improved accuracy.

Learning Objectives
  • Derive finite difference formulas using Taylor series
  • Understand forward, backward, and central differences
  • Analyze truncation error and accuracy order
  • Balance truncation and rounding errors for optimal step size
  • Apply higher-order formulas for improved accuracy
  • Use interpolation polynomials for derivative computation

1. Basic Finite Difference Formulas

The derivative is defined as a limit: $f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$. For numerical computation, we use a finite $h$.

Definition 5.1: Forward Difference

f'(x) \approx \frac{f(x+h) - f(x)}{h}

Using Taylor expansion: $f(x+h) = f(x) + hf'(x) + \frac{h^2}{2}f''(\xi)$

Error: $\frac{h}{2}f''(\xi) = O(h)$

Definition 5.2: Backward Difference

f'(x) \approx \frac{f(x) - f(x-h)}{h}

Error: $O(h)$

Definition 5.3: Central Difference

f'(x) \approx \frac{f(x+h) - f(x-h)}{2h}

Error: $-\frac{h^2}{6}f'''(\xi) = O(h^2)$

Theorem 5.1: Central Difference Derivation

From Taylor series:

f(x+h) = f(x) + hf'(x) + \frac{h^2}{2}f''(x) + \frac{h^3}{6}f'''(x) + O(h^4)
f(x-h) = f(x) - hf'(x) + \frac{h^2}{2}f''(x) - \frac{h^3}{6}f'''(x) + O(h^4)

Subtracting the two expansions and dividing by $2h$ gives the central difference with $O(h^2)$ error.

Example: Comparing Difference Formulas

For $f(x) = e^x$ at $x = 0$ with $h = 0.1$:

Method   | Formula                  | Approximation | Error
---------|--------------------------|---------------|-------
Forward  | (e^{0.1} - 1)/0.1        | 1.0517        | 0.0517
Backward | (1 - e^{-0.1})/0.1       | 0.9516        | 0.0484
Central  | (e^{0.1} - e^{-0.1})/0.2 | 1.0017        | 0.0017

Exact value: $f'(0) = 1$. The central difference is the most accurate.
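These numbers are easy to reproduce in a few lines (a minimal sketch; the helper names `forward`, `backward`, and `central` are my own):

```python
import math

def forward(f, x, h):
    # (f(x+h) - f(x)) / h, O(h) truncation error
    return (f(x + h) - f(x)) / h

def backward(f, x, h):
    # (f(x) - f(x-h)) / h, O(h) truncation error
    return (f(x) - f(x - h)) / h

def central(f, x, h):
    # (f(x+h) - f(x-h)) / (2h), O(h^2) truncation error
    return (f(x + h) - f(x - h)) / (2 * h)

# Reproduce the table above: f = exp at x = 0, h = 0.1, exact f'(0) = 1
x, h = 0.0, 0.1
for name, rule in [("forward", forward), ("backward", backward), ("central", central)]:
    approx = rule(math.exp, x, h)
    print(f"{name:8s} {approx:.4f}  error = {abs(approx - 1.0):.4f}")
```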

2. Second Derivative Formula

Definition 5.4: Central Difference for Second Derivative

f''(x) \approx \frac{f(x+h) - 2f(x) + f(x-h)}{h^2}

Error: $-\frac{h^2}{12}f^{(4)}(\xi) = O(h^2)$

Theorem 5.2: Derivation

Adding the Taylor expansions for $f(x+h)$ and $f(x-h)$:

f(x+h) + f(x-h) = 2f(x) + h^2 f''(x) + \frac{h^4}{12}f^{(4)}(\xi) + O(h^6)

Solving for $f''(x)$ gives the formula.
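A quick numerical check of the formula (a sketch; `second_central` is an illustrative name), using $f = \sin$, whose second derivative at $x = 1$ is $-\sin(1)$:

```python
import math

def second_central(f, x, h):
    # (f(x+h) - 2 f(x) + f(x-h)) / h^2, with O(h^2) truncation error
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

# (sin)'' = -sin, so the exact value at x = 1 is -sin(1)
approx = second_central(math.sin, 1.0, 1e-3)
print(approx, -math.sin(1.0))
```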

Remark:

Higher derivatives can be approximated similarly, but numerical differentiation becomes increasingly unstable. Whenever possible, use analytical derivatives or automatic differentiation instead.

3. Error Analysis and Optimal Step Size

Numerical differentiation involves two error sources: truncation error (from the approximation) and rounding error (from finite-precision arithmetic).

Theorem 5.3: Total Error

For forward difference with rounding error ϵ\epsilon in ff:

\text{Total Error} \approx \underbrace{\frac{h}{2}|f''(\xi)|}_{\text{truncation}} + \underbrace{\frac{2\epsilon}{h}}_{\text{rounding}}

Theorem 5.4: Optimal Step Size

Minimizing the total error by differentiating with respect to $h$:

h_{\text{opt}} = 2\sqrt{\frac{\epsilon}{|f''|}} \approx \sqrt{\epsilon}

For double precision ($\epsilon \approx 10^{-16}$), the optimal $h \approx 10^{-8}$.

Example: Optimal Step Size

For the central difference (error $O(h^2)$):

\text{Total Error} \approx \frac{h^2}{6}|f'''| + \frac{2\epsilon}{h}
h_{\text{opt}} \approx \left(\frac{3\epsilon}{|f'''|}\right)^{1/3} \approx \epsilon^{1/3} \approx 10^{-5}

Note:

Key insight: Making $h$ too small actually increases the error! This is why numerical differentiation is called ill-conditioned.
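The U-shaped error curve behind this insight can be observed directly (a sketch; `forward_error` is a made-up helper that measures the forward-difference error for $f = e^x$ at $x = 0$):

```python
import math

def forward_error(h):
    # |forward difference - exact derivative| for f = exp at x = 0,
    # where the exact derivative is 1
    return abs((math.exp(h) - 1.0) / h - 1.0)

# The error falls as h shrinks (truncation dominates) until roughly
# h ~ 1e-8, then rises again (rounding dominates).
for h in [1e-2, 1e-4, 1e-6, 1e-8, 1e-10, 1e-12]:
    print(f"h = {h:.0e}   error = {forward_error(h):.1e}")
```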

4. Higher-Order Formulas

Definition 5.5: Five-Point Central Difference

f'(x) \approx \frac{-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)}{12h}

Error: $O(h^4)$

Definition 5.6: Five-Point Second Derivative

f''(x) \approx \frac{-f(x+2h) + 16f(x+h) - 30f(x) + 16f(x-h) - f(x-2h)}{12h^2}

Error: $O(h^4)$
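Both five-point formulas in a short sketch (illustrative helper names; for $f = e^x$ at $x = 0$ the exact first and second derivatives both equal 1):

```python
import math

def d1_five_point(f, x, h):
    # First derivative, O(h^4) truncation error
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

def d2_five_point(f, x, h):
    # Second derivative, O(h^4) truncation error
    return (-f(x + 2*h) + 16*f(x + h) - 30*f(x)
            + 16*f(x - h) - f(x - 2*h)) / (12 * h**2)

# Even with a coarse h = 0.1, both results are within ~1e-5 of 1
h = 0.1
print(d1_five_point(math.exp, 0.0, h))
print(d2_five_point(math.exp, 0.0, h))
```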

Theorem 5.5: Richardson Extrapolation

If $D(h)$ approximates $f'(x)$ with error $O(h^2)$, then:

D^*(h) = \frac{4D(h/2) - D(h)}{3}

approximates $f'(x)$ with error $O(h^4)$.

Example: Richardson Extrapolation Applied

For the central difference $D(h) = \frac{f(x+h) - f(x-h)}{2h}$, applied to $f(x) = e^x$ at $x = 0$:

D(0.2) = 1.0067

D(0.1) = 1.0017

D^* = \frac{4(1.0017) - 1.0067}{3} = 1.00003

The extrapolated value is much closer to the exact value of 1.
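The same computation in code (a sketch; `central` and `richardson` are illustrative names). Note that the 1.00003 above comes from the rounded intermediate values; at full precision the extrapolated value sits about $3 \times 10^{-6}$ below 1:

```python
import math

def central(f, x, h):
    # Central difference, O(h^2) truncation error
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(f, x, h):
    # D*(h) = (4 D(h/2) - D(h)) / 3 cancels the h^2 error term,
    # leaving an O(h^4) approximation
    return (4 * central(f, x, h / 2) - central(f, x, h)) / 3

x, h = 0.0, 0.2
print(f"D(0.2) = {central(math.exp, x, h):.4f}")      # 1.0067
print(f"D(0.1) = {central(math.exp, x, h / 2):.4f}")  # 1.0017
print(f"D*     = {richardson(math.exp, x, h):.6f}")   # within ~3e-6 of 1
```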

5. Interpolation-Based Differentiation

Given data at unequally-spaced points, construct an interpolating polynomial and differentiate it.

Definition 5.7: Differentiation via Interpolation

Given data $(x_0, f_0), \ldots, (x_n, f_n)$, let $P_n(x)$ be the interpolating polynomial. Then:

f'(x) \approx P_n'(x)

Example: Three-Point Formula

For equally-spaced points $x_0$, $x_1 = x_0 + h$, $x_2 = x_0 + 2h$:

f'(x_0) \approx \frac{-3f_0 + 4f_1 - f_2}{2h}
f'(x_1) \approx \frac{-f_0 + f_2}{2h}
f'(x_2) \approx \frac{f_0 - 4f_1 + 3f_2}{2h}

All have error $O(h^2)$.
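A sketch of the three formulas on sample data (illustrative helper names; the data come from $f = e^x$, so each exact derivative is known):

```python
import math

def d_left(f0, f1, f2, h):
    # One-sided formula at x0, O(h^2) error
    return (-3*f0 + 4*f1 - f2) / (2*h)

def d_mid(f0, f2, h):
    # Central formula at x1, O(h^2) error
    return (f2 - f0) / (2*h)

def d_right(f0, f1, f2, h):
    # One-sided formula at x2, O(h^2) error
    return (f0 - 4*f1 + 3*f2) / (2*h)

# Sample f = exp at x0 = 0, x1 = 0.1, x2 = 0.2
h = 0.1
f0, f1, f2 = math.exp(0.0), math.exp(0.1), math.exp(0.2)
print(d_left(f0, f1, f2, h), math.exp(0.0))   # approximates f'(0) = 1
print(d_mid(f0, f2, h), math.exp(0.1))        # approximates f'(0.1)
print(d_right(f0, f1, f2, h), math.exp(0.2))  # approximates f'(0.2)
```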

Remark:

Endpoint formulas: At boundaries where central differences aren't available, use one-sided formulas. The three-point forward difference $\frac{-3f_0 + 4f_1 - f_2}{2h}$ achieves $O(h^2)$ accuracy at the left endpoint.

Practice Quiz

Numerical Differentiation Quiz (10 questions)

1. (Easy) The forward difference approximation $f'(x) \approx \frac{f(x+h) - f(x)}{h}$ has error:
2. (Easy) The central difference $f'(x) \approx \frac{f(x+h) - f(x-h)}{2h}$ has error:
3. (Easy) For the second derivative, the formula $f''(x) \approx \frac{f(x+h) - 2f(x) + f(x-h)}{h^2}$ has error:
4. (Medium) Why is numerical differentiation considered ill-conditioned?
5. (Medium) For optimal accuracy in numerical differentiation, the step size $h$ should:
6. (Medium) The five-point formula for $f'(x)$ using $f(x-2h), f(x-h), f(x), f(x+h), f(x+2h)$ has error:
7. (Hard) Richardson extrapolation can improve a derivative approximation from $O(h^2)$ to:
8. (Medium) The backward difference $f'(x) \approx \frac{f(x) - f(x-h)}{h}$ is preferred when:
9. (Hard) Using polynomial interpolation through $(x_0, f_0), \ldots, (x_n, f_n)$, the derivative $f'(x_k)$ can be computed by:
10. (Hard) For equally-spaced data, the error in computing $f'(x_0)$ using Lagrange interpolation on $n+1$ points is:

Frequently Asked Questions

Why is numerical differentiation considered unstable?

Division by a small $h$ amplifies any error in the function values. If $f(x+h)$ and $f(x)$ each carry a rounding error $\epsilon$, the error in $(f(x+h) - f(x))/h$ is roughly $2\epsilon/h$.

As $h \to 0$, this rounding error grows without bound, while the truncation error decreases. There is an optimal $h$ that minimizes the total error.

When should I use forward vs. central differences?

Central difference: Default choice for interior points (higher accuracy).

Forward/backward difference: Use at boundaries or in real-time applications where future data isn't available.

How do I choose the step size h?

For double precision: use $h \approx 10^{-8}$ for $O(h)$ methods and $h \approx 10^{-5}$ for $O(h^2)$ methods. These are rough guidelines; the optimal $h$ depends on the function's derivatives.

How does automatic differentiation compare?

Automatic differentiation (AD) computes exact derivatives (to machine precision) by applying the chain rule systematically. It avoids both truncation error (unlike finite differences) and symbolic complexity (unlike symbolic differentiation). AD is preferred when available.
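As a concrete illustration of the idea (purely a toy, not a real AD library), forward-mode AD can be sketched with dual numbers, where a second component carries the derivative exactly through each operation:

```python
import math
from dataclasses import dataclass

@dataclass
class Dual:
    # Dual number val + der*eps with eps^2 = 0; der carries the derivative
    val: float
    der: float

    def __add__(self, other):
        return Dual(self.val + other.val, self.der + other.der)

    def __mul__(self, other):
        # Product rule, applied exactly: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

def dual_exp(d):
    # Chain rule: (exp f)' = exp(f) * f'
    e = math.exp(d.val)
    return Dual(e, e * d.der)

# d/dx [x * exp(x)] at x = 1; exact answer is 2e, with no truncation error
x = Dual(1.0, 1.0)          # seed the input derivative with 1
y = x * dual_exp(x)
print(y.der, 2 * math.e)
```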

Can I use complex step differentiation?

Yes! The complex step method uses $f'(x) \approx \text{Im}(f(x + ih))/h$, which has $O(h^2)$ accuracy without subtractive cancellation. You can use a very small $h$ (like $10^{-100}$) without rounding error issues.
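A minimal sketch using Python's `cmath` (the function must accept complex arguments; `complex_step` is an illustrative name):

```python
import cmath
import math

def complex_step(f, x, h=1e-100):
    # f'(x) ~ Im(f(x + ih)) / h: no subtraction of nearly equal
    # values, so h can be made tiny without rounding trouble
    return f(complex(x, h)).imag / h

print(complex_step(cmath.exp, 0.0), 1.0)            # exact derivative is 1
print(complex_step(cmath.sin, 1.0), math.cos(1.0))  # exact derivative is cos(1)
```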