SC-09 · Course 9

Nonlinear Equations and ODEs

Solve nonlinear equations f(x) = 0 and initial value problems for ordinary differential equations. From classical bisection to modern Runge-Kutta methods.

Learning Objectives
  • Implement bisection and fixed-point iteration
  • Apply Newton's method and understand its convergence
  • Use secant and hybrid methods when derivatives are unavailable
  • Implement Euler's method for ODE initial value problems
  • Apply Runge-Kutta methods for higher accuracy
  • Understand stability and stiffness in ODEs

1. Bisection Method

The simplest root-finding method: repeatedly halve an interval containing a root.

Algorithm Bisection Method

Input: f, interval [a, b] with f(a)f(b) < 0, tolerance \epsilon

  1. While b - a > \epsilon:
  2. Compute the midpoint c = (a + b)/2
  3. If f(c) = 0, return c
  4. If f(a)f(c) < 0, set b = c
  5. Else set a = c
  6. Return (a + b)/2
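
As a concrete sketch, the loop above translates directly into Python (the function name, tolerance default, and test call are illustrative choices, not part of the course material):

```python
def bisect(f, a, b, eps=1e-10):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    if f(a) * f(b) >= 0:
        raise ValueError("need f(a)*f(b) < 0")
    while b - a > eps:
        c = (a + b) / 2
        if f(c) == 0:            # exact root hit (rare in floating point)
            return c
        if f(a) * f(c) < 0:      # root lies in [a, c]
            b = c
        else:                    # root lies in [c, b]
            a = c
    return (a + b) / 2

# Root of x^3 - x - 1 on [1, 2]
print(bisect(lambda x: x**3 - x - 1, 1.0, 2.0))  # ~1.3247
```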

Theorem 9.1: Bisection Convergence

After n iterations, the error satisfies:

|x_n - r| \leq \frac{b - a}{2^{n+1}}

Convergence is linear with rate 1/2.

Remark:

Bisection is slow but reliable. It's often used to get a good initial guess for faster methods like Newton's method.

2. Fixed-Point Iteration

Definition 9.1: Fixed-Point Problem

Rewrite f(x) = 0 as x = g(x). A fixed point of g is a value x^* such that g(x^*) = x^*.

Iterate: x_{n+1} = g(x_n)

Theorem 9.2: Convergence Criterion

If g \in C^1 on an interval containing x^* and |g'(x^*)| < 1, then fixed-point iteration converges locally with rate |g'(x^*)|.

If |g'(x^*)| > 1, the iteration diverges from x^*.

Example: Choosing g(x)

To solve x^3 - x - 1 = 0 (root x^* \approx 1.3247):

  • g_1(x) = x^3 - 1: |g_1'(x^*)| = 3(x^*)^2 \approx 5.3 > 1 (diverges)
  • g_2(x) = (x + 1)^{1/3}: |g_2'(x^*)| \approx 0.19 < 1 (converges)
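
A minimal Python sketch of the iteration, applied to the convergent choice g_2 (the stopping rule and defaults are illustrative):

```python
def fixed_point(g, x0, eps=1e-10, max_iter=100):
    """Iterate x_{n+1} = g(x_n) until successive iterates agree to eps."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < eps:
            return x_next
        x = x_next
    raise RuntimeError("no convergence; is |g'(x*)| < 1?")

# The convergent choice g_2 from the example
print(fixed_point(lambda x: (x + 1) ** (1 / 3), 1.0))  # ~1.3247
```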

3. Newton's Method

Definition 9.2: Newton's Iteration

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}

Geometric interpretation: Follow the tangent line at (xn,f(xn))(x_n, f(x_n)) to the x-axis.

Theorem 9.3: Quadratic Convergence

If f \in C^2 near a simple root r and x_0 is close enough to r:

|e_{n+1}| \leq \frac{M}{2m} |e_n|^2

where M = \max|f''| and m = \min|f'| near r.

Example: Newton's Method

Find \sqrt{2} by solving f(x) = x^2 - 2 = 0:

x_{n+1} = x_n - \frac{x_n^2 - 2}{2x_n} = \frac{1}{2}\left(x_n + \frac{2}{x_n}\right)

  n    x_n        Error
  0    1.0        0.414
  1    1.5        0.086
  2    1.41667    0.0025
  3    1.41421    2 \times 10^{-6}
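
A short Python sketch that reproduces this computation; stopping on the step size |x_{n+1} - x_n| is one common illustrative choice:

```python
def newton(f, df, x0, eps=1e-12, max_iter=50):
    """Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < eps:       # step size as a proxy for the error
            return x
    raise RuntimeError("no convergence")

# sqrt(2) via f(x) = x^2 - 2, as in the table above
print(newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0))  # 1.4142135623730951
```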

Definition 9.3: Secant Method

Replace f'(x_n) with a finite difference approximation:

x_{n+1} = x_n - f(x_n) \frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})}

Convergence order: \phi = (1 + \sqrt{5})/2 \approx 1.618 (the golden ratio).
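
A matching secant sketch, assuming only function values (no derivatives) are available:

```python
def secant(f, x0, x1, eps=1e-12, max_iter=50):
    """Secant method: Newton's iteration with f' replaced by a difference quotient."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < eps:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    raise RuntimeError("no convergence")

print(secant(lambda x: x**2 - 2, 1.0, 2.0))  # ~1.4142135623730951
```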

4. Euler Method for ODEs

Consider the initial value problem: y' = f(t, y), y(t_0) = y_0.

Definition 9.4: Forward Euler Method

y_{n+1} = y_n + h f(t_n, y_n)

where h is the step size and t_{n+1} = t_n + h.

Theorem 9.4: Euler Error

Local truncation error: O(h^2)

Global error: O(h) (Euler is a first-order method)

Definition 9.5: Backward Euler (Implicit)

y_{n+1} = y_n + h f(t_{n+1}, y_{n+1})

Requires solving for y_{n+1} at each step. More stable for stiff problems.

Example: Euler Method

Solve y' = -y, y(0) = 1 with h = 0.1:

y_{n+1} = y_n + 0.1(-y_n) = 0.9 y_n

At t = 1: y_{10} = 0.9^{10} \approx 0.349 (exact: e^{-1} \approx 0.368)
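
The same computation as a minimal Python sketch:

```python
def euler(f, t0, y0, h, n_steps):
    """Forward Euler: y_{n+1} = y_n + h f(t_n, y_n)."""
    t, y = t0, y0
    for _ in range(n_steps):
        y += h * f(t, y)
        t += h
    return y

# y' = -y, y(0) = 1, h = 0.1: ten steps give 0.9^10
print(euler(lambda t, y: -y, 0.0, 1.0, 0.1, 10))  # 0.3487 (exact e^-1 = 0.3679)
```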

5. Runge-Kutta Methods

Runge-Kutta methods achieve higher accuracy by evaluating ff at multiple points within each step.

Definition 9.6: Second-Order Runge-Kutta (Midpoint)

k_1 = f(t_n, y_n)
k_2 = f\left(t_n + \frac{h}{2}, y_n + \frac{h}{2}k_1\right)
y_{n+1} = y_n + h k_2

This is a second-order method with local error O(h^3).

Definition 9.7: Classical Fourth-Order Runge-Kutta (RK4)

k_1 = f(t_n, y_n)
k_2 = f\left(t_n + \frac{h}{2}, y_n + \frac{h}{2}k_1\right)
k_3 = f\left(t_n + \frac{h}{2}, y_n + \frac{h}{2}k_2\right)
k_4 = f(t_n + h, y_n + h k_3)
y_{n+1} = y_n + \frac{h}{6}(k_1 + 2k_2 + 2k_3 + k_4)

Theorem 9.5: RK4 Error

RK4 is a fourth-order method: local error O(h^5), global error O(h^4).
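
A minimal RK4 sketch; running it on the earlier test problem y' = -y shows the accuracy gain over Euler at the same step size:

```python
def rk4_step(f, t, y, h):
    """One step of classical fourth-order Runge-Kutta."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Same test problem as the Euler example: y' = -y, y(0) = 1, h = 0.1
t, y = 0.0, 1.0
for _ in range(10):
    y = rk4_step(lambda t, y: -y, t, y, 0.1)
    t += 0.1
print(y)  # 0.36787944..., within ~1e-8 of e^-1, vs an error of ~0.02 for Euler
```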

Multistep Methods

Adams-Bashforth (explicit) and Adams-Moulton (implicit) use previous solution values:

y_{n+1} = y_n + h\sum_{j=0}^{k} b_j f_{n-j}

(shown in Adams-Bashforth form; the implicit Adams-Moulton formulas also include the unknown value f_{n+1})
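
As an illustration of the explicit case, here is a sketch of the two-step Adams-Bashforth method, bootstrapped with one step of the rk4_step function from the previous sketch (the bootstrap choice is illustrative):

```python
def ab2(f, t0, y0, h, n_steps):
    """Two-step Adams-Bashforth: y_{n+1} = y_n + (h/2)(3 f_n - f_{n-1})."""
    t1, y1 = t0 + h, rk4_step(f, t0, y0, h)   # one starting value from RK4
    f0, f1 = f(t0, y0), f(t1, y1)
    t, y = t1, y1
    for _ in range(n_steps - 1):
        y += h / 2 * (3 * f1 - f0)
        t += h
        f0, f1 = f1, f(t, y)
    return y

print(ab2(lambda t, y: -y, 0.0, 1.0, 0.1, 10))  # ~0.3694, between Euler and RK4
```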

Stiff Equations

Stiff ODEs have widely varying time scales, which force explicit methods to take tiny steps for stability rather than accuracy. Use implicit methods (backward Euler, BDF, implicit Runge-Kutta), whose stability regions are large enough that the step size can be chosen for accuracy alone; backward Euler, for instance, is A-stable. The sketch below illustrates the difference.
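
A small numerical illustration, using the linear test problem y' = \lambda y, for which backward Euler reduces to a division and needs no nonlinear solve:

```python
# Stiff linear test problem y' = lam*y, y(0) = 1, with lam = -50 and h = 0.1,
# so h*lam = -5. Exact y(1) = e^-50 ~ 1.9e-22: essentially zero.
lam, h = -50.0, 0.1

y = 1.0
for _ in range(10):        # forward Euler: multiply by (1 + h*lam) = -4
    y *= 1 + h * lam
print(y)                   # (-4)^10 ~ 1.0e6: explodes

y = 1.0
for _ in range(10):        # backward Euler: y_{n+1} = y_n / (1 - h*lam)
    y /= 1 - h * lam
print(y)                   # 6^-10 ~ 1.7e-8: damped, qualitatively correct
```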

Remark:

Modern ODE solvers use adaptive step size control, comparing solutions at different orders to estimate error and adjust h automatically.

Practice Quiz

Nonlinear Equations and ODEs Quiz
10 questions

  1. The bisection method guarantees convergence if: (Easy)
  2. Newton's method has convergence order: (Easy)
  3. The Euler method for y' = f(t, y) has local truncation error: (Easy)
  4. Fixed-point iteration x_{n+1} = g(x_n) converges if: (Medium)
  5. The secant method approximates Newton's method by: (Medium)
  6. The classical 4th-order Runge-Kutta method evaluates f how many times per step? (Medium)
  7. For a multiple root f(x) = (x - r)^m g(x) with m > 1, standard Newton's method: (Hard)
  8. An ODE method is called implicit if: (Hard)
  9. The order of the secant method convergence is: (Hard)
  10. Stiff ODEs are best solved with: (Hard)

Frequently Asked Questions

When should I use Newton's method vs. bisection?

Newton's method: When you have a good initial guess and can compute derivatives. Fast (quadratic) convergence, but it may diverge from a poor starting point.

Bisection: When you need guaranteed convergence or don't have derivatives. Slower (linear), but it always works if the initial bracket is valid.

How do I choose step size for ODE solvers?

Use adaptive step size control: estimate local error by comparing solutions at different orders, then adjust h to keep error within tolerance. Libraries like MATLAB's ode45 or SciPy's solve_ivp do this automatically.
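
For instance, a minimal SciPy call (the tolerances shown are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Adaptive Dormand-Prince (RK45) on y' = -y, y(0) = 1 over [0, 1]
sol = solve_ivp(lambda t, y: -y, t_span=(0.0, 1.0), y0=[1.0],
                method="RK45", rtol=1e-8, atol=1e-10)
print(sol.y[0, -1], np.exp(-1.0))  # both ~0.3678794
```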

What is a stiff ODE and how do I recognize one?

A stiff ODE has solution components with vastly different time scales. Signs include:

  • Explicit methods require extremely small step sizes
  • Jacobian has eigenvalues with very different magnitudes
  • The solution appears smooth but explicit methods still struggle

Solution: Use implicit methods like backward Euler or BDF.

How do I solve systems of nonlinear equations?

Generalize Newton's method: for F(\mathbf{x}) = \mathbf{0},

\mathbf{x}_{n+1} = \mathbf{x}_n - J^{-1}(\mathbf{x}_n) F(\mathbf{x}_n)

where J is the Jacobian matrix. In practice, solve the linear system J \Delta\mathbf{x} = -F for the update rather than forming J^{-1} explicitly.
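
A minimal NumPy sketch of this system version; the example system and starting point are illustrative:

```python
import numpy as np

def newton_system(F, J, x0, eps=1e-12, max_iter=50):
    """Newton for F(x) = 0: solve J(x_n) dx = -F(x_n), then x_{n+1} = x_n + dx."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))  # linear solve, not an explicit inverse
        x += dx
        if np.linalg.norm(dx) < eps:
            return x
    raise RuntimeError("no convergence")

# Illustrative system: circle x^2 + y^2 = 4 intersected with hyperbola xy = 1
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])
print(newton_system(F, J, [2.0, 0.5]))  # ~[1.9319, 0.5176]
```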

Why is RK4 so popular?

RK4 offers an excellent balance: fourth-order accuracy with only 4 function evaluations per step. It's accurate enough for most non-stiff problems, easy to implement, and has good stability properties. For more demanding problems, use adaptive RK methods like Dormand-Prince (ode45).