
Linear Systems of ODEs

Learn to solve systems of linear differential equations using matrix exponentials, fundamental matrices, Jordan canonical form, and the powerful Liouville formula.

Learning Objectives

  • Understand and compute matrix exponentials
  • Construct fundamental matrices for linear systems
  • Apply Jordan form to solve systems
  • Use the Liouville formula for Wronskian evolution
  • Solve systems with distinct and repeated eigenvalues
  • Handle complex eigenvalues with real solutions
  • Apply variation of parameters to nonhomogeneous systems
  • Understand the solution space structure

1. Matrix Exponential

The matrix exponential is fundamental to solving linear systems of ODEs. It generalizes the scalar exponential e^{at} to matrices.

Definition 6.1: Matrix Exponential

For a square matrix A \in M_n(\mathbb{R}), the matrix exponential is defined by:

e^A = \sum_{k=0}^{\infty} \frac{A^k}{k!} = I + A + \frac{A^2}{2!} + \frac{A^3}{3!} + \cdots

This series converges for any matrix A.
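The definition itself suggests a computation: truncate the series. A minimal sketch (assuming NumPy and SciPy are available; expm_series is my own name) comparing the partial sums against SciPy's scipy.linalg.expm:

```python
import numpy as np
from scipy.linalg import expm

def expm_series(A, terms=30):
    """Approximate e^A by the partial sum I + A + A^2/2! + ... ."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k          # builds A^k / k! incrementally
        result = result + term
    return result

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
print(np.allclose(expm_series(A), expm(A)))  # True
```

(In practice one uses expm directly; the truncated series is shown only to illustrate the definition.)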

Theorem 6.1: Convergence of Matrix Exponential

For any matrix A \in M_n, the series \sum_{k=0}^{\infty} \frac{A^k}{k!} converges absolutely, and the series defining e^{At} converges uniformly in t on any finite interval.

Proof of Theorem 6.1:

Using any submultiplicative matrix norm \|\cdot\| (so that \|A^k\| \leq \|A\|^k):

\left\|\sum_{k=0}^{\infty} \frac{A^k}{k!}\right\| \leq \sum_{k=0}^{\infty} \frac{\|A\|^k}{k!} = e^{\|A\|} < \infty

For uniform convergence on [0, T]: each term satisfies \left\|\frac{A^k t^k}{k!}\right\| \leq \frac{(\|A\|T)^k}{k!}, so the Weierstrass M-test gives uniform convergence of the series; in particular \|e^{At}\| \leq e^{\|A\|T}. ∎

Theorem 6.2: Properties of Matrix Exponential

1. If AB = BA, then e^{A+B} = e^A e^B (see the numerical check after this list)

2. e^A is always invertible, with (e^A)^{-1} = e^{-A}

3. If P is invertible, then e^{PAP^{-1}} = Pe^A P^{-1}

4. \frac{d}{dt}e^{At} = Ae^{At} = e^{At}A
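A quick numerical spot-check of properties 1 and 2 (a sketch assuming NumPy/SciPy; B is chosen as a polynomial in A so that AB = BA holds by construction):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0], [0.0, 3.0]])
B = 2 * A + np.eye(2)                                 # commutes with A

print(np.allclose(expm(A + B), expm(A) @ expm(B)))    # property 1: True
print(np.allclose(np.linalg.inv(expm(A)), expm(-A)))  # property 2: True
```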

Example 6.1: Matrix Exponential of Diagonal Matrix

Problem: Compute e^{At} for A = \text{diag}(a_1, \ldots, a_n).

Solution:

For a diagonal matrix, A^k = \text{diag}(a_1^k, \ldots, a_n^k). Therefore:

e^{At} = \sum_{k=0}^{\infty} \frac{A^k t^k}{k!} = \text{diag}\left(\sum_{k=0}^{\infty}\frac{(a_1 t)^k}{k!}, \ldots, \sum_{k=0}^{\infty}\frac{(a_n t)^k}{k!}\right)
= \text{diag}(e^{a_1 t}, \ldots, e^{a_n t})
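A one-line sanity check of this formula (a sketch assuming NumPy/SciPy):

```python
import numpy as np
from scipy.linalg import expm

a, t = np.array([1.0, -2.0, 0.5]), 0.7
# e^{diag(a) t} should be the diagonal matrix of scalar exponentials
print(np.allclose(expm(np.diag(a) * t), np.diag(np.exp(a * t))))  # True
```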

Example 6.2: Matrix Exponential with Nilpotent Part

Problem: Compute e^{At} for A = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}.

Solution:

Write A = 2I + N where N = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}. Since N^2 = 0 (N is nilpotent):

e^{Nt} = I + Nt = \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}

Since 2I commutes with N:

e^{At} = e^{2It} e^{Nt} = e^{2t} \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} e^{2t} & te^{2t} \\ 0 & e^{2t} \end{pmatrix}
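The closed form can be verified numerically (a sketch assuming NumPy/SciPy):

```python
import numpy as np
from scipy.linalg import expm

A, t = np.array([[2.0, 1.0], [0.0, 2.0]]), 1.3
closed_form = np.exp(2 * t) * np.array([[1.0, t], [0.0, 1.0]])
print(np.allclose(expm(A * t), closed_form))  # True
```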

2. Fundamental Matrix

Definition 6.2: Fundamental Matrix

For the homogeneous system \frac{dx}{dt} = Ax, a fundamental matrix \Phi(t) is an n \times n matrix whose columns are n linearly independent solutions.

The standard fundamental matrix satisfies \Phi(0) = I.

Theorem 6.3: Standard Fundamental Matrix

For the constant coefficient system \frac{dx}{dt} = Ax, the matrix \Phi(t) = e^{At} is the standard fundamental matrix.

Proof of Theorem 6.3:

We verify: (1) \Phi'(t) = Ae^{At} = A\Phi(t) ✓, and (2) \Phi(0) = e^0 = I ✓.

By the Liouville formula (Theorem 6.4 below), \det(\Phi(t)) = \det(I) \cdot e^{\text{tr}(A)t} \neq 0, so the columns are linearly independent. ∎

Theorem 6.4: Liouville Formula

For the system \frac{dx}{dt} = A(t)x with fundamental matrix \Phi(t), the Wronskian W(t) = \det(\Phi(t)) satisfies:

W(t) = W(t_0) \exp\left(\int_{t_0}^{t} \text{tr}(A(s))\,ds\right)
Remark:

The Liouville formula shows that the Wronskian is either always zero or never zero. This gives an elegant criterion for linear independence of solutions.
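For constant A with \Phi(t) = e^{At} and t_0 = 0, the formula reduces to \det(e^{At}) = e^{\text{tr}(A)t}. A numerical check of this special case (a sketch assuming NumPy/SciPy):

```python
import numpy as np
from scipy.linalg import expm

A, t = np.array([[6.0, -3.0], [2.0, 1.0]]), 0.4
# Liouville formula with Phi(0) = I: det(e^{At}) = e^{tr(A) t}
print(np.isclose(np.linalg.det(expm(A * t)), np.exp(np.trace(A) * t)))  # True
```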

3. Systems with Distinct Eigenvalues

Theorem 6.5: Fundamental Matrix for Distinct Eigenvalues

If A has n distinct eigenvalues \lambda_1, \ldots, \lambda_n with corresponding eigenvectors r_1, \ldots, r_n, then:

\Phi(t) = \begin{pmatrix} e^{\lambda_1 t} r_1 & \cdots & e^{\lambda_n t} r_n \end{pmatrix}

is a fundamental matrix for \frac{dx}{dt} = Ax.

Proof of Theorem 6.5:

Each column e^{\lambda_k t} r_k satisfies:

\frac{d}{dt}(e^{\lambda_k t} r_k) = \lambda_k e^{\lambda_k t} r_k = A(e^{\lambda_k t} r_k)

since Ar_k = \lambda_k r_k. The columns are linearly independent because eigenvectors corresponding to distinct eigenvalues are linearly independent. ∎

Algorithm 6.1: Solving Systems with Distinct Eigenvalues

Input: System \frac{dx}{dt} = Ax where A has distinct eigenvalues

Output: General solution

1. Find eigenvalues \lambda_1, \ldots, \lambda_n from \det(A - \lambda I) = 0

2. For each \lambda_k, solve (A - \lambda_k I)r = 0 for an eigenvector r_k

3. General solution: x(t) = C_1 e^{\lambda_1 t} r_1 + \cdots + C_n e^{\lambda_n t} r_n (implemented in the sketch below)
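A minimal implementation of this algorithm (a sketch assuming NumPy and distinct eigenvalues; solve_distinct is my own name). Given an initial condition x(0) = x_0, the constants C solve RC = x_0, where R has the eigenvectors as columns:

```python
import numpy as np

def solve_distinct(A, x0, t):
    """Evaluate x(t) for x' = Ax, x(0) = x0, assuming distinct eigenvalues."""
    lam, R = np.linalg.eig(A)   # steps 1-2: eigenvalues, eigenvector columns
    C = np.linalg.solve(R, x0)  # match the initial condition: R C = x0
    # step 3: sum over k of C_k e^{lam_k t} r_k
    return (R * (C * np.exp(lam * t))).sum(axis=1)

A = np.array([[6.0, -3.0], [2.0, 1.0]])  # matrix of Example 6.3 below
x0 = np.array([4.0, 3.0])                # equals r_1 + r_2 in that example's basis
print(solve_distinct(A, x0, 0.5))        # ~ e^{1.5}(1,1) + e^{2}(3,2)
```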

Example 6.3: System with Distinct Real Eigenvalues

Problem: Solve \frac{dx}{dt} = \begin{pmatrix} 6 & -3 \\ 2 & 1 \end{pmatrix}x.

Solution:

Characteristic equation: \det(A - \lambda I) = (6-\lambda)(1-\lambda) + 6 = \lambda^2 - 7\lambda + 12 = (\lambda - 3)(\lambda - 4) = 0

Eigenvalues: \lambda_1 = 3, \lambda_2 = 4

For \lambda_1 = 3: (A - 3I)r = 0 gives r_1 = (1, 1)^T

For \lambda_2 = 4: (A - 4I)r = 0 gives r_2 = (3, 2)^T

General solution:

x(t) = C_1 e^{3t} \begin{pmatrix} 1 \\ 1 \end{pmatrix} + C_2 e^{4t} \begin{pmatrix} 3 \\ 2 \end{pmatrix}

4. Complex Eigenvalues

Theorem 6.6: Real Solutions from Complex Eigenvalues

If \lambda = \alpha + i\beta (\beta \neq 0) is a complex eigenvalue with eigenvector v = u + iw, then two real linearly independent solutions are:

x_1(t) = e^{\alpha t}(u\cos\beta t - w\sin\beta t)
x_2(t) = e^{\alpha t}(u\sin\beta t + w\cos\beta t)
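As a numerical sanity check of the theorem (a minimal sketch assuming NumPy; x1 and the finite-difference test are my own illustration, using the matrix of Example 6.4 below):

```python
import numpy as np

A = np.array([[3.0, 5.0], [-5.0, 3.0]])
lam, V = np.linalg.eig(A)
k = np.argmax(lam.imag)                   # pick the eigenvalue with beta > 0
alpha, beta = lam[k].real, lam[k].imag
u, w = V[:, k].real, V[:, k].imag         # v = u + i w

def x1(t):
    """First real solution from Theorem 6.6."""
    return np.exp(alpha * t) * (u * np.cos(beta * t) - w * np.sin(beta * t))

t, h = 0.3, 1e-6
deriv = (x1(t + h) - x1(t - h)) / (2 * h)  # numerical derivative of x1
print(np.allclose(deriv, A @ x1(t)))       # True: x1 solves x' = Ax
```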

Example 6.4: System with Complex Eigenvalues

Problem: Solve \frac{dx}{dt} = \begin{pmatrix} 3 & 5 \\ -5 & 3 \end{pmatrix}x.

Solution:

Characteristic equation: (3-\lambda)^2 + 25 = 0, giving \lambda = 3 \pm 5i

For \lambda_1 = 3 + 5i: eigenvector v = (1, i)^T = (1, 0)^T + i(0, 1)^T

So u = (1, 0)^T and w = (0, 1)^T

Real solutions:

x_1(t) = e^{3t}\begin{pmatrix} \cos 5t \\ -\sin 5t \end{pmatrix}, \quad x_2(t) = e^{3t}\begin{pmatrix} \sin 5t \\ \cos 5t \end{pmatrix}

General solution:

x(t) = e^{3t}\begin{pmatrix} C_1\cos 5t + C_2\sin 5t \\ -C_1\sin 5t + C_2\cos 5t \end{pmatrix}

5. Jordan Form and Repeated Eigenvalues

Definition 6.3: Jordan Block

A Jordan block J_k(\lambda) of size k for eigenvalue \lambda is:

J_k(\lambda) = \begin{pmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{pmatrix}_{k \times k} = \lambda I_k + N_k

where N_k is nilpotent with N_k^k = 0.

Theorem 6.7: Exponential of Jordan Block

For a Jordan block J_k(\lambda) = \lambda I + N:

e^{J_k(\lambda)t} = e^{\lambda t} \begin{pmatrix} 1 & t & \frac{t^2}{2!} & \cdots & \frac{t^{k-1}}{(k-1)!} \\ & 1 & t & \cdots & \frac{t^{k-2}}{(k-2)!} \\ & & \ddots & \ddots & \vdots \\ & & & 1 & t \\ & & & & 1 \end{pmatrix}
Remark:

When A has repeated eigenvalues but is not diagonalizable, we need generalized eigenvectors. If A = PJP^{-1} is the Jordan decomposition, then e^{At} = Pe^{Jt}P^{-1}.
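Because N^k = 0, the series for e^{Nt} is finite, which is what Theorem 6.7 exploits. A sketch (assuming NumPy/SciPy; expm_jordan is my own name) that builds e^{J_k(\lambda)t} from the finite series and compares it with scipy.linalg.expm:

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

def expm_jordan(lam, size, t):
    """e^{J t} for the Jordan block J = lam*I + N, via the finite series."""
    N = np.eye(size, k=1)  # nilpotent superdiagonal part, N^size = 0
    return np.exp(lam * t) * sum(
        np.linalg.matrix_power(N * t, j) / factorial(j) for j in range(size))

lam, size, t = 2.0, 4, 0.8
J = lam * np.eye(size) + np.eye(size, k=1)
print(np.allclose(expm(J * t), expm_jordan(lam, size, t)))  # True
```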

6. Nonhomogeneous Systems

Theorem 6.8: Variation of Parameters for Systems

The general solution of \frac{dx}{dt} = Ax + f(t) is:

x(t) = e^{At}C + \int_{t_0}^{t} e^{A(t-s)} f(s)\,ds

where CC is an arbitrary constant vector.

Proof of Theorem 6.8:

The homogeneous solution is x_h = e^{At}C. For a particular solution, set x_p = e^{At}C^*(t). Substituting into the system, the Ax_p terms cancel, leaving:

e^{At}\frac{dC^*}{dt} = f(t) \implies \frac{dC^*}{dt} = e^{-At}f(t)

Integrating: C^*(t) = \int_{t_0}^t e^{-As}f(s)\,ds. Thus x_p = e^{At}\int_{t_0}^t e^{-As}f(s)\,ds = \int_{t_0}^t e^{A(t-s)}f(s)\,ds. ∎
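A numerical sketch of this formula (assuming SciPy; particular is a hypothetical helper name, and the system is the one from Example 6.5 below): the integral is evaluated with Gauss-Legendre quadrature and cross-checked against a direct ODE solve.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[1.0, 0.0], [0.0, 2.0]])
f = lambda s: np.array([np.exp(s), 1.0])     # forcing term

def particular(t, nodes=40):
    """Approximate x_p(t) = integral of e^{A(t-s)} f(s) ds over [0, t]."""
    s, w = np.polynomial.legendre.leggauss(nodes)
    s = 0.5 * t * (s + 1)                    # map nodes from [-1, 1] to [0, t]
    w = 0.5 * t * w
    return sum(wi * expm(A * (t - si)) @ f(si) for si, wi in zip(s, w))

t = 1.0
sol = solve_ivp(lambda s, x: A @ x + f(s), (0, t), [0.0, 0.0],
                rtol=1e-10, atol=1e-12)      # x(0) = 0 gives exactly x_p
print(particular(t), sol.y[:, -1])           # both approximate x_p(1)
```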

Example 6.5: Nonhomogeneous System

Problem: Solve \frac{dx}{dt} = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}x + \begin{pmatrix} e^t \\ 1 \end{pmatrix}.

Solution:

For this diagonal system: e^{At} = \text{diag}(e^t, e^{2t})

Particular solution:

x_p = \int_0^t \begin{pmatrix} e^{t-s} & 0 \\ 0 & e^{2(t-s)} \end{pmatrix} \begin{pmatrix} e^s \\ 1 \end{pmatrix} ds = \int_0^t \begin{pmatrix} e^t \\ e^{2(t-s)} \end{pmatrix} ds
= \begin{pmatrix} te^t \\ \frac{1}{2}(e^{2t} - 1) \end{pmatrix}

General solution:

x(t) = \begin{pmatrix} C_1 e^t + te^t \\ C_2 e^{2t} + \frac{1}{2}(e^{2t} - 1) \end{pmatrix}

Practice Quiz

Linear Systems Quiz
1. (Easy) The matrix exponential e^A is defined as:
2. (Easy) If A and B commute (AB = BA), then:
3. (Easy) The fundamental matrix \Phi(t) of \frac{dx}{dt} = Ax satisfies:
4. (Medium) If A = \text{diag}(a_1, \ldots, a_n), then e^{At} =
5. (Medium) The Liouville formula states that for \Phi'(t) = A(t)\Phi(t):
6. (Medium) For a Jordan block J = \lambda I + N where N is nilpotent, e^{Jt} =
7. (Medium) If A has distinct eigenvalues \lambda_1, \ldots, \lambda_n with eigenvectors r_1, \ldots, r_n, then the fundamental matrix is:
8. (Medium) For the nonhomogeneous system \frac{dx}{dt} = Ax + f(t), the particular solution using variation of parameters is:
9. (Hard) For complex eigenvalues \alpha \pm i\beta with eigenvector v = u + iw, the real fundamental solutions are:
10. (Hard) If P^{-1}AP = J (Jordan form), then e^{At} =

Frequently Asked Questions

How do I compute the matrix exponential?

The method depends on the matrix structure: (1) For diagonal matrices, exponentiate each diagonal entry. (2) For diagonalizable matrices, use e^{At} = Pe^{Dt}P^{-1}. (3) For Jordan form, use e^{Jt} block by block. (4) For 2×2 matrices, explicit formulas exist based on eigenvalue type.
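A sketch of approach (2), assuming NumPy/SciPy:

```python
import numpy as np
from scipy.linalg import expm

A, t = np.array([[6.0, -3.0], [2.0, 1.0]]), 0.5
lam, P = np.linalg.eig(A)                    # A = P D P^{-1}
via_diag = P @ np.diag(np.exp(lam * t)) @ np.linalg.inv(P)
print(np.allclose(expm(A * t), via_diag))    # True
```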

What if eigenvalues are repeated but the matrix is diagonalizable?

If A has repeated eigenvalues but is still diagonalizable (has a full set of eigenvectors), treat it like the distinct eigenvalue case. The key is whether there are n linearly independent eigenvectors, not whether eigenvalues are distinct.

How do I convert complex solutions to real solutions?

For complex eigenvalue \alpha + i\beta with eigenvector u + iw, the complex solution e^{(\alpha+i\beta)t}(u+iw) gives real solutions by taking real and imaginary parts: e^{\alpha t}(u\cos\beta t - w\sin\beta t) and e^{\alpha t}(u\sin\beta t + w\cos\beta t).

What is the geometric meaning of the Liouville formula?

The Liouville formula describes how volumes evolve under the flow of a linear system. If \text{tr}(A) < 0, volumes contract; if \text{tr}(A) > 0, they expand. This is related to dissipation in physical systems.

When do I need Jordan form vs. just eigenvalues?

You need Jordan form when the matrix is not diagonalizable—i.e., when there are fewer independent eigenvectors than the algebraic multiplicity of an eigenvalue. In practice, this happens with repeated eigenvalues and defective matrices.

How does this connect to the scalar case?

A single n-th order ODE is equivalent to an n-dimensional first-order system. The characteristic equation of the system's matrix equals the characteristic equation of the scalar ODE. Eigenvalues correspond to characteristic roots.
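A small illustration of this correspondence (a sketch assuming NumPy): the companion matrix of y''' - 6y'' + 11y' - 6y = 0 has eigenvalues equal to the characteristic roots 1, 2, 3.

```python
import numpy as np

# State (y, y', y''): the ODE coefficients fill the last row of the companion matrix
A = np.array([[0.0,   1.0, 0.0],
              [0.0,   0.0, 1.0],
              [6.0, -11.0, 6.0]])   # y''' = 6y - 11y' + 6y''
print(np.sort(np.linalg.eig(A)[0].real))  # [1. 2. 3.]
```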