Error Analysis and Numerical Stability
Understanding and controlling errors is fundamental to scientific computing. Learn how errors arise and propagate, and how to design numerically stable algorithms.
- Identify sources of numerical errors
- Calculate absolute and relative errors
- Understand significant digits and their meaning
- Analyze error propagation in arithmetic operations
- Recognize and avoid catastrophic cancellation
- Design numerically stable algorithms
1. Sources of Error
In scientific computing, errors arise from multiple sources. Understanding these sources is the first step toward controlling them.
Definition 1.1: Types of Errors
- Model Error: Simplifications made when creating a mathematical model of a physical problem.
- Measurement Error: Imprecision in measuring physical quantities or input data.
- Truncation Error: Error from approximating an infinite process with a finite one (e.g., truncating a Taylor series).
- Rounding Error: Error from representing real numbers with finite precision (floating-point representation).
Example: Truncation vs Rounding Error
Consider computing $e^x$ using the Taylor series $e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!}$:
- Truncation error: Stopping after n terms introduces error from the omitted terms.
- Rounding error: Each arithmetic operation introduces small errors due to finite precision.
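The two error types can be separated numerically. A minimal sketch, assuming the series in question is the Taylor expansion of e^x (the helper `exp_taylor` is illustrative):

```python
import math

def exp_taylor(x, n_terms):
    """Approximate e^x by summing the first n_terms of its Taylor series."""
    total = 0.0
    term = 1.0  # current term x^k / k!, starting at k = 0
    for k in range(n_terms):
        total += term
        term *= x / (k + 1)  # next term: x^(k+1) / (k+1)!
    return total

# Truncation error dominates when few terms are kept and shrinks as terms
# are added; what remains for many terms is rounding error alone.
err_5 = abs(exp_taylor(1.0, 5) - math.exp(1.0))
err_15 = abs(exp_taylor(1.0, 15) - math.exp(1.0))
print(err_5, err_15)  # the 15-term truncation error is far smaller
```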
2. Absolute and Relative Error
Definition 1.2: Absolute Error
Let $x$ be the exact value and $x^*$ be an approximation. The absolute error is:
$e^* = x^* - x$
The absolute error bound (or error limit) $\varepsilon^*$ satisfies $|x^* - x| \le \varepsilon^*$.
Definition 1.3: Relative Error
The relative error is the ratio of absolute error to the exact value:
$e_r^* = \dfrac{x^* - x}{x}$
In practice, since $x$ is unknown, we often use $e_r^* \approx \dfrac{x^* - x}{x^*}$ when the error is small.
Example: Computing Errors
Problem: If $x = \pi = 3.14159265\ldots$ and $x^* = 3.14$, find the errors.
Solution:
Absolute error: $|e^*| = |3.14 - \pi| \approx 1.59 \times 10^{-3}$
Relative error: $|e_r^*| = \dfrac{1.59 \times 10^{-3}}{\pi} \approx 5.07 \times 10^{-4}$
Remark:
Relative error is often more meaningful than absolute error. An absolute error of 1 meter is negligible when measuring the distance to the Moon, but catastrophic when measuring the width of a human hair.
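Both definitions are one-liners in code; a small sketch using π and its common truncation 3.14 as the example pair:

```python
import math

def abs_error(exact, approx):
    """Absolute error |x* - x|."""
    return abs(approx - exact)

def rel_error(exact, approx):
    """Relative error |x* - x| / |x|."""
    return abs(approx - exact) / abs(exact)

x, x_star = math.pi, 3.14
print(abs_error(x, x_star))  # ~1.59e-3
print(rel_error(x, x_star))  # ~5.07e-4
```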
3. Significant Digits
Definition 1.4: Significant Digits
A digit in a number is significant if its error bound is at most half a unit in that digit position. The significant digits are counted from the first non-zero digit to the last reliable digit.
Equivalently, if $x^*$ is written in scientific notation as $x^* = \pm 0.d_1 d_2 \ldots d_n \ldots \times 10^m$ where $d_1 \neq 0$, then $x^*$ has $n$ significant digits if:
$|x - x^*| \le \frac{1}{2} \times 10^{m-n}$
Example: Counting Significant Digits
3.1416 has 5 significant digits
0.00123 has 3 significant digits (leading zeros don't count)
1.230 has 4 significant digits (trailing zeros after the decimal count)
1200 has 2, 3, or 4 significant digits (ambiguous without context)
Theorem 1.1: Significant Digits and Relative Error
If $x^* = \pm 0.d_1 d_2 \ldots \times 10^m$ approximates $x$ with $n$ significant digits, then the relative error satisfies:
$|e_r^*| \le \dfrac{1}{2 d_1} \times 10^{-(n-1)}$
4. Error Propagation
When we perform arithmetic operations on approximate numbers, errors propagate. Understanding this propagation is essential for error analysis.
Theorem 1.2: Error Propagation in Arithmetic
Let $x_1^*$ and $x_2^*$ be approximations with absolute error bounds $\varepsilon(x_1^*)$ and $\varepsilon(x_2^*)$. Using differential analysis:
Addition/Subtraction: $\varepsilon(x_1^* \pm x_2^*) \approx \varepsilon(x_1^*) + \varepsilon(x_2^*)$
Multiplication: $\varepsilon(x_1^* x_2^*) \approx |x_2^*|\,\varepsilon(x_1^*) + |x_1^*|\,\varepsilon(x_2^*)$
Division: $\varepsilon\!\left(x_1^*/x_2^*\right) \approx \dfrac{|x_2^*|\,\varepsilon(x_1^*) + |x_1^*|\,\varepsilon(x_2^*)}{|x_2^*|^2}$
Theorem 1.3: Relative Error Propagation
For relative errors $\varepsilon_r(x_1^*)$ and $\varepsilon_r(x_2^*)$:
Addition (same signs): $\varepsilon_r(x_1^* + x_2^*) \le \max\{\varepsilon_r(x_1^*), \varepsilon_r(x_2^*)\}$
Subtraction: $\varepsilon_r(x_1^* - x_2^*) \approx \dfrac{|x_1^*|\,\varepsilon_r(x_1^*) + |x_2^*|\,\varepsilon_r(x_2^*)}{|x_1^* - x_2^*|}$ (can be large!)
Multiplication/Division: $\varepsilon_r(x_1^* x_2^*) \approx \varepsilon_r(x_1^*/x_2^*) \approx \varepsilon_r(x_1^*) + \varepsilon_r(x_2^*)$
Note:
For subtraction of nearly equal numbers, $|x_1^* - x_2^*|$ is small, making the relative error potentially very large. This is called catastrophic cancellation.
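The effect is easy to reproduce. A short sketch (the expression sqrt(x² + 1) − x is an illustrative choice, not one from the text):

```python
import math

x = 1e8
# sqrt(x^2 + 1) and x agree in all ~16 stored digits, so subtracting them
# cancels every significant digit: x*x + 1 rounds to x*x, and the result is 0.
naive = math.sqrt(x * x + 1.0) - x
# Algebraically identical form with no subtraction of close numbers:
stable = 1.0 / (math.sqrt(x * x + 1.0) + x)
print(naive)   # 0.0 -- complete loss of information
print(stable)  # 5e-09 -- correct to full precision
```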
Theorem 1.4: Function Evaluation Error
For a function $f$ with input error $\varepsilon(x^*)$:
$\varepsilon(f(x^*)) \approx |f'(x^*)|\,\varepsilon(x^*)$
The relative error is:
$\varepsilon_r(f(x^*)) \approx \left|\dfrac{x f'(x)}{f(x)}\right| \varepsilon_r(x^*)$
The factor $\left|\dfrac{x f'(x)}{f(x)}\right|$ is called the condition number of the function.
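The condition number can be estimated numerically with a finite difference. A rough sketch (the helper `condition_number` and its step size are illustrative assumptions):

```python
import math

def condition_number(f, x, h=1e-6):
    """Estimate |x * f'(x) / f(x)| using a central difference for f'."""
    fprime = (f(x + h) - f(x - h)) / (2.0 * h)
    return abs(x * fprime / f(x))

# sqrt is well-conditioned: |x * (1/(2 sqrt(x))) / sqrt(x)| = 1/2 for all x > 0.
print(condition_number(math.sqrt, 4.0))   # ~0.5
# log is ill-conditioned near x = 1, where f(x) -> 0: cond = 1/|ln x|.
print(condition_number(math.log, 1.001))  # ~1000
```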
5. Machine Precision and Floating-Point
Definition 1.5: Machine Epsilon
Machine epsilon ($\varepsilon_{\text{mach}}$) is the smallest positive number such that $1 + \varepsilon_{\text{mach}} > 1$ in floating-point arithmetic.
For IEEE 754 double precision: $\varepsilon_{\text{mach}} = 2^{-52} \approx 2.22 \times 10^{-16}$.
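Machine epsilon can be located by repeated halving; a short sketch:

```python
import sys

# Halve eps until adding half of it to 1.0 no longer changes the result;
# the last surviving eps is machine epsilon, 2^-52 in double precision.
eps = 1.0
while 1.0 + eps / 2.0 != 1.0:
    eps /= 2.0
print(eps)                            # 2.220446049250313e-16
print(eps == sys.float_info.epsilon)  # True
```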
Example: Floating-Point Representation
In IEEE 754 double precision (64 bits):
- 1 bit for sign
- 11 bits for exponent (range roughly $10^{-308}$ to $10^{308}$)
- 52 bits for mantissa (about 15-16 decimal digits of precision)
Remark:
The relative error in representing any real number $x$ as a floating-point number $\mathrm{fl}(x)$ satisfies $\dfrac{|\mathrm{fl}(x) - x|}{|x|} \le \dfrac{1}{2}\varepsilon_{\text{mach}}$.
6. Techniques to Reduce Errors
1. Avoid Subtracting Nearly Equal Numbers
Reformulate expressions to avoid catastrophic cancellation.
Example: Instead of $1 - \cos x$ for small $x$, use the equivalent $2\sin^2(x/2)$.
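Sketched in code, comparing the naive form 1 − cos x against the equivalent 2 sin²(x/2):

```python
import math

def one_minus_cos_naive(x):
    return 1.0 - math.cos(x)             # subtracts two nearly equal numbers

def one_minus_cos_stable(x):
    return 2.0 * math.sin(x / 2.0) ** 2  # no subtraction, no cancellation

x = 1e-8
# True value is about x^2/2 = 5e-17.  The naive form loses every digit
# (cos(1e-8) rounds to 1.0 on typical platforms, giving 0.0); the stable
# form keeps full precision.
print(one_minus_cos_naive(x))
print(one_minus_cos_stable(x))  # ~5e-17
```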
2. Prevent Large Numbers from Swamping Small Numbers
When summing numbers of vastly different magnitudes, sum from smallest to largest.
Example: Computing $S = \sum_{n=1}^{N} \dfrac{1}{n^2}$ for large $N$: add the terms from $n = N$ down to $n = 1$.
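A sketch of the effect, using the series Σ 1/n² (the specific series and the value of N are illustrative):

```python
import math

N = 10**6
terms = [1.0 / (n * n) for n in range(1, N + 1)]

fwd = 0.0
for t in terms:            # largest terms first: late tiny terms are swamped
    fwd += t
bwd = 0.0
for t in reversed(terms):  # smallest terms first: they accumulate before
    bwd += t               # meeting the large leading terms

ref = math.fsum(terms)     # exactly rounded sum, used as the reference
print(abs(fwd - ref), abs(bwd - ref))  # backward order is typically closer
```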
3. Reduce the Number of Operations
Each operation introduces rounding error. Use efficient algorithms.
Example: Horner's method for polynomials:
$p(x) = a_n x^n + \cdots + a_1 x + a_0 = (\cdots(a_n x + a_{n-1})x + \cdots + a_1)x + a_0$
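A sketch of Horner evaluation (the highest-degree-first coefficient order is a convention of this helper):

```python
def horner(coeffs, x):
    """Evaluate a polynomial by Horner's method.

    coeffs = [a_n, ..., a_1, a_0]; uses n multiplications and n additions
    for a degree-n polynomial, instead of recomputing powers of x.
    """
    result = 0.0
    for a in coeffs:
        result = result * x + a
    return result

# p(x) = 2x^3 - 6x^2 + 2x - 1 evaluated at x = 3:
print(horner([2.0, -6.0, 2.0, -1.0], 3.0))  # 5.0
```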
4. Avoid Division by Small Numbers
Division by a small number amplifies errors. Rearrange formulas when possible.
5. Use Stable Formulas
Choose mathematically equivalent formulas that are numerically more stable.
Example: For the quadratic formula with $b > 0$, use $x_1 = \dfrac{-b - \sqrt{b^2 - 4ac}}{2a}$, then $x_2 = \dfrac{c}{a x_1}$.
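A sketch of the stable evaluation (choosing the sign that adds magnitudes, then recovering the second root from the Vieta relation x₁x₂ = c/a):

```python
import math

def solve_quadratic(a, b, c):
    """Real roots of ax^2 + bx + c = 0 (real roots assumed), avoiding
    cancellation between -b and the discriminant's square root."""
    d = math.sqrt(b * b - 4.0 * a * c)
    if b >= 0:
        x1 = (-b - d) / (2.0 * a)  # |-b| and d add, no cancellation
    else:
        x1 = (-b + d) / (2.0 * a)
    x2 = c / (a * x1)              # Vieta: x1 * x2 = c / a
    return x1, x2

# x^2 - 1e8 x + 1 = 0 has roots ~1e8 and ~1e-8; the naive formula would
# compute the small root as (1e8 - sqrt(1e16 - 4)) / 2 and lose it entirely.
x1, x2 = solve_quadratic(1.0, -1e8, 1.0)
print(x1, x2)
```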
7. Numerical Stability
Definition 1.6: Numerical Stability
A numerical algorithm is stable if small perturbations in the input (or rounding errors during computation) produce only small changes in the output.
An algorithm is unstable if small errors can grow exponentially during computation.
Example: Stable vs Unstable Recurrence
Consider computing the integrals $I_n = \int_0^1 x^n e^{x-1}\,dx$ using the recurrence:
$I_n = 1 - n I_{n-1}, \qquad I_0 = 1 - e^{-1}$
Forward recurrence (starting from $I_0$): $I_n = 1 - n I_{n-1}$.
Errors grow because we multiply the error by n at each step. Unstable!
Backward recurrence (starting from a rough estimate at large $n$, e.g. $I_N \approx 0$): $I_{n-1} = \dfrac{1 - I_n}{n}$.
Errors shrink because we divide the error by n at each step. Stable!
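A sketch of both directions (the integrals I_n = ∫₀¹ xⁿ e^(x−1) dx with I_n = 1 − n·I_{n−1} are one classic instance of such a recurrence):

```python
import math

# Forward: start from I_0 = 1 - 1/e and apply I_n = 1 - n * I_(n-1).
# The initial rounding error is multiplied by n at every step, so by
# n = 20 it has grown by a factor of about 20! ~ 2.4e18.
I = 1.0 - 1.0 / math.e
for n in range(1, 21):
    I = 1.0 - n * I
forward_I20 = I  # garbage, far from the true value ~0.0455

# Backward: start from a crude guess at a larger index (I_30 ~ 0) and
# apply I_(n-1) = (1 - I_n) / n; the guess's error shrinks by a factor
# of n at every step, leaving I_20 accurate to near machine precision.
I = 0.0
for n in range(30, 20, -1):
    I = (1.0 - I) / n
backward_I20 = I
print(forward_I20, backward_I20)  # backward_I20 ~ 0.0455
```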
Remark:
A good numerical algorithm should be both efficient and stable. Sometimes there is a trade-off between the two, and stability should usually take priority.
Frequently Asked Questions
What's the difference between truncation error and rounding error?
Truncation error comes from approximating a mathematical process (like using a finite number of terms in a series). It exists even with infinite precision.
Rounding error comes from representing numbers with finite precision. It's inherent to computer arithmetic.
How do I know if my algorithm is numerically stable?
Test your algorithm with:
- Slightly perturbed inputs - do outputs change only slightly?
- Compare results at different precisions (float vs double)
- Check if the backward error is small (is the computed solution exact for a nearby problem?)
Why is subtraction more dangerous than addition?
When subtracting nearly equal numbers, the leading significant digits cancel, leaving only the less significant (more error-prone) digits. This "catastrophic cancellation" can cause huge relative errors even when absolute errors are small.
What is the condition number of a problem?
The condition number measures how sensitive a problem's output is to small changes in the input. A problem with large condition number is ill-conditioned.
For a function $f(x)$, the condition number is approximately $\left|\dfrac{x f'(x)}{f(x)}\right|$.
How many significant digits should I report?
Report only the digits you're confident about. If your computation has a relative error of about $10^{-6}$, you should report about 6 significant digits. Reporting more implies false precision.