Point Estimation Formula Reference

Comprehensive mathematical formulas for point estimation theory, evaluation criteria, construction methods, and efficiency bounds


Fundamental Concepts

Core definitions and evaluation criteria for point estimators

Point Estimation Definitions
Fundamentals

Key Formulas:

estimator

θ̂ = θ̂(X₁, X₂, ..., Xₙ)

estimate

θ̂(x₁, x₂, ..., xₙ) (numerical value)

bias

Bias_θ[θ̂] = E_θ[θ̂] - θ

MSE

MSE_θ[θ̂] = E_θ[(θ̂ - θ)²] = Var_θ[θ̂] + Bias²_θ[θ̂]

Applications:

  • Parameter estimation
  • Statistical inference
  • Decision theory

Parameters

θ ∈ Θ (parameter space)
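
These quantities are easy to check numerically. Below is a minimal Monte Carlo sketch (sample size, replication count, and the Normal model are illustrative choices) verifying the MSE decomposition for the plug-in variance estimator Sₙ² = (1/n)∑(Xᵢ − X̄)², whose bias is −σ²/n:

```python
# Monte Carlo check of MSE = Var + Bias² for the plug-in variance
# estimator; all parameter values below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, reps, sigma2 = 20, 100_000, 4.0

samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
s2_n = samples.var(axis=1)            # ddof=0: divides by n (plug-in estimator)

bias = s2_n.mean() - sigma2           # Bias_θ[θ̂] = E_θ[θ̂] − θ
var = s2_n.var()                      # Var_θ[θ̂]
mse = np.mean((s2_n - sigma2) ** 2)   # MSE_θ[θ̂]

print(f"bias ≈ {bias:.4f}  (theory −σ²/n = {-sigma2 / n:.4f})")
print(f"MSE ≈ {mse:.4f}  vs  Var + Bias² ≈ {var + bias**2:.4f}")
```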

Unbiasedness Criteria
Evaluation

Key Formulas:

unbiased

E_θ[θ̂] = θ for all θ ∈ Θ

asymptotic unbiasedness

lim_{n→∞} E_θ[θ̂ₙ] = θ

bias decomposition

E_θ[θ̂] = θ + Bias_θ[θ̂]

bias correction

θ̂* = θ̂ - Bias_θ[θ̂]

Applications:

  • Estimator comparison
  • Bias correction
  • Asymptotic analysis

Parameters

All θ in parameter space Θ
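
A concrete instance of bias correction: the plug-in variance estimator has known bias −σ²/n, so rescaling by n/(n − 1) (Bessel's correction) makes it exactly unbiased. A minimal sketch, with illustrative parameters:

```python
# Bias correction in action: the /n variance estimator is biased,
# and multiplying by n/(n−1) removes the bias exactly.
import numpy as np

rng = np.random.default_rng(1)
n, reps, sigma2 = 10, 200_000, 1.0

x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
biased = x.var(axis=1, ddof=0)        # E = ((n−1)/n)·σ²
corrected = biased * n / (n - 1)      # unbiased S²

print(f"E[biased]    ≈ {biased.mean():.4f}  (theory {(n - 1) / n * sigma2:.4f})")
print(f"E[corrected] ≈ {corrected.mean():.4f}  (theory {sigma2:.4f})")
```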

Consistency Properties
Asymptotic

Key Formulas:

weak consistency

θ̂ₙ →^P θ (convergence in probability)

strong consistency

P(lim_{n→∞} θ̂ₙ = θ) = 1

MSE consistency

lim_{n→∞} MSE_θ[θ̂ₙ] = 0

probability statement

lim_{n→∞} P_θ(|θ̂ₙ - θ| ≥ ε) = 0

Applications:

  • Large sample theory
  • Convergence analysis
  • Estimator validation

Parameters

For all θ ∈ Θ, ε > 0
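
The defining probability statement can be watched converge directly. A sketch for the sample mean of a Poisson(λ) model (λ, ε, and the n grid are illustrative choices):

```python
# Weak consistency: P(|θ̂ₙ − θ| ≥ ε) shrinks toward 0 as n grows.
import numpy as np

rng = np.random.default_rng(2)
lam, eps, reps = 3.0, 0.1, 10_000

for n in (10, 100, 1000):
    means = rng.poisson(lam, size=(reps, n)).mean(axis=1)
    p = np.mean(np.abs(means - lam) >= eps)
    print(f"n = {n:>4}: P(|X̄ − λ| ≥ {eps}) ≈ {p:.3f}")
```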

Estimation Methods

Mathematical formulations for major parameter estimation approaches

Method of Moments
Construction

Core Formulas:

population moment

μₖ = E[X^k] (k-th population moment)

sample moment

aₙ,ₖ = (1/n)∑ᵢ₌₁ⁿ Xᵢᵏ (k-th sample moment)

central moment

νₖ = E[(X - μ)ᵏ] (k-th central moment)

sample central

mₙ,ₖ = (1/n)∑ᵢ₌₁ⁿ (Xᵢ - X̄)ᵏ

Applications:

  • Simple parameter estimation
  • Initial value estimation
  • Distribution fitting
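
The method itself equates the leading sample moments to their population counterparts and solves for θ. A sketch for a Gamma(shape k, scale θ) model, where E[X] = kθ and Var[X] = kθ² give k̂ = X̄²/m₂ and θ̂ = m₂/X̄ (the true parameter values are illustrative):

```python
# Method of moments for Gamma(k, θ): match X̄ and m₂ to kθ and kθ².
import numpy as np

rng = np.random.default_rng(3)
k_true, theta_true = 2.5, 1.5
x = rng.gamma(k_true, theta_true, size=5_000)

xbar = x.mean()            # first sample moment a_{n,1}
m2 = x.var()               # second sample central moment m_{n,2}

k_hat = xbar**2 / m2       # solve kθ = X̄, kθ² = m₂
theta_hat = m2 / xbar

print(f"k̂ ≈ {k_hat:.3f} (true {k_true}),  θ̂ ≈ {theta_hat:.3f} (true {theta_true})")
```
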
Maximum Likelihood Estimation
Optimization

Core Formulas:

likelihood

L(θ; x₁, ..., xₙ) = ∏ᵢ₌₁ⁿ p(xᵢ; θ)

log likelihood

ℓ(θ; x₁, ..., xₙ) = ∑ᵢ₌₁ⁿ log p(xᵢ; θ)

likelihood equation

∂ℓ/∂θⱼ = 0, j = 1, ..., k

MLE definition

θ̂ = arg max_θ L(θ; x₁, ..., xₙ)

Applications:

  • Parametric estimation
  • Model selection
  • Asymptotic efficiency
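
A numerical sketch of the definition, maximizing the Exponential(λ) log-likelihood ℓ(λ) = n log λ − λ∑xᵢ and comparing against the closed form λ̂ = 1/X̄ (scipy is an assumed dependency; the data and search bounds are illustrative):

```python
# MLE by direct optimization of the negative log-likelihood.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
x = rng.exponential(scale=1 / 2.0, size=1_000)   # true rate λ = 2

def neg_loglik(lam):
    # ℓ(λ) = n log λ − λ Σxᵢ for density p(x; λ) = λ e^{−λx}
    return -(x.size * np.log(lam) - lam * x.sum())

res = minimize_scalar(neg_loglik, bounds=(1e-6, 50.0), method="bounded")
print(f"numerical λ̂ ≈ {res.x:.4f},  closed form 1/X̄ = {1 / x.mean():.4f}")
```
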
Least Squares Estimation
Regression

Core Formulas:

model

Yᵢ = μᵢ(θ) + εᵢ, i = 1, ..., n

objective

Q(θ) = ∑ᵢ₌₁ⁿ (Yᵢ - μᵢ(θ))²

LSE definition

θ̂ = arg min_θ Q(θ)

normal equations

∂Q/∂θⱼ = -2∑ᵢ₌₁ⁿ (Yᵢ - μᵢ(θ))∂μᵢ/∂θⱼ = 0

Applications:

  • Linear regression
  • Curve fitting
  • Prediction models
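
When μᵢ(θ) is linear in θ, say μᵢ(θ) = θ₀ + θ₁tᵢ, the normal equations reduce to XᵀXθ = XᵀY and can be solved directly. A sketch with illustrative design points and noise level:

```python
# Least squares for a linear mean function via the normal equations.
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0.0, 1.0, 50)
y = 1.0 + 3.0 * t + rng.normal(0.0, 0.2, size=t.size)   # Yᵢ = μᵢ(θ) + εᵢ

X = np.column_stack([np.ones_like(t), t])               # design matrix
theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)       # minimizes Q(θ)

print(f"θ̂ ≈ {np.round(theta_hat, 3)}  (true [1.0, 3.0])")
```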

Fisher Information & Efficiency

Information theory and lower bounds for estimator variance

Fisher Information Matrix
Information Theory

Mathematical Forms:

scalar form

I(θ) = E_θ[(∂log p(X;θ)/∂θ)²]

alternative form

I(θ) = -E_θ[∂²log p(X;θ)/∂θ²]

sample information

Iₙ(θ) = nI(θ)

multivariate

I(θ)ⱼₖ = E_θ[∂log p(X;θ)/∂θⱼ · ∂log p(X;θ)/∂θₖ]

Applications:

  • Efficiency bounds
  • Asymptotic variance
  • Optimal design
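
The two scalar forms can be checked against each other by Monte Carlo. For a Bernoulli(p) model the score is (X − p)/(p(1 − p)) and both forms equal 1/(p(1 − p)); the value of p and the simulation size are illustrative:

```python
# Fisher information of Bernoulli(p): E[score²] = −E[∂² log p] = 1/(p(1−p)).
import numpy as np

rng = np.random.default_rng(6)
p = 0.3
x = rng.binomial(1, p, size=1_000_000).astype(float)

score = (x - p) / (p * (1 - p))                 # ∂ log p(X;p)/∂p
d2 = -(x / p**2 + (1 - x) / (1 - p) ** 2)       # ∂² log p(X;p)/∂p²

print(f"E[score²]    ≈ {np.mean(score**2):.4f}")
print(f"−E[∂² log p] ≈ {-d2.mean():.4f}")
print(f"1/(p(1−p))   = {1 / (p * (1 - p)):.4f}")
```
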
Cramér-Rao Lower Bound
Efficiency

Mathematical Forms:

scalar bound

Var_θ[ĝ] ≥ [g'(θ)]²/Iₙ(θ)

vector bound

Cov_θ[ĝ] ≥ G(θ)Iₙ⁻¹(θ)G^T(θ)

efficiency

eff(ĝ) = [g'(θ)]²/[Iₙ(θ)Var_θ[ĝ]]

asymptotic efficiency

lim_{n→∞} eff(ĝₙ) = 1

Applications:

  • Optimality assessment
  • Bound comparison
  • Design efficiency
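
A sketch of the bound in use: for a N(μ, σ²) mean with σ² known, Iₙ(μ) = n/σ², so the bound for g(μ) = μ is σ²/n. The sample mean attains it, while the sample median has asymptotic efficiency 2/π ≈ 0.64 (sample size and replications are illustrative):

```python
# Efficiency relative to the Cramér–Rao bound σ²/n for a normal mean.
import numpy as np

rng = np.random.default_rng(7)
n, reps, sigma2 = 50, 100_000, 1.0
x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))

bound = sigma2 / n   # [g'(μ)]²/Iₙ(μ) with g(μ) = μ
for name, est in (("mean", x.mean(axis=1)), ("median", np.median(x, axis=1))):
    v = est.var()
    print(f"{name:>6}: Var ≈ {v:.5f},  eff ≈ {bound / v:.3f}")
```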

UMVUE Theory

Uniformly minimum variance unbiased estimator construction and properties

Rao-Blackwell Theorem
Improvement

Key Results:

conditional expectation

ĝ(T) = E[φ(X)|T]

variance reduction

Var_θ[ĝ(T)] ≤ Var_θ[φ(X)]

unbiasedness preservation

E_θ[ĝ(T)] = E_θ[φ(X)] = g(θ)

improvement condition

Equality iff φ(X) = ĝ(T) a.s.

Applications:

  • Variance reduction
  • Estimator improvement
  • UMVUE construction
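
A classical sketch of the theorem for a Poisson(λ) model, estimating g(λ) = e^{−λ} = P(X = 0): the crude unbiased φ(X) = 1{X₁ = 0} is conditioned on the sufficient statistic T = ∑Xᵢ, which gives ĝ(T) = ((n − 1)/n)^T; same expectation, far smaller variance (λ, n, and replications are illustrative):

```python
# Rao–Blackwellization: condition a crude unbiased estimator on T = ΣXᵢ.
import numpy as np

rng = np.random.default_rng(8)
lam, n, reps = 2.0, 10, 200_000
x = rng.poisson(lam, size=(reps, n))

crude = (x[:, 0] == 0).astype(float)   # φ(X) = 1{X₁ = 0}
T = x.sum(axis=1)
rb = ((n - 1) / n) ** T                # ĝ(T) = E[φ(X)|T], since X₁|T ~ Bin(T, 1/n)

print(f"target e^−λ = {np.exp(-lam):.4f}")
print(f"crude: mean ≈ {crude.mean():.4f},  Var ≈ {crude.var():.5f}")
print(f"RB:    mean ≈ {rb.mean():.4f},  Var ≈ {rb.var():.5f}")
```
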
Lehmann-Scheffé Theorem
Uniqueness

Key Results:

complete sufficiency

S is a complete sufficient statistic for θ

umvue form

ĝ = E[φ(X)|S] (unique UMVUE)

completeness condition

E_θ[h(S)] = 0 ∀θ ⟹ P_θ(h(S) = 0) = 1

uniqueness

ĝ₁ = ĝ₂ a.s. if both UMVUE

Applications:

  • UMVUE identification
  • Uniqueness proof
  • Optimal estimation
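
A worked instance of the completeness condition, tying back to the Rao–Blackwell sketch above: for T = ∑Xᵢ ~ Poisson(nλ), a vanishing expectation is a power series in λ that vanishes for all λ > 0, so every coefficient is zero:

```latex
E_\lambda[h(T)]
  = e^{-n\lambda}\sum_{t=0}^{\infty} \frac{h(t)\,n^{t}}{t!}\,\lambda^{t} = 0
  \quad \forall\, \lambda > 0
  \;\Longrightarrow\; h(t) = 0 \ \ \forall\, t
  \;\Longrightarrow\; P_\lambda\bigl(h(T) = 0\bigr) = 1 .
```

By Lehmann–Scheffé, ĝ = E[φ(X)|T] = ((n − 1)/n)^T is therefore the unique UMVUE of e^{−λ}.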

Asymptotic Properties

Large sample behavior and limiting distributions

Asymptotic Normality
Large Sample

Asymptotic Results:

MLE asymptotic normality

√n(θ̂ₙ - θ) →^D N(0, I⁻¹(θ))

general form

√n(θ̂ₙ - θ) →^D N(0, Σ(θ))

delta method

√n(g(θ̂ₙ) - g(θ)) →^D N(0, g'(θ)²σ²)

confidence interval

θ̂ₙ ± z_{α/2}/√(nI(θ̂ₙ))

Applications:

  • Inference procedures
  • Confidence intervals
  • Hypothesis testing
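
The confidence-interval formula in use: for a Poisson rate the MLE is X̄ and I(λ) = 1/λ, so the Wald interval is X̄ ± z_{α/2}√(X̄/n). A coverage sketch with illustrative λ, n, and nominal 95% level:

```python
# Empirical coverage of the Wald interval θ̂ₙ ± z_{α/2}/√(n I(θ̂ₙ)).
import numpy as np

rng = np.random.default_rng(9)
lam, n, reps, z = 4.0, 100, 50_000, 1.959964   # z_{0.025}

x = rng.poisson(lam, size=(reps, n))
mle = x.mean(axis=1)
half = z * np.sqrt(mle / n)                    # z_{α/2}/√(n·I(λ̂)), I(λ) = 1/λ

covered = (mle - half <= lam) & (lam <= mle + half)
print(f"empirical coverage ≈ {covered.mean():.4f}  (nominal 0.95)")
```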

Common Distribution Examples

Standard results for frequently encountered distributions

Estimation Results Summary
MLE, UMVUE, Fisher information, and Cramér-Rao bounds for common distributions
| Distribution | MLE | UMVUE | Fisher Information | CR Bound |
|---|---|---|---|---|
| Normal N(μ,σ²), μ ∈ ℝ, σ² > 0 | μ̂ = X̄, σ̂² = Sₙ² | μ̂ = X̄, σ̂² = S² | I(μ) = 1/σ², I(σ²) = 1/(2σ⁴) | Var[μ̂] ≥ σ²/n, Var[σ̂²] ≥ 2σ⁴/n |
| Exponential E(λ), λ > 0 | λ̂ = 1/X̄ | λ̂* = (n−1)/(nX̄) | I(λ) = 1/λ² | Var[λ̂] ≥ λ²/n |
| Poisson P(λ), λ > 0 | λ̂ = X̄ | λ̂ = X̄ | I(λ) = 1/λ | Var[λ̂] ≥ λ/n |
| Bernoulli B(1,p), p ∈ (0,1) | p̂ = X̄ | p̂ = X̄ | I(p) = 1/(p(1−p)) | Var[p̂] ≥ p(1−p)/n |

Here Sₙ² = (1/n)∑ᵢ₌₁ⁿ (Xᵢ − X̄)² (divides by n) and S² = (1/(n−1))∑ᵢ₌₁ⁿ (Xᵢ − X̄)² (divides by n − 1).
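
The Exponential row can be verified by simulation: E[λ̂] = nλ/(n − 1), so the MLE is biased upward, while λ̂* is exactly unbiased with Var[λ̂*] = λ²/(n − 2), strictly above the Cramér–Rao bound λ²/n (the bound is not attained here). A sketch with illustrative λ and n:

```python
# Checking the Exponential(λ) row: biased MLE vs unbiased UMVUE.
import numpy as np

rng = np.random.default_rng(10)
lam, n, reps = 2.0, 10, 300_000
xbar = rng.exponential(1 / lam, size=(reps, n)).mean(axis=1)

mle = 1 / xbar                    # λ̂ = 1/X̄
umvue = (n - 1) / (n * xbar)      # λ̂* = (n−1)/(n·X̄)

print(f"E[λ̂]  ≈ {mle.mean():.4f}  (theory nλ/(n−1) = {n * lam / (n - 1):.4f})")
print(f"E[λ̂*] ≈ {umvue.mean():.4f}  (theory {lam:.4f})")
print(f"Var[λ̂*] ≈ {umvue.var():.4f}  vs  CR bound λ²/n = {lam**2 / n:.4f}")
```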
