MathIsimple
Formula Reference

Numerical Characteristics Formulas

Comprehensive collection of essential formulas for numerical characteristics, including expectation, variance, covariance, moments, characteristic functions, and their properties

8 Categories · 52 Formulas
Mathematical Expectation
Fundamental formulas for mathematical expectation and its properties
6 formulas

Discrete Expectation

E[\xi] = \sum_{k=1}^{\infty} x_k p(x_k)

Expected value for discrete random variable with distribution P(ξ = xₖ) = p(xₖ)

Continuous Expectation

E[\xi] = \int_{-\infty}^{\infty} x p(x) dx

Expected value for continuous random variable with probability density function p(x)

General Form (Stieltjes Integral)

E[\xi] = \int_{-\infty}^{\infty} x dF(x)

Unified form for both discrete and continuous random variables using distribution function F(x)

Expectation of Function

E[g(\xi)] = \begin{cases} \sum_k g(x_k) p(x_k) & \text{discrete} \\ \int_{-\infty}^{\infty} g(x) p(x) dx & \text{continuous} \end{cases}

Expected value of function g(ξ) of random variable ξ

Linearity of Expectation

E\left[\sum_{i=1}^n c_i \xi_i + b\right] = \sum_{i=1}^n c_i E[\xi_i] + b

Linear combinations preserve expectation (independence not required)

Product of Independent Variables

\text{If } \xi_1, \ldots, \xi_n \text{ are independent, then } E[\xi_1 \cdots \xi_n] = E[\xi_1] \cdots E[\xi_n]

Expectation of product equals product of expectations for independent variables
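The expectation rules above lend themselves to a quick numerical sanity check. The Python sketch below (illustrative; the uniform variables and coefficients are arbitrary choices) estimates E[2ξ + 3η + 1] and E[ξη] by Monte Carlo for two independent Uniform[0,1] variables; linearity predicts 3.5 and independence predicts 0.25.

```python
import random

random.seed(0)
N = 200_000

# Two independent Uniform[0, 1] samples
xs = [random.random() for _ in range(N)]
ys = [random.random() for _ in range(N)]

# Linearity: E[2*xi + 3*eta + 1] = 2*0.5 + 3*0.5 + 1 = 3.5
lin = sum(2 * x + 3 * y + 1 for x, y in zip(xs, ys)) / N

# Independence: E[xi * eta] = E[xi] * E[eta] = 0.25
prod = sum(x * y for x, y in zip(xs, ys)) / N

print(lin, prod)  # close to 3.5 and 0.25
```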

Variance and Standard Deviation
Variance formulas and properties for measuring dispersion
6 formulas

Variance Definition

\text{Var}(\xi) = E[(\xi - E[\xi])^2]

Variance as expectation of squared deviation from mean

Computational Formula

\text{Var}(\xi) = E[\xi^2] - (E[\xi])^2

More convenient formula for calculating variance

Standard Deviation

\sigma(\xi) = \sqrt{\text{Var}(\xi)}

Square root of variance, having same units as the random variable

Variance of Linear Transformation

\text{Var}(a\xi + b) = a^2 \text{Var}(\xi)

Translation does not affect variance; scaling enters quadratically

Variance of Sum

\text{Var}\left(\sum_{i=1}^n \xi_i\right) = \sum_{i=1}^n \text{Var}(\xi_i) + 2\sum_{1 \leq i < j \leq n} \text{Cov}(\xi_i, \xi_j)

General formula for variance of sum including covariance terms

Variance of Independent Sum

\text{If } \xi_1, \ldots, \xi_n \text{ are independent, then } \text{Var}\left(\sum_{i=1}^n \xi_i\right) = \sum_{i=1}^n \text{Var}(\xi_i)

Variance is additive for independent random variables
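The computational formula and the scaling property can be verified exactly for a small discrete distribution. A minimal Python sketch, using a fair six-sided die as the (arbitrary) example:

```python
from fractions import Fraction

vals = range(1, 7)                     # fair die, P(X = k) = 1/6
p = Fraction(1, 6)

EX  = sum(k * p for k in vals)         # 7/2
EX2 = sum(k * k * p for k in vals)     # 91/6
var = EX2 - EX**2                      # Var(X) = E[X^2] - (E[X])^2 = 35/12

# Var(aX + b) = a^2 Var(X), checked directly with a = 2, b = 5
EY   = sum((2 * k + 5) * p for k in vals)
varY = sum(((2 * k + 5) - EY) ** 2 * p for k in vals)

print(var, varY)  # 35/12 and 35/3
```

Using `Fraction` keeps the arithmetic exact, so the identity `varY == 4 * var` holds with no rounding error.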

Covariance and Correlation
Measures of joint variability and linear dependence between variables
6 formulas

Covariance Definition

\text{Cov}(\xi, \eta) = E[(\xi - E[\xi])(\eta - E[\eta])]

Measures how two variables jointly deviate from their respective means

Computational Covariance

\text{Cov}(\xi, \eta) = E[\xi\eta] - E[\xi]E[\eta]

More convenient formula for calculating covariance

Correlation Coefficient

\rho(\xi, \eta) = \frac{\text{Cov}(\xi, \eta)}{\sqrt{\text{Var}(\xi) \cdot \text{Var}(\eta)}}

Standardized measure of linear dependence, always between -1 and 1

Covariance Properties

\text{Cov}(a\xi + b, c\eta + d) = ac \cdot \text{Cov}(\xi, \eta)

Bilinearity of covariance with respect to linear transformations

Bilinearity of Covariance

\text{Cov}\left(\sum_{i=1}^m \xi_i, \sum_{j=1}^n \eta_j\right) = \sum_{i=1}^m \sum_{j=1}^n \text{Cov}(\xi_i, \eta_j)

Covariance distributes over sums

Independence Condition

\xi \text{ and } \eta \text{ independent} \Rightarrow \text{Cov}(\xi, \eta) = 0

Independent variables are uncorrelated (converse not generally true)
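As a worked example of the definitions above, this Python sketch (the sample values are made up) computes the covariance and correlation coefficient for a small, nearly linear data set; ρ comes out close to 1, consistent with the bound |ρ| ≤ 1.

```python
import math

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]        # roughly 2*x, so rho should be near 1

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n

# Cov(x, y) = E[(x - mx)(y - my)], estimated from the sample
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
vx  = sum((x - mx) ** 2 for x in xs) / n
vy  = sum((y - my) ** 2 for y in ys) / n

rho = cov / math.sqrt(vx * vy)
print(round(rho, 4))
```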

Important Inequalities
Fundamental probability inequalities involving expectations and variances
5 formulas

Markov Inequality

P(|\xi| \geq \varepsilon) \leq \frac{E[|\xi|]}{\varepsilon}, \quad \varepsilon > 0

Upper bound for tail probability using first moment

Chebyshev Inequality

P(|\xi - E[\xi]| \geq \varepsilon) \leq \frac{\text{Var}(\xi)}{\varepsilon^2}, \quad \varepsilon > 0

Probability bound for deviation from mean using variance

Cauchy-Schwarz Inequality

|E[\xi\eta]|^2 \leq E[\xi^2] \cdot E[\eta^2]

Fundamental inequality relating product expectation to second moments

Jensen Inequality

\text{If } g(x) \text{ is convex, then } g(E[\xi]) \leq E[g(\xi)]

Relates function of expectation to expectation of function for convex functions

Hölder Inequality

E[|\xi\eta|] \leq (E[|\xi|^p])^{1/p} (E[|\eta|^q])^{1/q}, \quad \frac{1}{p} + \frac{1}{q} = 1, \quad p, q > 1

Generalization of Cauchy-Schwarz inequality for higher moments
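The Chebyshev bound can be checked directly on a finite distribution. A Python sketch, again using a fair die as an arbitrary example: the exact tail probability P(|ξ − 3.5| ≥ 2) = 1/3 sits below the bound Var(ξ)/ε² = 35/48 ≈ 0.729.

```python
vals = range(1, 7)                          # fair die
p = 1 / 6

EX  = sum(k * p for k in vals)              # 3.5
var = sum((k - EX) ** 2 * p for k in vals)  # 35/12

eps = 2.0
tail  = sum(p for k in vals if abs(k - EX) >= eps)  # P(|X - EX| >= 2) = 1/3
bound = var / eps**2                                # 35/48

print(tail, bound)
assert tail <= bound
```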

Moments and Generating Functions
Moment formulas and generating function properties
7 formulas

Raw Moments

m_k = E[\xi^k], \quad k = 1, 2, 3, \ldots

k-th moment about origin

Central Moments

c_k = E[(\xi - E[\xi])^k], \quad k = 1, 2, 3, \ldots

k-th moment about the mean

Moment Relationships

c_1 = 0, \quad c_2 = \text{Var}(\xi), \quad m_1 = E[\xi]

Special cases of first and second moments

Skewness Coefficient

\gamma_1 = \frac{c_3}{c_2^{3/2}} = \frac{E[(\xi - E[\xi])^3]}{(\text{Var}(\xi))^{3/2}}

Measures asymmetry of distribution (γ₁ > 0: right-skewed, γ₁ < 0: left-skewed)

Kurtosis Coefficient

\gamma_2 = \frac{c_4}{c_2^2} - 3 = \frac{E[(\xi - E[\xi])^4]}{(\text{Var}(\xi))^2} - 3

Measures peakedness relative to normal distribution (γ₂ > 0: leptokurtic, γ₂ < 0: platykurtic)
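The skewness and kurtosis definitions can be evaluated exactly for a Bernoulli variable, where closed forms are known (skewness (1 − 2p)/√(pq), excess kurtosis (1 − 6pq)/(pq)). A Python sketch with the arbitrary choice p = 0.2:

```python
# Bernoulli(p): support {0, 1}, P(1) = p
p, q = 0.2, 0.8
support = [(0, q), (1, p)]

mean = p
c2 = sum(pr * (x - mean) ** 2 for x, pr in support)   # central moments
c3 = sum(pr * (x - mean) ** 3 for x, pr in support)
c4 = sum(pr * (x - mean) ** 4 for x, pr in support)

g1 = c3 / c2 ** 1.5        # skewness: (1 - 2p)/sqrt(pq) = 1.5
g2 = c4 / c2 ** 2 - 3      # excess kurtosis: (1 - 6pq)/(pq) = 0.25

print(g1, g2)
```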

Moment Generating Function

M_\xi(t) = E[e^{t\xi}], \quad \text{if it exists}

Generates all moments through derivatives at origin

MGF Moment Formula

\text{If } M_\xi(t) \text{ exists in a neighborhood of } 0, \text{ then } E[\xi^k] = M_\xi^{(k)}(0)

k-th moment equals k-th derivative of MGF evaluated at zero
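As an illustration of the MGF moment formula, the Python sketch below approximates the first two derivatives of the exponential MGF M(t) = λ/(λ − t) at 0 by finite differences; for λ = 2 (an arbitrary choice) the exact values are E[ξ] = 1/λ = 0.5 and E[ξ²] = 2/λ² = 0.5.

```python
lam = 2.0

def M(t):
    """MGF of Exp(lam): lam / (lam - t), valid for t < lam."""
    return lam / (lam - t)

h = 1e-4
m1 = (M(h) - M(-h)) / (2 * h)            # central difference for M'(0)  ~ 1/lam
m2 = (M(h) - 2 * M(0) + M(-h)) / h**2    # central difference for M''(0) ~ 2/lam^2

print(m1, m2)  # both close to 0.5
```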

Characteristic Functions
Characteristic function definitions, properties, and applications
8 formulas

Characteristic Function Definition

f_\xi(t) = E[e^{it\xi}] = \int_{-\infty}^{\infty} e^{itx} dF(x), \quad t \in \mathbb{R}

Fourier transform of the distribution; always exists because |e^{itx}| = 1

Discrete Characteristic Function

f_\xi(t) = \sum_{k=1}^{\infty} e^{itx_k} p(x_k)

Characteristic function for discrete random variables

Continuous Characteristic Function

f_\xi(t) = \int_{-\infty}^{\infty} e^{itx} p(x) dx

Characteristic function for continuous random variables

Linear Transformation

\text{If } \eta = a\xi + b, \text{ then } f_\eta(t) = e^{itb} f_\xi(at)

Characteristic function under linear transformation

Independence Property

\text{If } \xi_1, \ldots, \xi_n \text{ are independent and } \eta = \sum_{i=1}^n \xi_i, \text{ then } f_\eta(t) = \prod_{i=1}^n f_{\xi_i}(t)

CF of sum equals product of CFs for independent variables

Moment Generation from CF

\text{If } E[|\xi|^n] < \infty, \text{ then } E[\xi^k] = \frac{f_\xi^{(k)}(0)}{i^k}, \quad 0 \leq k \leq n

Extract moments through derivatives of characteristic function at origin

Inversion Formula

F(x_2) - F(x_1) = \lim_{T \to \infty} \frac{1}{2\pi} \int_{-T}^{T} \frac{e^{-itx_1} - e^{-itx_2}}{it} f_\xi(t) dt

Recover the distribution function from the characteristic function (valid at continuity points x₁, x₂ of F)

Fourier Inversion

\text{If } \int_{-\infty}^{\infty} |f_\xi(t)| dt < \infty, \text{ then } p(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-itx} f_\xi(t) dt

Direct recovery of density from characteristic function
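The Fourier inversion formula can be demonstrated numerically: integrating the standard normal CF f(t) = e^{−t²/2} recovers the density at a point. A Python sketch evaluating p(0) = 1/√(2π) ≈ 0.3989, truncating the integral at |t| = 10 (far in the Gaussian tail, so the truncation error is negligible):

```python
import math

# CF of N(0, 1): f(t) = exp(-t^2 / 2).
# Inversion at x = 0: p(0) = (1/2pi) * integral of f(t) dt = 1/sqrt(2*pi)
T, n = 10.0, 20_000
dt = 2 * T / n

integral = sum(math.exp(-(-T + k * dt) ** 2 / 2) for k in range(n + 1)) * dt
p0 = integral / (2 * math.pi)

print(p0)  # ~ 0.3989
```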

Common Distribution Parameters
Expectation and variance formulas for standard probability distributions
8 formulas

Bernoulli Distribution Ber(p)

E[\xi] = p, \quad \text{Var}(\xi) = p(1-p), \quad f(t) = pe^{it} + (1-p)

Single trial success/failure with success probability p

Binomial Distribution B(n,p)

E[\xi] = np, \quad \text{Var}(\xi) = np(1-p), \quad f(t) = (pe^{it} + (1-p))^n

Number of successes in n independent Bernoulli trials

Poisson Distribution P(λ)

E[\xi] = \lambda, \quad \text{Var}(\xi) = \lambda, \quad f(t) = e^{\lambda(e^{it} - 1)}

Rare events with rate parameter λ (mean equals variance)

Geometric Distribution Geo(p)

E[\xi] = \frac{1}{p}, \quad \text{Var}(\xi) = \frac{1-p}{p^2}, \quad f(t) = \frac{pe^{it}}{1-(1-p)e^{it}}

Number of trials until first success

Uniform Distribution U[a,b]

E[\xi] = \frac{a+b}{2}, \quad \text{Var}(\xi) = \frac{(b-a)^2}{12}, \quad f(t) = \frac{e^{itb} - e^{ita}}{it(b-a)}

Equal probability over interval [a,b]

Exponential Distribution Exp(λ)

E[\xi] = \frac{1}{\lambda}, \quad \text{Var}(\xi) = \frac{1}{\lambda^2}, \quad f(t) = \frac{1}{1 - \frac{it}{\lambda}}

Continuous analog of geometric distribution (memoryless property)

Normal Distribution N(μ,σ²)

E[\xi] = \mu, \quad \text{Var}(\xi) = \sigma^2, \quad f(t) = e^{it\mu - \frac{\sigma^2 t^2}{2}}

Bell-shaped distribution with location parameter μ and scale parameter σ

Gamma Distribution Γ(α,β)

E[\xi] = \frac{\alpha}{\beta}, \quad \text{Var}(\xi) = \frac{\alpha}{\beta^2}, \quad f(t) = \left(1 - \frac{it}{\beta}\right)^{-\alpha}

Generalizes exponential distribution with shape parameter α and rate parameter β
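The table entries are easy to verify by direct summation. As one example, this Python sketch checks the Poisson identity E[ξ] = Var(ξ) = λ for the arbitrary choice λ = 3, truncating the series where the tail is negligible:

```python
import math

lam = 3.0
K = 60  # truncation point; the Poisson tail beyond k = 60 is negligible for lam = 3

# P(X = k) = e^{-lam} * lam^k / k!
pmf = [math.exp(-lam) * lam**k / math.factorial(k) for k in range(K)]

EX  = sum(k * p for k, p in enumerate(pmf))
EX2 = sum(k * k * p for k, p in enumerate(pmf))
var = EX2 - EX**2

print(EX, var)  # both ~ 3.0
```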

Multivariate Extensions
Extensions to multivariate case and joint characteristics
6 formulas

Joint Expectation

E[g(\xi, \eta)] = \begin{cases} \sum_i \sum_j g(x_i, y_j) p_{ij} & \text{discrete} \\ \int\int g(x,y) p(x,y) dx dy & \text{continuous} \end{cases}

Expected value of function of two random variables

Covariance Matrix

\boldsymbol{\Sigma} = E[(\boldsymbol{\xi} - E[\boldsymbol{\xi}])(\boldsymbol{\xi} - E[\boldsymbol{\xi}])'], \quad \Sigma_{ij} = \text{Cov}(\xi_i, \xi_j)

Matrix of covariances for multivariate random vector

Multivariate Normal CF

f_{\boldsymbol{\xi}}(\boldsymbol{t}) = \exp\{i\boldsymbol{t}'\boldsymbol{\mu} - \frac{1}{2}\boldsymbol{t}'\boldsymbol{\Sigma}\boldsymbol{t}\}

Characteristic function of multivariate normal distribution

Linear Transformation of Vector

\text{If } \boldsymbol{\eta} = \boldsymbol{A}\boldsymbol{\xi} + \boldsymbol{b}, \text{ then } E[\boldsymbol{\eta}] = \boldsymbol{A}E[\boldsymbol{\xi}] + \boldsymbol{b}, \quad \text{Cov}(\boldsymbol{\eta}) = \boldsymbol{A}\text{Cov}(\boldsymbol{\xi})\boldsymbol{A}'

Mean and covariance under linear transformation
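The covariance transformation rule Cov(η) = A Cov(ξ) A′ is plain matrix arithmetic. A self-contained Python sketch with made-up A and Σ (note the result stays symmetric, as a covariance matrix must):

```python
def matmul(X, Y):
    """Multiply two small matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

A     = [[1.0, 1.0], [1.0, -1.0]]   # transformation matrix (made-up example)
Sigma = [[2.0, 1.0], [1.0, 3.0]]    # covariance matrix of xi (made-up example)

cov_eta = matmul(matmul(A, Sigma), transpose(A))   # Cov(eta) = A Sigma A'
print(cov_eta)  # [[7.0, -1.0], [-1.0, 3.0]]
```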

Conditional Expectation

E[\eta|\xi = x] = \begin{cases} \sum_j y_j P(\eta = y_j | \xi = x) & \text{discrete} \\ \int y p_{\eta|\xi}(y|x) dy & \text{continuous} \end{cases}

Expected value of η given ξ = x

Law of Total Expectation

E[\eta] = E[E[\eta|\xi]]

Expectation equals expected value of conditional expectation
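The law of total expectation can be illustrated with a simple two-stage simulation. In the made-up model below (Python), ξ ~ Ber(0.3), and η given ξ is normal with mean 10 (ξ = 1) or mean 2 (ξ = 0); the tower rule gives E[η] = 0.7·2 + 0.3·10 = 4.4, which the Monte Carlo estimate reproduces.

```python
import random

random.seed(1)
N = 200_000

# E[eta] = E[E[eta | xi]] = P(xi=0)*2 + P(xi=1)*10
exact = 0.7 * 2 + 0.3 * 10   # 4.4

total = 0.0
for _ in range(N):
    xi = 1 if random.random() < 0.3 else 0
    total += random.gauss(10.0 if xi else 2.0, 1.0)

mc = total / N
print(mc)  # close to 4.4
```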

📐 Key Properties & Theorems

Essential properties and relationships for numerical characteristics

Fundamental Properties

  • Linearity: E[aX + bY + c] = aE[X] + bE[Y] + c
  • Independence: E[XY] = E[X]E[Y] if X ⊥ Y
  • Variance scaling: Var(aX + b) = a²Var(X)
  • Covariance bilinearity: Cov(X+Y, Z) = Cov(X,Z) + Cov(Y,Z)

Key Relationships

  • Variance formula: Var(X) = E[X²] - (E[X])²
  • Correlation bound: |ρ(X,Y)| ≤ 1
  • Independence ⇒ uncorrelated: X ⊥ Y ⇒ Cov(X,Y) = 0
  • CF uniqueness: f(t) uniquely determines the distribution
