Bayes: P(A|B) = P(B|A)·P(A) / P(B)
Total probability law: P(A) = Σ_i P(A|B_i)·P(B_i) for a partition B_1, B_2, … of the sample space
Inclusion-Exclusion Formula: P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
Expectation general formula (discrete & cont.)
Discrete: E(X) = Σ_x x·P(X = x)
Continuous: E(X) = ∫ x·f(x) dx
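A quick worked instance of both formulas (illustrative, not part of the deck):

Discrete (fair die): E(X) = \sum_{k=1}^{6} k \cdot \tfrac{1}{6} = 3.5
Continuous (X \sim U(0,1)): E(X) = \int_0^1 x \, dx = \tfrac{1}{2}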
Bernoulli(p) Expectation & Variance
E(X)=p
Var(X)=pq
Binomial(n,p) Expectation & Variance
E(X)=np
Var(X)=npq
Geometric(p) Expectation & Variance
E(X)=1/p
Var(X)=q/p^2
Normal(μ,σ^2) Expectation & Variance
E(X)=μ
Var(X)=σ^2
Gamma(λ,n) Expectation & Variance
E(X)=n/λ
Var(X)=n/λ^2
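A small sanity check of the expectation/variance formulas above, assuming scipy is available (parameter names follow scipy.stats, not the deck; scipy's geometric counts trials, matching E(X) = 1/p):

from scipy import stats

n, p, lam = 10, 0.3, 2.0
q = 1 - p

# Binomial(n, p): E = np, Var = npq
assert abs(stats.binom(n, p).mean() - n * p) < 1e-9
assert abs(stats.binom(n, p).var() - n * p * q) < 1e-9

# Geometric(p): E = 1/p, Var = q/p^2
assert abs(stats.geom(p).mean() - 1 / p) < 1e-9
assert abs(stats.geom(p).var() - q / p**2) < 1e-9

# Gamma with shape n and rate lam: E = n/lam, Var = n/lam^2
# (scipy parametrizes by shape a and scale = 1/rate)
g = stats.gamma(a=n, scale=1 / lam)
assert abs(g.mean() - n / lam) < 1e-9
assert abs(g.var() - n / lam**2) < 1e-9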
Jensen inequality: φ(E(X)) ≤ E(φ(X)) for convex φ
Markov inequality: P(X ≥ a) ≤ E(X)/a for X ≥ 0 and a > 0
Independence of Random Variables
Chernoff Bound: P(X ≥ a) ≤ inf_{t>0} e^{−ta}·M_X(t)
Central Limit Theorem
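A simulation sketch of the CLT (my own illustration; the Exp(1) choice and sample sizes are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
n, reps = 50, 100_000

# Standardized means of Exp(1) samples (mu = sigma = 1)
x = rng.exponential(1.0, size=(reps, n))
z = np.sqrt(n) * (x.mean(axis=1) - 1.0) / 1.0

# Should be close to the N(0,1) values 0 and 1
print(z.mean(), z.var())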
Convergence: which notions of convergence imply which others? Almost sure ⇒ in probability ⇒ in distribution; L^p ⇒ in probability; in distribution to a constant ⇒ in probability
Location Family
Exponential Family a(w)
Exponential Family f_w(x)
k-parameter exponential family in canonical form
k-parameter exponential family
convex function
Differential Identities
Delta Method: if √n(Y_n − θ) →d N(0, σ^2), then √n(g(Y_n) − g(θ)) →d N(0, σ^2·g′(θ)^2)
Factorization Theorem: Y = t(X) is sufficient for θ iff f_θ(x) = g_θ(t(x))·h(x) for some functions g_θ, h
Cauchy-Schwarz Inequality: (E(XY))^2 ≤ E(X^2)·E(Y^2)
Chebyshev Inequality: P(|X − E(X)| ≥ a) ≤ Var(X)/a^2
Hölder's Inequality: E|XY| ≤ (E|X|^p)^{1/p}·(E|Y|^q)^{1/q}, where 1/p + 1/q = 1 and p, q > 1
sample mean of random sample
sample standard deviation of random sample
scale family
sample variance
S^2
x̄
N(µ, σ^2/n) (for an iid N(µ, σ^2) sample)
second order Delta method: if g′(θ) = 0, then n(g(Y_n) − g(θ)) →d σ^2·(g″(θ)/2)·χ²_1
how to show sufficiency
either use the factorization theorem, or
show that the conditional distribution of X given Y = y does not depend on θ (worked factorization example below)
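Worked factorization example (a standard illustration, assuming an iid Bernoulli(θ) sample):

f_\theta(x_1, \dots, x_n) = \prod_{i=1}^{n} \theta^{x_i}(1-\theta)^{1-x_i}
= \underbrace{\theta^{\sum_i x_i}(1-\theta)^{\,n - \sum_i x_i}}_{g_\theta(t(x))} \cdot \underbrace{1}_{h(x)},

so Y = \sum_i X_i is sufficient for \theta.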
Minimal sufficient statistic: a sufficient statistic that can be written as a function of every other sufficient statistic
how to prove Y is MSS
how to prove Y is not MSS
if Y is sufficient for θ and for every sufficient statistic Z there exists a function r such that Y = r(Z), then Y is MSS
or
find another sufficient statistic Z; if Y cannot be written as a function of Z, then Y is not MSS
complete & sufficient-> ?
complete & sufficient-> MSS
nonconstant & complete ->?
nonconstant & complete -> not ancillary
constant -> ?
constant -> ancillary & complete
Weak Law of Large Numbers: X̄_n → µ in probability
Strong Law of Large Numbers: X̄_n → µ almost surely
If Y is a complete, sufficient statistic for family {fθ : θ ∈ Θ} of joint pdfs or pmfs, then…
… then Y is MSS for θ
(Bahadur)
If Y is a complete sufficient statistic for {fθ : θ ∈ Θ}, and if Z is ancillary for θ, then …
… then for all θ ∈ Θ, Y and Z are independent with respect to fθ.
(Basu)
E(Y) = E(E(Y|X)) = Σ_x E(Y|X=x)·P(X=x)
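A tiny numeric instance of the tower property (numbers invented for illustration): if P(X = 0) = P(X = 1) = 1/2 with E(Y|X=0) = 1 and E(Y|X=1) = 3, then

E(Y) = 1 \cdot \tfrac{1}{2} + 3 \cdot \tfrac{1}{2} = 2.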
E(X|B) = E(X·1_B)/P(B) for an event B with P(B) > 0
relative efficiency
Independence & Fisher Information: for independent observations, information adds: I_{X,Y}(θ) = I_X(θ) + I_Y(θ); for an iid sample, I_n(θ) = n·I_1(θ)
strictly convex
Rao-Blackwell
Let Z be a sufficient statistic for {fθ : θ ∈ Θ} and let Y be an estimator for g(θ). Define W := Eθ(Y|Z). If the loss ℓ(θ, y) is convex in y, then R(θ, W) ≤ R(θ, Y) for all θ ∈ Θ.
UMRU
Y is uniformly minimum risk unbiased (UMRU) for risk function r if, for any other unbiased estimator Z for g(θ), we have r(θ, Y) ≤ r(θ, Z) for all θ ∈ Θ.
UMVU
Y is uniformly minimum variance unbiased (UMVU) if, for any other unbiased estimator Z for g(θ), we have Varθ(Y) ≤ Varθ(Z) for all θ ∈ Θ.
Efficiency
Cramér-Rao
Find UMVU: find a complete sufficient statistic Z, then either compute W = E(Y|Z) for some unbiased estimator Y, or find a function of Z that is unbiased for g(θ) (Lehmann-Scheffé)
Alternate Characterization of UMVU
Let {fθ : θ ∈ Θ} be a family of distributions and let W be an unbiased estimator for g(θ). Let L2(Ω) be the set of statistics with finite second moment. Then W ∈ L2(Ω) is UMVU for g(θ) if and only if Eθ(WU) = 0 ∀ θ ∈ Θ, for all U ∈ L2(Ω) that are unbiased estimators of 0.
Y unbiased for g(θ)
also: Var(Y)=
Lehmann-Scheffé
Let Z be a complete sufficient statistic for {fθ : θ ∈ Θ} and let Y be an unbiased estimator for g(θ). Define W := Eθ(Y |Z). Assume that l(θ, y) is convex in y, for all θ ∈ Θ. Then W is UMRU for g(θ). If l(θ, y) is strictly convex in y for all θ ∈ Θ, then W is unique.
In particular, W is the unique UMVU for g(θ).
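A classic application of Lehmann-Scheffé (standard textbook example, assuming X_1, …, X_n iid Poisson(θ) and g(θ) = e^{−θ}): Y = \mathbf{1}\{X_1 = 0\} is unbiased for e^{-\theta}, Z = \sum_i X_i is complete sufficient, and since X_1 \mid Z = z \sim \mathrm{Bin}(z, 1/n),

W = E(Y \mid Z = z) = P(X_1 = 0 \mid Z = z) = \left(\tfrac{n-1}{n}\right)^{z},

so W = ((n-1)/n)^{\sum_i X_i} is the unique UMVU for e^{-\theta}.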
Fisher Information
Step by step find cond. exp.
find joint pdf
find marginal pdf
find conditional pdf
find conditional expectation
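Worked example of the four steps (the joint density is chosen for illustration): f(x, y) = x + y on [0,1]^2.

1. joint pdf: f(x, y) = x + y
2. marginal pdf: f_X(x) = \int_0^1 (x + y)\,dy = x + \tfrac{1}{2}
3. conditional pdf: f(y \mid x) = \frac{x + y}{x + 1/2}
4. conditional expectation: E(Y \mid X = x) = \int_0^1 y\,\frac{x + y}{x + 1/2}\,dy = \frac{x/2 + 1/3}{x + 1/2} = \frac{3x + 2}{6x + 3}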
E(X)
E(X^2)
Var(X)+(E(X))^2
Sufficient Statistic
Y := t(X_1,...,X_n)
Y is a sufficient statistic for θ if, for every y ∈ Rk and for every θ ∈ Θ, the conditional distribution of (X_1, . . . , X_n) given Y = y (with respect to probabilities given by fθ) does not depend on θ
Minimal Sufficient Statistic
Ancillary Statistic
Complete Statistic
Definition unbiased
Bayes Estimator
Consistency
Method of Moments
Calculate whichever moment we want to base the estimator on (first moment = E(X), second moment = E(X^2))
rewrite the result so that we have: θ = c·E(X), where c is a constant
let's say c = 3/2
the method of moments estimator is then θ̂ = (3/2)·x̄, i.e. the population moment replaced by the sample moment (sketch in code below)
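A sketch of the recipe in code, using a model invented to match c = 3/2: X ~ U(θ/3, θ) has E(X) = (2/3)·θ, so θ = (3/2)·E(X); the data are simulated.

import numpy as np

rng = np.random.default_rng(0)
theta_true = 4.0

# Hypothetical model with theta = (3/2) * E(X): X ~ U(theta/3, theta)
x = rng.uniform(theta_true / 3, theta_true, size=10_000)

# MOM: replace the population moment E(X) by the sample mean
theta_hat = 1.5 * x.mean()
print(theta_hat)  # should be close to 4.0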
Likelihood Function
Log concave
Functional Equivariance of MLE
Likelihood Inequality
Consistency of MLE
Limiting Distribution of MLE
EM-Algorithm
EM Algorithm Improvement
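The cards above only name the algorithm; here is a minimal EM sketch for a two-component Gaussian mixture (my own illustration; starting values and data are arbitrary):

import numpy as np

rng = np.random.default_rng(0)

# Simulated data from a two-component Gaussian mixture
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])

# Initial guesses (arbitrary)
pi, mu1, mu2, s1, s2 = 0.5, -1.0, 1.0, 1.0, 1.0

def phi(x, mu, s):
    # Normal density with mean mu and standard deviation s
    return np.exp(-(x - mu)**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

for _ in range(100):
    # E-step: posterior responsibility of component 1 for each point
    w1 = pi * phi(x, mu1, s1)
    w2 = (1 - pi) * phi(x, mu2, s2)
    r = w1 / (w1 + w2)

    # M-step: weighted MLEs given the responsibilities
    pi = r.mean()
    mu1 = (r * x).sum() / r.sum()
    mu2 = ((1 - r) * x).sum() / (1 - r).sum()
    s1 = np.sqrt((r * (x - mu1)**2).sum() / r.sum())
    s2 = np.sqrt(((1 - r) * (x - mu2)**2).sum() / (1 - r).sum())

print(pi, mu1, mu2)  # roughly 0.3, -2, 3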
Jackknife Resampling
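A minimal jackknife sketch (the helper name jackknife_se is my own; for the mean, the jackknife SE coincides with the classical one, which makes a handy check):

import numpy as np

def jackknife_se(x, stat=np.mean):
    """Leave-one-out jackknife standard error of a statistic."""
    n = len(x)
    loo = np.array([stat(np.delete(x, i)) for i in range(n)])
    return np.sqrt((n - 1) / n * ((loo - loo.mean())**2).sum())

x = np.random.default_rng(0).normal(size=100)
print(jackknife_se(x))                   # jackknife SE of the mean
print(x.std(ddof=1) / np.sqrt(len(x)))   # classical SE, for comparison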
Bootstrapping
Bootstrapping: are Y_1,…,Y_n independent?
no, but they are conditionally independent given X_1, …, X_n
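A minimal nonparametric bootstrap sketch (my own illustration, estimating the standard error of the median): each resample Y_1, …, Y_n is drawn with replacement from the observed X_1, …, X_n, exactly as in the card above.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)

# Recompute the statistic on each with-replacement resample
reps = np.array([np.median(rng.choice(x, size=len(x), replace=True))
                 for _ in range(2000)])
print(reps.std(ddof=1))  # bootstrap SE of the median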
Product Rule: (f·g)′ = f′·g + f·g′
Chain Rule: (f(g(x)))′ = f′(g(x))·g′(x)
Non-unique MLE: e.g. for an iid U(θ, θ+1) sample, every θ̂ ∈ [max_i X_i − 1, min_i X_i] maximizes the likelihood
Covariance: Cov(X, Y) = E(XY) − E(X)·E(Y)
What can we say about the mean squared error if the estimator is unbiased?
it is equal to the variance of the estimator, since MSE(θ̂) = Var(θ̂) + Bias(θ̂)^2 and the bias term vanishes
Gaussian/normal Distribution: f(x) = (1/(σ√(2π)))·exp(−(x − µ)^2/(2σ^2))
example of a distribution where the MOM estimate and the MLE are different
Let X_1, . . . , X_n ∼ U(0, θ) be an iid sample, where θ is unknown: the MOM estimator is θ̂ = 2·x̄, while the MLE is θ̂ = max_i X_i (simulation sketch below)
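A quick simulation of the two estimators for this card (data simulated, θ = 5 chosen arbitrarily):

import numpy as np

rng = np.random.default_rng(0)
theta = 5.0
x = rng.uniform(0, theta, size=200)

print(2 * x.mean())  # MOM estimate: theta = 2*E(X)
print(x.max())       # MLE: always <= theta, biased downward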