Analyze heavy tails and small sample behavior.
The Student's t-distribution plays a crucial role in statistical inference when the population standard deviation ($\sigma$) is unknown, especially for small samples (commonly taken as $n < 30$). It is symmetric and bell-shaped like the Normal distribution but has heavier tails, meaning it is more prone to producing values that fall far from the mean.
Its probability density function is
$$ f(t) = \frac{\Gamma\!\left(\frac{\nu+1}{2}\right)}{\sqrt{\nu\pi}\,\Gamma\!\left(\frac{\nu}{2}\right)} \left(1 + \frac{t^2}{\nu}\right)^{-\frac{\nu+1}{2}} $$
where $\nu$ is the degrees of freedom and $\Gamma(\cdot)$ is the Gamma function.
1. Convergence to Normal: As $\nu \to \infty$, the t-distribution converges to the Standard Normal Distribution $N(0,1)$.
2. Relation to Normal & Chi-Square: If $Z \sim N(0,1)$ and $V \sim \chi^2_\nu$, then:
$$ T = \frac{Z}{\sqrt{V/\nu}} \sim t_\nu $$
3. Relation to F-Distribution: If $T \sim t_\nu$, then $T^2 \sim F(1, \nu)$; the square of a t-variable follows an F-distribution with 1 and $\nu$ degrees of freedom.
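The relationships above can be checked numerically with SciPy (a sketch, assuming `scipy` is available; the values of `df` and `t` are illustrative). The F-relation implies $P(|T| > t) = P(F > t^2)$, and the Normal limit shows up in the quantiles as $\nu$ grows:

```python
import scipy.stats as stats

df = 10
t = 1.5

# Relation to F: P(|T| > t) should equal P(F(1, df) > t^2)
p_t = 2 * stats.t.sf(t, df)      # two-tailed t probability
p_f = stats.f.sf(t**2, 1, df)    # upper-tail F(1, df) probability
print(p_t, p_f)                  # the two values agree

# Convergence to Normal: t quantiles approach the N(0,1) quantile
print(stats.t.ppf(0.975, 5))      # noticeably larger than 1.96
print(stats.t.ppf(0.975, 10_000)) # close to the Normal value
print(stats.norm.ppf(0.975))      # ~1.96
```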
1. One-Sample t-Test: Used to test if a population mean is equal to a specified value when $\sigma$ is unknown.
2. Confidence Intervals: Constructing intervals for the mean with small samples ($n < 30$).
3. Regression Analysis: Testing the significance of regression coefficients ($\beta$) in linear models.
4. Finance: Modeling asset returns that exhibit "fat tails" (extreme risks) which the Normal distribution underestimates.
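The first two applications can be sketched with SciPy on a hypothetical small sample (the data here is simulated with an assumed mean of 52, purely for illustration):

```python
import numpy as np
import scipy.stats as stats

# Hypothetical small sample (n = 12); test H0: mu = 50
rng = np.random.default_rng(42)
sample = rng.normal(loc=52, scale=4, size=12)

# One-sample t-test against the specified value
t_stat, p_val = stats.ttest_1samp(sample, popmean=50)
print(f"t = {t_stat:.3f}, p = {p_val:.4f}")

# 95% confidence interval for the mean, using the t critical value
n = len(sample)
ci = stats.t.interval(0.95, df=n - 1,
                      loc=sample.mean(),
                      scale=stats.sem(sample))
print(f"95% CI: ({ci[0]:.2f}, {ci[1]:.2f})")
```

Because $n$ is small and $\sigma$ is estimated from the data, the interval uses `stats.t.interval` rather than a Normal-based one, making it slightly wider.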
import scipy.stats as stats
df = 10 # Degrees of Freedom
t_score = 2.228
# 1. Two-Tailed P-Value
# P(|T| > t) = 2 * P(T > |t|); the survival function sf = 1 - cdf
# is numerically more stable for large |t|
p_val = 2 * stats.t.sf(abs(t_score), df)
print(f"Two-Tailed P-Value = {p_val:.4f}")
# 2. Critical Value (Inverse CDF)
# Find t for 95% Confidence (alpha = 0.05, two-tailed)
alpha = 0.05
crit_val = stats.t.ppf(1 - alpha/2, df)
print(f"Critical t-value = {crit_val:.4f}")
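The "heavier tails" claim from the opening paragraph can also be quantified with the same API: the probability of landing more than 3 standard units above the center is larger under the t-distribution than under the Normal, and shrinks toward the Normal value as the degrees of freedom grow.

```python
import scipy.stats as stats

# Tail probability P(T > 3) for increasing degrees of freedom
for df in (2, 5, 30):
    print(f"P(T > 3), df={df:>2}: {stats.t.sf(3, df):.4f}")
print(f"P(Z > 3), Normal : {stats.norm.sf(3):.4f}")
# Each t tail probability exceeds the Normal one, and the gap
# narrows as df increases (convergence to N(0,1)).
```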