In statistics, Cochran's theorem, devised by William G. Cochran,[1] is a theorem used to justify results relating to the probability distributions of statistics that are used in the analysis of variance.[2]
Suppose $U_1, \ldots, U_N$ are independent standard normally distributed random variables, and that there exist positive semidefinite matrices $B^{(1)}, B^{(2)}, \ldots, B^{(k)}$ with $\sum_{i=1}^k B^{(i)} = I_N$, the $N \times N$ identity matrix. Suppose further that $r_1 + \cdots + r_k = N$, where $r_i$ is the rank of $B^{(i)}$. If we write

$$Q_i = \sum_{j=1}^N \sum_{\ell=1}^N U_j B^{(i)}_{j,\ell} U_\ell,$$

so that the $Q_i$ are quadratic forms, then Cochran's theorem states that the $Q_i$ are independent, and each $Q_i$ has a chi-squared distribution with $r_i$ degrees of freedom.[1]

Less formally, $r_i$ is the number of linear combinations included in the sum of squares defining $Q_i$, provided that these linear combinations are linearly independent.
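As a concrete illustration of these hypotheses, the following numpy sketch builds two positive semidefinite matrices that sum to the identity, checks that their ranks add up to $N$, and evaluates the corresponding quadratic forms for one draw of $U$. The matrices chosen (a centering projection and its complement) and all variable names are illustrative assumptions, not part of the theorem's statement.

```python
import numpy as np

# Illustrative check of the theorem's hypotheses; B1, B2 and N are
# arbitrary choices made for this sketch.
rng = np.random.default_rng(0)
N = 5
J = np.ones((N, N)) / N        # projection onto the constant vector
B1 = np.eye(N) - J             # projection onto its orthogonal complement
B2 = J

assert np.allclose(B1 + B2, np.eye(N))      # the B's sum to the identity
r1 = np.linalg.matrix_rank(B1)              # N - 1
r2 = np.linalg.matrix_rank(B2)              # 1
assert r1 + r2 == N                         # ranks add up to N

# Quadratic forms Q_i = sum_{j,l} U_j B^{(i)}_{j,l} U_l for one draw of U.
U = rng.standard_normal(N)
Q1 = U @ B1 @ U
Q2 = U @ B2 @ U
print(Q1, Q2, Q1 + Q2, U @ U)               # Q1 + Q2 equals the total sum of squares
```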
Proof
We first show that the matrices $B^{(i)}$ can be simultaneously diagonalized and that their non-zero eigenvalues are all equal to +1. We then use the vector basis that diagonalizes them to simplify their characteristic function and show their independence and distribution.[3]

Each of the matrices $B^{(i)}$ has rank $r_i$ and thus $r_i$ non-zero eigenvalues. For each $i$, the sum $C^{(i)} \equiv \sum_{j\neq i} B^{(j)}$ has rank at most $\sum_{j\neq i} r_j = N - r_i$. Since $B^{(i)} + C^{(i)} = I_N$, it follows that $C^{(i)}$ has rank exactly $N - r_i$.
Therefore $B^{(i)}$ and $C^{(i)}$ can be simultaneously diagonalized. This can be shown by first diagonalizing $B^{(i)}$. In this basis, it is of the form:

$$\begin{bmatrix}\lambda_1 & 0 & 0 & \cdots & \cdots & & 0\\ 0 & \lambda_2 & 0 & \cdots & \cdots & & 0\\ 0 & 0 & \ddots & & & & \vdots\\ \vdots & \vdots & & \lambda_{r_i} & & & \\ \vdots & \vdots & & & 0 & & \\ 0 & \vdots & & & & \ddots & \\ 0 & 0 & \ldots & & & & 0\end{bmatrix}.$$
Thus the lower $(N - r_i)$ rows are zero. Since $C^{(i)} = I_N - B^{(i)}$, it follows that these rows of $C^{(i)}$ in this basis contain a right block which is a $(N - r_i)\times(N - r_i)$ unit matrix, with zeros in the rest of these rows. But since $C^{(i)}$ has rank $N - r_i$, it must be zero elsewhere. Thus it is diagonal in this basis as well. It follows that all the non-zero eigenvalues of both $B^{(i)}$ and $C^{(i)}$ are +1. Moreover, the above analysis can be repeated in the diagonal basis for $C^{(1)} = B^{(2)} + \cdots + B^{(k)}$. In this basis $C^{(1)}$ is the identity of an $(N - r_1)$-dimensional vector space, so it follows that both $B^{(2)}$ and $\sum_{j>2} B^{(j)}$ are simultaneously diagonalizable in this vector space (and hence also together with $B^{(1)}$). By iteration it follows that all the $B^{(i)}$ are simultaneously diagonalizable.
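The simultaneous diagonalization can be seen numerically in the simplest case of two complementary projections. The sketch below is only an illustration (it assumes numpy and reuses the illustrative matrix pair from the earlier sketch).

```python
import numpy as np

# Sketch: an orthonormal eigenbasis of B1 also diagonalizes B2 = I - B1,
# and every non-zero eigenvalue equals +1, as the argument claims.
n = 5
J = np.ones((n, n)) / n
B1, B2 = np.eye(n) - J, J

eigvals, S = np.linalg.eigh(B1)    # columns of S: orthonormal eigenvectors of B1
D1 = S.T @ B1 @ S
D2 = S.T @ B2 @ S
assert np.allclose(D1, np.diag(np.diag(D1)))   # diagonal in this basis
assert np.allclose(D2, np.diag(np.diag(D2)))   # diagonal in the same basis
print(np.round(np.diag(D1), 6))    # one zero and n-1 ones
print(np.round(np.diag(D2), 6))    # a single one, the rest zeros
```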
Thus there exists an orthogonal matrix $S$ such that for all $i$, $S^\mathrm{T} B^{(i)} S \equiv B^{(i)\prime}$ is diagonal, where any entry $B^{(i)\prime}_{x,y}$ with indices $x = y$, $r_1 + \cdots + r_{i-1} < x = y \le r_1 + \cdots + r_i$, is equal to 1, while any entry with other indices is equal to 0.
Let $U_i'$ denote some specific linear combination of all $U_j$ after transformation by $S$, that is, $U_i' = \sum_j (S^\mathrm{T})_{ij} U_j$. Note that $\sum_i (U_i')^2 = \sum_i U_i^2$ due to the length preservation of the orthogonal matrix $S$, that the Jacobian of a linear transformation is the matrix associated with the linear transformation itself, and that the determinant of an orthogonal matrix has modulus 1.
The characteristic function of $Q_i$ is:

$$\begin{aligned}\varphi_i(t) ={}& (2\pi)^{-N/2}\int du_1\int du_2\cdots\int du_N\, e^{itQ_i}\cdot e^{-u_1^2/2}\cdot e^{-u_2^2/2}\cdots e^{-u_N^2/2}\\
={}& (2\pi)^{-N/2}\left(\prod_{j=1}^N \int du_j\right) e^{itQ_i}\cdot e^{-\sum_{j=1}^N u_j^2/2}\\
={}& (2\pi)^{-N/2}\left(\prod_{j=1}^N \int du_j'\right) e^{it\cdot\sum_{m=r_1+\cdots+r_{i-1}+1}^{r_1+\cdots+r_i}(u_m')^2}\cdot e^{-\sum_{j=1}^N (u_j')^2/2}\\
={}& (2\pi)^{-N/2}\left(\int e^{u^2\left(it-\frac{1}{2}\right)}\,du\right)^{r_i}\left(\int e^{-\frac{u^2}{2}}\,du\right)^{N-r_i}\\
={}& (1-2it)^{-r_i/2}\end{aligned}$$
This is the Fourier transform of the chi-squared distribution with $r_i$ degrees of freedom. Therefore this is the distribution of $Q_i$.
Moreover, the characteristic function of the joint distribution of all the $Q_i$ is:

$$\begin{aligned}\varphi(t_1,t_2,\ldots,t_k) &= (2\pi)^{-N/2}\left(\prod_{j=1}^N \int dU_j\right) e^{i\sum_{i=1}^k t_i Q_i}\cdot e^{-\sum_{j=1}^N U_j^2/2}\\
&= (2\pi)^{-N/2}\left(\prod_{j=1}^N \int dU_j'\right) e^{i\sum_{i=1}^k t_i\sum_{m=r_1+\cdots+r_{i-1}+1}^{r_1+\cdots+r_i}(U_m')^2}\cdot e^{-\sum_{j=1}^N (U_j')^2/2}\\
&= (2\pi)^{-N/2}\prod_{i=1}^k\left(\int e^{u^2\left(it_i-\frac{1}{2}\right)}\,du\right)^{r_i}\\
&= \prod_{i=1}^k (1-2it_i)^{-r_i/2} = \prod_{i=1}^k \varphi_i(t_i)\end{aligned}$$
From this it follows that all the $Q_i$ are independent.
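The characteristic function computed above can also be checked by simulation. The sketch below is only a Monte Carlo sanity check under assumed values of $n$ and $t$; it compares the empirical mean of $e^{itQ_1}$ with $(1-2it)^{-r_1/2}$ for the centering projection used in the earlier sketch, for which $r_1 = n-1$.

```python
import numpy as np

# Monte Carlo check of phi(t) = (1 - 2it)^(-r/2) for Q1 = U^T (I - J/n) U.
rng = np.random.default_rng(1)
n, reps, t = 5, 200_000, 0.3       # illustrative values
J = np.ones((n, n)) / n
B1 = np.eye(n) - J

U = rng.standard_normal((reps, n))
Q1 = np.einsum("rj,jk,rk->r", U, B1, U)

empirical = np.mean(np.exp(1j * t * Q1))
theoretical = (1 - 2j * t) ** (-(n - 1) / 2)
print(empirical, theoretical)      # should agree up to Monte Carlo error
```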
Sample mean and sample variance
If $X_1, \ldots, X_n$ are independent normally distributed random variables with mean $\mu$ and standard deviation $\sigma$, then

$$U_i = \frac{X_i - \mu}{\sigma}$$

is standard normal for each $i$. Note that the total $Q$ is equal to the sum of the squared $U_i$, as shown here:

$$\sum_i Q_i = \sum_{ijk} U_j B^{(i)}_{jk} U_k = \sum_{jk} U_j U_k \sum_i B^{(i)}_{jk} = \sum_{jk} U_j U_k \delta_{jk} = \sum_j U_j^2,$$
which stems from the original assumption that $B^{(1)} + B^{(2)} + \cdots = I$. So instead we will calculate this quantity and later separate it into $Q_i$'s. It is possible to write
$$\sum_{i=1}^n U_i^2 = \sum_{i=1}^n \left(\frac{X_i - \overline{X}}{\sigma}\right)^2 + n\left(\frac{\overline{X} - \mu}{\sigma}\right)^2$$
(here $\overline{X}$ is the sample mean). To see this identity, multiply throughout by $\sigma^2$ and note that

$$\sum (X_i - \mu)^2 = \sum (X_i - \overline{X} + \overline{X} - \mu)^2$$
and expand to give
$$\sum (X_i - \mu)^2 = \sum (X_i - \overline{X})^2 + \sum (\overline{X} - \mu)^2 + 2\sum (X_i - \overline{X})(\overline{X} - \mu).$$
The third term is zero because it is equal to a constant times
$$\sum (\overline{X} - X_i) = 0,$$
and the second term has just n identical terms added together. Thus
$$\sum (X_i - \mu)^2 = \sum (X_i - \overline{X})^2 + n(\overline{X} - \mu)^2,$$
and hence
$$\sum \left(\frac{X_i - \mu}{\sigma}\right)^2 = \sum \left(\frac{X_i - \overline{X}}{\sigma}\right)^2 + n\left(\frac{\overline{X} - \mu}{\sigma}\right)^2 = \overbrace{\sum_i \left(U_i - \frac{1}{n}\sum_j U_j\right)^2}^{Q_1} + \overbrace{\frac{1}{n}\left(\sum_j U_j\right)^2}^{Q_2} = Q_1 + Q_2.$$
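This identity can be verified numerically for a single simulated sample; in the sketch below, $\mu$, $\sigma$ and $n$ are arbitrary illustration values.

```python
import numpy as np

# Numerical check that sum U_i^2 = Q1 + Q2 for one sample.
rng = np.random.default_rng(2)
mu, sigma, n = 2.0, 1.5, 10        # arbitrary illustration values
X = rng.normal(mu, sigma, size=n)
U = (X - mu) / sigma

Q1 = np.sum((U - U.mean()) ** 2)   # sum_i (U_i - (1/n) sum_j U_j)^2
Q2 = n * U.mean() ** 2             # (1/n) (sum_j U_j)^2
print(np.sum(U ** 2), Q1 + Q2)     # the two numbers coincide
```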
Now $B^{(2)} = \frac{J_n}{n}$ with $J_n$ the matrix of ones, which has rank 1. In turn $B^{(1)} = I_n - \frac{J_n}{n}$, given that $I_n = B^{(1)} + B^{(2)}$. This expression can also be obtained by expanding $Q_1$ in matrix notation. It can be shown that the rank of $B^{(1)}$ is $n - 1$, as the addition of all its rows is equal to zero. Thus the conditions for Cochran's theorem are met.
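These matrix conditions are easy to confirm directly; the following minimal numpy sketch (with an arbitrary $n$) shows that every row of $B^{(1)} = I_n - J_n/n$ sums to zero and that its rank is $n-1$.

```python
import numpy as np

# Rank check for B1 = I_n - J_n/n.
n = 6                                  # arbitrary illustration value
B1 = np.eye(n) - np.ones((n, n)) / n
print(B1.sum(axis=1))                  # every row sums to (numerically) zero
print(np.linalg.matrix_rank(B1))       # n - 1
```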
Cochran's theorem then states that $Q_1$ and $Q_2$ are independent, with chi-squared distributions with $n - 1$ and 1 degree of freedom respectively. This shows that the sample mean and sample variance are independent. This can also be shown by Basu's theorem, and in fact this property characterizes the normal distribution – for no other distribution are the sample mean and sample variance independent.[4]
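A Monte Carlo illustration of this conclusion is sketched below, under assumed values of $\mu$, $\sigma$, $n$ and the number of replications: the simulated $Q_1$ and $Q_2$ match chi-squared distributions with $n-1$ and 1 degrees of freedom and are essentially uncorrelated.

```python
import numpy as np
from scipy import stats

# Monte Carlo check: Q1 ~ chi2(n-1), Q2 ~ chi2(1), and Q1, Q2 independent.
rng = np.random.default_rng(3)
mu, sigma, n, reps = 0.0, 2.0, 8, 50_000   # arbitrary illustration values
X = rng.normal(mu, sigma, size=(reps, n))
U = (X - mu) / sigma

Q1 = ((U - U.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)
Q2 = n * U.mean(axis=1) ** 2

print(stats.kstest(Q1, "chi2", args=(n - 1,)).pvalue)   # large p-value expected
print(stats.kstest(Q2, "chi2", args=(1,)).pvalue)       # large p-value expected
print(np.corrcoef(Q1, Q2)[0, 1])                        # close to zero
```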
Distributions
The result for the distributions is written symbolically as
$$\sum \left(X_i - \overline{X}\right)^2 \sim \sigma^2\chi^2_{n-1},$$

$$n\left(\overline{X} - \mu\right)^2 \sim \sigma^2\chi^2_1.$$
Both these random variables are proportional to the true but unknown variance $\sigma^2$. Thus their ratio does not depend on $\sigma^2$ and, because they are statistically independent, the distribution of their ratio is given by
$$\frac{n\left(\overline{X} - \mu\right)^2}{\frac{1}{n-1}\sum\left(X_i - \overline{X}\right)^2} \sim \frac{\chi^2_1}{\frac{1}{n-1}\chi^2_{n-1}} \sim F_{1,n-1}$$
where $F_{1,n-1}$ is the F-distribution with 1 and $n - 1$ degrees of freedom (see also Student's t-distribution). The final step here is effectively the definition of a random variable having the F-distribution.
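The F-distribution of this ratio can likewise be checked by simulation; the sketch below uses arbitrary values of $\mu$, $\sigma$ and $n$ and compares the simulated ratio against scipy's F-distribution.

```python
import numpy as np
from scipy import stats

# Monte Carlo check that the ratio follows F(1, n-1).
rng = np.random.default_rng(4)
mu, sigma, n, reps = 1.0, 3.0, 12, 50_000   # arbitrary illustration values
X = rng.normal(mu, sigma, size=(reps, n))
xbar = X.mean(axis=1)

num = n * (xbar - mu) ** 2
den = ((X - xbar[:, None]) ** 2).sum(axis=1) / (n - 1)
ratio = num / den

print(stats.kstest(ratio, "f", args=(1, n - 1)).pvalue)  # large p-value expected
```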
Estimation of variance
To estimate the variance $\sigma^2$, one estimator that is sometimes used is the maximum likelihood estimator of the variance of a normal distribution
$$\widehat{\sigma}^2 = \frac{1}{n}\sum\left(X_i - \overline{X}\right)^2.$$
Cochran's theorem shows that
$$\frac{n\widehat{\sigma}^2}{\sigma^2} \sim \chi^2_{n-1}$$
and the properties of the chi-squared distribution show that
$$\begin{aligned}E\left(\frac{n\widehat{\sigma}^2}{\sigma^2}\right) &= E\left(\chi^2_{n-1}\right)\\ \frac{n}{\sigma^2}E\left(\widehat{\sigma}^2\right) &= n-1\\ E\left(\widehat{\sigma}^2\right) &= \frac{\sigma^2(n-1)}{n}\end{aligned}$$
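The bias derived above, $E(\widehat{\sigma}^2) = \sigma^2(n-1)/n$, can be seen in a short simulation; $\mu$, $\sigma$, $n$ and the number of replications below are illustrative assumptions.

```python
import numpy as np

# Monte Carlo check of the bias of the maximum likelihood variance estimator.
rng = np.random.default_rng(5)
mu, sigma, n, reps = 0.0, 2.0, 5, 200_000   # arbitrary illustration values
X = rng.normal(mu, sigma, size=(reps, n))
sigma_hat2 = ((X - X.mean(axis=1, keepdims=True)) ** 2).mean(axis=1)

print(sigma_hat2.mean())                # empirical mean of the MLE
print(sigma ** 2 * (n - 1) / n)         # theoretical value: 4 * 4/5 = 3.2
```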
Alternative formulation

The following version is often seen when considering linear regression.[5] Suppose that $Y \sim N_n(0, \sigma^2 I_n)$ is a standard multivariate normal random vector (here $I_n$ denotes the n-by-n identity matrix), and that $A_1, \ldots, A_k$ are all n-by-n symmetric matrices with $\sum_{i=1}^k A_i = I_n$. Then, on defining $r_i = \operatorname{Rank}(A_i)$, any one of the following conditions implies the other two:

- $\sum_{i=1}^k r_i = n,$
- $Y^\mathrm{T} A_i Y \sim \sigma^2\chi^2_{r_i}$ (thus the $A_i$ are positive semidefinite),
- $Y^\mathrm{T} A_i Y$ is independent of $Y^\mathrm{T} A_j Y$ for $i \neq j.$
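In the linear-regression setting mentioned above, a natural choice is the hat matrix and its complement. The sketch below (with an arbitrary design matrix Z, not taken from the text) verifies the matrix conditions of this formulation: the two projections are symmetric, sum to the identity, and their ranks add up to $n$.

```python
import numpy as np

# Hat matrix H and its complement as a concrete instance of the A_i.
rng = np.random.default_rng(6)
n, p = 20, 3                             # arbitrary illustration values
Z = rng.standard_normal((n, p))          # hypothetical full-rank design matrix
H = Z @ np.linalg.inv(Z.T @ Z) @ Z.T     # orthogonal projection onto col(Z)
A1, A2 = H, np.eye(n) - H

assert np.allclose(A1, A1.T) and np.allclose(A2, A2.T)   # symmetric
assert np.allclose(A1 + A2, np.eye(n))                   # sum to the identity
r1, r2 = np.linalg.matrix_rank(A1), np.linalg.matrix_rank(A2)
print(r1, r2, r1 + r2 == n)              # p, n - p, True
```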