A LaTeX test for homework 1

1. Find a counterexample showing that {\forall n,\; P(|Y_n|\leq K^*)\geq 1-\epsilon} does not imply {P(|Y_n|\leq K^*,\ \forall n)\geq 1-\epsilon}.

Let {\{Y_n\}} be a sequence of i.i.d. exp(1) random variables. For any {\epsilon\in(0,1)} and any fixed n, there exists {K^*} such that {P(|Y_n|\leq K^*)\geq 1-\epsilon}; indeed the exp(1) quantile {K^*=-\log\epsilon} works. But the event {\{|Y_n|\leq K^*,\ \forall n\leq N\}} is the event {\{\max_{n\leq N}|Y_n|\leq K^*\}}. Since the variables are exponential, {|Y_n|=Y_n}, and {\max_{n\leq N}Y_n} is the largest order statistic of {Y_1,\ldots,Y_N}, whose CDF is {P(Y_{(N)}\leq x)=F^N_{Y_1}(x)}. Hence {P(|Y_n|\leq K^*,\ \forall n)=\lim_{N\to\infty}F^N_{Y_1}(K^*)=0} for every finite {K^*}, because {F_{Y_1}(K^*)<1}. So no finite {K^*} can make {P(|Y_n|\leq K^*,\ \forall n)\geq 1-\epsilon}, and the counterexample is complete.
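The gap between the two statements can be illustrated numerically (a minimal sketch in Python; the choice {\epsilon=0.05} is only for illustration): each marginal bound holds exactly with {K^*=-\log\epsilon}, yet {F^n(K^*)} collapses to 0 as n grows.

```python
import math

eps = 0.05
K = -math.log(eps)        # exp(1) quantile: P(Y_1 > K) = eps
F_K = 1 - math.exp(-K)    # marginal bound: P(Y_1 <= K) = 1 - eps

# P(max_{i<=n} Y_i <= K) = F(K)^n, which -> 0 for any finite K
probs = {n: F_K ** n for n in (1, 10, 100, 1000)}
print(probs)  # the n=1 entry is 0.95; the n=1000 entry is essentially 0
```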

2. Prove that if {r_n(Y_n-\theta)} converges in distribution to a random variable {Y} and {\lim_{n} r_n=\infty}, then {Y_n-\theta} converges to 0 in probability.

Fix {\epsilon>0} and choose {M} so that {P(|Y|\geq M)\leq\epsilon} and {\pm M} are continuity points of the distribution of {Y}. By statement {(vi)} of the Portmanteau Lemma in the textbook, {P(|r_n(Y_n-\theta)|\geq M)\rightarrow P(|Y|\geq M)}, so {\exists N} such that for all {n\geq N}, {P(|r_n(Y_n-\theta)|\geq M)\leq P(|Y|\geq M)+\epsilon\leq 2\epsilon}. Equivalently, {P(|Y_n-\theta|\geq M/r_n)\leq 2\epsilon} for {n\geq N}. Now fix any {\delta>0}. Since {r_n\rightarrow\infty}, we have {M/r_n<\delta} for all large n, and then {P(|Y_n-\theta|\geq \delta)\leq P(|Y_n-\theta|\geq M/r_n)\leq 2\epsilon}. As {\epsilon} was arbitrary, {Y_n-\theta} converges to 0 in probability.
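The mechanism can be seen in a small simulation (a sketch, not part of the proof; the choices {\theta=2}, {r_n=\sqrt{n}}, {\delta=0.1} are illustrative): take {Y_n=\theta+Z/\sqrt{n}} with Z standard normal, so {r_n(Y_n-\theta)=Z\sim N(0,1)}, and watch {P(|Y_n-\theta|\geq\delta)} shrink.

```python
import random

random.seed(0)
theta, delta, reps = 2.0, 0.1, 20000

def prob_far(n):
    # Y_n = theta + Z / sqrt(n), so r_n (Y_n - theta) = Z ~ N(0, 1)
    hits = sum(abs(random.gauss(0, 1)) / n ** 0.5 >= delta
               for _ in range(reps))
    return hits / reps

p_small, p_large = prob_far(10), prob_far(1000)
print(p_small, p_large)  # the second probability is far smaller
```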

3. Problem 3.2

Define a function from a subset of {\mathbb{R}^5} to {\mathbb{R}}.

\displaystyle \phi(x_1,x_2,x_3,x_4,x_5)=\frac{x_1-x_2x_3}{\sqrt{(x_4-x_2^2)(x_5-x_3^2)}}
Then {\phi(\cdot)} is continuous wherever {x_4>x_2^2} and {x_5>x_3^2}. Let {(\mathbf{X}, \mathbf{Y})} be an i.i.d. bivariate sample of size n. Without loss of generality, suppose {\mu_X=\mu_Y=0} and {\sigma_X=\sigma_Y=1}, with {\rho} being the correlation coefficient. Let {\overline{XY}=\frac1n\sum X_iY_i}, {\overline{X}=\frac1n\sum X_i}, {\overline{Y}=\frac1n\sum Y_i}, {\overline{XX}=\frac1n\sum{X_i^2}}, {\overline{YY}=\frac1n\sum{Y_i^2}}. Then

\displaystyle \rho=\frac{\mathsf{E}XY-\mu_X\mu_Y}{\sigma_X\sigma_Y}=\frac{\mathsf{E}XY-\mathsf{E}X\,\mathsf{E}Y}{\sqrt{(\mathsf{E}X^2-(\mathsf{E}X)^2)(\mathsf{E}Y^2-(\mathsf{E}Y)^2)}}=\phi(\mathsf{E}XY,\mathsf{E}X,\mathsf{E}Y,\mathsf{E}X^2,\mathsf{E}Y^2)

\displaystyle r_n=\phi(\overline{XY},\overline{X},\overline{Y},\overline{XX},\overline{YY})
Here we have {\mathsf{E}\overline{X}=\mathsf{E}\overline{Y}=0}, {\mathsf{E}\overline{XX}=\mathsf{E}\overline{YY}=1}, {\mathsf{E}\overline{XY}=\rho} and {\mbox{var}\,\overline{X}=\mbox{var}\,\overline{Y}=\frac1n}. Since each of these statistics is an average of i.i.d. terms,

\displaystyle \mbox{var}(\overline{XX})=\frac1n\mbox{var}(X_1^2),\qquad \mbox{var}(\overline{YY})=\frac1n\mbox{var}(Y_1^2),\qquad \mbox{var}(\overline{XY})=\frac1n\mbox{var}(X_1Y_1)

Let {a_1=\mbox{var}(X_1^2)=\mathsf{E}X^4-1}, {a_2=\mbox{var}(Y_1^2)=\mathsf{E}Y^4-1} and {a_3=\mbox{var}(X_1Y_1)}. Assuming finite fourth moments, {a_1} and {a_2} are finite, and {a_3\leq\mathsf{E}X^2Y^2\leq\sqrt{\mathsf{E}X^4\,\mathsf{E}Y^4}<\infty} by the Cauchy–Schwarz inequality. Thus
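As a sanity check on {\mbox{var}(\overline{XX})=\frac1n\mbox{var}(X_1^2)} (a minimal sketch under the illustrative assumption {X\sim N(0,1)}, for which {a_1=\mathsf{E}X^4-1=2}):

```python
import random
import statistics

random.seed(1)
n, reps = 50, 4000

# reps independent copies of the sample average (1/n) * sum(X_i^2)
means = [statistics.fmean(random.gauss(0, 1) ** 2 for _ in range(n))
         for _ in range(reps)]

v = statistics.variance(means)
print(v)  # close to var(X^2) / n = 2 / 50 = 0.04
```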

\displaystyle \sqrt{n}\:\overline{X}\rightarrow N(0,1)

\displaystyle \sqrt{n}\:\overline{Y}\rightarrow N(0,1)

\displaystyle \sqrt{n}\:(\overline{XX}-1)\rightarrow N(0,a_1)

\displaystyle \sqrt{n}\:(\overline{YY}-1)\rightarrow N(0,a_2)

\displaystyle \sqrt{n}\:(\overline{XY}-\rho)\rightarrow N(0,a_3)

in distribution; in fact, the multivariate CLT gives the joint convergence {\sqrt{n}\,\big((\overline{XY},\overline{X},\overline{Y},\overline{XX},\overline{YY})-(\rho,0,0,1,1)\big)\rightarrow N(0,\Sigma)}, where {\Sigma} is the covariance matrix of {(X_1Y_1,X_1,Y_1,X_1^2,Y_1^2)}.
Next, take the partial derivative of {\phi} with respect to each variable and evaluate it at {(x_1,x_2,x_3,x_4,x_5)=(\rho,0,0,1,1)}:

\displaystyle \frac{\partial \phi}{\partial x_1}=\frac{1}{\sqrt{(x_4-x_2^2)(x_5-x_3^2)}}=1

\displaystyle \frac{\partial \phi}{\partial x_2}=\frac{-x_3}{\sqrt{(x_4-x_2^2)(x_5-x_3^2)}}+\frac{x_2(x_1-x_2x_3)}{(x_4-x_2^2)^{3/2}(x_5-x_3^2)^{1/2}}=0

\displaystyle \frac{\partial \phi}{\partial x_3}=\frac{-x_2}{\sqrt{(x_4-x_2^2)(x_5-x_3^2)}}+\frac{x_3(x_1-x_2x_3)}{(x_4-x_2^2)^{1/2}(x_5-x_3^2)^{3/2}}=0

\displaystyle \frac{\partial \phi}{\partial x_4}=-\frac{x_1-x_2x_3}{2(x_4-x_2^2)^{3/2}(x_5-x_3^2)^{1/2}}=-\frac{\rho}{2}

\displaystyle \frac{\partial \phi}{\partial x_5}=-\frac{x_1-x_2x_3}{2(x_4-x_2^2)^{1/2}(x_5-x_3^2)^{3/2}}=-\frac{\rho}{2}
So by the delta method, {\sqrt{n}(r_n-\rho)\rightarrow N(0,\nabla\phi^T\Sigma\nabla\phi)}, where {\nabla\phi=(1,0,0,-\frac{\rho}{2},-\frac{\rho}{2})^T} and {\Sigma} is the covariance matrix of {(X_1Y_1,X_1,Y_1,X_1^2,Y_1^2)}. Expanding the quadratic form, the asymptotic variance is {a_3+\frac{\rho^2}{4}(a_1+a_2)-\rho\,\mbox{cov}(X_1Y_1,X_1^2)-\rho\,\mbox{cov}(X_1Y_1,Y_1^2)+\frac{\rho^2}{2}\mbox{cov}(X_1^2,Y_1^2)}.
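The gradient used in the delta method can be double-checked by finite differences (a minimal sketch; {\phi} is the correlation map with the square root in the denominator, and {\rho=0.3} is an arbitrary test value):

```python
import math

def phi(x1, x2, x3, x4, x5):
    # correlation map: (x1 - x2*x3) / sqrt((x4 - x2^2)(x5 - x3^2))
    return (x1 - x2 * x3) / math.sqrt((x4 - x2 ** 2) * (x5 - x3 ** 2))

def num_grad(f, p, h=1e-6):
    # central finite differences in each coordinate
    g = []
    for i in range(len(p)):
        up, dn = list(p), list(p)
        up[i] += h
        dn[i] -= h
        g.append((f(*up) - f(*dn)) / (2 * h))
    return g

rho = 0.3
g = num_grad(phi, [rho, 0.0, 0.0, 1.0, 1.0])
print(g)  # close to (1, 0, 0, -rho/2, -rho/2)
```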

This entry was posted in Stat/Biostat.


