
Statistics 5101, Fall 2000, Geyer


Homework Solutions #8

Problem L6-48

(a)

We have a Poisson process with an average interarrival time of two minutes (i.e., $\mu = 2$) and hence rate parameter $\lambda = 1 / \mu = 1/2$ (per minute). Waiting times are $\text{Exp}(\lambda)$ distributed, so the probability that six minutes will elapse with no customer arrivals is

\begin{displaymath}P(T > t) = e^{- \lambda t} = e^{- .5 \times 6} = e^{-3} = .0498.
\end{displaymath}
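
As a check, the same number comes from the exponential CDF in R:

> 1 - pexp(6, rate = 0.5)
[1] 0.04978707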

(b)

Since the arrival process is Poisson,

\begin{eqnarray*}P(X \leq 2) & = & \sum_{k=0}^2 \frac{e^{- \lambda t} (\lambda t)^k}{k !} \\
& = & e^{- 3} \left[ 1 + 3 + \frac{9}{2} \right] \\
& = & .423
\end{eqnarray*}
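
As a check, the Poisson CDF in R gives

> ppois(2, lambda = 3)
[1] 0.4231901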


(c)

Since the exponential distribution is ``memoryless'':

\begin{displaymath}P(T < t + 2 \mid T > t) = P(T < 2) = 1 - e^{- 2 \lambda} = 1 - e^{ - .5 \times 2} = .632.\end{displaymath}
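
Again this can be checked with the exponential CDF in R:

> pexp(2, rate = 0.5)
[1] 0.6321206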

(d)

The waiting time until the third customer arrival has a $\text{Gam}(3, \frac{1}{2})$ distribution, so the expected value is $\frac{3}{1/2} = 6$.

(e)

It is the same as the average time to the next arrival.

Problem L6-49

(a)

The time to the third failure after any point in time is $\text{Gam}(3, .4)$. Then the mean time to the third failure is $\frac{3}{.4} = 7.5$ days.

(b)

The expected number of failures in 10 days is $\lambda \times t = .4 \times 10 = 4$.

(c)


\begin{displaymath}P(X = 0) = \frac{e^{- \lambda t} (\lambda t)^0}{0!} = e^{-.4} = .670\end{displaymath}
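
In R this is

> dpois(0, lambda = 0.4)
[1] 0.67032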

(d)


\begin{eqnarray*}P(X \leq 3) & = & \sum_{k=0}^3 \frac{e^{- \lambda t} (\lambda t)^k}{k !} \\
& = & e^{- 2.8} \left[ 1 + 2.8 + \frac{(2.8)^2}{2} + \frac{(2.8)^3}{6} \right] \\
& = & .692
\end{eqnarray*}
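
As a check, the Poisson CDF in R gives

> ppois(3, lambda = 2.8)
[1] 0.6919374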


Problem L6-50


\begin{eqnarray*}P(T < t) & = & 1 - P(T > t) \\
& = & 1 - P(T_1 > t, T_2 > t, T_3 > t, T_4 > t) \\
& = & 1 - e^{- 4 \lambda t} \qquad \mbox{since} \hspace{.2cm} \lambda = 1/5 \\
& = & 1 - e^{- .8 t}
\end{eqnarray*}


So the distribution of the time to failure of the system is Exp(0.8).
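
As a sanity check, for an arbitrarily chosen time, say t = 2, the CDF of the minimum of the four independent $\text{Exp}(1/5)$ lifetimes agrees with the $\text{Exp}(0.8)$ CDF in R:

> 1 - (1 - pexp(2, rate = 0.2))^4
[1] 0.7981035
> pexp(2, rate = 0.8)
[1] 0.7981035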

Problem L6-60


\begin{eqnarray*}E(X) & = & \frac{1}{B(s,t)} \int_0^1 x \, x^{s - 1} (1 - x)^{t - 1} \, d x \\
& = & \frac{B(s + 1, t)}{B(s, t)} \\
& = & \frac{\Gamma(s + 1) \Gamma(t)}{\Gamma(s + t + 1)} \cdot \frac{\Gamma(s + t)}{\Gamma(s) \Gamma(t)} \\
& = & \frac{s}{s + t}.
\end{eqnarray*}
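
A quick numerical check in R, using the illustrative values s = 2 and t = 3 (not from the problem), for which $s / (s + t) = 0.4$:

> integrate(function(x) x * dbeta(x, shape1 = 2, shape2 = 3), 0, 1)$value
[1] 0.4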


Problem L6-69

(a)


\begin{eqnarray*}P(X < 11.5) & = & P \left( Z < \frac{11.5 - 10}{2} \right) \\
& = & \Phi(.75) \\
& = & .7734.
\end{eqnarray*}
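
Or, in one step in R,

> pnorm(11.5, 10, 2)
[1] 0.7733726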


(b)


\begin{eqnarray*}P(\vert X - 10\vert > 3) & = & 1 - P(\vert X - 10\vert < 3) \\
& = & 1 - P(-1.5 < Z < 1.5) \\
& = & 1 - \Phi(1.5) + \Phi(-1.5) \\
& = & 1 - .9332 + .0668 \\
& = & .1336.
\end{eqnarray*}
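
Or, using the symmetry of the normal distribution, in R

> 2 * pnorm(7, 10, 2)
[1] 0.1336144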


(c)


\begin{eqnarray*}E(X^2) & = & \mathop{\rm var}\nolimits(X) + E(X)^2 \\
& = & 4 + 10^2 \\
& = & 104.
\end{eqnarray*}


(d)

Using formula (11) on page 181 in Lindgren,

\begin{eqnarray*}E[(X - 10)^4] & = & \sigma^4 E(Z^4) \\
& = & (2 \times 2 - 1) (2 \times 2 - 3) \sigma^4 \\
& = & 48.
\end{eqnarray*}


(e)


\begin{eqnarray*}E(X^3) & = & E \left[ [(X - 10) + 10]^3 \right] \\
& = & E[(X - 10)^3] + 30 E[(X - 10)^2] + 300 E[X - 10] + 10^3 \\
& = & 0 + 30 \sigma^2_X + 0 + 1000 \\
& = & 1120.
\end{eqnarray*}
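
Both of the last two moments can be checked by numerical integration against the $\mathcal{N}(10, 4)$ density in R:

> integrate(function(x) (x - 10)^4 * dnorm(x, 10, 2), -Inf, Inf)$value
[1] 48
> integrate(function(x) x^3 * dnorm(x, 10, 2), -Inf, Inf)$value
[1] 1120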


(f)

The quartiles of the standard normal distribution are $\pm 0.674$ (or perhaps 0.675, hard to tell) from Table I in Lindgren (p. 576). R says

> qnorm(0.25)
[1] -0.6744898
The quartiles of a $\mathcal{N}(\mu, \sigma^2)$ random variable are $\mu \pm 0.674 \sigma$, or 8.65 and 11.35. This whole problem can be done in one step by computer:

> qnorm(0.25, 10, 2)
[1] 8.65102
> qnorm(0.75, 10, 2)
[1] 11.34898

Problem L6-73

The mapping $y = g(x) = \lvert x \rvert$ is two-to-one, so we need to use Theorem 1.8 in the notes. The mapping has two right inverses $h_+(y) = y$ and $h_-(y) = - y$. The derivatives are $\pm 1$, so the absolute values of the derivatives can be ignored. Thus,

\begin{displaymath}f_Y(y) = f_X[h_-(y)] \lvert h_-'(y) \rvert
+ f_X[h_+(y)] \lvert h_+'(y) \rvert
= f_X(- y) + f_X(y)
\end{displaymath}

Since the density $f_X$ is symmetric,

\begin{displaymath}f_Y(y) = 2 f_X(y).
\end{displaymath}

Thus

\begin{displaymath}f_Y(y)
=
\frac{2}{\sigma \sqrt{2 \pi}} e^{- \frac{1}{2} y^2/ \sigma^2},
\qquad y > 0,
\end{displaymath}

and

\begin{displaymath}\begin{split}
E(Y)
& =
\frac{1}{\sigma} \sqrt{\frac{2}{\pi}}
\int_0^\infty y e^{- \frac{1}{2} y^2 / \sigma^2} \, d y
\\
& =
\frac{1}{\sigma} \sqrt{\frac{2}{\pi}}
\left[ - \sigma^2 e^{- \frac{1}{2} y^2 / \sigma^2} \right]_0^\infty
\\
& = \sigma \sqrt{\frac{2}{ \pi}}
\end{split}
\end{displaymath}
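
As a numerical check in R, taking the illustrative value $\sigma = 2$ (the problem leaves $\sigma$ general), for which $\sigma \sqrt{2 / \pi} = 1.595769$:

> integrate(function(y) y * 2 * dnorm(y, 0, 2), 0, Inf)$value
[1] 1.595769
> 2 * sqrt(2 / pi)
[1] 1.595769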

Problem L6-82

Since $\text{Exp}(\lambda) = \text{Gam}(1,\lambda)$,

\begin{displaymath}W = \sum_{i=1}^n X_i \sim \text{Gam}(n,\lambda)
\end{displaymath}

and

\begin{displaymath}Y = 2 \lambda W \sim \text{Gam}(n, 1/2) = \text{Chi}^2(2n)
\end{displaymath}

because the second parameter of the gamma is a scale parameter

\begin{displaymath}f_Y(y) = \frac{1}{2 \lambda} f_W\left(\frac{y}{2 \lambda}\right)
\end{displaymath}

by Theorem 7 of Chapter 3 in Lindgren. Doing the plug-in indeed shows $Y \sim \text{Gam}(n, 1/2)$.
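
The identity can also be checked numerically in R with illustrative values, say n = 2 and $\lambda = 0.25$ (any values would do): $P(Y \le 2)$ computed from the chi-square distribution and from the distribution of W should agree.

> pchisq(2, df = 4)
[1] 0.2642411
> pgamma(2 / (2 * 0.25), shape = 2, rate = 0.25)
[1] 0.2642411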

Problem N4-2


\begin{displaymath}E(Y \mid N) = E(X_{1}+X_{2}+\ldots+X_N \mid N) = N E(X_1) = N \mu.
\end{displaymath}

and

\begin{displaymath}E(Y) = E\{ E(Y \mid N) \} = E(N \mu) = \frac{\mu}{p}.
\end{displaymath}


\begin{displaymath}\begin{split}
\mathop{\rm var}\nolimits(Y)
& =
E\{\mathop{\rm var}\nolimits(Y \mid N)\} + \mathop{\rm var}\nolimits\{E(Y \mid N)\}
\\
& =
E(N \sigma^2) + \mathop{\rm var}\nolimits(N \mu)
\\
& =
\sigma^2 E(N) + \mu^2 \mathop{\rm var}\nolimits(N)
\\
& =
\frac{\sigma^2}{p} + \frac{\mu^2(1-p)}{p^2}
\end{split}
\end{displaymath}
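
The formulas use $E(N) = 1/p$ and $\mathop{\rm var}\nolimits(N) = (1-p)/p^2$, i.e., N geometric on 1, 2, .... Assuming that, a simulation sketch in R checks both results, taking the $X_i$ normal just for illustration, with made-up values $\mu = 3$, $\sigma = 2$, and $p = 1/4$, for which $\mu / p = 12$ and $\sigma^2 / p + \mu^2 (1 - p) / p^2 = 16 + 108 = 124$:

> p <- 0.25; mu <- 3; sigma <- 2
> N <- rgeom(1e5, p) + 1    # rgeom counts failures, so add 1 to get the number of trials
> Y <- sapply(N, function(n) sum(rnorm(n, mu, sigma)))
> mean(Y)    # should be close to mu / p = 12
> var(Y)     # should be close to 124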

Problem N4-5

(a)

Write $E(X) = \mu_X$ and $E(Y) = \mu_Y$, so $X \sim \text{Poi}(\mu_X)$ and $Y \sim \text{Poi}(\mu_Y)$. Since X and Y are independent, $N = X + Y \sim \text{Poi}(\mu_X+\mu_Y)$ (marginally). Now we need to know the joint distribution of X and N to calculate the conditional, but we aren't given that. What we can easily write down is the joint density of X and Y

\begin{displaymath}f(x, y)
=
f_X(x) f_Y(y)
=
\frac{\mu_X^x}{x !} e^{- \mu_X} \frac{\mu_Y^y}{y !} e^{- \mu_Y}
\end{displaymath}

Now we do a change of variables. There is no Jacobian for a discrete change of variables, but otherwise it works much the same: plug in y as a function of the new variables, that is, y = n - x, obtaining

\begin{displaymath}f(x, n)
=
\frac{\mu_X^x}{x !} e^{- \mu_X} \frac{\mu_Y^{n - x}}{(n - x) !} e^{- \mu_Y}
\end{displaymath}

Now the conditional is joint over marginal

\begin{displaymath}\begin{split}
f(x \mid n)
& =
\frac{\displaystyle
\frac{\mu_X^x}{x !} e^{- \mu_X} \frac{\mu_Y^{n - x}}{(n - x) !} e^{- \mu_Y}}
{\displaystyle
\frac{(\mu_X + \mu_Y)^n}{n !} e^{- (\mu_X + \mu_Y)}}
\\
& =
\frac{n !}{x ! (n - x) !}
\cdot
\frac{\mu_X^x \mu_Y^{n - x}}{(\mu_X + \mu_Y)^n}
\\
& =
\binom{n}{x} p^x (1 - p)^{n - x}
\end{split}
\end{displaymath}

if we define

\begin{displaymath}p = \frac{\mu_X}{\mu_X + \mu_Y}.
\end{displaymath}
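
This can be checked numerically in R with made-up values, say $\mu_X = 2$, $\mu_Y = 3$, n = 5, and x = 2, for which $p = 0.4$: the conditional probability computed as joint over marginal agrees with the binomial density.

> dpois(2, 2) * dpois(3, 3) / dpois(5, 5)
[1] 0.3456
> dbinom(2, size = 5, prob = 0.4)
[1] 0.3456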

(b)

If you didn't try this, ignore this answer. This is just for the people who struggled with this failed problem and want to know what the actual answer was. It is actually fairly obvious when looked at in the right way (which the author of the question, Geyer, obviously didn't do when writing it). Since X is independent of Y, the distribution of Y given X is the same as the marginal distribution of Y

 \begin{displaymath}
f(y \mid x) = \frac{\mu_Y^y}{y !} e^{- \mu_Y},
\qquad y = 0, 1, \ldots,
\end{displaymath} (1)

and the distribution of N = X + Y given X is the same as the distribution of a constant plus a Poisson: X is constant when conditioning on it and Y is Poisson. Thus we get the density of N given X by plugging y = n - x into (1)

\begin{displaymath}f(n \mid x) = \frac{\mu_Y^{n - x}}{(n - x)!} e^{- \mu_Y},
\qquad n = x, x + 1, \ldots .
\end{displaymath}

(c)

The joint distribution of Z and N is

\begin{displaymath}f(z, n)
=
\binom{n}{z} q^z (1 - q)^{n - z} \cdot
\frac{(\mu_X + \mu_Y)^n}{n !} e^{- (\mu_X + \mu_Y)},
\qquad 0 \le z \le n < \infty
\end{displaymath}

We find the marginal of Z by summing out N

\begin{displaymath}\begin{split}
f_Z(z)
& =
\sum_{n = z}^\infty
\binom{n}{z} q^z (1 - q)^{n - z}
\frac{(\mu_X + \mu_Y)^n}{n !} e^{- (\mu_X + \mu_Y)}
\\
& =
\frac{q^z e^{- (\mu_X + \mu_Y)}}{z !}
\sum_{n = z}^\infty
\frac{(1 - q)^{n - z} (\mu_X + \mu_Y)^n}{(n - z) !}
\\
& =
\frac{q^z (\mu_X + \mu_Y)^z e^{- (\mu_X + \mu_Y)}}{z !}
\sum_{k = 0}^\infty
\frac{(1 - q)^k (\mu_X + \mu_Y)^k}{k !}
\end{split}
\end{displaymath}

Now the sum is almost the sum of a $\text{Poi}\bigl((1 - q) (\mu_X + \mu_Y)\bigr)$ density over its whole range. It is only missing the exponential factor for that density, so it equals $e^{(1 - q) (\mu_X + \mu_Y)}$, and

\begin{displaymath}\begin{split}
f_Z(z)
& =
\frac{q^z (\mu_X + \mu_Y)^z e^{- (\mu_X + \mu_Y)}}{z !}
e^{(1 - q) (\mu_X + \mu_Y)}
\\
& =
\frac{[q (\mu_X + \mu_Y)]^z}{z !} e^{- q (\mu_X + \mu_Y)}
\end{split}
\end{displaymath}

Hence $Z \sim \text{Poi}\bigl(q (\mu_X + \mu_Y)\bigr)$.
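
This ``thinning'' result can be checked numerically in R with made-up values, say $\mu_X + \mu_Y = 5$, q = 0.3, and z = 2, truncating the sum over n at 100 (which is plenty):

> sum(dbinom(2, size = 2:100, prob = 0.3) * dpois(2:100, 5))
[1] 0.2510214
> dpois(2, 0.3 * 5)
[1] 0.2510214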

Problem N4-6

Logically part (a) comes first, but it is a bit easier to see what's going on if we do part (b) first, remembering that we're not sure yet whether the integral exists.

(b)

If the integral exists

\begin{displaymath}\begin{split}
E(Y)
& =
\int_0^\infty \frac{1}{x} \cdot
\frac{\lambda^\alpha}{\Gamma(\alpha)} x^{\alpha - 1} e^{- \lambda x} \, d x
\\
& =
\frac{\lambda^\alpha}{\Gamma(\alpha)}
\int_0^\infty x^{\alpha - 2} e^{- \lambda x} \, d x
\\
& =
\frac{\lambda^\alpha}{\Gamma(\alpha)}
\cdot
\frac{\Gamma(\alpha - 1)}{\lambda^{\alpha - 1}}
\\
& =
\frac{\lambda}{\alpha - 1}
\end{split}
\end{displaymath}

where we did the integral by recognizing the integrand is an unnormalized $\text{Gam}(\alpha - 1, \lambda)$ density and simplified the ratio of gamma functions using the recursion formula $\Gamma(\alpha) = (\alpha - 1) \Gamma(\alpha - 1)$. Note that the integral is clearly bogus if $\alpha \le 1$, because then $\Gamma(\alpha - 1)$ is not defined. Also the last line gives zero or a negative number for the expectation of a positive random variable when $\alpha \le 1$. So presumably the problem in part (a) is to rule out $\alpha \le 1$ and perhaps other parameter values. We'll see.
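
As a numerical check in R, with made-up values $\alpha = 3$ and $\lambda = 2$, for which $\lambda / (\alpha - 1) = 1$:

> integrate(function(x) (1 / x) * dgamma(x, shape = 3, rate = 2), 0, Inf)$value
[1] 1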

(a)

Constants are irrelevant; the question is for what values of $\alpha$ and $\lambda$ (with $\alpha > 0$ and $\lambda > 0$ already required just by the definition of the gamma distribution) does the integral

\begin{displaymath}\int_0^\infty
x^{\alpha - 1 - 1} e^{- \lambda x} \, d x
\end{displaymath}

exist. There are two things to check. By Lemma 2.41 in the notes, or just by the fact that $e^{- \lambda x}$ goes to zero as x goes to infinity faster than any power of x, there is no problem near infinity. Thus the only problem is near zero, where the integrand behaves like $x^{\alpha - 2}$ and has a singularity if $\alpha < 2$. But Lemma 2.40 in the notes says the singularity is integrable if the exponent is greater than $-1$, that is, if $\alpha > 1$. So that's the condition: $\alpha > 1$ and $\lambda > 0$.

Problem N4-7

Since $X - Y - Z \sim N(-2, 6)$,

\begin{eqnarray*}P(X-Y-Z>0) & = & P\left(\frac{X-Y-Z+2}{\sqrt{6}} > \frac{2}{\sqrt{6}}\right) \\
&=& 1- \Phi(0.8165) \\
&=& 0.2071\\
\end{eqnarray*}
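
Or, in one step in R,

> 1 - pnorm(0, mean = -2, sd = sqrt(6))
[1] 0.2071081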



Charles Geyer
2000-11-14