The variable r defined in line four is the (absolute) ranks. The differences z and the ranks r are stuffed into a matrix so they line up, each difference over the corresponding rank.
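Since the form's code is not reproduced on this page, here is a minimal sketch of the computation being described; the data in x and y and the hypothesized value mu are placeholders, not the example's actual data, and the line numbering of the form may differ.

    x <- c(1.2, 0.8, 2.1, 1.7, 0.4, 1.1, 2.4, 0.9)   # placeholder data
    y <- c(1.9, 1.3, 1.8, 2.6, 0.6, 1.0, 3.0, 1.3)   # placeholder data
    mu <- 0                        # hypothesized value of the median
    z <- y - x                     # paired differences
    z <- z - mu
    r <- rank(abs(z))              # the (absolute) ranks
    rbind(z, r)                    # line up each difference over its rank
    tplus <- sum(r[z > 0])         # signed rank statistic T+
    n <- length(z)
    psignrank(tplus - 1, n, lower.tail = FALSE)   # upper-tail P-value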
The statements

    1 - psignrank(tplus - 1, n)
    psignrank(tplus - 1, n, lower.tail=FALSE)
    psignrank(n * (n + 1) / 2 - tplus, n)

all calculate the same upper-tail P-value as line seven in the example. The first is less accurate for very small P-values, and the last gives the same answer because of the symmetry of the distribution of the signed rank test statistic.
The Hodges-Lehmann estimator associated with the signed rank test is the median of the Walsh averages, which are the n (n + 1) / 2 pairwise averages (Zi + Zj) / 2 of the differences, for i ≤ j.
The following somewhat tricky code computes the Walsh averages and their median.
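The form's code is likewise not shown on this page, but one standard way to do it is the following sketch (assuming z holds the differences):

    walsh <- outer(z, z, "+") / 2                    # all pairwise averages
    walsh <- walsh[lower.tri(walsh, diag = TRUE)]    # keep each of the n (n + 1) / 2 pairs once
    median(walsh)                                    # the Hodges-Lehmann estimate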
Very similar to the confidence interval associated with the sign test, the confidence interval has the form

    ( W(k), W(m + 1 - k) )

where m = n (n + 1) / 2 is the number of Walsh averages, and, as always, parentheses on subscripts indicate order statistics, in this case of the Walsh averages W(k). That is, one counts in k from each end of the list of sorted Walsh averages to find the confidence interval.
The achievable confidence levels are

    1 - 2 * psignrank(k - 1, n)

for different values of k. The vectorwise operation of R functions can give them all at once:

    k <- seq(1, 100)
    conf <- 1 - 2 * psignrank(k - 1, n)
    conf[conf > 1 / 2]

If one adds these lines to the form above, one sees that the choice is fairly restricted. There are nine possible achieved confidence levels between 0.80 and 0.99.
To change the confidence level, set k to be any integer between one and n / 2 just before the second to last line in the form (the line beginning cat . . .). A confidence interval with some achieved confidence level will be produced.
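Again as a sketch (not the form itself), assuming z holds the differences and a value of k has been chosen:

    walsh <- outer(z, z, "+") / 2
    walsh <- sort(walsh[lower.tri(walsh, diag = TRUE)])
    m <- length(walsh)                            # m = n (n + 1) / 2
    n <- length(z)
    k <- 6                                        # placeholder choice of k
    cat("achieved confidence:", 1 - 2 * psignrank(k - 1, n), "\n")
    cat("interval:", walsh[k], walsh[m + 1 - k], "\n")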
For a one-sided confidence interval, use alpha rather than alpha / 2 in the fifth line of the form. Then make either the lower limit minus infinity or the upper limit plus infinity, as desired.
wilcox.test and wilcox.exact
All of the above can be done in one shot with the R function
wilcox.test
(on-line help).
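For example, with the paired samples in x and y (a sketch of a typical call; conf.int = TRUE asks for the point estimate and confidence interval as well as the test):

    wilcox.test(y, x, paired = TRUE, conf.int = TRUE)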
Only one complaint. It does not report the actual achieved confidence level
(here 96.1%) but rather the confidence level asked for (here 95%, the default).
If you want to know the actual achieved confidence level, you'll have to
use the code in the confidence interval section above.
But you can use wilcox.test
as a convenient check
(the intervals should agree).
Do not use the wilcox.test
function when there are ties or
zeros in the data. See the following section.
There is a function wilcox.exact
explained below that does do hypothesis tests correctly
in the presence of ties and zeros
(if you accept the zero fudge).
Neither wilcox.test
nor wilcox.exact
calculates
point estimates or confidence intervals correctly in the presence of ties
or zeros.
Do not use them for confidence intervals or point estimates!
What if the continuity assumption is false and there are tied absolute Z values or zero Z values?
First, neither ties nor zeros should make any difference in calculating point estimators or confidence intervals.
This is a bit of programmer brain damage
(PBD)
in the implementation of the wilcox.test
and wilcox.exact
functions. They
change the way they calculate point estimates and confidence intervals
when there are ties or zeros. But they shouldn't.
What we called the "zero fudge" in the context of the sign test (because it is fairly bogus there) makes much more sense in the context of the signed rank test. Zero values in the vector Z = Y - X of paired differences should get the smallest possible absolute rank because zero is smaller in absolute value than any other number. We might as well give them rank zero, starting counting at zero rather than at one. Then they make no contribution to the sum of ranks statistic.
Here's another way to see the same thing. Let's compare three procedures: Student t, signed rank, and sign. A zero difference contributes exactly zero to the sum of the differences on which the Student t statistic is based. With the zero fudge, a zero difference gets absolute rank zero and so contributes exactly zero to the signed rank statistic as well. Only for the sign test, whose statistic merely counts signs, is throwing away zeros a real fudge.
This analysis explains why the Wilcoxon should treat differences of zero size just like the Student t, that is, they don't count at all.
There is another issue about the zero fudge. As explained in the section about the zero fudge for the sign test, the null hypothesis for the sign test with zero fudge is

    Pr(Z < μ) = Pr(Z > μ)

which generally is neither true nor interesting for the sign test but which is true under the usual assumptions for the Wilcoxon signed rank test. We are already assuming the distribution of the differences is symmetric, and that implies the probability to either side of the population median μ is the same.
All that having been said, I still feel that there is something wrong with deliberately ignoring data favoring the null hypothesis. It still seems like cheating.
Thus we won't argue (at least not much) with the zero fudge for the signed rank test. We will start our hypothesis test calculations (don't do this for point estimates or confidence intervals) with
    z <- y - x
    z <- z - mu
    z <- z[z != 0]
The preceding section takes care of one kind of violation of the continuity assumption. But there's a second kind of violation that causes another problem.
If there are ties among the magnitudes of the Z values, then, first, it is not clear what ranks to assign to the tied values, and, second, the sampling distribution assumed by the psignrank and qsignrank functions in R is no longer correct in the presence of ties.
So there are really two issues to be resolved.
The standard solution to the first problem is to use so-called "tied ranks"
in which each of a set of tied magnitudes is assigned the average of the
ranks they otherwise would have gotten if they had been slightly different
(and untied). The R rank
function automatically does this.
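For example (just an illustration of the rank function, not part of the form):

    rank(c(0.2, 1.5, 1.5, 2.7))    # gives 1.0 2.5 2.5 4.0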
So the ranks are done the same as before.
And the test statistic is calculated from the ranks the same as before.
    r <- rank(abs(z))
    tplus <- sum(r[z > 0])
The Wrong Thing (with a capital "W" and a capital "T") is to just ignore the fact that tied ranks change the sampling distribution and just use tables in books or computer functions that are based on the assumption of no ties.
This is not quite as bad as it sounds, because tied ranks were thought up in the first place with the idea of not changing the sampling distribution much. So this does give wrong answers, but not horribly wrong.
The variables in this example are private and government.
The statements

    mu <- 0    # hypothesized value of median

and

    z <- z - mu
    z <- z[z != 0]

serve no purpose in this problem, where μ = 0 and there are no zero differences. They are only there for the general case.
The floor
function in the last line rounds tplus
down to the nearest integer (in this case from 62.5 to 62) because only
integer arguments make sense to psignrank
and rounding down
is conservative for upper tails (makes them bigger).
For a lower-tail P-value the conservative choice is to round up instead:

    psignrank(ceiling(tplus), n)
As long as you have a computer, why not use it? There are only 2^n points in the sample space of the sampling distribution of the test statistic under the null hypothesis, corresponding to all possible choices of signs for the ranks. In this case, 2^12 = 4096, not a particularly big number for a computer (although out of the question for hand calculation). So just do it.
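For instance, a brute-force sketch of this enumeration (not the algorithm the wilcox.exact function described below actually uses), assuming z, r, and tplus are as computed above:

    n <- length(z)
    signs <- expand.grid(rep(list(c(0, 1)), n))   # all 2^n assignments of signs
    tdist <- as.matrix(signs) %*% r               # T+ for every assignment
    mean(tdist >= tplus)                          # exact upper-tail P-value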
There is an R function wilcox.exact
(on-line help) that does the job.
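A sketch of a typical call, assuming the exactRankTests package (which provides wilcox.exact) is installed and that private and government are the paired samples:

    library(exactRankTests)
    wilcox.exact(private, government, paired = TRUE)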
    Wrong    P = 0.04
    Right    P = 0.03
One may wonder why there is a wilcox.test function to do the wrong thing and a wilcox.exact function to do the right thing.
I wonder too. Here's my guess. The wilcox.test
function
has been there
for more than 10 years and uses 60-year-old ideas. It's well understood.
The wilcox.exact function uses ideas from very recent research.
It may eventually replace the other function, but has to wait a while.
We have been ignoring, up to now, large sample (also called "asymptotic") approximations to the sampling distributions of test statistics. Why use an approximation when the computer can do it exactly?
However, the computer can't do it exactly for really large n.
The functions psignrank
and qsignrank
crash
when n is larger than about 50.
The function wilcox.exact
may also crash for large n
but I haven't experimented with it.
Thus the need for large sample approximation.
It is a fact, which Hollander and Wolfe derive on pp. 44-45, that the mean and variance of the sampling distribution of T+ are

    E(T+) = n (n + 1) / 4
    var(T+) = n (n + 1) (2 n + 1) / 24

under the assumption of continuity (hence no ties).
When there are ties, the mean stays the same but the variance is reduced by a quantity that, because it has a summation sign in it, doesn't look good in web pages. Just see equation (3.13) in Hollander and Wolfe.
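In code, a sketch assuming z holds the nonzero differences (et and vt are the names used in the pnorm calls below; the tie correction written out here is the usual form of Hollander and Wolfe's equation (3.13)):

    n <- length(z)
    et <- n * (n + 1) / 4                        # mean of T+
    vt <- n * (n + 1) * (2 * n + 1) / 24         # variance of T+ (no ties)
    t <- table(abs(z))                           # sizes of the tied groups
    vt <- vt - sum(t * (t - 1) * (t + 1)) / 48   # tie correction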
Here's how R uses this approximation. The P-value is computed from the normal distribution with this mean and variance using the R function pnorm, together with a correction for continuity. When there are ties, the tied ranks can take half-integer values, so the test statistic can change in steps of 1 / 2, and then the correction for continuity should be 1 / 4.
Arrghh!!! The tied ranks may also have only integer values (when the number tied in each tied group is odd). In that case the correction for continuity should be 1 / 2.
So the P-value can be calculated as any of

    pnorm(tplus, et, sqrt(vt))
    pnorm(tplus + 1 / 4, et, sqrt(vt))
    pnorm(tplus + 3 / 8, et, sqrt(vt))
    pnorm(tplus + 1 / 2, et, sqrt(vt))

whichever you think is better.
The R function wilcox.test
always uses 1 / 2 for the correction
for continuity even though we can see that 1 / 4 works better in this case.