Statistics 5601 (Geyer, Spring 2006) Examples: Better Bootstrap Confidence Intervals

General Instructions

To do each example, just click the "Submit" button. You do not have to type in any R instructions or specify a dataset. That's already done for you.

BCa Intervals

Section 14.3 in Efron and Tibshirani.

BCa stands for bias corrected and accelerated. It is an example of really horrible alphabet soup terminology. Really trendy, though. Used to be that scientists used terminology that involved real English (or Latin) words. Nowadays, it is trendy to just use letters. It's molecular biology envy (a la DNA, RNA, G6PD, and so forth). If you can actually express yourself and be understood, then you must not be a real scientist, because as everyone knows science is hard to understand. Hence the modern trend for scientists to speak and write as illiterately as possible.

To parody this trend, we call these alphabet soup, type 1 intervals (for type 2 see below).
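The R code for this example is submitted through the web form rather than shown on this page. For the record, a BCa calculation done by hand might look something like the following sketch, which uses the bcanon function from the bootstrap package (the R package that accompanies Efron and Tibshirani); the data vector and the number of bootstrap replications here are made up for illustration and are not necessarily the ones in the actual example.

library(bootstrap)   # R package accompanying Efron and Tibshirani

x <- rexp(20)   # made-up data, just for illustration

# BCa intervals for the sample mean, based on 1000 bootstrap replications
out <- bcanon(x, 1000, mean, alpha = c(0.025, 0.975))
out$confpoints   # the BCa confidence points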


ABC Intervals

These are the alphabet soup, type 2 intervals.

ABC stands for approximate bootstrap confidence, whatever that means. It doesn't actually bootstrap, but just approximates the bootstrap. Chapter 22 of Efron and Tibshirani explains, but we won't get into that.

Section 14.4 in Efron and Tibshirani.


Comments

The rather strange form of rvar is an estimator written in resampling form, which we saw before in the improved bootstrap bias correction procedure.
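The definition of rvar lives in the R code submitted through the form, so it does not appear on this page, but a variance estimator written in resampling form might look something like the following sketch (not necessarily the exact code in the example).

rvar <- function(p, x) {
    # p[i] is the probability (weight) attached to the data point x[i]
    xbar <- sum(p * x)        # mean in resampling form
    sum(p * (x - xbar)^2)     # variance in resampling form
}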

As the example shows and the on-line help documents, the tt argument to the abcnon function must have the signature function(p, x), where p is a vector of probabilities, one for each data point, and x is the original data vector.

The idea is that the relationship of a bootstrap sample x.star to the original data x can be expressed as a probability vector p.star such that p.star[i] is the fraction of times x[i] occurs in x.star.
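The following toy illustration (assuming, for simplicity, that the data points are distinct) constructs p.star from a bootstrap sample.

x <- c(2.3, 1.7, 4.1, 3.6)             # original data
x.star <- sample(x, replace = TRUE)    # a bootstrap sample
# count how many times each x[i] occurs in x.star, then convert to fractions
p.star <- tabulate(match(x.star, x), nbins = length(x)) / length(x.star)
sum(p.star)        # equals 1
sum(x * p.star)    # equals mean(x.star)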

We have to write a function that calculates the estimator given x and p.star.

And this function must work for any probability vector p.star, not just ones whose elements are multiples of 1 / n (where n is the sample size), because that is what the ABC method requires.

Unfortunately, this is, in general, hard.

Fortunately, this is, for moments, quite straightforward.

For any function g, any data vector x, and any probability vector p, the expression

sum(g(x) * p)

calculates the expectation of the random variable g(X) in the probability model that assigns probability p[i] to the point x[i] for each i (and probability zero everywhere else).

Thus

sum(x * p)

calculates the mean,

sum((x - a)^2 * p)

calculates the second moment about the point a, and so forth.
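Putting the pieces together, a call to abcnon might look something like the following sketch, where the data are again made up for illustration and rvar is the resampling-form variance estimator sketched above (it is defined for any probability vector, not just ones whose elements are multiples of 1 / n).

library(bootstrap)   # for abcnon

x <- rexp(20)   # made-up data, just for illustration

rvar <- function(p, x) {
    # variance in resampling form, as sketched above
    xbar <- sum(p * x)
    sum(p * (x - xbar)^2)
}

out <- abcnon(x, rvar, alpha = c(0.025, 0.975))
out$limits   # the ABC confidence limits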