---
title: "Stat 5421 Lecture Notes: Model Selection and Model Averaging"
author: "Charles J. Geyer"
date: "`r format(Sys.time(), '%B %d, %Y')`"
output:
  bookdown::html_document2:
    number_sections: true
    md_extensions: -tex_math_single_backslash
    mathjax: https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS-MML_HTMLorMML
  bookdown::pdf_document2:
    latex_engine: xelatex
    number_sections: true
    md_extensions: -tex_math_single_backslash
linkcolor: blue
urlcolor: blue
---

# License

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License (http://creativecommons.org/licenses/by-sa/4.0/).

# R

 * The version of R used to make this document is `r getRversion()`.

 * The version of the `rmarkdown` package used to make this document is `r packageVersion("rmarkdown")`.

 * The version of the `glmbb` package used to make this document is `r packageVersion("glmbb")`.

# Information Criteria

## AIC

In the early 1970's Akaike proposed the first information criterion. Later many others were proposed, so Akaike's is now called the [Akaike information criterion (AIC)](https://en.wikipedia.org/wiki/Akaike_information_criterion).

The "information" in AIC is [Kullback-Leibler information](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence), which is the concept of [Shannon information](https://en.wikipedia.org/wiki/Entropy_(information_theory)) imported into statistics, which in turn is the concept of entropy imported from statistical physics into communications theory. It is expected log likelihood, which is what the maximized value of the log likelihood is trying to estimate. See also the [maximum entropy section of the notes on exponential families](expfam.html#maximum-entropy).

What Akaike discovered is that the maximized value of the log likelihood is a biased estimate of Kullback-Leibler information. It overestimates it by $p$, the dimension of the model (number of parameters). Or, what is the same thing, the likelihood ratio test (LRT) statistic, which is minus twice the (maximized value of the) log likelihood, underestimates the quantity it is estimating by $2 p$. So
$$
   \text{AIC} = \text{LRT} + 2 p
$$

All of this is a "large sample size" result based on the "usual" asymptotics of maximum likelihood, which is not valid for all statistical models. But it is always valid for exponential family models in general and categorical data analysis in particular (when sample size is "large").

## AIC Versus Hypothesis Tests

The [central dogma of hypothesis testing](http://users.stat.umn.edu/~geyer/101b.pdf#page=13) is "do only one test" or, if you do more than one test, you must [correct $P$-values to account for doing more than one test](http://users.stat.umn.edu/~geyer/101b.pdf#page=24). So the theory of hypothesis tests is the [Wrong Thing](http://www.catb.org/jargon/html/W/Wrong-Thing.html) for comparing many models. And AIC is the [Right Thing](http://www.catb.org/jargon/html/R/Right-Thing.html) or at least *a* Right Thing.

Approaches based on hypothesis testing, such as that implemented by R function `step`, come with no guarantees of doing anything correct. They are undeniably TTD (things to do) but have no theoretical justification.

Also, the hypothesis tests we know about ([Wald, Wilks, Rao](likelihood.html#wald-wilks-rao)) require that the models being compared be nested. AIC does not have any such requirement.
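To make the AIC formula above concrete, here is a small numerical check. It fits a Poisson log-linear model to a tiny made-up table (the counts are purely illustrative, not from any of our data sets) and verifies that R's `AIC` equals minus twice the maximized log likelihood plus $2 p$.
```{r toy.aic.check}
# tiny made-up 2 x 2 table, purely for illustration
toy <- data.frame(count = c(12, 7, 9, 21), a = gl(2, 2), b = gl(2, 1, 4))
toy.fit <- glm(count ~ a + b, family = poisson, data = toy)
p <- length(coef(toy.fit))
all.equal(AIC(toy.fit), - 2 * as.numeric(logLik(toy.fit)) + 2 * p)
```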
## BIC

In the late 1970's Schwarz proposed another information criterion, which is now usually called the [Bayesian information criterion (BIC)](https://en.wikipedia.org/wiki/Bayesian_information_criterion). Its formula is
$$
   \text{BIC} = \text{LRT} + \log(n) \cdot p
$$
Since $\log(n) \ge 2$ for $n \ge 8$, BIC penalizes larger models more heavily than AIC at all but the smallest sample sizes, and then the model selected by BIC is never larger than the model selected by AIC.

The reason BIC is called "Bayesian" is that, if $\mathop{\rm BIC}(m)$ denotes the BIC for model $m$ and $g(m)$ denotes the prior probability for model $m$, then
\begin{equation}
   \Pr(m \mid \text{data})
   \approx
   \frac{\exp\bigl(- \tfrac{1}{2} \mathop{\rm BIC}(m) \bigr) g(m)}
        {\sum_{m^* \in \mathcal{M}} \exp\bigl(- \tfrac{1}{2} \mathop{\rm BIC}(m^*) \bigr) g(m^*)}
   (\#eq:posterior)
\end{equation}
where $\mathcal{M}$ is the class of models under consideration.

This is a "large sample size" result based on the "usual" asymptotics of Bayesian inference ([the Bernstein–von Mises theorem](https://en.wikipedia.org/wiki/Bernstein%E2%80%93von_Mises_theorem)), which is not valid for all statistical models. But it is always valid for exponential family models in general and categorical data analysis in particular (when sample size is "large").

When we use a flat prior ($g$ is a constant function of $m$), the prior $g$ cancels out of the formula, and we obtain
$$
   \mathop{\rm BIC}(m) \approx - 2 \log \Pr(m \mid \text{data}) + \text{a constant}
$$
Clearly, BIC is defined the way it is to be comparable to AIC, not to produce the simplest Bayesian formulas.

## AIC Versus BIC

In model selection AIC and BIC do two different jobs. No selection criterion can do both jobs (Yang, 2005, [DOI:10.1093/biomet/92.4.937](https://doi.org/10.1093/biomet/92.4.937)).

 * BIC provides consistent model selection when the true unknown model is among the models under consideration.

 * AIC is minimax-rate optimal for estimating the regression function and other probabilities and expectations. It does not need the true unknown model to be among the models under consideration.

Assuming the true unknown model is among the models under consideration (and Bayesians have to assume this, because a model that is not among the models under consideration has prior probability zero and hence posterior probability zero), selecting the model with the smallest BIC will select the true unknown model with probability going to one as $n$ goes to infinity. Of course, that does not mean BIC is guaranteed to select the correct model at any finite sample size.

If we do not assume the true unknown model is among the models under consideration, then we only have AIC as an option. It generally does not do consistent model selection. However, it does give the best predictions of probabilities and expectations of random variables in the model. It uses the models under consideration to give the best predictions of probabilities and expectations under the true unknown model (which need not be among the models under consideration).

In short,

 * use BIC when the true unknown model is assumed to be among the models under consideration, but

 * use AIC when we do not want to assume this.

A shorthand often used is "sparsity". The *sparsity* assumption is that the true unknown model has only a few parameters and is one of the models under consideration. Under sparsity, BIC does consistent model selection. If you do not want to assume sparsity, then use AIC.
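To make the approximation \@ref(eq:posterior) concrete, here is a small sketch that converts BIC values for three hypothetical models (the numbers are made up, purely for illustration) into approximate posterior model probabilities under a flat prior.
```{r bic.posterior.sketch}
# made-up BIC values for three hypothetical models
bic <- c(m1 = 100.0, m2 = 101.5, m3 = 110.2)
g <- rep(1 / 3, 3)  # flat prior g(m)
# subtract the minimum BIC before exponentiating to avoid underflow;
# the subtracted constant cancels when we normalize
w <- exp(- (bic - min(bic)) / 2) * g
round(w / sum(w), 4)
```
Note how quickly the approximate posterior probability falls off: a model whose BIC is 10 larger than the best one gets relative weight $\exp(-5) \approx 0.0067$.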
## Other Information Criteria

Many other information criteria have been proposed. We will not cover any of them, except to note that R function `glmbb` in R package `glmbb` also allows for so-called corrected AIC, denoted $\text{AIC}_c$. The help page for that function says that this criterion has no justification for categorical data analysis, so we will not use it.

# Considering All Possible Models

The [branch and bound algorithm](https://en.wikipedia.org/wiki/Branch_and_bound) allows consideration of all possible models (or all models in some large class) without actually fitting all of the models. The trick is in the "bounds". Sometimes one can tell that all models either having or lacking a certain term in their formula will fail to be anywhere near the best model found so far, so they can be ignored.

This is what R function `glmbb` (for GLM branch and bound) does. Even when there are thousands of models, it can consider all of them in a reasonable amount of computer time.

The branch and bound algorithm is not magic, however. When there are billions and billions of models, it may be too slow.

# Model Selection Versus Model Averaging

Model selection is a [mug's game](https://www.merriam-webster.com/dictionary/mug%27s%20game). When there are many models under consideration, the probability of selecting the correct model may be very small, even when the correct model (true unknown model) is among the models under consideration. True, BIC selects the correct model with probability going to one as sample size goes to infinity, but for any finite sample size this probability may be small. And AIC does not even guarantee that.

Also, model selection is just the Wrong Thing if you think like a Bayesian. Bayesians apply Bayes' rule, and that provides posterior probabilities, which are approximated by \@ref(eq:posterior). So Bayesians should not select models; they should average over all models according to posterior probability. This is called *Bayesian model averaging* (BMA). For a good introduction see the paper by Hoeting et al. (1999, [doi:10.1214/ss/1009212519](https://doi.org/10.1214/ss/1009212519)). Unfortunately, that paper had a "printing malfunction" that caused a lot of minus signs and left parentheses to be omitted; a corrected version is available at http://www.stat.washington.edu/www/research/online/hoeting1999.pdf.

To do full BMA one usually needs to do MCMC over multiple models. That can be done with R function `temper` in R package `mcmc`, as shown in the package vignette [bfst.pdf](https://cloud.r-project.org/web/packages/mcmc/vignettes/bfst.pdf), but we will not cover it in this course.

A fairly good approximation (if the sample size is large) to BMA uses BIC. Let $\hat{\theta}_m$ denote the maximum likelihood estimate of the parameter vector for model $m$, and let $E(W \mid m, \hat{\theta}_m)$ be the expected value of some function $W$ of the data assuming model $m$ is correct and the true unknown parameter value is $\hat{\theta}_m$. Then
\begin{equation}
   E(W \mid \text{data})
   \approx
   \frac{\sum_{m \in \mathcal{M}} E(W \mid m, \hat{\theta}_m)
         \exp\bigl(- \tfrac{1}{2} \mathop{\rm BIC}(m) \bigr) g(m)}
        {\sum_{m^* \in \mathcal{M}} \exp\bigl(- \tfrac{1}{2} \mathop{\rm BIC}(m^*) \bigr) g(m^*)}
   (\#eq:bma)
\end{equation}
for large $n$. The logic is that for large sample sizes the posterior distribution of the within-model parameter $\theta_m$ will be concentrated near $\hat{\theta}_m$, so $E(W \mid m, \hat{\theta}_m)$ is a good approximation to $E(W \mid m, \text{data})$.
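As a worked example of \@ref(eq:bma), the following sketch combines within-model estimates $E(W \mid m, \hat{\theta}_m)$ and BIC values for three hypothetical models (all numbers made up, purely for illustration) into a single model-averaged estimate, using a flat prior so that $g(m)$ cancels.
```{r bma.sketch}
# made-up within-model estimates E(W | m, theta.hat) for three models
what <- c(2.31, 2.50, 2.44)
# made-up BIC values for the same three models
bic <- c(150.2, 151.0, 157.3)
post <- exp(- (bic - min(bic)) / 2)  # flat prior, so g(m) cancels
post <- post / sum(post)
sum(what * post)  # approximate E(W | data) by BMA
```
The same arithmetic with AIC in place of BIC gives the frequentist version described next.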
Frequentists can also play the model averaging game. Even though they do not buy Bayesian arguments, they can see that model selection is a mug's game. If model selection is very unlikely to pick the best model (which is always the case when there are many models and the sample size is not humongous), then averaging over good models is better. There are many proposed ways to do frequentist model averaging (FMA) in the literature, but one simple way (and the only way we will cover in this course) is to replace BIC with AIC in \@ref(eq:bma), giving
\begin{equation}
   \hat{w}_\text{FMA}
   \approx
   \frac{\sum_{m \in \mathcal{M}} E(W \mid m, \hat{\theta}_m)
         \exp\bigl(- \tfrac{1}{2} \mathop{\rm AIC}(m) \bigr) g(m)}
        {\sum_{m^* \in \mathcal{M}} \exp\bigl(- \tfrac{1}{2} \mathop{\rm AIC}(m^*) \bigr) g(m^*)}
   (\#eq:fma)
\end{equation}

As an alternative to summing over all possible models in \@ref(eq:bma) and \@ref(eq:fma), Madigan and Raftery (1994, [DOI:10.1080/01621459.1994.10476894](https://doi.org/10.1080/01621459.1994.10476894)) proposed what they called "Occam's window", in which we sum over only a subset of the models under consideration: those with the smallest criterion values (equivalently, the largest weights). This is convenient because that is what R function `glmbb` outputs (only those models whose criterion value is within a "cutoff" of the minimum criterion value). The user can choose the "cutoff" as desired. With the default cutoff of 10, every model retained has weight $\exp\bigl(- \tfrac{1}{2} \mathop{\rm BIC}(m) \bigr)$, relative to the maximum weight, no less than $\exp\bigl(- \tfrac{1}{2} \text{cutoff} \bigr) = \exp(- 5) = `r exp(-5)`$.

# Examples

We will redo some of the examples from other handouts that already used R function `glmbb`.

## High School Student Survey

In the [high school student survey data from Chapter 9 in Agresti](ch9.html#example-high-school-student-survey) we saw there were two relatively good models according to AIC
```{r high.school.aic}
library(CatDataAnalysis)
data(table_9.3)
library(glmbb)
out <- glmbb(count ~ a * c * m, data = table_9.3, family = poisson)
summary(out)
```

Now we know how to interpret the weights. They are $\exp\bigl(- \tfrac{1}{2} \mathop{\rm AIC}(m) \bigr)$ normalized to sum to one.
```{r high.school.aic.summary}
sout <- summary(out)
sout$results
aic <- sout$results$criterion
w <- exp(- aic / 2)
w <- w / sum(w)
all.equal(sout$results$weight, w)
```

If we treat these as vaguely like posterior probabilities (they are the FMA competitor to posterior probabilities), then they say the "best" model is not so much better than the second "best", so we should not simply assume the "best" model is correct.

We can also use BIC.
```{r high.school.bic}
out.bic <- glmbb(count ~ a * c * m, data = table_9.3, family = poisson,
    criterion = "BIC", BIC.option = "sum")
summary(out.bic)
```

We see that BIC is much more favorable to the smaller (more parsimonious) model than AIC is. BIC puts probability `r round(summary(out.bic)$results$weight[1], 5)` on the "best" model, but AIC puts only probability `r round(summary(out)$results$weight[1], 5)` on this model. In this example the two criteria agree on the "best" model, but in general they need not.

The optional argument `BIC.option = "sum"` to R function `glmbb` is explained in the help for that function. We should always use it when doing categorical data analysis. It perhaps should be the default, but is not, for reasons of backward compatibility.
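We can check the BIC weights by hand, just as we did for the AIC weights above. (A small sanity check: if `summary` computes BIC weights the same way it computes AIC weights, which is what its output suggests, this should return `TRUE`. The minimum is subtracted before exponentiating only to guard against underflow; it cancels after normalization.)
```{r high.school.bic.weights}
sout.bic <- summary(out.bic)
bic <- sout.bic$results$criterion
w.bic <- exp(- (bic - min(bic)) / 2)
w.bic <- w.bic / sum(w.bic)
all.equal(sout.bic$results$weight, w.bic)
```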
## Seat Belt Use In the [seat belt use data from Chapter 9 in Agresti](ch9.html#example-seat-belt-use) we saw there were many relatively good models according to AIC ```{r options.wide,echo=FALSE} options(width = 200) ``` ```{r seat.belt.aic} # clean up R global environment rm(list = ls()) count <- c(7287, 11587, 3246, 6134, 10381, 10969, 6123, 6693, 996, 759, 973, 757, 812, 380, 1084, 513) injury <- gl(2, 8, 16, labels = c("No", "Yes")) gender <- gl(2, 4, 16, labels = c("Female", "Male")) location <- gl(2, 2, 16, labels = c("Urban", "Rural")) seat.belt <- gl(2, 1, 16, labels = c("No", "Yes")) library(glmbb) out.aic <- glmbb(count ~ seat.belt * injury * location * gender, family = "poisson") summary(out.aic) ``` Now we know how to interpret the weights. We see that no model gets a majority of the weight. None is highly probable to be the best model. There are just too many models under consideration for that to happen. This, of course, assumes that in FMA we want to use a flat "prior" (because frequentists don't like priors), that is, we want $g(m) = 1$ for all $m$ in \@ref(eq:fma). We can also use BIC. ```{r seat.belt.bic} out.bic <- glmbb(count ~ seat.belt * injury * location * gender, family = "poisson", criterion = "BIC", BIC.option = "sum") summary(out.bic) ``` There is a big difference between the analyses. * BIC does put the majority of the posterior probability on one model (assuming flat prior on models). * That best model according to BIC weight is number 5 according to AIC weight. Conversely, the best model according to AIC weight is below the cutoff according to BIC weight and so does not appear on the BIC list. * BIC plus Occam's window spreads out the posterior probability over fewer models than AIC + Occam's window. If we compute expected cell counts according to FMA and BMA, they should be rather different. It is not very obvious how to do FMA without refitting all the models, but all the model fits are stored in the R environment `out.aic$envir` ```{r options.normal,echo=FALSE} options(width = 80) ``` ```{r seat.belt.expected.fma.ls} ls(envir = out.aic$envir) ``` All of the R objects whose names begin with `sha1.` are the model fits. The object `min.crit` is the same as `out.aic$min.crit` ```{r seat.belt.expected.fam.min.crit} identical(out.aic$min.crit, out.aic$envir$min.crit) ``` And, although this is not documented, we can look in the code for R function `summary.glmbb` to see how to extract the criteria from these models. ```{r seat.belt.expected.fma} min.crit <- out.aic$min.crit cutoff <- out.aic$cutoff e <- out.aic$envir rm("min.crit", envir = e) criterion <- unlist(eapply(e, "[[", "criterion")) is.in.window <- criterion <= min.crit + cutoff w <- criterion[is.in.window] w <- exp(- w / 2) w <- w / sum(w) moo <- eapply(e, "[[", "fitted.values") moo <- moo[is.in.window] moo <- as.data.frame(moo) moo <- as.matrix(moo) mu.hat <- drop(moo %*% w) foo <- data.frame(injury = injury, gender = gender, location = location, seat.belt = seat.belt, observed = count, expected.fma = mu.hat) print(foo, digits=5) ``` Note this is frequentist model averaging. The expected cell counts reported in the last column do not correspond to the prediction of any model. They are a weighted average of the predictions of all `r nrow(summary(out.aic))` models that pass through "Occam's window" (that are reported by `summary(out.aic)`). And the weights are the weights reported by `summary(out.aic)`. 
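As a quick sanity check on that last claim, we can compare the weights just computed with those reported by `summary(out.aic)`. (The two vectors are sorted before comparison because the model fits are stored in the environment in an arbitrary order; if the weights really are the same, `all.equal` should return `TRUE`.)
```{r seat.belt.fma.weights.check}
w.summary <- summary(out.aic)$results$weight
all.equal(sort(unname(w)), sort(unname(w.summary)))
```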
And, if we redo the same calculation using BMA rather than FMA, we get
```{r seat.belt.expected.bma}
min.crit <- out.bic$min.crit
cutoff <- out.bic$cutoff
e <- out.bic$envir
rm("min.crit", envir = e)
criterion <- unlist(eapply(e, "[[", "criterion"))
is.in.window <- criterion <= min.crit + cutoff
w <- criterion[is.in.window]
w <- exp(- w / 2)
w <- w / sum(w)
moo <- eapply(e, "[[", "fitted.values")
moo <- moo[is.in.window]
moo <- as.data.frame(moo)
moo <- as.matrix(moo)
mu.hat <- drop(moo %*% w)
foo <- data.frame(foo, expected.bma = mu.hat)
print(foo, digits=5)
```

Even though the `fitted.values` components of the model fits are the same in FMA and BMA, the weighted averages are different because the weights are different and because different numbers of models are given any weight at all, `r nrow(summary(out.aic))` for FMA and `r nrow(summary(out.bic))` for BMA.

If one wanted to go on and calculate other things that are functions of these expected values, like the conditional odds ratios that Agresti computes, then those should be based on the FMA or BMA values computed above.

If one is doing FMA, then standard errors for these FMA estimators can be computed as described by [Burnham and Anderson, Section 4.3.2](https://www.amazon.com/Model-Selection-Multimodel-Inference-Information-Theoretic/dp/0387953647/). But we will not give examples of that here.

If one wants Bayesian posterior distributions for these expectations, one would have to do full BMA via MCMC as described in the aforementioned vignette [bfst.pdf](https://cloud.r-project.org/web/packages/mcmc/vignettes/bfst.pdf) in R package `mcmc`. But, as also mentioned above, we will not be discussing that in this course.

## Alligator Food Choice

### AIC {#alligator-aic}

In the [alligator food choice data from Chapter 8 in Agresti](ch8.html#multinomial-response) we saw there were many relatively good models according to AIC
```{r options.wide.too,echo=FALSE}
options(width = 200)
```
```{r alligator.aic}
# clean up R global environment
rm(list = ls())
library(CatDataAnalysis)
data(table_8.1)
foo <- transform(table_8.1,
    lake = factor(lake,
        labels = c("Hancock", "Oklawaha", "Trafford", "George")),
    gender = factor(gender, labels = c("Male", "Female")),
    size = factor(size, labels = c("<=2.3", ">2.3")),
    food = factor(food,
        labels = c("Fish", "Invertebrate", "Reptile", "Bird", "Other")))
out.aic <- glmbb(big = count ~ lake * gender * size * food,
    little = ~ lake * gender * size + food,
    family = poisson, data = foo)
summary(out.aic)
```

### BIC {#alligator-bic}

And now we can also do BIC.
```{r alligator.bic}
out.bic <- glmbb(big = count ~ lake * gender * size * food,
    little = ~ lake * gender * size + food,
    family = poisson, data = foo,
    criterion = "BIC", BIC.option = "sum")
summary(out.bic)
```

This is quite a shock. It says that the best model (according to BIC) is the smallest model allowed, in which `food` is not associated with `lake`, `gender`, or `size`, nor with any interaction of them. If you think like a Bayesian, perhaps there is nothing much going on in these data.
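As in the high school example, we can ask how much weight each criterion puts on its own best model (this reuses only `summary` components already used above).
```{r alligator.weights}
# weight on the best model under each criterion
summary(out.bic)$results$weight[1]
summary(out.aic)$results$weight[1]
```
The closer the first number is to one, the more strongly the BIC analysis concentrates on the model in which `food` is unrelated to the other variables.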