Fitting a Regression Model
Simple Linear Regression
We start with simple linear regression, where the regression function has the form

g(x) = β_{1} + β_{2} x

The fitting is done using R.
The estimates of the regression coefficients are
β_{1} = −0.7861 and
β_{2} = 0.6850,
and the unbiased estimate of the error variance σ^{2} is 1.083^{2} ≈ 1.173, where 1.083 is the Residual standard error reported just below the coefficients table.
The R function lm
fits the linear model.
The expression y ~ x
specifies the model.
More about model formulas is given in the
R documentation for model formulas, but you don't need to look at that.
We'll cover all the formulas you need to know about in this web page.
For now it's enough to know that the formula specifies the predictor x and the response y.
If these variables were instead named fred and sally, respectively, the model formula would have to be sally ~ fred and the whole model fitting statement would be

out <- lm(sally ~ fred)
Note that the model formula doesn't mention regression coefficients explicitly.
There is one regression coefficient for each predictor in the formula (that is, there is a β_{2} for the predictor x), and there is also a regression coefficient for the constant predictor (that is, there is a β_{1} too) included by default.
The reason why the model fit is saved in a variable out
is because we generally want to do many things with it. In this example,
we use it twice. In other examples, we will use it more.
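As a sketch, here is the whole fit in R, with simulated stand-in data (an assumption: the actual data for this example are not reproduced here, so the numbers will differ from those above):

```r
# made-up stand-in data (the real data are not reproduced here)
set.seed(42)
x <- 1:10
y <- -0.8 + 0.7 * x + rnorm(10)

# fit the model and save the fit in `out` for later reuse
out <- lm(y ~ x)

summary(out)          # coefficients table and residual standard error
coef(out)             # the two estimated regression coefficients
summary(out)$sigma    # the residual standard error itself
```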
Quadratic Regression
Now we do the model with regression function

g(x) = β_{1} + β_{2} x + β_{3} x^{2}
The estimates of the regression coefficients reported by the printout are
β_{1} = 0.7451,
β_{2} = 0.6186, and
β_{3} = 0.0126.
The R function lm
fits the linear model.
The expression y ~ x + I(x^2)
specifies the model.
More about model formulas is given in the R documentation for model formulas, but you don't need to look at that. We'll cover all the formulas you need to know about in this web page.
For now it's enough to know that the formula specifies the predictors x and x^2 and the response y.
The plus sign (+) is magical: it separates different predictors.

The I function wraps the second predictor. It is necessary to write I(x^2) instead of just x^2 because otherwise the hat (^) would not be interpreted correctly; ^ has a special meaning in model formulas.
Generally, you have to wrap complicated expressions for predictors, because arithmetic operators such as ^, *, and + have special meanings in model formulas. Strictly speaking, plain function calls like sin(x) work unwrapped, but wrapping them as I(sin(x)) does no harm. Only simple variable names never need to be wrapped.
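To see why the wrapper matters, note that in formula syntax x^2 means x crossed with itself, which collapses back to just x, so only the I(x^2) version adds a quadratic predictor. A sketch with made-up data (an assumption):

```r
# made-up data for illustration only
set.seed(1)
x <- 1:10
y <- 1 + 0.5 * x + 0.1 * x^2 + rnorm(10)

wrong <- lm(y ~ x + x^2)      # ^ read as formula syntax: same as y ~ x
right <- lm(y ~ x + I(x^2))   # I() protects ^, giving a genuine quadratic fit

length(coef(wrong))           # 2 coefficients: constant and x
length(coef(right))           # 3 coefficients: constant, x, and x^2
```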
Note that the model formula doesn't mention regression coefficients explicitly.
There is one regression coefficient for each predictor in the formula, that is, there is a β_{2} for the predictor x and a β_{3} for the predictor I(x^2), and there is also a regression coefficient for the constant predictor (that is, there is a β_{1} too) included by default.
The curve function draws the sample regression curve on the plot. We will just treat it as magic because it is too complicated to explain here. It is, of course, documented in the online help, but you don't need to look at that.
For comparison, we add (dotted line) the regression line from the other plot (simple linear regression).
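For the curious, the magic amounts to something like the following sketch (made-up data, an assumption; with the real data it would reproduce the plot described above):

```r
# made-up data for illustration only
set.seed(2)
x <- 1:10
y <- 1 + 0.5 * x + 0.1 * x^2 + rnorm(10)

out.linear <- lm(y ~ x)            # simple linear regression
out.quad   <- lm(y ~ x + I(x^2))   # quadratic regression

plot(x, y)
# curve() evaluates its first argument over a grid of x values
# and adds the resulting curve to the plot
beta <- coef(out.quad)
curve(beta[1] + beta[2] * x + beta[3] * x^2, add = TRUE)
# the simple linear regression line, dotted, for comparison
abline(out.linear, lty = "dotted")
```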
Multiple Regression
Here is a model with two predictor variables.
We can't call both predictor variables by the same name.
Here we call them x1
and x2
.
The regression function is

g(x_{1}, x_{2}) = β_{1} + β_{2} x_{1} + β_{3} x_{2}
The regression coefficients reported by the printout are
β_{2} = 0.4503 and
β_{3} = 0.1725.
The R function lm
fits the linear model.
The expression y ~ x1 + x2
specifies the model.
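A sketch of the fit (made-up data, an assumption; the real data are not reproduced here):

```r
# made-up data for illustration only
set.seed(3)
x1 <- rnorm(20)
x2 <- rnorm(20)
y  <- 1 + 0.4 * x1 + 0.2 * x2 + rnorm(20)

# one coefficient per predictor, plus one for the constant predictor
out <- lm(y ~ x1 + x2)
summary(out)
```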
Hypothesis Tests
Tests about Regression Coefficients
We'll use the example for simple linear regression and the example for quadratic regression which were done above as our examples for hypothesis tests about regression coefficients.
The computation below redoes those examples, except for the plots, which we don't need.
The columns labeled t value and Pr(>|t|) of the Coefficients: table give, respectively,

- t value, the test statistic for a test of the hypothesis that the specified regression coefficient (the one for that line of the table) is actually zero, and
- Pr(>|t|), the P-value for the two-tailed test about that regression coefficient.
For example, if we want to do the test with regression function

g(x) = β_{1} + β_{2} x

and hypotheses

H_{0} : β_{2} = 0
H_{1} : β_{2} ≠ 0

then the value of the test statistic is T = 3.801. Assuming the null hypothesis and assuming the standard regression assumptions (i. i. d. mean zero normal errors), this test statistic has a Student t distribution with n − 2 degrees of freedom. The corresponding P-value is P = 0.00523 (two-tailed). But you don't have to do a lookup in a table of the t distribution; R does it for you.

The interpretation of P = 0.00523 for this test is, of course, that the null hypothesis is rejected and the true unknown population regression coefficient β_{2} is nonzero.
If you wanted a one-tailed rather than a two-tailed test, you would have to divide the given P-value by 2.
For another example, if we want to do the test with regression function

g(x) = β_{1} + β_{2} x + β_{3} x^{2}

and hypotheses

H_{0} : β_{3} = 0
H_{1} : β_{3} ≠ 0

then the value of the test statistic is T = 0.092 and the P-value is 0.929.
The interpretation of this P-value is, of course, that the null hypothesis is accepted and the true unknown population regression coefficient is zero. As usual, that doesn't prove the population regression coefficient is actually zero. It only says that the data at hand don't give any real evidence it is not zero.
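Although R does the lookup for you, the printed P-values can be checked with the t distribution function pt (here n = 10, since the quadratic fit reports 7 residual degrees of freedom with three coefficients):

```r
# two-tailed P-value for T = 3.801 on n - 2 = 8 degrees of freedom
2 * pt(-abs(3.801), df = 8)    # about 0.00523

# two-tailed P-value for T = 0.092 on n - 3 = 7 degrees of freedom
2 * pt(-abs(0.092), df = 7)    # about 0.929
```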
Summary
The quadratic regression is unnecessary for these data. It appears that the linear model with the regression function

g(x) = β_{1} + β_{2} x

fits the data well.

The linear regression does appear to be necessary for these data. It appears that the model with the constant regression function

g(x) = β_{1}

does not fit the data well. So the linear model is necessary.
Caution
Remember the dogma: a statistical hypothesis test is valid only when you do only one test on a data set.

We can push that a little bit. It's not too bogus if we do only one test per regression.

But we should never do more than one test on one regression. That means we should never look at more than one of the P-values printed in the Pr(>|t|) column of the Coefficients: table in a regression output.
The two regressions in this section are a good example of this issue.
- Suppose we look at the second regression and conclude (as we did above) that β_{3} is not significantly different from zero.
- Then it would be the Wrong Thing (with a capital W and a capital T) to do another test using the same regression output and conclude that β_{2} is also not significantly different from zero (P = 0.437).
- In fact, the Right Thing is to fit the linear regression model and conclude exactly the opposite, that β_{2} is significantly different from zero (P = 0.00523).
- After you've used the quadratic regression output once (to conclude β_{3} is not significantly different from zero), you shouldn't use this regression output for anything else! If you want to do another test, then it should be based on another regression fit.
Omnibus Tests
Also of interest in multiple regression is a test of whether there are any regression coefficients that are significantly nonzero except for the coefficient β_{1} that goes with the constant predictor, which is usually not of interest. That is, we want to do a test of

H_{0} : β_{i} = 0, for all i > 1
H_{1} : β_{i} ≠ 0, for some i > 1
This test is often not of particular research interest. It serves as a straw man to knock down. The null hypothesis is generally thought to be false and is easily rejected with a reasonable amount of data.
But it is important to do the test anyway. When the null hypothesis cannot be rejected, this means the data are completely worthless. The model that has the constant regression function fits as well as the regression model.
Thus if this test fails to reject H_{0} the right thing to do is throw the data in the trash. They're worthless. No further analysis need be done.
The idea that their precious data, obtained at great cost in time, money, or effort, might actually be worthless upsets some people so much that they don't even want to think about doing this test. But they should. Drawing conclusions from worthless data makes you a fool, whether or not you are aware of your foolishness. If someone else reanalyzes the data, they'll find out.
R does this test in every regression printout. For the quadratic regression example done above, part of the printout was
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   0.7451     0.7319   1.018    0.343
x             0.6186     0.7500   0.825    0.437
I(x^2)        0.0126     0.1373   0.092    0.929

Residual standard error: 1.157 on 7 degrees of freedom
Multiple R-squared: 0.644,     Adjusted R-squared: 0.5423
F-statistic: 6.332 on 2 and 7 DF,  p-value: 0.02692
The last line describes this omnibus
test.
The interpretation is that the null hypothesis is rejected (P = 0.02692). So the regression is not completely worthless. Either β_{2} or β_{3} appears to be nonzero (or perhaps both).
Note that this is quite the opposite conclusion from what one would get if one were foolish enough to do two tests on the same regression using the P-values for the regression coefficients in the Pr(>|t|) column of the Coefficients: table.

Yet another example of the reason for the "only one test" rule.
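The omnibus P-value on the F-statistic line can be reproduced with the F distribution function pf:

```r
# upper-tail probability of the F(2, 7) distribution at the printed statistic
pf(6.332, df1 = 2, df2 = 7, lower.tail = FALSE)   # about 0.02692
```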
Model Comparison Tests
In this section we consider testing a big model versus a little model. The null hypothesis is some regression model, and the alternative hypothesis is some other regression model, and the little model is a submodel of the big model (the little model is obtained by setting some of the regression coefficients of the big model to zero).
As an example, let us consider the
multiple regression example done above.
We use the model from that fit as the little model. And we consider a quadratic model as the big model.

A quadratic model has three more terms: the squares of both predictors and their product. The regression function for the big model is

g(x_{1}, x_{2}) = β_{1} + β_{2} x_{1} + β_{3} x_{2} + β_{4} x_{1}^{2} + β_{5} x_{1} x_{2} + β_{6} x_{2}^{2}

We obtain the regression function for the little model by setting β_{4} = β_{5} = β_{6} = 0. So the little model is indeed a submodel of the big model (as the test requires).
To compare the fits of the two models we first do the fits, which we save in the R objects out.little and out.big. Then we compare the models using the anova function. The printout of that function is a so-called ANOVA (analysis of variance) table, which dates back to the days of hand calculation and gives old timers a warm fuzzy feeling. For our purposes we only need to get the test statistic (F = 1.5442) and P-value (P = 0.3337) out of this table.
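A sketch of the whole comparison (made-up data, an assumption; the real data would give F = 1.5442 and P = 0.3337):

```r
# made-up data for illustration only
set.seed(4)
x1 <- rnorm(20)
x2 <- rnorm(20)
y  <- 1 + 0.4 * x1 + 0.2 * x2 + rnorm(20)

out.little <- lm(y ~ x1 + x2)
out.big    <- lm(y ~ x1 + x2 + I(x1^2) + I(x1 * x2) + I(x2^2))

# F test of the little model (null) against the big model (alternative)
anova(out.little, out.big)
```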
The interpretation of the test is that the null hypothesis is accepted. The data give no evidence of statistically significant departure from the little model. That doesn't prove the little model is actually correct, only that the data give no evidence it isn't.
Note that you can't get the same effect by looking at the three P-values for the three regression coefficients separately. That violates the "do only one test" rule.
If you want to make this model comparison, you need to do this test.
The omnibus test
of the preceding section is the special case
of the test of this section where the little model has only the constant
predictor.
Confidence Intervals
Confidence Intervals for Regression Coefficients
We'll use the example for quadratic regression which was done above as our example.
R's regression printout doesn't give you confidence intervals directly. It does most of the work but leaves a bit for you. Let's again look at the regression printout for that example.
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   0.7451     0.7319   1.018    0.343
x             0.6186     0.7500   0.825    0.437
I(x^2)        0.0126     0.1373   0.092    0.929

Residual standard error: 1.157 on 7 degrees of freedom
Multiple R-squared: 0.644,     Adjusted R-squared: 0.5423
F-statistic: 6.332 on 2 and 7 DF,  p-value: 0.02692
The printout gives us three numbers needed to construct the confidence interval.
- The first number is the estimate, also called the sample regression coefficient.
- The second number is the standard error of this estimate (the estimated standard deviation of the sampling distribution of the estimate).
- The third number is the degrees of freedom for the t pivotal quantity made from the first two numbers.
Thus one makes up a confidence interval in the usual way:

estimate ± critical value × standard error

where the critical value comes from the t distribution with the stated degrees of freedom.
If we want a 95% confidence interval, the critical value for 7 degrees of freedom is given by

Rweb:> qt(0.975, 7)
[1] 2.364624
and then the confidence interval for the parameter β_{3} for these data is given by

Rweb:> 0.0126 + c(-1, 1) * qt(0.975, 7) * 0.1373
[1] -0.3120629  0.3372629
Of course, this whole calculation can be done by hand with a calculator and a table of the t distribution starting with the regression output. No further use of R is necessary after getting the regression output.
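Current versions of R can also produce the interval in one step with the confint function, which performs exactly this calculation; a sketch with made-up data (an assumption):

```r
# made-up data for illustration only
set.seed(5)
x <- 1:10
y <- 1 + 0.5 * x + 0.1 * x^2 + rnorm(10)
out <- lm(y ~ x + I(x^2))

# by hand: estimate plus or minus t critical value times standard error
est <- coef(summary(out))["I(x^2)", "Estimate"]
se  <- coef(summary(out))["I(x^2)", "Std. Error"]
by.hand <- est + c(-1, 1) * qt(0.975, df = df.residual(out)) * se

# the same interval in one step
confint(out, "I(x^2)", level = 0.95)
```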
Confidence Intervals for the Regression Function
We can also get a confidence interval for the value of the regression function μ = g(x) at some predictor point x = (x_{1}, ..., x_{k}).
The R function predict
does this.
For a first example, take the quadratic regression example we used frequently in this web page. Suppose we want to estimate g(x) for x = 4.5.
The interval is (0.97802, 3.60901).
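The call looks like this (made-up data, an assumption; with the real data it would reproduce the interval above):

```r
# made-up data for illustration only
set.seed(6)
x <- 1:10
y <- 1 + 0.5 * x + 0.1 * x^2 + rnorm(10)
out <- lm(y ~ x + I(x^2))

# confidence interval for the regression function at x = 4.5;
# the new predictor value is supplied in the newdata argument
predict(out, newdata = data.frame(x = 4.5), interval = "confidence")
```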
If the predictor is a vector, then each component must be supplied
in the newdata
argument.
This time we use the multiple regression example done above. Suppose we want to estimate g(x_{1}, x_{2}) for x_{1} = 2.0 and x_{2} = 64.
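A sketch of the call (made-up data, an assumption), with both components of the predictor vector supplied in newdata:

```r
# made-up data for illustration only
set.seed(7)
x1 <- rnorm(20, mean = 2)
x2 <- rnorm(20, mean = 60, sd = 5)
y  <- 1 + 0.4 * x1 + 0.2 * x2 + rnorm(20)
out <- lm(y ~ x1 + x2)

# each component of the predictor vector gets a column in newdata
predict(out, newdata = data.frame(x1 = 2.0, x2 = 64),
        interval = "confidence")
```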