Regression Analysis: Basic Concepts
Allin Cottrell (last revised 2003/02/03)
1 The simple linear model

This model represents the dependent variable, $y_i$, as a linear function of one independent variable, $x_i$, subject to a random 'disturbance' or 'error', $u_i$:

$$y_i = \beta_0 + \beta_1 x_i + u_i$$

The error term $u_i$ is assumed to have a mean value of zero, a constant variance, and to be uncorrelated with itself across observations ($E(u_i u_j) = 0$, $i \neq j$). We may summarize these conditions by saying that $u_i$ is 'white noise'.
The task of estimation is to determine regression coefficients $\hat\beta_0$ and $\hat\beta_1$, estimates of the unknown parameters $\beta_0$ and $\beta_1$ respectively. The estimated equation will have the form

$$\hat y_i = \hat\beta_0 + \hat\beta_1 x_i$$

We define the estimated error or residual associated with each pair of data values as

$$\hat u_i = y_i - \hat y_i = y_i - (\hat\beta_0 + \hat\beta_1 x_i)$$
In a scatter diagram of $y$ against $x$, this is the vertical distance between the observed $y_i$ value and the 'fitted value', $\hat y_i$, as shown in Figure 1.

[Figure 1: Regression residual. The fitted line $\hat\beta_0 + \hat\beta_1 x$ is plotted against $x$; the residual $\hat u_i$ is the vertical distance between the observed $y_i$ and the fitted value $\hat y_i$ at $x_i$.]
Note that we are using a different symbol for this estimated error ($\hat u_i$) as opposed to the 'true' disturbance or error term defined above ($u_i$). These two will coincide only if $\hat\beta_0$ and $\hat\beta_1$ happen to be exact estimates of the regression parameters $\beta_0$ and $\beta_1$.

The basic technique for determining the coefficients $\hat\beta_0$ and $\hat\beta_1$ is Ordinary Least Squares (OLS): values for $\hat\beta_0$ and $\hat\beta_1$ are chosen so as to minimize the sum of the squared residuals (SSR). The SSR may be written as
$$\mathrm{SSR} = \sum \hat u_i^2 = \sum (y_i - \hat y_i)^2 = \sum (y_i - \hat\beta_0 - \hat\beta_1 x_i)^2$$

(It should be understood throughout that $\sum$ denotes the summation $\sum_{i=1}^{n}$, where $n$ denotes the number of observations in the sample.) The minimization of SSR is a calculus exercise: we need to find the partial derivatives of SSR with respect to both $\hat\beta_0$ and $\hat\beta_1$ and set them equal to zero. This generates two equations (the 'normal equations' of least squares) in the two unknowns, $\hat\beta_0$ and $\hat\beta_1$. These equations are then solved jointly to yield the estimated coefficients.

We start out from:

$$\frac{\partial \mathrm{SSR}}{\partial \hat\beta_0} = -2 \sum (y_i - \hat\beta_0 - \hat\beta_1 x_i) = 0 \qquad (1)$$

$$\frac{\partial \mathrm{SSR}}{\partial \hat\beta_1} = -2 \sum x_i (y_i - \hat\beta_0 - \hat\beta_1 x_i) = 0 \qquad (2)$$
Equation (1) implies that

$$\sum y_i - n\hat\beta_0 - \hat\beta_1 \sum x_i = 0 \;\Rightarrow\; \hat\beta_0 = \bar y - \hat\beta_1 \bar x \qquad (3)$$

while equation (2) implies that

$$\sum x_i y_i - \hat\beta_0 \sum x_i - \hat\beta_1 \sum x_i^2 = 0 \qquad (4)$$

We can now substitute for $\hat\beta_0$ in equation (4), using (3). This yields

$$\sum x_i y_i - (\bar y - \hat\beta_1 \bar x) \sum x_i - \hat\beta_1 \sum x_i^2 = 0$$

$$\sum x_i y_i - \bar y \sum x_i - \hat\beta_1 \left( \sum x_i^2 - \bar x \sum x_i \right) = 0$$

$$\Rightarrow\; \hat\beta_1 = \frac{\sum x_i y_i - \bar y \sum x_i}{\sum x_i^2 - \bar x \sum x_i} \qquad (5)$$

Equations (3) and (5) can now be used to generate the regression coefficients. First use (5) to find $\hat\beta_1$, then use (3) to find $\hat\beta_0$.
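To make the mechanics concrete, here is a minimal NumPy sketch of equations (3) and (5). The arrays x and y are hypothetical data chosen only for illustration; any sample of paired observations would do.

```python
import numpy as np

# Hypothetical sample of paired observations (x_i, y_i).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

x_bar, y_bar = x.mean(), y.mean()

# Equation (5): slope estimate.
beta1_hat = (np.sum(x * y) - y_bar * np.sum(x)) / (np.sum(x**2) - x_bar * np.sum(x))

# Equation (3): intercept estimate.
beta0_hat = y_bar - beta1_hat * x_bar

print(beta0_hat, beta1_hat)
```

The same numbers drop out of np.polyfit(x, y, 1) (slope reported first), which can serve as a cross-check.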
2 Goodness of fit
The OLS technique ensures that we find the values of $\hat\beta_0$ and $\hat\beta_1$ which 'fit the sample data best', in the specific sense of minimizing the sum of squared residuals. There is no guarantee, however, that $\hat\beta_0$ and $\hat\beta_1$ correspond exactly with the unknown parameters $\beta_0$ and $\beta_1$. Neither, in fact, is there any guarantee that the 'best fitting' line fits the data well: maybe the data do not even approximately lie along a straight-line relationship. So how do we assess the adequacy of the 'fitted' equation?

• First step: find the residuals. For each x-value in the sample, compute the fitted value or predicted value of y, using $\hat y_i = \hat\beta_0 + \hat\beta_1 x_i$.

• Then subtract each fitted value from the corresponding actual, observed, value of $y_i$. Squaring and summing these differences gives the SSR, as shown in Table 1.
Now obviously, the magnitude of the SSR will depend in part on the number of data points in the sample (other things equal, the more data points, the bigger the sum of squared residuals). To allow for this we can divide through by the 'degrees of freedom', which is the number of data points minus the number of parameters to be estimated (2 in the case of a simple regression with an intercept term). Let $n$ denote the number of data points (or 'sample size'); then the degrees of freedom are d.f. = $n - 2$. The square root of the resulting expression is called the estimated standard error of the regression ($\hat\sigma$):

$$\hat\sigma = \sqrt{\frac{\mathrm{SSR}}{n - 2}}$$
Table 1: Example of finding residuals

Given $\hat\beta_0 = 52.3509$; $\hat\beta_1 = 0.1388$

  data (x_i)   data (y_i)   fitted (ŷ_i)   û_i = y_i − ŷ_i       û_i²
        1065        199.9          200.1              −0.2        0.04
        1254        228.0          226.3               1.7        2.89
        1300        235.0          232.7               2.3        5.29
        1577        285.0          271.2              13.8      190.44
        1600        239.0          274.4             −35.4     1253.16
        1750        293.0          295.2              −2.2        4.84
        1800        285.0          302.1             −17.1      292.41
        1870        365.0          311.8              53.2     2830.24
        1935        295.0          320.8             −25.8      665.64
        1948        290.0          322.6             −32.6     1062.76
        2254        385.0          365.1              19.9      396.01
        2600        505.0          413.1              91.9     8445.61
        2800        425.0          440.9             −15.9      252.81
        3000        415.0          468.6             −53.6     2872.96
                                            sum:        0      18273.6 = SSR
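The table's arithmetic can be reproduced with a short NumPy script. This is a sketch only; the data and coefficients below are copied from Table 1, and small discrepancies from the tabulated figures are just rounding.

```python
import numpy as np

# Data and estimated coefficients from Table 1.
x = np.array([1065, 1254, 1300, 1577, 1600, 1750, 1800,
              1870, 1935, 1948, 2254, 2600, 2800, 3000], dtype=float)
y = np.array([199.9, 228.0, 235.0, 285.0, 239.0, 293.0, 285.0,
              365.0, 295.0, 290.0, 385.0, 505.0, 425.0, 415.0])
beta0_hat, beta1_hat = 52.3509, 0.1388

y_fitted = beta0_hat + beta1_hat * x      # fitted values
u_hat = y - y_fitted                      # residuals
ssr = np.sum(u_hat**2)                    # sum of squared residuals (~ 18273.6)
sigma_hat = np.sqrt(ssr / (len(x) - 2))   # standard error of the regression

print(round(ssr, 1), round(sigma_hat, 2))
```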
The standard error gives us a first handle on how well the fitted equation fits the sample data. But what counts as a 'big' $\hat\sigma$ and what counts as a 'small' one depends on the context. The standard error is sensitive to the units of measurement of the dependent variable.

A more standardized statistic, which also gives a measure of the 'goodness of fit' of the estimated equation, is $R^2$. This statistic is calculated as follows:

$$R^2 = 1 - \frac{\mathrm{SSR}}{\sum (y_i - \bar y)^2} \equiv 1 - \frac{\mathrm{SSR}}{\mathrm{SST}}$$

Note that SSR can be thought of as the 'unexplained' variation in the dependent variable, the variation 'left over' once the predictions of the regression equation are taken into account. The expression $\sum (y_i - \bar y)^2$, on the other hand, represents the total variation (total sum of squares or SST) of the dependent variable around its mean value. So $R^2$ can be written as 1 minus the proportion of the variation in $y_i$ that is 'unexplained'; or in other words, it shows the proportion of the variation in $y_i$ that is accounted for by the estimated equation. As such, it must be bounded by 0 and 1:

$$0 \leq R^2 \leq 1$$
$R^2 = 1$ is a 'perfect score', obtained only if the data points happen to lie exactly along a straight line; $R^2 = 0$ is a perfectly lousy score, indicating that $x_i$ is absolutely useless as a predictor for $y_i$.
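A minimal sketch of the $R^2$ calculation, written as a hypothetical helper function (the name r_squared is ours, not from the text):

```python
import numpy as np

def r_squared(y, y_fitted):
    """Unadjusted R^2: one minus the share of variation left unexplained."""
    ssr = np.sum((y - y_fitted) ** 2)      # 'unexplained' variation (SSR)
    sst = np.sum((y - np.mean(y)) ** 2)    # total variation around the mean (SST)
    return 1.0 - ssr / sst
```

With the Table 1 data from the earlier sketch, r_squared(y, beta0_hat + beta1_hat * x) returns the share of the variation in y accounted for by the fitted line.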
When you add an additional variable to a regression equation, there is no way it can raise the SSR, and in fact it's likely to lower the SSR somewhat even if the added variable is not very relevant. And lowering the SSR means raising the $R^2$ value. One might therefore be tempted to add too many extraneous variables to a regression if one were focussed on achieving the maximum $R^2$. An alternative calculation, the adjusted R-squared or $\bar R^2$, attaches a small penalty to adding more variables: thus if adding an additional variable raises the $\bar R^2$ for a regression, that's a better indication that it has "improved" the model than if it merely raises the plain, unadjusted $R^2$. The formula is:

$$\bar R^2 = 1 - \frac{\mathrm{SSR}/(n - k - 1)}{\mathrm{SST}/(n - 1)} = 1 - \frac{n - 1}{n - k - 1}\,(1 - R^2)$$

where $k + 1$ represents the number of parameters being estimated (2 in a simple regression).
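The same idea in code, again as a hedged sketch (here k denotes the number of slope coefficients, so k = 1 in a simple regression; the function name is ours):

```python
import numpy as np

def adjusted_r_squared(y, y_fitted, k):
    """Adjusted R^2: penalizes the unexplained share by the degrees of freedom used."""
    n = len(y)
    ssr = np.sum((y - y_fitted) ** 2)
    sst = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - (ssr / (n - k - 1)) / (sst / (n - 1))
```

Because of the penalty, an added variable must cut SSR by enough to offset the lost degree of freedom before $\bar R^2$ actually rises.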
To summarize so far: alongside the estimated regression coefficients $\hat\beta_0$ and $\hat\beta_1$, we should also examine the sum of squared residuals (SSR), the regression standard error ($\hat\sigma$) and the $R^2$ value (adjusted or unadjusted), in order to judge whether the best-fitting line does in fact fit the data to an adequate degree.
3 Confidence intervals for regression coefficients

As stated above, even if the OLS math is performed correctly there is no guarantee that the coefficients $\hat\beta_0$ and $\hat\beta_1$ thus obtained correspond exactly with the underlying parameters $\beta_0$ and $\beta_1$. Actually, such an exact correspondence is highly unlikely. The statistical issue here is a very general one: estimation is inevitably subject to sampling error.

As we have seen, a confidence interval provides a means of quantifying the uncertainty produced by sampling error. Instead of simply stating 'I found a sample mean income of $39,000 and that is my best guess at the population mean, although I know it is probably wrong', we can make a statement like: 'I found a sample mean of $39,000, and there is a 95 percent probability that my estimate is off the true parameter value by no more than $1200.'
Confidence intervals for regression coefficients can be constructed in a similar manner. Suppose we come up with a slope estimate of $\hat\beta_1 = .90$, using the OLS technique, and we want to quantify our uncertainty over the true slope parameter, $\beta_1$, by drawing up a 95 percent confidence interval for this parameter.

Provided our sample size is reasonably large, the rule of thumb is the same as before; the 95 percent confidence interval for $\beta_1$ is given by:

$$\hat\beta_1 \pm 2 \text{ standard errors}$$

Our single best guess at $\beta_1$ ('point estimate') is simply $\hat\beta_1$, since the OLS technique yields unbiased estimates of the parameters (actually, this is not always true, but we'll postpone consideration of tricky cases where OLS estimates are biased). And on exactly the same grounds as before, there is a 95 percent chance that our estimate $\hat\beta_1$ will lie within 2 standard errors of its mean value, $\beta_1$. But how do we find the standard error of $\hat\beta_1$? I shall not derive this rigorously, but give the formula along with an intuitive explanation. The standard error of $\hat\beta_1$ (written as se($\hat\beta_1$), and not to be confused with the standard error of the regression, $\hat\sigma$) is given by the formula:

$$\mathrm{se}(\hat\beta_1) = \sqrt{\frac{\hat\sigma^2}{\sum (x_i - \bar x)^2}}$$

i.e., it is the square root of [the square of the regression standard error divided by the total variation of the independent variable, $x_i$, around its mean].
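A minimal sketch of this formula, assuming the coefficients have already been estimated as above (the helper name is hypothetical):

```python
import numpy as np

def slope_standard_error(x, y, beta0_hat, beta1_hat):
    """se(beta1_hat): sqrt of sigma_hat^2 over the total variation of x about its mean."""
    residuals = y - (beta0_hat + beta1_hat * x)
    sigma_hat_sq = np.sum(residuals ** 2) / (len(x) - 2)   # sigma_hat squared
    return np.sqrt(sigma_hat_sq / np.sum((x - np.mean(x)) ** 2))
```

An approximate 95 percent interval for the slope is then beta1_hat ± 2 * slope_standard_error(x, y, beta0_hat, beta1_hat).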
What are the various components of the calculation doing? First, note the general point that the larger is se($\hat\beta_1$), the wider will be the confidence interval for any specified confidence level. Now, according to the formula, the larger is $\hat\sigma$, the larger will be se($\hat\beta_1$), and hence the wider the confidence interval for the true slope. This makes sense: $\hat\sigma$ provides a measure of the 'degree of fit' of the estimated equation, as discussed above. If the equation fits the data badly ('large' $\hat\sigma$), it stands to reason that we should have a relatively high degree of uncertainty over the true slope parameter.

Secondly, the formula tells us that, other things equal, a high degree of variation of $x_i$ makes for a smaller se($\hat\beta_1$), and so a tighter confidence interval. Why should this be? The more $x_i$ has varied in our data sample, the better the chance we have of picking up any relationship that exists between $x$ and $y$. Take an extreme case and this is rather obvious: suppose that $x$ happens not to have varied at all in our sample (i.e., $\sum (x_i - \bar x)^2 = 0$). In that case we have no chance of detecting any influence of $x$ on $y$. And the more the independent variable has moved, the more any influence it may have on the dependent variable should stand out against the background 'noise', $u_i$.
4 Example of confidence interval for the slope parameter

Here is an example. Suppose we're interested in whether a positive linear relationship exists between $x_i$ and $y_i$. We've obtained $\hat\beta_1 = .90$ and se($\hat\beta_1$) = .12. The approximate 95 percent confidence interval for $\beta_1$ is then

$$.90 \pm 2(.12) = .90 \pm .24 = .66 \text{ to } 1.14$$

This tells us that we can state, with at least 95 percent confidence, that $\beta_1 > 0$, i.e. that there is a positive relationship. On the other hand, if we had obtained se($\hat\beta_1$) = .61, our interval would have been

$$.90 \pm 2(.61) = .90 \pm 1.22 = -.32 \text{ to } 2.12$$

In this case the interval straddles zero, and we cannot be confident (at the 95 percent level) that there exists a positive relationship.
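The arithmetic of both intervals can be checked in a few lines of Python, using the same numbers as the example:

```python
beta1_hat = 0.90

for se in (0.12, 0.61):
    lower, upper = beta1_hat - 2 * se, beta1_hat + 2 * se
    # The interval supports 'beta_1 > 0' only if its lower end lies above zero.
    print(f"se = {se}: ({lower:.2f}, {upper:.2f}) -> clearly positive: {lower > 0}")
```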