
CHAPTER 1

SOLUTIONS TO PROBLEMS

1.1 (i) Ideally, we could randomly assign students to classes of different sizes. That is, each student is assigned a different class size without regard to any student characteristics such as ability and family background. For reasons we will see in Chapter 2, we would like substantial variation in class sizes (subject, of course, to ethical considerations and resource constraints).

(ii) A negative correlation means that larger class size is associated with lower performance. We might find a negative correlation because larger class size actually hurts performance. However, with observational data, there are other reasons we might find a negative relationship. For example, children from more affluent families might be more likely to attend schools with smaller class sizes, and affluent children generally score better on standardized tests. Another possibility is that, within a school, a principal might assign the better students to smaller classes. Or, some parents might insist their children are in the smaller classes, and these same parents tend to be more involved in their children’s education.

(iii) Given the potential for confounding factors – some of which are listed in (ii) – finding a negative correlation would not be strong evidence that smaller class sizes actually lead to better performance. Some way of controlling for the confounding factors is needed, and this is the subject of multiple regression analysis.

1.2 (i) Here is one way to pose the question: If two firms, say A and B, are identical in all respects except that firm A supplies job training one hour per worker more than firm B, by how much would firm A’s output differ from firm B’s?

(ii) Firms are likely to choose job training depending on the characteristics of workers. Some observed characteristics are years of schooling, years in the workforce, and experience in a particular job. Firms might even discriminate based on age, gender, or race. Perhaps firms choose to offer training to more or less able workers, where “ability” might be difficult to quantify but where a manager has some idea about the relative abilities of different employees. Moreover, different kinds of workers might be attracted to firms that offer more job training on average, and this might not be evident to employers.

(iii) The amount of capital and technology available to workers would also affect output. So, two firms with exactly the same kinds of employees would generally have different outputs if they use different amounts of capital or technology. The quality of managers would also have an effect.

(iv) No, unless the amount of training is randomly assigned. The many factors listed in parts (ii) and (iii) can contribute to finding a positive correlation between output and training even if job training does not improve worker productivity.

1.3 It does not make sense to pose the question in terms of causality. Economists would assume that students choose a mix of studying and working (and other activities, such as attending class, leisure, and sleeping) based on rational behavior, such as maximizing utility subject to the constraint that there are only 168 hours in a week. We can then use statistical methods to measure the association between studying and working, including regression analysis that we cover starting in Chapter 2. But we would not be claiming that one variable “causes” the other. They are both choice variables of the student.

CHAPTER 2
SOLUTIONS TO PROBLEMS

2.1 (i) Income, age, and family background (such as number of siblings) are just a few possibilities. It seems that each of these could be correlated with years of education. (Income and education are probably positively correlated; age and education may be negatively correlated because women in more recent cohorts have, on average, more education; and number of siblings and education are probably negatively correlated.)

(ii) Not if the factors we listed in part (i) are correlated with educ. Because we would like to hold these factors fixed, they are part of the error term. But if u is correlated with educ then E(u|educ) ≠ 0, and so SLR.4 fails.

2.2 In the equation y = β0 + β1x + u, add and subtract α0 from the right hand side to get y = (α0 + β0) + β1x + (u − α0). Call the new error e = u − α0, so that E(e) = 0. The new intercept is α0 + β0, but the slope is still β1.

2.3 (i) Let yi = GPAi, xi = ACTi, and n = 8. Then x̄ = 25.875, ȳ = 3.2125, Σ(xi – x̄)(yi – ȳ) = 5.8125, and Σ(xi – x̄)² = 56.875. From equation (2.9), we obtain the slope as β̂1 = 5.8125/56.875 ≈ .1022, rounded to four places after the decimal. From (2.17), β̂0 = ȳ – β̂1x̄ ≈ 3.2125 – (.1022)25.875 ≈ .5681. So we can write

ŷ = .5681 + .1022 ACT

n = 8.

The intercept does not have a useful interpretation because ACT is not close to zero for the population of interest. If ACT is 5 points higher, ŷ increases by .1022(5) = .511.

(ii) The fitted values and residuals — rounded to four decimal places — are given along with the observation number i and GPA in the following table:

|i |GPA |ŷi | ûi |
|1 |2.8 |2.7143 |.0857 |
|2 |3.4 |3.0209 |.3791 |
|3 |3.0 |3.2253 |–.2253 |
|4 |3.5 |3.3275 |.1725 |
|5 |3.6 |3.5319 |.0681 |
|6 |3.0 |3.1231 |–.1231 |
|7 |2.7 |3.1231 |–.4231 |
|8 |3.7 |3.6341 |.0659 |

You can verify that the residuals, as reported in the table, sum to −.0002, which is pretty close to zero given the inherent rounding error.

(iii) When ACT = 20, ŷ = .5681 + .1022(20) ≈ 2.61.

(iv) The sum of squared residuals, Σûi², is about .4347 (rounded to four decimal places), and the total sum of squares, Σ(yi – ȳ)², is about 1.0288. So the R-squared from the regression is

R² = 1 – SSR/SST ≈ 1 – (.4347/1.0288) ≈ .577.

Therefore, about 57.7% of the variation in GPA is explained by ACT in this small sample of students.
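
These calculations are easy to verify by computer. Below is a minimal Python sketch (the language choice is ours, not the text's) that reproduces the slope, intercept, residuals, and R-squared above. The ACT values are backed out from the fitted values in the table, e.g., (2.7143 − .5681)/.1022 = 21, so treat that list as an assumption recovered from the table rather than as given data.

```python
# OLS by hand for Problem 2.3, using equations (2.9) and (2.17).
act = [21, 24, 26, 27, 29, 25, 25, 30]          # x_i, backed out from the table
gpa = [2.8, 3.4, 3.0, 3.5, 3.6, 3.0, 2.7, 3.7]  # y_i, from the table
n = len(act)

xbar = sum(act) / n                              # 25.875
ybar = sum(gpa) / n                              # 3.2125
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(act, gpa))  # 5.8125
sxx = sum((x - xbar) ** 2 for x in act)          # 56.875

b1 = sxy / sxx                                   # slope, ~.1022
b0 = ybar - b1 * xbar                            # intercept, ~.5681

fitted = [b0 + b1 * x for x in act]
resid = [y - yh for y, yh in zip(gpa, fitted)]

ssr = sum(e ** 2 for e in resid)                 # ~.4347
sst = sum((y - ybar) ** 2 for y in gpa)          # ~1.0288
r2 = 1 - ssr / sst                               # ~.577

print(f"slope={b1:.4f}, intercept={b0:.4f}, R^2={r2:.3f}")
```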

2.4 (i) When cigs = 0, predicted birth weight is 119.77 ounces. When cigs = 20, the predicted birth weight is 109.49 ounces. This is about an 8.6% drop.

(ii) Not necessarily. There are many other factors that can affect birth weight, particularly overall health of the mother and quality of prenatal care. These could be correlated with cigarette smoking during pregnancy. Also, something such as caffeine consumption can affect birth weight, and might also be correlated with cigarette smoking.

(iii) If we want a predicted bwght of 125, then cigs = (125 – 119.77)/(–.514) ≈ –10.18, or about –10 cigarettes! This is nonsense, of course, and it shows what happens when we are trying to predict something as complicated as birth weight with only a single explanatory variable. The largest predicted birth weight is necessarily 119.77. Yet almost 700 of the births in the sample had a birth weight higher than 119.77.

(iv) 1,176 out of 1,388 women did not smoke while pregnant, or about 84.7%. Because we are using only cigs to explain birth weight, we have only one predicted birth weight at cigs = 0. The predicted birth weight is necessarily roughly in the middle of the observed birth weights at cigs = 0, and so we will underpredict high birth weights.
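
For what it is worth, the plug-in arithmetic in parts (i) and (iii) fits in a few lines of Python, taking the fitted line bwght-hat = 119.77 – .514 cigs implied by the numbers above:

```python
# Fitted line from Problem 2.4: bwght_hat = 119.77 - .514*cigs
b0, b1 = 119.77, -.514

def predict(cigs):
    return b0 + b1 * cigs

print(predict(0))    # 119.77 ounces
print(predict(20))   # 109.49 ounces, about an 8.6% drop

# Inverting the line for a target birth weight of 125 ounces:
cigs_needed = (125 - b0) / b1
print(cigs_needed)   # about -10.18 "cigarettes" -- nonsense, as noted above
```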

2.5 (i) The intercept implies that when inc = 0, cons is predicted to be negative $124.84. This, of course, cannot be true, and reflects the fact that this consumption function might be a poor predictor of consumption at very low income levels. On the other hand, on an annual basis, $124.84 is not so far from zero.

(ii) Just plug 30,000 into the equation: predicted cons = –124.84 + .853(30,000) = 25,465.16 dollars.

(iii) The MPC and the APC are shown in the following graph. Even though the intercept is negative, the smallest APC in the sample is positive. The graph starts at an annual income level of $1,000 (in 1970 dollars).

[graph of the MPC and APC plotted against inc omitted]
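
Since the graph itself did not survive, the same comparison is easy to compute directly. A minimal Python sketch using the fitted line from part (i): the MPC is the constant slope, .853, while the APC = cons-hat/inc = .853 – 124.84/inc is positive at inc = $1,000 and rises toward the MPC as income grows. The income grid is our choice, for illustration only.

```python
# MPC vs. APC for the fitted consumption function in Problem 2.5.
b0, b1 = -124.84, .853   # intercept and slope (the MPC)

for inc in [1_000, 5_000, 10_000, 20_000, 30_000]:  # annual income, 1970 dollars
    cons_hat = b0 + b1 * inc
    apc = cons_hat / inc          # = b1 + b0/inc, increasing in inc
    print(f"inc={inc:>6}: APC={apc:.3f}, MPC={b1}")
# At inc = 1,000 the APC is .853 - .12484 = .728, still positive.
```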

2.6 (i) Yes. If living closer to an incinerator depresses housing prices, then being farther away increases housing prices.

(ii) If the city chose to locate the incinerator in an area away from more expensive neighborhoods, then log(dist) is positively correlated with housing quality. This would violate SLR.4, and OLS estimation is biased.

(iii) Size of the house, number of bathrooms, size of the lot, age of the home, and quality of the neighborhood (including school quality), are just a handful of factors. As mentioned in part (ii), these could certainly be correlated with dist [and log(dist)].

2.7 (i) When we condition on inc in computing an expectation, √inc becomes a constant. So E(u|inc) = E(√inc · e|inc) = √inc · E(e|inc) = √inc · 0 = 0 because E(e|inc) = E(e) = 0.

(ii) Again, when we condition on inc in computing a variance, √inc becomes a constant. So Var(u|inc) = Var(√inc · e|inc) = (√inc)² Var(e|inc) = σe² inc because Var(e|inc) = σe².

(iii) Families with low incomes do not have much discretion about spending; typically, a low-income family must spend on food, clothing, housing, and other necessities. Higher income people have more discretion, and some might choose more consumption while others more saving. This discretion suggests wider variability in saving among higher income families.
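
A small simulation can illustrate part (ii). The sketch below assumes, purely for illustration, σe² = 1 and three income levels; the sample variance of u = √(inc)·e then tracks inc, just as the derivation says it must.

```python
import random

random.seed(1)

def sample_var_u(inc, n=100_000, sigma_e=1.0):
    """Sample variance of u = sqrt(inc)*e, with e ~ Normal(0, sigma_e^2)."""
    us = [(inc ** 0.5) * random.gauss(0.0, sigma_e) for _ in range(n)]
    m = sum(us) / n
    return sum((u - m) ** 2 for u in us) / (n - 1)

for inc in [10, 100, 400]:
    # Var(u|inc) = sigma_e^2 * inc, so the printed values track inc
    print(inc, round(sample_var_u(inc), 1))
```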

2.8 (i) From equation (2.66),

β̃1 = (Σxiyi) / (Σxi²).

Plugging in yi = β0 + β1xi + ui gives

β̃1 = [Σxi(β0 + β1xi + ui)] / (Σxi²).

After standard algebra, the numerator can be written as

β0Σxi + β1Σxi² + Σxiui.

Putting this over the denominator shows we can write β̃1 as

β̃1 = β0(Σxi / Σxi²) + β1 + (Σxiui / Σxi²).

Conditional on the xi, we have

E(β̃1) = β0(Σxi / Σxi²) + β1

because E(ui) = 0 for all i. Therefore, the bias in β̃1 is given by the first term in this equation. This bias is obviously zero when β0 = 0. It is also zero when Σxi = 0, which is the same as x̄ = 0. In the latter case, regression through the origin is identical to regression with an intercept.

(ii) From the last expression for β̃1 in part (i) we have, conditional on the xi,

Var(β̃1) = Var(Σxiui) / (Σxi²)² = [Σxi² Var(ui)] / (Σxi²)²

= σ²Σxi² / (Σxi²)² = σ² / Σxi².

(iii) From (2.57), Var(β̂1) = σ² / Σ(xi – x̄)². From the hint, Σ(xi – x̄)² ≤ Σxi², and so Var(β̃1) ≤ Var(β̂1). A more direct way to see this is to write Σ(xi – x̄)² = Σxi² – n(x̄)², which is less than Σxi² unless x̄ = 0.

(iv) For a given sample size, the bias in β̃1 increases as x̄ increases (holding the sum of the xi² fixed). But as x̄ increases, the variance of β̂1 increases relative to Var(β̃1). The bias in β̃1 is also small when β0 is small. Therefore, whether we prefer β̂1 or β̃1 on a mean squared error basis depends on the sizes of β0, x̄, and n (in addition to the size of Σxi²).
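
The bias/variance tradeoff in parts (i)-(iv) is easy to see in a simulation. The sketch below assumes β0 = 1, β1 = .5, σ² = 1, and a fixed set of positive xi (so x̄ ≠ 0 and the through-origin estimator is biased); all of these values are chosen only for illustration.

```python
import random

random.seed(42)
b0_true, b1_true = 1.0, 0.5            # assumed population parameters
x = [float(v) for v in range(1, 21)]   # fixed x_i > 0, so xbar != 0
sxx_raw = sum(v * v for v in x)        # sum of x_i^2
xbar = sum(x) / len(x)
sxx_dem = sum((v - xbar) ** 2 for v in x)

slopes_origin, slopes_ols = [], []
for _ in range(10_000):
    y = [b0_true + b1_true * v + random.gauss(0, 1) for v in x]
    # through the origin: b1_tilde = sum(x*y)/sum(x^2), equation (2.66)
    slopes_origin.append(sum(v * w for v, w in zip(x, y)) / sxx_raw)
    # with an intercept: b1_hat from equation (2.9)
    ybar = sum(y) / len(y)
    slopes_ols.append(sum((v - xbar) * (w - ybar) for v, w in zip(x, y)) / sxx_dem)

mean = lambda s: sum(s) / len(s)
var = lambda s: sum((v - mean(s)) ** 2 for v in s) / (len(s) - 1)
print("theoretical bias:", b0_true * sum(x) / sxx_raw)   # b0*sum(x)/sum(x^2)
print("b1_tilde: mean", round(mean(slopes_origin), 3),
      "var", round(var(slopes_origin), 5))
print("b1_hat:   mean", round(mean(slopes_ols), 3),
      "var", round(var(slopes_ols), 5))
# b1_tilde is biased upward but Var(b1_tilde) = sigma^2/sum(x^2) is smaller
# than Var(b1_hat) = sigma^2/sum((x - xbar)^2), as shown in parts (ii)-(iii).
```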

2.9 (i) We follow the hint, noting that the sample average of the c1yi is c1ȳ (the sample average of c1yi is c1 times the sample average of the yi) and the sample average of the c2xi is c2x̄. When we regress c1yi on c2xi (including an intercept) we use equation (2.19) to obtain the slope:

β̃1 = [Σ(c2xi – c2x̄)(c1yi – c1ȳ)] / [Σ(c2xi – c2x̄)²] = [c1c2 Σ(xi – x̄)(yi – ȳ)] / [c2² Σ(xi – x̄)²] = (c1/c2)β̂1.

From (2.17), we obtain the intercept as β̃0 = (c1ȳ) – β̃1(c2x̄) = (c1ȳ) – [(c1/c2)β̂1](c2x̄) = c1(ȳ – β̂1x̄) = c1β̂0, because the intercept from regressing yi on xi is (ȳ – β̂1x̄).

(ii) We use the same approach from part (i) along with the fact that the sample average of the (c1 + yi) is c1 + ȳ and the sample average of the (c2 + xi) is c2 + x̄. Therefore, (c1 + yi) – (c1 + ȳ) = yi – ȳ and (c2 + xi) – (c2 + x̄) = xi – x̄. So c1 and c2 entirely drop out of the slope formula for the regression of (c1 + yi) on (c2 + xi), and β̃1 = β̂1. The intercept is β̃0 = (c1 + ȳ) – β̃1(c2 + x̄) = (c1 + ȳ) – β̂1(c2 + x̄) = (ȳ – β̂1x̄) + c1 – c2β̂1 = β̂0 + c1 – c2β̂1, which is what we wanted to show.

(iii) We can simply apply part (ii) because log(c1yi) = log(c1) + log(yi). In other words, replace c1 with log(c1), yi with log(yi), and set c2 = 0.

(iv) Again, we can apply part (ii) with c1 = 0 and replacing c2 with log(c2) and xi with log(xi). If β̂0 and β̂1 are the original intercept and slope, then the new slope is still β̂1 and the new intercept is β̂0 – log(c2)β̂1.
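
All of these scaling and shifting facts can be confirmed numerically. A sketch with made-up data and arbitrary constants c1 = 10 and c2 = 4 (both assumptions, chosen only for illustration):

```python
def ols(x, y):
    """Simple OLS intercept and slope via equations (2.19) and (2.17)."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b1 = sum((a - xbar) * (b - ybar) for a, b in zip(x, y)) / \
         sum((a - xbar) ** 2 for a in x)
    return ybar - b1 * xbar, b1

# made-up data, for illustration
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
c1, c2 = 10.0, 4.0

b0, b1 = ols(x, y)

# part (i): multiplying y by c1 and x by c2
b0s, b1s = ols([c2 * v for v in x], [c1 * v for v in y])
print(b1s, (c1 / c2) * b1)   # equal: slope rescales by c1/c2
print(b0s, c1 * b0)          # equal: intercept rescales by c1

# part (ii): adding constants shifts only the intercept
b0a, b1a = ols([c2 + v for v in x], [c1 + v for v in y])
print(b1a, b1)               # slope unchanged
print(b0a, b0 + c1 - c2 * b1)
```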

2.10 (i) This derivation is essentially done in equation (2.52), once (1/SSTx) is brought inside the summation (which is valid because SSTx does not depend on i). Then, just define wi = di/SSTx.

(ii) Because Cov(β̂1, ū) = E[(β̂1 – β1)ū], we show that the latter is zero. But, from part (i), E[(β̂1 – β1)ū] = E[(Σwiui)ū] = Σwi E(uiū). Because the ui are pairwise uncorrelated (they are independent), E(uiū) = E(ui²)/n = σ²/n (because E(uiuh) = 0 for i ≠ h). Therefore, Σwi E(uiū) = Σwi(σ²/n) = (σ²/n)Σwi = 0, since Σwi = 0.

(iii) The formula for the OLS intercept is β̂0 = ȳ – β̂1x̄ and, plugging in ȳ = β0 + β1x̄ + ū gives β̂0 = (β0 + β1x̄ + ū) – β̂1x̄ = β0 – (β̂1 – β1)x̄ + ū.

(iv) Because β̂1 and ū are uncorrelated,

Var(β̂0) = Var(ū) + Var(β̂1)x̄² = σ²/n + (σ²/SSTx)x̄²,

which is what we wanted to show.

(v) Using the hint and substitution gives Var(β̂0) = σ²[(SSTx/n) + x̄²]/SSTx = σ²[n⁻¹Σxi² – x̄² + x̄²]/SSTx = σ²(n⁻¹Σxi²)/SSTx.

2.11 (i) We would want to randomly assign the number of hours in the preparation course so that hours is independent of other factors that affect performance on the SAT. Then, we would collect information on SAT score for each student in the experiment, yielding a data set {(sati, hoursi): i = 1, …, n}, where n is the number of students we can afford to have in the study. From equation (2.7), we should try to get as much variation in the hoursi as is feasible.

(ii) Here are three factors: innate ability, family income, and general health on the day of the exam. If we think students with higher native intelligence think they do not need to prepare for the SAT, then ability and hours will be negatively correlated. Family income would probably be positively correlated with hours, because higher income families can more easily afford preparation courses. Ruling out chronic health problems, health on the day of the exam should be roughly uncorrelated with hours spent in a preparation course.

(iii) If preparation courses are effective, β1 should be positive: other factors equal, an increase in hours should increase sat.

(iv) The intercept, β0, has a useful interpretation in this example: because E(u) = 0, β0 is the average SAT score for students in the population with hours = 0.

CHAPTER 3
SOLUTIONS TO PROBLEMS

3.1 (i) hsperc is defined so that the smaller it is, the higher the student's standing in high school. Everything else equal, the worse the student's standing in high school, the lower is his/her expected college GPA.

(ii) Just plug these values into the equation:

colgpa-hat = 1.392 − .0135(20) + .00148(1050) = 2.676.

(iii) The difference between A and B is simply 140 times the coefficient on sat, because hsperc is the same for both students. So A is predicted to have a score .00148(140) ≈ .207 higher.

(iv) With hsperc fixed, Δcolgpa-hat = .00148 Δsat. Now, we want to find Δsat such that Δcolgpa-hat = .5, so .5 = .00148(Δsat) or Δsat = .5/(.00148) ≈ 338. Perhaps not surprisingly, a large ceteris paribus difference in SAT score – almost two and one-half standard deviations – is needed to obtain a predicted difference in college GPA of half a point.
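
The plug-in calculations in parts (ii)-(iv) fit in one short Python sketch, using the fitted equation colgpa-hat = 1.392 − .0135 hsperc + .00148 sat from the problem:

```python
# Plug-in calculations for Problem 3.1.
def colgpa(hsperc, sat):
    return 1.392 - .0135 * hsperc + .00148 * sat

print(colgpa(20, 1050))   # part (ii): 2.676
print(.00148 * 140)       # part (iii): ~.207 higher predicted GPA for student A
print(.5 / .00148)        # part (iv): ~338 SAT points for half a GPA point
```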

3.2 (i) Yes. Because of budget constraints, it makes sense that, the more siblings there are in a family, the less education any one child in the family has. To find the increase in the number of siblings that reduces predicted education by one year, we solve 1 = .094(Δsibs), so Δsibs = 1/.094 ≈ 10.6.

(ii) Holding sibs and feduc fixed, one more year of mother’s education implies .131 years more of predicted education. So if a mother has four more years of education, her son is predicted to have about a half a year (.524) more years of education.

(iii) Since the number of siblings is the same, but meduc and feduc are both different, the coefficients on meduc and feduc both need to be accounted for. The predicted difference in education between B and A is .131(4) + .210(4) = 1.364.

3.3 (i) If adults trade off sleep for work, more work implies less sleep (other things equal), so β1 < 0.

(ii) The signs of β2 and β3 are not obvious, at least to me. One could argue that more educated people like to get more out of life, and so, other things equal, they sleep less (β2 < 0). The relationship between sleeping and age is more complicated than this model suggests, and economists are not in the best position to judge such things.

(iii) Since totwrk is in minutes, we must convert five hours into minutes: Δtotwrk = 5(60) = 300. Then sleep is predicted to fall by .148(300) = 44.4 minutes. For a week, 45 minutes less sleep is not an overwhelming change.

(iv) More education implies less predicted time sleeping, but the effect is quite small. If we assume the difference between college and high school is four years, the college graduate sleeps about 45 minutes less per week, other things equal.

(v) Not surprisingly, the three explanatory variables explain only about 11.3% of the variation in sleep. One important factor in the error term is general health. Another is marital status, and whether the person has children. Health (however we measure that), marital status, and number and ages of children would generally be correlated with totwrk. (For example, less healthy people would tend to work less.)

3.4 (i) A larger rank for a law school means that the school has less prestige; this lowers starting salaries. For example, a rank of 100 means there are 99 schools thought to be better.

(ii) β1 > 0, β2 > 0. Both LSAT and GPA are measures of the quality of the entering class. No matter where better students attend law school, we expect them to earn more, on average. β3, β4 > 0. The number of volumes in the law library and the tuition cost are both measures of the school quality. (Cost is less obvious than library volumes, but should reflect quality of the faculty, physical plant, and so on.)

(iii) This is just the coefficient on GPA, multiplied by 100: 24.8%.

(iv) This is an elasticity: a one percent increase in library volumes implies a .095% increase in predicted median starting salary, other things equal.

(v) It is definitely better to attend a law school with a lower rank. If law school A has a ranking 20 less than law school B, the predicted difference in starting salary is 100(.0033)(20) = 6.6% higher for law school A.

3.5 (i) No. By definition, study + sleep + work + leisure = 168. Therefore, if we change study, we must change at least one of the other categories so that the sum is still 168.

(ii) From part (i), we can write, say, study as a perfect linear function of the other independent variables: study = 168 − sleep − work − leisure. This holds for every observation, so MLR.3 is violated.

(iii) Simply drop one of the independent variables, say leisure:

GPA = β0 + β1study + β2sleep + β3work + u.

Now, for example, β1 is interpreted as the change in GPA when study increases by one hour, where sleep, work, and u are all held fixed. If we are holding sleep and work fixed but increasing study by one hour, then we must be reducing leisure by one hour. The other slope parameters have a similar interpretation.

3.6 Conditioning on the outcomes of the explanatory variables, we have E(θ̂1) = E(β̂1 + β̂2) = E(β̂1) + E(β̂2) = β1 + β2 = θ1.

3.7 Only (ii), omitting an important variable, can cause bias, and this is true only when the omitted variable is correlated with the included explanatory variables. The homoskedasticity assumption, MLR.5, played no role in showing that the OLS estimators are unbiased. (Homoskedasticity was used to obtain the usual variance formulas for the β̂j.) Further, the degree of collinearity between the explanatory variables in the sample, even if it is reflected in a correlation as high as .95, does not affect the Gauss-Markov assumptions. Only if there is a perfect linear relationship among two or more explanatory variables is MLR.3 violated.

3.8 We can use Table 3.2. By definition, β2 > 0, and by assumption, Corr(x1,x2) < 0. Therefore, there is a negative bias in β̃1: E(β̃1) < β1. This means that, on average across different random samples, the simple regression estimator underestimates the effect of the training program. It is even possible that E(β̃1) is negative even though β1 > 0.

3.9 (i) β1 < 0 because more pollution can be expected to lower housing values; note that β1 is the elasticity of price with respect to nox. β2 is probably positive because rooms roughly measures the size of a house. (However, it does not allow us to distinguish homes where each room is large from homes where each room is small.)

(ii) If we assume that rooms increases with quality of the home, then log(nox) and rooms are negatively correlated when poorer neighborhoods have more pollution, something that is often true. We can use Table 3.2 to determine the direction of the bias. If β2 > 0 and Corr(x1,x2) < 0, the simple regression estimator β̃1 has a downward bias. But because β1 < 0, this means that the simple regression, on average, overstates the importance of pollution. [E(β̃1) is more negative than β1.]

(iii) This is what we expect from the typical sample based on our analysis in part (ii). The simple regression estimate, −1.043, is more negative (larger in magnitude) than the multiple regression estimate, −.718. As those estimates are only for one sample, we can never know which is closer to β1. But if this is a "typical" sample, β1 is closer to −.718.
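
A simulation in the spirit of parts (ii) and (iii): the sketch below assumes β1 = −.7, β2 = .3, and a data-generating process in which rooms falls with log(nox) (all made-up numbers, for illustration only). The simple regression slope then averages well below β1, i.e., it overstates the pollution effect, exactly as the Table 3.2 reasoning predicts.

```python
import random

random.seed(0)
b1, b2 = -0.7, 0.3   # assumed partial effects of log(nox) and rooms

def simple_slope(x, y):
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    return sum((a - xbar) * (b - ybar) for a, b in zip(x, y)) / \
           sum((a - xbar) ** 2 for a in x)

est = []
for _ in range(2_000):
    lnox = [random.gauss(1.5, 0.3) for _ in range(500)]
    # rooms negatively correlated with log(nox): poorer areas, more pollution
    rooms = [6 - 2.0 * v + random.gauss(0, 0.5) for v in lnox]
    lprice = [10 + b1 * a + b2 * r + random.gauss(0, 0.2)
              for a, r in zip(lnox, rooms)]
    est.append(simple_slope(lnox, lprice))  # rooms omitted

# averages near b1 + b2*(-2.0) = -1.3, more negative than the true -0.7
print(sum(est) / len(est))
```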

3.10 (i) Because x1 is highly correlated with x2 and x3, and these latter variables have large partial effects on y, the simple and multiple regression coefficients on x1 can differ by large amounts. We have not done this case explicitly, but given equation (3.46) and the discussion with a single omitted variable, the intuition is pretty straightforward.

(ii) Here we would expect β̃1 and β̂1 to be similar (subject, of course, to what we mean by "almost uncorrelated"). The amount of correlation between x2 and x3 does not directly affect the multiple regression estimate on x1 if x1 is essentially uncorrelated with x2 and x3.

(iii) In this case we are (unnecessarily) introducing multicollinearity into the regression: x2 and x3 have small partial effects on y, and yet x2 and x3 are highly correlated with x1. Adding x2 and x3 likely increases the standard error of the coefficient on x1 substantially, so se(β̂1) is likely to be much larger than se(β̃1).

(iv) In this case, adding x2 and x3 will decrease the residual variance without causing much collinearity (because x1 is almost uncorrelated with x2 and x3), so we should see se(β̂1) smaller than se(β̃1). The amount of correlation between x2 and x3 does not directly affect se(β̂1).

3.11 From equation (3.22) we have

β̃1 = (Σ r̂i1 yi) / (Σ r̂i1²),

where the r̂i1 are defined in the problem. As usual, we must plug in the true model for yi:

β̃1 = [Σ r̂i1(β0 + β1xi1 + β2xi2 + β3xi3 + ui)] / (Σ r̂i1²).

The numerator of this expression simplifies because Σ r̂i1 = 0, Σ r̂i1 xi2 = 0, and Σ r̂i1 xi1 = Σ r̂i1². These all follow from the fact that the r̂i1 are the residuals from the regression of xi1 on xi2: the r̂i1 have zero sample average and are uncorrelated in sample with xi2. So the numerator of β̃1 can be expressed as

β1 Σ r̂i1² + β3 Σ r̂i1 xi3 + Σ r̂i1 ui.

Putting these back over the denominator gives

β̃1 = β1 + β3 (Σ r̂i1 xi3 / Σ r̂i1²) + (Σ r̂i1 ui / Σ r̂i1²).

Conditional on all sample values on x1, x2, and x3, only the last term is random due to its dependence on ui. But E(ui) = 0, and so

E(β̃1) = β1 + β3 (Σ r̂i1 xi3 / Σ r̂i1²),

which is what we wanted to show. Notice that the term multiplying β3 is the regression coefficient from the simple regression of xi3 on r̂i1.
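
The three facts used to simplify the numerator can be checked directly: with made-up data, the residuals r̂i1 from regressing x1 on x2 have zero sum, zero sample covariance with x2, and Σ r̂i1 xi1 = Σ r̂i1². A minimal Python sketch:

```python
import random

random.seed(7)
n = 1_000
x2 = [random.gauss(0, 1) for _ in range(n)]
x1 = [0.8 * v + random.gauss(0, 1) for v in x2]   # x1 correlated with x2

def ols(x, y):
    m = len(x)
    xbar, ybar = sum(x) / m, sum(y) / m
    b1 = sum((a - xbar) * (b - ybar) for a, b in zip(x, y)) / \
         sum((a - xbar) ** 2 for a in x)
    return ybar - b1 * xbar, b1

# residuals from regressing x1 on x2
a0, a1 = ols(x2, x1)
r1 = [v1 - (a0 + a1 * v2) for v1, v2 in zip(x1, x2)]

print(round(sum(r1), 10))                               # ~0: zero sample average
print(round(sum(r * v for r, v in zip(r1, x2)), 10))    # ~0: uncorrelated with x2
print(round(sum(r * v for r, v in zip(r1, x1)) -
            sum(r * r for r in r1), 10))                # ~0: sum(r*x1) = sum(r^2)
```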

3.12 (i) The shares, by definition, add to one. If we do not omit one of the shares then the equation would suffer from perfect multicollinearity. The parameters would not have a ceteris paribus interpretation, as it is impossible to change one share while holding all of the other shares fixed.

(ii) Because each share is a proportion (and can be at most one, when all other shares are zero), it makes little sense to increase sharep by one unit. If sharep increases by .01 – which is equivalent to a one percentage point increase in the share of property taxes in total revenue – holding shareI, shareS, and the other factors fixed, then growth increases by β1(.01). With the other shares fixed, the excluded share, shareF, must fall by .01 when sharep increases by .01.

3.13 (i) For notational simplicity, define szx = Σ(zi – z̄)xi; this is not quite the sample covariance between z and x because we do not divide by n – 1, but we are only using it to simplify notation. Then we can write β̃1 as

β̃1 = [Σ(zi – z̄)yi] / szx.

This is clearly a linear function of the yi: take the weights to be wi = (zi – z̄)/szx. To show unbiasedness, as usual we plug yi = β0 + β1xi + ui into this equation, and simplify:

β̃1 = [Σ(zi – z̄)(β0 + β1xi + ui)] / szx = [β0 Σ(zi – z̄) + β1 szx + Σ(zi – z̄)ui] / szx = β1 + [Σ(zi – z̄)ui] / szx,

where we use the fact that Σ(zi – z̄) = 0 always. Now szx is a function of the zi and xi and the expected value of each ui is zero conditional on all zi and xi in the sample. Therefore, conditional on these values,

E(β̃1) = β1 + [Σ(zi – z̄)E(ui)] / szx = β1

because E(ui) = 0 for all i.

(ii) From the fourth equation in part (i) we have (again conditional on the zi and xi in the sample),

Var(β̃1) = Var[Σ(zi – z̄)ui] / szx² = [Σ(zi – z̄)² Var(ui)] / szx² = σ² Σ(zi – z̄)² / szx²

because of the homoskedasticity assumption [Var(ui) = σ² for all i]. Given the definition of szx, this is what we wanted to show.

(iii) We know that Var(β̂1) = σ² / Σ(xi – x̄)². Now we can rearrange the inequality in the hint, drop x̄ from the sample covariance, and cancel n – 1 everywhere, to get [Σ(zi – z̄)²] / szx² ≥ 1 / [Σ(xi – x̄)²]. When we multiply through by σ² we get Var(β̃1) ≥ Var(β̂1), which is what we wanted to show.
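
A simulation sketch of parts (i)-(iii), under assumed values β0 = 1, β1 = 2, homoskedastic errors, and a z that is only noisily related to x (all chosen for illustration): both estimators center on β1, but the z-based estimator has the larger variance, as part (iii) shows it must.

```python
import random

random.seed(3)
b0, b1 = 1.0, 2.0      # assumed population parameters
n, reps = 200, 5_000

tilde, hat = [], []
for _ in range(reps):
    x = [random.gauss(0, 1) for _ in range(n)]
    z = [v + random.gauss(0, 2) for v in x]   # z correlated with x, but noisily
    y = [b0 + b1 * a + random.gauss(0, 1) for a in x]

    xbar = sum(x) / n; ybar = sum(y) / n; zbar = sum(z) / n
    szx = sum((c - zbar) * a for c, a in zip(z, x))
    tilde.append(sum((c - zbar) * b for c, b in zip(z, y)) / szx)
    hat.append(sum((a - xbar) * (b - ybar) for a, b in zip(x, y)) /
               sum((a - xbar) ** 2 for a in x))

mean = lambda s: sum(s) / len(s)
var = lambda s: sum((v - mean(s)) ** 2 for v in s) / (len(s) - 1)
print("b1_tilde: mean", round(mean(tilde), 3), "var", round(var(tilde), 5))
print("b1_hat:   mean", round(mean(hat), 3), "var", round(var(hat), 5))
# both means are near 2.0; Var(b1_tilde) >= Var(b1_hat), as in part (iii)
```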

CHAPTER 4
SOLUTIONS TO PROBLEMS

4.1 (i) and (iii) generally cause the t statistics not to have a t distribution under H0. Homoskedasticity is one of the CLM assumptions. An important omitted variable violates Assumption MLR.4, the zero conditional mean assumption. The CLM assumptions contain no mention of the sample correlations among independent variables, except to rule out the case where the correlation is one.

4.2 (i) H0: β3 = 0. H1: β3 > 0.

(ii) The proportionate effect on predicted salary is .00024(50) = .012. To obtain the percentage effect, we multiply this by 100: 1.2%. Therefore, a 50 point ceteris paribus increase in ros is predicted to increase salary by about 1.2%.
