September 23, 2001 Version
Econ 232D

Causal Inference and Program Evaluation

READINGS

NOTE: Most of the readings can be obtained from JSTOR (www.jstor.org) or NBER (www.nber.org). The remaining papers are provided below, in PDF format.
 

Week 1 (9/25): Introduction: Potential Outcomes

Cox, D. R., (1992), “Causality: Some Statistical Aspects,” Journal of the Royal Statistical Society, Series A, 155, part 2, 291-301.

Holland, P., (1986), “Statistics and Causal Inference,” (with discussion), Journal of the American Statistical Association, 81, 945-970.

Rubin, D. (1974), “Estimating Causal Effects of Treatments in Randomized and Non-randomized Studies,” Journal of Educational Psychology, 66, 688-701.

Week 2 (10/2): Randomized Experiments, Randomization Inference

Neyman, J., (1923), “On the Application of Probability Theory to Agricultural Experiments. Essay on Principles. Section 9,” translated in Statistical Science, (with discussion), Vol 5, No 4, 465-480, 1990.

Fisher, R. A., (1935), The Design of Experiments, chapter 2, “The principles of experimentation, illustrated by a psycho-physical experiment.”

Lalonde, R. (1986), “Evaluating the Econometric Evaluations of Training Programs with Experimental Data,” American Economic Review, 76(4):604-620.

Heckman, J. (1992), “Randomization and Social Policy Evaluation,” in C. Manski and I. Garfinkel, eds., Evaluating Welfare and Training Programs, Harvard University Press.

Burtless, G. (1995), “The Case for Randomized Field Trials in Economic and Policy Research,” Journal of Economic Perspectives, 9(2):63-84.

Heckman, J. and J. Smith (1995), “Assessing the Case for Social Experiments,” Journal of Economic Perspectives, 9(2):85-110.

Heckman, J., R. Lalonde, and J. Smith (1999), “The Economics and Econometrics of Active Labor Market Programs,” Handbook of Labor Economics, Volume 3, Ashenfelter, O. and D. Card, eds., Amsterdam: Elsevier Science.

Week 3: (10/9): Observational Studies with Unconfounded Treatment Assignment

Rubin, D. B., (1977), “Assignment to a Treatment Group on the Basis of a Covariate,” Journal of Educational Statistics, 2, 1-26.

Barnow, B., G. Cain, and A. Goldberger (1980), “Issues in the Analysis of Selectivity Bias,” in E. Stromsdorfer and G. Farkas, eds., Evaluation Studies Review Annual, Vol. 5, pp. 42-59.

Card, D., and D. Sullivan (1988), “Measuring the Effect of Subsidized Training Programs on Movements In and Out of Employment,” Econometrica, 56(3):497-530.

Heckman, J., H. Ichimura, and P. Todd (1998), “Matching as an Econometric Evaluation Estimator,” Review of Economic Studies, 65, 261-294.

Abadie, A., and G. Imbens, (2001), “A Simple and Bias-corrected Matching Estimator for Average Treatment Effects,” unpublished manuscript.

Week 4 (10/16): The Role of the Propensity Score

Rosenbaum, P., and D. Rubin, (1983), “The Central Role of the Propensity Score in Observational Studies for Causal Effects,” Biometrika, 70(1):41-55.

Rosenbaum, P., and D. Rubin, (1984), “Reducing Bias in Observational Studies Using Subclassification on the Propensity Score,” Journal of the American Statistical Association, 79:516-524.

Dehejia, R., and S. Wahba, (1999), “Causal Effects in Non-experimental Studies: Re-evaluating the Evaluation of Training Programs,” Journal of the American Statistical Association, 94:1053-1062.

Rosenbaum, P., and D. Rubin, (1983), “Assessing Sensitivity to an Unobserved Binary Covariate in an Observational Study with Binary Outcome,” Journal of the Royal Statistical Society, Series B, 45, 212-218.

Hirano, K., G. Imbens and G. Ridder, “Efficient Estimation of Average Treatment Effects Using the Estimated Propensity Score.”

Week 5 (10/23): The Role of Testing

Heckman, J., and V. J. Hotz, (1989) “Alternative Methods for Evaluating the Impact of Training Programs,” (with discussion), Journal of the American Statistical Association, 84(408):862-880.

Rosenbaum, P., (1987), “The Role of a Second Control Group in an Observational Study,” Statistical Science, (with discussion), 2(3):292-316.

Hotz, V. J., G. Imbens, and J. Klerman (2001) “The Long-Term Gains from GAIN: A Re-Analysis of the Impacts of the California GAIN Program,” Unpublished manuscript, UCLA, September 2001.

Week 6 (10/30): Bounds

Manski, C. (1990), “Nonparametric Bounds on Treatment Effects,” American Economic Review Papers and Proceedings, 80, 319-23.

Manski, C., G. Sandefur, S. McLanahan, and D. Powers (1992), “Alternative Estimates of the Effect of Family Structure During Adolescence on High School Graduation,” Journal of the American Statistical Association, 87(417):25-37.

Manski, C. (1997), “The Mixing Problem in Programme Evaluation,” Review of Economic Studies, 64(4):537-53.

Hotz, V. J., C. Mullin, and S. Sanders, (1997), “Bounding Causal Effects Using Data from a Contaminated Natural Experiment: Analyzing the Effects of Teenage Childbearing,” Review of Economic Studies, 64:576-603.

Heckman, J., N. Clements and J. Smith (1997), “Making The Most Out of Social Experiments: The Intrinsic Uncertainty in Evidence From Randomized Trials With An Application To The National JTPA Experiment,” Review of Economic Studies, 64, pp. 487-535.

Week 7 (11/6): Instrumental Variables

Angrist, J., G. W. Imbens and D. Rubin, (1996), “Identification of Causal Effects Using Instrumental Variables,” Journal of the American Statistical Association, 91:444-455.

Angrist, J., (1990), “Lifetime Earnings and the Vietnam Era Draft Lottery: Evidence from Social Security Administrative Records,” American Economic Review, 80, 313-335.

Angrist, J., and A. Krueger, (1991), “Does Compulsory School Attendance Affect Schooling and Earnings?,” Quarterly Journal of Economics, 106:979-1014.

Berry, S. (1994), “Estimating Discrete-Choice Models of Product Differentiation,” RAND Journal of Economics, 25(2):242-262.

Week 8 (11/13): Simultaneous Equations Models

Tinbergen, J. (1930), “Determination and Interpretation of Supply Curves: An Example,” Zeitschrift für Nationalökonomie, reprinted in: The Foundations of Econometric Analysis, Hendry and Morgan (eds).

Angrist, J., K. Graddy and G. Imbens, (2000), “The Interpretation of Instrumental Variables Estimators in Simultaneous Equations Models with an Application to the Demand for Fish,” Review of Economic Studies, 67, 499-527.

Imbens, G., and W. Newey (2001), “Nonparametric Identification of Triangular Simultaneous Equation Models without Additivity.”

Week 9 (11/27): Difference in Differences Estimation

Card, D. and A. Krueger, (1994), “Minimum Wages and Employment: A Case Study of the Fast-Food Industry in New Jersey and Pennsylvania,” American Economic Review, 84:772-93.

Blundell, R. and T. MaCurdy (1999), “Labour Supply: A Review of Alternative Approaches,” Handbook of Labor Economics, Volume 3, Ashenfelter, O. and D. Card, eds., Amsterdam: Elsevier Science, 1608-15.

Heckman, J., R. Lalonde, and J. Smith (1999), “The Economics and Econometrics of Active Labor Market Programs,” Handbook of Labor Economics, Volume 3, Ashenfelter, O. and D. Card, eds., Amsterdam: Elsevier Science.

Angrist, J. and A. Krueger (1999), “Empirical Strategies in Labor Economics,” Handbook of Labor Economics, Volume 3, Ashenfelter, O. and D. Card, eds., Amsterdam: Elsevier Science.

Dynarski, S. (1999), “Does Aid Matter? Measuring the Effect of Student Aid on College Attendance and Completion,” NBER Working Paper No. W7422.

Week 10 (12/4): Regression Discontinuity

Van der Klaauw, W. (2000), “Estimating the Effect of Financial Aid Offers on College Enrollment: A Regression-Discontinuity Approach,” forthcoming in International Economic Review.

Hahn, J., P. Todd, and W. Van der Klaauw (2001), “Identification and Estimation of Treatment Effects with a Regression-Discontinuity Design,” Econometrica, 69(1):201-209.

Angrist, J., and V. Lavy (1999), “Using Maimonides' Rule to Estimate the Effect of Class Size on Scholastic Achievement,” Quarterly Journal of Economics, 114(2):533-575.

Black, S. (1999), “Do Better Schools Matter? Parental Valuation of Elementary Education,” Quarterly Journal of Economics, May 1999, 577-99.