Abstract
Statistical causal inference from observational studies often requires adjustment for a possibly multi-dimensional variable, where dimension reduction is crucial. The propensity score, first introduced by Rosenbaum and Rubin, is a popular approach to such reduction. We address causal inference within Dawid’s decision-theoretic framework, where it is essential to pay attention to sufficient covariates and their properties. We examine the role of a propensity variable in a normal linear model. We investigate both population-based and sample-based linear regressions, with adjustments for a multivariate covariate and for a propensity variable. In addition, we study the augmented inverse probability weighted estimator, involving a combination of a response model and a propensity model. In a linear regression with homoscedasticity, a propensity variable is proved to provide the same estimated causal effect as multivariate adjustment. An estimated propensity variable may, but need not, yield better precision than the true propensity variable. The augmented inverse probability weighted estimator is doubly robust and can improve precision if the propensity model is correctly specified.
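In its standard form (see Bang and Robins 2005, in the references below), the AIPW estimator combines a fitted propensity model $\hat{\pi}(x) = \hat{P}(T = 1 \mid X = x)$ with fitted response models $\hat{m}_t(x)$ for $E(Y \mid T = t, X = x)$:

$$\hat{\delta}_{\mathrm{AIPW}} = \frac{1}{n}\sum_{i=1}^{n}\left\{ \hat{m}_1(X_i) - \hat{m}_0(X_i) + \frac{T_i\,\bigl(Y_i - \hat{m}_1(X_i)\bigr)}{\hat{\pi}(X_i)} - \frac{(1 - T_i)\,\bigl(Y_i - \hat{m}_0(X_i)\bigr)}{1 - \hat{\pi}(X_i)} \right\},$$

which is consistent for the average causal effect if either the propensity model or the response model is correctly specified (double robustness). A minimal R sketch of this estimator is given at the end of the Appendix.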
Notes
1. For convenience, the values of the regime indicator F_T are presented as subscripts.
2. The ⪯ symbol is read as 'is a function of'.
3. The hollow arrowhead, pointing from X to V, emphasises that V is a function of X.
4. Rosenbaum and Rubin do not define the balancing score and the PS explicitly for observational studies, although they do aim to apply the PS approach in such studies.
5. In a causal system, the finite set of individuals in a study is called the 'population', which can be regarded as a sample from a larger 'superpopulation' of interest.
References
Bang, H., Robins, J.M.: Doubly robust estimation in missing data and causal inference models. Biometrics 61, 962–972 (2005)
Berzuini, G.: Causal inference methods for criminal justice data, and an application to the study of the criminogenic effect of custodial sanctions. MSc Thesis in Applied Statistics, Birkbeck College, University of London (2013)
Carpenter, J.R., Kenward, M.G., Vansteelandt, S.: A comparison of multiple imputation and doubly robust estimation for analyses with missing data. J. R. Stat. Soc. Ser. A 169, 571–584 (2006)
Dawid, A.P.: Conditional independence in statistical theory (with discussion). J. R. Stat. Soc. Ser. B 41, 1–31 (1979)
Dawid, A.P.: Conditional independence for statistical operations. Ann. Stat. 8, 598–617 (1980)
Dawid, A.P.: Causal inference without counterfactuals. J. Am. Stat. Assoc. 95, 407–424 (2000)
Dawid, A.P.: Influence diagrams for causal modelling and inference. Int. Stat. Rev. 70, 161–189 (2002)
Fisher, R.A.: Theory of statistical estimation. Proc. Camb. Philos. Soc. 22, 700–725 (1925)
Guo, H., Dawid, A.P.: Sufficient covariates and linear propensity analysis. In: Teh, Y.W., Titterington, D.M. (eds.) Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS), Chia Laguna, Sardinia, Italy, 13–15 May 2010. Journal of Machine Learning Research Workshop and Conference Proceedings, vol. 9, pp. 281–288 (2010)
Hahn, J.: On the role of the propensity score in efficient semiparametric estimation of average treatment effects. Econometrica 66, 315–331 (1998)
Hirano, K., Imbens, G.W., Ridder, G.: Efficient estimation of average treatment effects using the estimated propensity score. Econometrica 71, 1161–1189 (2003)
Horvitz, D.G., Thompson, D.J.: A generalization of sampling without replacement from a finite universe. J. Am. Stat. Assoc. 47, 663–685 (1952)
Imbens, G.W., Lemieux, T.: Regression discontinuity designs: a guide to practice. J. Econom. 142, 615–635 (2008)
Kang, J.D.Y., Schafer, J.L.: Demystifying double robustness: a comparison of alternative strategies for estimating a population mean from incomplete data. Stat. Sci. 22, 523–539 (2007)
Mardia, K.V., Kent, J.T., Bibby, J.M.: Multivariate Analysis. Academic, New York (1979)
Pearl, J.: Causal diagrams for empirical research (with discussion). Biometrika 82, 669–710 (1995)
Pearl, J.: Causality: Models, Reasoning, and Inference. Cambridge University Press, Cambridge (2000)
Robins, J.M., Mark, S.D., Newey, W.K.: Estimating exposure effects by modelling the expectation of exposure conditional on confounders. Biometrics 48, 479–495 (1992)
Rosenbaum, P.R., Rubin, D.B.: The central role of the propensity score in observational studies for causal effects. Biometrika 70, 41–55 (1983)
Rosenbaum, P.R., Rubin, D.B.: Reducing bias in observational studies using subclassification on the propensity score. J. Am. Stat. Assoc. 79, 516–524 (1984)
Rubin, D.B.: Estimating causal effects of treatments in randomized and nonrandomized studies. J. Educ. Psychol. 66, 688–701 (1974)
Rubin, D.B.: Assignment to treatment group on the basis of a covariate. J. Educ. Stat. 2, 1–26 (1977)
Rubin, D.B.: Bayesian inference for causal effects: the role of randomization. Ann. Stat. 6, 34–58 (1978)
Rubin, D.B.: Matched Sampling for Causal Effects. Cambridge University Press, Cambridge (2006)
Rubin, D.B., Thomas, N.: Characterizing the effect of matching using linear propensity score methods with normal distributions. Biometrika 79, 797–809 (1992)
Rubin, D.B., van der Laan, M.J.: Covariate adjustment for the intention-to-treat parameter with empirical efficiency maximization. U.C. Berkeley Division of Biostatistics Working Paper 229 (2008)
Sekhon, J.: Multivariate and propensity score matching software with automated balance optimization: the matching package for R. J. Stat. Softw. 42, 1–52 (2011)
Senn, S., Graf, E., Caputo, A.: Stratification for the propensity score compared with linear regression techniques to assess the effect of treatment or exposure. Stat. Med. 26, 5529–5544 (2007)
Tan, Z.: Comment: Understanding OR, PS and DR. Stat. Sci. 22, 560–568 (2007)
Winkelmayer, W.C., Kurth, T.: Propensity scores: help or hype? Nephrol. Dial. Transplant. 19, 1671–1673 (2004)
Appendix: R Code of Simulations and Data Analysis
################################################################
## Figure 5: Linear regression (homoscedasticity)
## --------------------------------------------------------------
## 1. Y on X;
## 2. Y on population linear discriminant / propensity variable LD;
## 3. Y on sample linear discriminant / propensity variable LD*;
## 4. Y on population linear predictor LP.
################################################################
## set parameters
library(MASS)                    # for mvrnorm() and lda()
p <- 2                           # dimension of the covariate X
delta <- 0.5                     # true average causal effect (ACE)
phi <- 1                         # error variance of Y
n <- 20                          # sample size per dataset
alpha <- matrix(c(1,0), nrow=1)  # effect of treatment on X
sigma <- diag(1, nrow=p)         # covariance of X (homoscedastic)
b <- matrix(c(0,1), nrow=p)      # effect of X on Y
## create a function to compute ACE from four linear regressions
ps <- function(r) {
  # simulate T, X and Y from the specified linear normal model
  set.seed(r)
  t <- rbinom(n, 1, 0.5)              # randomised binary treatment
  ex <- mvrnorm(n, mu=rep(0, p), Sigma=sigma)
  x <- t %*% alpha + ex               # covariate shifted by treatment
  ey <- rnorm(n, mean=0, sd=sqrt(phi))
  y <- t*delta + x %*% b + ey         # response with true ACE delta
  # population linear discriminant, population linear predictor,
  # and sample linear discriminant
  ld.true <- x %*% solve(sigma) %*% t(alpha)
  pred <- x %*% b
  d1 <- data.frame(x, t)
  ld.coef <- coef(lda(t ~ ., d1))     # sample discriminant coefficients
  ld <- x %*% ld.coef
  # extract the estimated average causal effect (ACE), i.e. the
  # coefficient of t, from each of the four linear regressions
  dhat.pred <- coef(summary(lm(y ~ t + pred)))[2]
  dhat.x <- coef(summary(lm(y ~ t + x)))[2]
  dhat.ld <- coef(summary(lm(y ~ t + ld)))[2]
  dhat.ld.true <- coef(summary(lm(y ~ t + ld.true)))[2]
  c(dhat.x, dhat.ld, dhat.ld.true, dhat.pred)
}
## estimate the ACE from 200 simulated datasets, then compute the
## mean, standard deviation and mean squared error of each estimator
g <- t(sapply(31:230, ps))
d.mean <- round(colMeans(g), 4)
d.sd <- round(apply(g, 2, sd), 4)
mse <- round(d.sd^2 + (d.mean - delta)^2, 4)
## generate Figure 5
par(mfcol=c(2,2), oma=c(1.5,0,1.5,0), las=1)
main=c("M0: Y on (T, X=(X1, X2)’)", "M3: Y on (T, LD*)",
"M1: Y on (T, LD=X1)", "M2: Y on (T, LP=X2)")
for (i in 1:4){
hist(g[,i], br=seq(-2.5, 2.5, 0.5), xlim=c(-2.5, 2.5), ylim=c(0,80),
main=main[i], col.lab="blue", xlab="", ylab="",col="magenta")
legend(-2.5,85, c(paste("mean = ",d.mean[i]), paste("sd = ",d.sd[i]),
paste("mse = ",mse[i])), cex=0.85, bty="n")
}
mtext(side=3, cex=1.2, line=-1.1, outer=T, col="blue",
text="Linear regression (homoscedasticity) [200 datasets]")
dev.copy(postscript,"lrpvpdecmbook.ps", horiz=TRUE, paper="a4")
dev.off()
###########################################################################
## Linear regression and subclassification (heteroscedasticity)
## -------------------------------------------------------------------------
## Figure 6:
## 1. Regression on population linear predictor LP;
## 2. Regression on population linear discriminant LD;
## 3. Regression on population quadratic discriminant / propensity variable QD;
## 4. Subclassification on QD.
## Figure 7:
## 1. Regression on all covariates X;
## 2. Regression on sample linear discriminant LD*;
## 3. Regression on sample quadratic discriminant / propensity variable QD*;
## 4. Subclassification on QD*.
###########################################################################
## set parameters
library(MASS)                              # for mvrnorm() and lda()
p <- 20                                    # dimension of the covariate X
d <- 0                                     # intercept of the response model
delta <- 0.5                               # true average causal effect (ACE)
phi <- 1                                   # error variance of Y
n <- 500                                   # sample size per dataset
a <- matrix(rep(0,p), nrow=1)              # baseline mean of X
alpha <- matrix(c(0.5,rep(0,p-1)), nrow=1) # effect of treatment on X
sigma1 <- diag(1, nrow=p)                  # covariance of X given T=1
sigma0 <- diag(c(rep(0.8, 10), rep(1.3, 10)), nrow=p) # covariance of X given T=0
b <- matrix(c(0, 1, rep(0,p-2)), nrow=p)   # effect of X on Y
## create a function to compute ACE from eight approaches
ps <- function(r) {
  # simulate T, X and Y from the specified normal model with
  # treatment-dependent covariance (heteroscedasticity)
  set.seed(r)
  pi <- 0.5                          # treatment assignment probability
  t <- rbinom(n, 1, pi)
  n0 <- sum(t == 0)                  # number of controls
  t <- sort(t)                       # controls first, then treated
  mu1 <- a + alpha                   # mean of X given T=1
  mu0 <- a                           # mean of X given T=0
  ex0 <- mvrnorm(n0, mu=rep(0, p), Sigma=sigma0)
  ex1 <- mvrnorm(n - n0, mu=rep(0, p), Sigma=sigma1)
  am <- matrix(rep(a, n), nrow=n, byrow=TRUE)
  x0 <- am[1:n0, ] + t[1:n0] %*% alpha + ex0
  x1 <- am[(n0+1):n, ] + t[(n0+1):n] %*% alpha + ex1
  x <- rbind(x0, x1)
  ey <- rnorm(n, mean=0, sd=sqrt(phi))
  y <- d + t*delta + x %*% b + ey
  # population linear discriminant (pooled covariance), sample linear
  # discriminant, and population quadratic discriminant
  ld <- x %*% solve(pi*sigma1 + (1-pi)*sigma0) %*% t(alpha)
  d1 <- data.frame(x, t)
  ld.coef <- coef(lda(t ~ ., d1))
  ld.s <- x %*% ld.coef
  z1 <- x %*% (solve(sigma1) %*% t(mu1) - solve(sigma0) %*% t(mu0))
  z2 <- numeric(n)
  for (j in 1:n) {
    xj <- matrix(x[j, ], nrow=1)
    z2[j] <- -0.5 * xj %*% (solve(sigma1) - solve(sigma0)) %*% t(xj)
  }
  qd <- z1 + z2
  # estimated ACE from regressions on the population quantities
  dhat.x2 <- coef(summary(lm(y ~ t + x[,2])))[2]
  dhat.ld <- coef(summary(lm(y ~ t + ld)))[2]
  dhat.qd <- coef(summary(lm(y ~ t + qd)))[2]
  # sample quadratic discriminant from group means and covariances
  mn <- aggregate(d1, list(t=t), FUN=mean)
  m0 <- as.matrix(mn[1, 2:(p+1)])
  m1 <- as.matrix(mn[2, 2:(p+1)])
  v0 <- var(x0)
  v1 <- var(x1)
  c1 <- solve(v1) %*% t(m1) - solve(v0) %*% t(m0)
  z1.s <- x %*% c1
  c2 <- solve(v1) - solve(v0)
  z2.s <- numeric(n)
  for (i in 1:n) {
    xi <- matrix(x[i, ], nrow=1)
    z2.s[i] <- -0.5 * xi %*% c2 %*% t(xi)
  }
  qd.s <- z1.s + z2.s
  # estimated ACE from regressions on X and on the sample quantities
  dhat.x <- coef(summary(lm(y ~ t + x)))[2]
  dhat.ld.s <- coef(summary(lm(y ~ t + ld.s)))[2]
  dhat.qd.s <- coef(summary(lm(y ~ t + qd.s)))[2]
  # estimated ACE from subclassification: order by QD (or QD*), split
  # into 5 strata of 100, and average the within-stratum differences
  # of treated and control means of Y
  d2 <- data.frame(qd = as.vector(qd), qd.s = as.vector(qd.s),
                   y = as.vector(y), t = t)
  te.qd <- numeric(2)
  for (k in 1:2) {
    d3 <- d2[, c(k, 3, 4)]
    d3 <- split(d3[order(d3[, 1]), ], rep(1:5, each=100))
    diffs <- numeric(5)
    for (j in 1:5) {
      tm <- aggregate(d3[[j]], list(Stratum=d3[[j]]$t), FUN=mean)
      diffs[j] <- tm[2, 3] - tm[1, 3]  # treated minus control mean of y
    }
    te.qd[k] <- mean(diffs)
  }
  # return the estimated ACE from the eight approaches
  c(dhat.x2, te.qd[1], dhat.ld, dhat.qd,
    dhat.x, te.qd[2], dhat.ld.s, dhat.qd.s)
}
## estimate the ACE from 200 simulated datasets, then compute the
## mean, standard deviation and mean squared error of each estimator
g <- t(sapply(31:230, ps))
d.mean <- round(colMeans(g), 4)
d.sd <- round(apply(g, 2, sd), 4)
d.mse <- round(d.sd^2 + (d.mean - delta)^2, 4)
## generate Figure 6
par(mfcol=c(2,2), oma=c(1.5,0,1.5,0), las=1)
main=c("Regression on LP=X2","Subclassification on QD",
"Regression on LD=5/9X1","Regression on QD")
for (i in 1:4){
hist(g[,i], br=seq(-0.1, 1.1, 0.1), xlim=c(-0.1, 1.1), ylim=c(0,80),
main=main[i], col.lab="blue", xlab="", , ylab="", col="magenta")
legend(-0.2,85, c(paste("mean = ",d.mean[i]), paste("sd = ",d.sd[i]),
paste("mse = ",d.mse[i])), cex=0.85, bty="n")
}
mtext(side=3, cex=1.2, line=-1.1, outer=T, col="blue",
text="Linear regression and subclassification
(heteroscedasticity) [200 datasets]")
dev.copy(postscript,"pslrsubtruebook.ps", horiz=TRUE, paper="a4")
dev.off()
## generate Figure 7
main=c("Regression on X","Subclassification on QD*",
"Regression on LD*", "Regression on QD*")
for (i in 1:4){
hist(g[,i+4], br=seq(-0.1, 1.1, 0.1), xlim=c(-0.1,1.1), ylim=c(0,80),
main=main[i], col.lab="blue", xlab="", ylab="", col="magenta")
legend(-0.2,85, c(paste("mean = ",d.mean[i+4]), paste("sd = ",d.sd[i+4]),
paste("mse = ",d.mse[i+4])), cex=0.85, bty="n")
}
mtext(side=3, cex=1.2, line=-1.1, outer=T, col="blue",
text="Linear regression and subclassification
(heteroscedasticity, sample) [200 datasets]")
dev.copy(postscript,"pslrsubbook.ps", horiz=TRUE, paper="a4")
dev.off()
######################################################################
## Figure 9 and Table 1: Propensity analysis of custodial sanctions study
## ----------------------------------------------------------------------
## 1. Y on all 17 variables X;
## 2. Y on estimated propensity score EPS.
######################################################################
## read data, imputation by bootstrapping for missing data
dAll <- read.csv(file="pre_impute_data.csv", as.is=TRUE, sep=',', header=TRUE)
set.seed(100)
library(mi)
data.imp <- random.imp(dAll)   # random imputation of missing values
## estimate propensity score by logistic regression
glm.ps<-glm(Sentenced_to_prison~
Age_at_1st_yuvenile_incarceration_y +
N_prior_adult_convictions +
Type_of_defense_counsel +
Guilty_plea_with_negotiated_disposition +
N_jail_sentences_gr_90days +
N_juvenile_incarcerations +
Monthly_income_level +
Total_counts_convicted_for_current_sentence +
Conviction_offense_type +
Recent_release_from_incarceration_m +
N_prior_adult_StateFederal_prison_terms +
Offender_race +
Offender_released_during_proceed +
Separated_or_divorced_at_time_of_sentence +
Living_situation_at_time_of_offence +
Status_at_time_of_offense +
Any_victims_female,
data = data.imp, family=binomial)
summary(glm.ps)
eps <- predict(glm.ps, newdata = data.imp, type="response")  # fitted propensity scores
d.eps <- data.frame(data.imp, Est.ps = eps)
## Figure 9: densities of estimated propensity score (prison vs. probation)
library(ggplot2)
d.plot <- data.frame(Prison = as.factor(data.imp$Sentenced_to_prison),
Est.ps = eps)
pdf("ps.dens.book.pdf")
ggplot(d.plot, aes(x=Est.ps, fill=Prison)) + geom_density(alpha=0.25) +
scale_x_continuous(name="Estimated propensity score") +
scale_y_continuous(name="Density")
dev.off()
## logistic regression of the outcome on all 17 variables
glm.y.allx<-glm(Recidivism~
Sentenced_to_prison +
Age_at_1st_yuvenile_incarceration_y +
N_prior_adult_convictions +
Type_of_defense_counsel +
Guilty_plea_with_negotiated_disposition +
N_jail_sentences_gr_90days +
N_juvenile_incarcerations +
Monthly_income_level +
Total_counts_convicted_for_current_sentence +
Conviction_offense_type +
Recent_release_from_incarceration_m +
N_prior_adult_StateFederal_prison_terms +
Offender_race +
Offender_released_during_proceed +
Separated_or_divorced_at_time_of_sentence +
Living_situation_at_time_of_offence +
Status_at_time_of_offense +
Any_victims_female,
data = d.eps, family=binomial)
summary(glm.y.allx)
## logistic regression of the outcome on the estimated propensity score
glm.y.eps<-glm(Recidivism ~ Sentenced_to_prison + Est.ps,
data = d.eps, family=binomial)
summary(glm.y.eps)
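######################################################################
## Additional sketch (not part of the original appendix): a minimal
## AIPW (doubly robust) estimate of the ACE, as discussed in the
## abstract. The simulated model, sample size and all object names
## below are illustrative assumptions, not the authors' code.
######################################################################
set.seed(31)
n <- 500
x1 <- rnorm(n)
x2 <- rnorm(n)
pi.true <- plogis(0.5 * x1)              # true propensity model
t <- rbinom(n, 1, pi.true)
y <- 0.5 * t + x2 + rnorm(n)             # true ACE = 0.5
## working models: logistic propensity model, linear response models
ps.fit <- glm(t ~ x1 + x2, family = binomial)
pi.hat <- fitted(ps.fit)
m1.fit <- lm(y ~ x1 + x2, subset = (t == 1))
m0.fit <- lm(y ~ x1 + x2, subset = (t == 0))
newd <- data.frame(x1 = x1, x2 = x2)
m1 <- predict(m1.fit, newdata = newd)    # fitted E(Y | T=1, X)
m0 <- predict(m0.fit, newdata = newd)    # fitted E(Y | T=0, X)
## AIPW: outcome-model estimate augmented by inverse-probability-
## weighted residuals; consistent if either working model is correct
aipw <- mean(m1 - m0 +
             t * (y - m1) / pi.hat -
             (1 - t) * (y - m0) / (1 - pi.hat))
aipw                                     # should be close to 0.5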
Copyright information
© 2016 Springer International Publishing Switzerland
Cite this chapter
Guo, H., Dawid, P., Berzuini, G. (2016). Sufficient Covariate, Propensity Variable and Doubly Robust Estimation. In: He, H., Wu, P., Chen, D.-G. (eds.) Statistical Causal Inferences and Their Applications in Public Health Research. ICSA Book Series in Statistics. Springer, Cham. https://doi.org/10.1007/978-3-319-41259-7_3
Print ISBN: 978-3-319-41257-3. Online ISBN: 978-3-319-41259-7