Affirmative action in higher education in India: targeting, catch up, and mismatch

Abstract

Using detailed data on the 2008 graduating class from an elite engineering institution in India, we evaluate the impact of affirmative action policies in higher education, focusing on three issues: targeting, catch up, and mismatch. We find that admission preferences effectively target minority students who are poorer than the average displaced nonminority student. Moreover, we find that minority students, especially those in more selective majors, fall behind their same-major peers in terms of grades as they progress through college. We also identify evidence in favor of the mismatch hypothesis: Once we control for selection into majors, minority students in more selective majors end up earning less than they would have had they chosen a less selective major.


Notes

  1.

    This is partly explained by a large number of people having close to or perfect SAT scores. The best schools could easily fill their seats with only such candidates. However, based on the USA experience, there is reason to believe that this would result in a worse entering class (Blau et al. 2004; Bowen and Bok 1998).

  2.

    Admissions officers are often rewarded on the basis of acceptance rates.

  3.

    The Texas top 10 % law, which guaranteed the top 10 % of graduates from every Texas high school admission to any state or public university, may be one of the few exceptions. See the Texas Higher Education Opportunity Project (THEOP) for more on this. The law was loosened in June 2010.

  4.

    Lower castes in India represent a greater share of the total population than any minority in the USA. Even if we only consider the most disadvantaged castes, SC and ST, their 22.5 % share surpasses the 13 % share of African Americans in the USA.

  5.

    In fact, the quotas are so much in favor of these disadvantaged groups that even with huge differences in admissions cutoffs, some elite schools are not able to fill their quotas.

  6.

    In the past, there was an incentive to underreport belonging to SC/ST castes due to social status concerns. Nowadays, there seems to be a tendency to overreport SC/ST status in order to capture government subsidies and benefits targeted toward these historically disadvantaged populations.

  7.

    The sample includes 448 universities, 8123 colleges, and 4076 stand-alone institutions that voluntarily answered the survey.

  8.

    Reservations for OBC were recommended in 1978 and implemented in 1989 in private unaided institutions as well as high-end government jobs for minority communities. The EEI we analyze did not make any changes to its reservations policy until 2008. Since then, OBCs have also been provided with a 27 % reservation, although their share in India’s population is about 50 %. However, it has been argued that the OBC group is not “backward” and some privileged castes have made it on to this list.

  9.

    Since 2008, a separate merit list is also constructed for OBC students. However, the relaxation in marks applied to the admission cutoff for this group is at most 10 %.

  10.

    Dual degree programs integrate undergraduate and postgraduate studies in selected areas of specialization. They are completed in 5 years, only one year more than conventional Bachelor's degrees.

  11.

    The survey was only administered in 2008. All individual records were anonymized prior to the analysis.

  12.

    Throughout their stay in the EEI, students live in hostels located on campus.

  13.

    Common courses usually include basic Electronics, Mechanics, Chemistry, and Physics.

  14.

    Out of the nine bachelor majors and five dual degree majors offered at the EEI, only the Bachelor’s programs in Computer Science and Electrical Engineering (Power) as well as dual degrees in Computer Science and Electrical Engineering are defined as selective. See Table 4 in Appendix 1 for descriptive statistics on the average performance in the first year and wages by major and caste.

  15.

    Poverty data come from Table A2 in Chaudhuri and Gupta (2009), while data on minority and rural population at the district level are obtained from the Indian Census 2001.

  16.

    SC/ST status is documented by certificates issued by the Indian government. Given the widespread income tax evasion in India, income is likely to be underestimated, especially in nonsalaried employment.

  17.

    See Appendix 4 for more details on district-level patterns identified in these data.

  18.

    Additionally, 251 seats were reserved for students with physical disabilities, but since these applicants are absent from the applicant data, we exclude them from the analysis.

  19.

    Although we might mislabel some applicants due to non-enrollment after placement, we expect this potential bias to be small. Colleges that allocate seats through this centralized exam are among the top higher education institutions in the country, so students who are offered a seat in one of them are unlikely to reject it.

  20.

    Arcidiacono et al. (2011) make a similar point for black students at Duke University.

  21.

    To avoid misleading patterns due to outliers, the five SC/ST students who ranked above the 50th percentile in the first year are removed. Two of these dropped observations belong to the group of minority students who have high family income.

  22.

    Again, the five minority students above the 50th percentile in terms of first year CGPA are dropped.

  23.

    See Rosenbaum and Rubin (1983). Unfortunately, we only have interval data on wages. If the outcome variable were binary or continuous, we could also implement the traditional matching approach, which provides a nonparametric estimator of \(\alpha _{1}\) without making assumptions on the relationship between wages and \(X_{i}\) (or \(Z_{i}\)).

  24.

    We also experimented with instruments. For the GE group, the results were very similar to those relying on functional form assumptions. The sample size was too small in the SC/ST group for us to consider using instruments there.

  25.

    Tables 8, 9 in Appendix 1 report the same regression results when the final CGPA is excluded as an independent variable.

  26.

    A weakness in our estimates is that the estimated \(\rho\)s, though positive, are not significantly different from zero, as the table above shows.

  27.

    Our results are robust to the exclusion of potentially intermediate variables such as final CGPA and type of job. See Tables 8, 9 in Appendix 1.

  28.

    Although there are no earnings differentials by caste within occupations, minority students seem to be placed in worse occupations than general students, which may be indicative of discrimination in terms of occupations. These results do not necessarily imply that minority students are discriminated against in their first job placement. In fact, a higher probability of being placed in a lower paying job may be explained by individual choices of the students or by labor market reservations that are in place in government jobs, which pay less but are more stable than jobs in finance firms and in the private sector. Results can be found in Frisancho and Krishna (2012).

  29.

    Table 10 in Appendix 1 shows that students whose first year CGPA is below the average performance in their major indeed seem to be the most emotionally affected, as the coefficient on the demeaned first year CGPA indicates.

  30.

    We drop individuals with missing observations in each of the well-being variables. We did not include these variables in the imputation process because the missing at random assumption is less defensible in these cases. The problem is small, though: missing values in the stress, depression, loneliness, discrimination, or hostel-is-like-home variables represent only between 6 and 10 % of the individuals in each caste.

  31.

    The exception is the last variable, "hostel does not feel like home," where the correlation is very close to zero.

  32.

    Social mismatch does not seem to affect wages or grades directly.

  33.

    Allison (2001).

  34.

    The latter comes from the State Profile 2005–2006 prepared by the Indian Ministry of Human Resource Development. We acknowledge that the total population "at risk" of taking the exam will be overestimated, especially due to higher dropout in the last years of high school, and more so in poorer districts. Unfortunately, data for 12th grade enrollment in 2008 were not publicly available at the district level, only at the state level.

References

  1. Akyol, P., & Krishna, K. (2014). Preferences, selection, and value added: A structural approach. NBER working paper no. 20013.

  2. Allison, P. (2001). Missing data. Thousand Oaks, CA: Sage.

  3. Alon, S., & Tienda, M. (2005). Assessing the “Mismatch” hypothesis: Differences in college graduation rates by institutional selectivity. Sociology of Education, 78, 294–315.

  4. Alon, S., & Tienda, M. (2007). Diversity, opportunity, and the shifting meritocracy in higher education. American Sociological Review, 72(4), 487–511.

  5. Alon, S., & Malamud, O. (2014). The impact of Israel’s class-based affirmative action policy on admission and academic outcomes. Economics of Education Review, 40, 123–139.

  6. Altonji, J. G., Elder, T. E., & Taber, C. R. (2005). Selection on observed and unobserved variables: Assessing the effectiveness of catholic schools. Journal of Political Economy, 113(1), 151–184.

  7. Arcidiacono, P. (2005). Affirmative action in higher education: How do admission and financial aid rules affect future earnings? Econometrica, 73(5), 1477–1524.

  8. Arcidiacono, P., Ausejo, E., & Spenner, K. (2011). What happens after enrollment? An analysis of the time path of racial differences in GPA and major choice. Working paper.

  9. Assunção, J., & Ferman, B. (2013). Does affirmative action enhance or undercut investment incentives? Evidence from quotas in Brazilian public universities. Working paper.

  10. Bertrand, M., Hanna, R., & Mullainathan, S. (2010). Affirmative action in education: Evidence from engineering college admissions in India. Journal of Public Economics, 94(1–2), 16–29.

  11. Blau, J., Moller, S., & Jones, L. (2004). Why test? Talent loss and enrolment loss. Social Science Research, 33, 409–434.

  12. Bowen, W., & Bok, D. (1998). The Shape of the River. Princeton: Princeton University Press.

  13. Carnevale, A. P., & Rose, S. J. (2003). Socioeconomic status, race/ethnicity and selective college admissions. New York: The Century Foundation Press.

  14. Chakravarty, S., & Somanathan, E. (2008). Discrimination in an elite labour market? Job placements at IIM-Ahmedabad. Economic and Political Weekly, 43(44), 45–50.

  15. Chaudhuri, S., & Gupta, N. (2009). Levels of living and poverty patterns: A district-wise analysis for India. Economic and Political Weekly, 44(9), 94–110.

  16. Chay, K., McEwan, P. J., & Urquiola, M. (2005). The central role of noise in evaluating interventions that use test scores to rank schools. American Economic Review, 95, 1237–1258.

  17. Francis, A., & Tannuri-Pianto, M. (2012). Using Brazil’s racial continuum to examine the short-term effects of affirmative action in higher education. The Journal of Human Resources, 47, 754–784.

  18. Frisancho, V., & Krishna, K. (2012). Affirmative action in higher education in India: Targeting, catch up, and mismatch. NBER working paper no. 17727.

  19. Frisancho, V., Krishna, K., Lychagin, S., & Yavas, C. (2014). Better luck next time: Learning through retaking. Working paper.

  20. Kahlenberg, R. D. (Ed.). (1996). The Remedy: Class, race, and affirmative action. New York: Basic Books.

  21. Kahlenberg, R. D. (Ed.). (2004). America’s untapped resource: Low-income students in higher education. New York: The Century Foundation Press.

  22. Kochar, A. (2010). Affirmative action through quotas: The effect on learning In India. Stanford Center for International Development, Working paper no. 430.

  23. Loury, L. D., & Garman, D. (1993). Affirmative action in higher education. American Economic Review, 83(2), 99–103.

  24. Loury, L. D., & Garman, D. (1995). College selectivity and earnings. Journal of Labor Economics, 13(2), 289–308.

  25. NCEUS. (2009). The challenge of employment in India: An informal economy perspective. Technical report, National Commission for Enterprises in the Unorganised Sector.

  26. Rosenbaum, P. R., & Rubin, D. B. (1983). The central role of the propensity score in observational studies for causal effects. Biometrika, 70, 41–55.

  27. Rothstein, J., & Yoon, A. (2008). Affirmative action in law school admissions: What do racial preferences do? University of Chicago Law Review, 75(2), 649–714.

  28. Rothstein, J., & Yoon, A. (2009). Mismatch in Law School. Working paper.

  29. Saeme, D. (2014). Does the implementation of affirmative action in a competitive setting incentivize underrepresented public school applicants' performance? Evidence from São Paulo. The Yale Journal of Economics, 2, 93–114.

  30. Sander, R. (2004). A systemic analysis of affirmative action in American law schools. Stanford Law Review, 57(2), 367–483.

  31. Van Buuren, S., Brand, J. P. L., & Groothuis-Oudshoorn, C. G. M. (2006). Fully conditional specification in multivariate imputation. Journal of Statistical Computation and Simulation, 76, 1046–1064.

Author information

Corresponding author

Correspondence to Veronica Frisancho.

Additional information

We would like to thank Pradeep Kumar for answering many of our questions about the context. We are also indebted to participants of the 7th Annual Conference on Economic Growth and Development at ISI Delhi, the mini conference on the effects of racial preferences in higher education on student outcomes at The Brookings Institution, and seminar participants from the Economics and Social Sciences area at the Indian Institute of Management at Bangalore for suggestions and comments.

Appendices

Appendix 1

See Tables 3, 4, 5, 6, 7, 8, 9, and 10.

Table 3 Summary statistics
Table 4 Average first year CGPA by major and caste
Table 5 First year CGPA as a function of income and caste
Table 6 Wage regressions for general students
Table 7 Wage regressions for SC/ST students
Table 8 Wage regressions for general students, excluding final GPA and type of job as regressors
Table 9 Wage regressions for SC/ST students, excluding final GPA and type of job as regressors
Table 10 Well-being as a function of relative academic disadvantage and controls

Appendix 2: Pattern of missing observations

See Tables 11, 12 and 13.

Table 11 Summary statistics
Table 12 Checking missing at random assumption (I)
Table 13 Checking missing at random assumption (II)

Appendix 3: Multiple random imputation

Allison (2001) proposes multiple imputation methods as an alternative to maximum likelihood methods. Like maximum likelihood estimates, multiple imputation estimates are consistent, asymptotically normal, and close to asymptotically efficient. In addition, multiple imputation has two big advantages over maximum likelihood: (i) it can be applied to any kind of data or model, and (ii) the imputation procedure can be implemented using conventional software. Since imputed values are random draws, the major disadvantage of multiple imputation is that it produces a different imputed database every time it is run.

The most widely used method for multiple imputation is the Markov chain Monte Carlo (MCMC) algorithm based on linear regression. However, in the case of the EEI data, important complications arise from the fact that some of the variables with missing values are categorical. The MCMC method assumes that every variable with missing data is normally distributed, and that is clearly not the case for categorical variables. An alternative approach is known as "sequential generalized regression" or "multiple imputation by chained equations" (MICE). Instead of assuming a single multivariate model for all the data, this method specifies a separate regression model for each variable to be imputed. The method is thus flexible and allows us to incorporate logistic, binomial, or multinomial models for categorical variables.Footnote 33

Each model is estimated sequentially using available data, starting with the variable that has the fewest missing data and proceeding to the variable with the most missing data. After each model is estimated, the parameter estimates are used to generate imputed values. Once imputed values have been generated for all the missing data, the sequential imputation process is repeated, except now the imputed values of the previous round are used as predictors for imputing other variables. The main drawback of sequential generalized regressions is that no theory guarantees convergence to the correct distribution for the missing values. However, simulation-based evidence in Van Buuren et al. (2006) suggests that the method works well.

The MICE method is implemented in the following way:

  1.

    For each dependent variable to be imputed, choose a model that reflects the type of data.

  2.

    First round: Imputation starts with the dependent variable with the fewest missing data and proceeds to the one with the most missing data.

    • Order the dependent variables according to the amount of missing data, from \(Y_{1}\) to \(Y_{k}.\) Denote the variables with complete data as X.

    • Regress \(Y_{1}\) on X and obtain \(\widehat{\beta }\) and \(\widehat{V}( \widehat{\beta }).\) Generate imputed values using observed covariates and coefficients drawn from N(\(\widehat{\beta},\) \(\widehat{V}(\widehat{\beta}))\).

    • Regress \(Y_{2}\) on X and \(Y_{1}\) (including imputed values of \(Y_{1}\) ) and obtain imputed values.

    • Continue until all regression models have been estimated.

  3.

    Second and subsequent rounds repeat the process, but each variable is regressed on all other variables, using imputed values from previous rounds.

  4.

    The process ends when imputed values stabilize or after a specified number of rounds.
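The sequential regression-and-draw loop above can be sketched for the continuous case; this is our illustrative simplification, not the paper's implementation (categorical variables would use logistic or multinomial models instead, and missing entries are assumed to carry some starting value, e.g., column means, before the first round):

```python
import numpy as np

rng = np.random.default_rng(0)

def mice_round(X, Y, miss_mask):
    """One round of chained-equations imputation for continuous columns.
    X: (n, p) complete covariates; Y: (n, k) columns ordered from fewest
    to most missing; miss_mask: (n, k) True where Y was originally missing."""
    n, k = Y.shape
    for j in range(k):
        # Predictors: complete covariates plus the other (already imputed) Y's
        Z = np.column_stack([np.ones(n), X, np.delete(Y, j, axis=1)])
        obs = ~miss_mask[:, j]
        beta, *_ = np.linalg.lstsq(Z[obs], Y[obs, j], rcond=None)
        resid = Y[obs, j] - Z[obs] @ beta
        sigma2 = resid @ resid / max(obs.sum() - Z.shape[1], 1)
        # Draw coefficients from N(beta_hat, V_hat(beta_hat)), as in the text
        cov = sigma2 * np.linalg.pinv(Z[obs].T @ Z[obs])
        beta_draw = rng.multivariate_normal(beta, (cov + cov.T) / 2)
        # Replace missing entries with stochastic predictions
        noise = rng.normal(0.0, np.sqrt(sigma2), miss_mask[:, j].sum())
        Y[miss_mask[:, j], j] = Z[miss_mask[:, j]] @ beta_draw + noise
    return Y
```

Repeating `mice_round` until the imputations stabilize, and then rerunning the whole procedure M times, yields M imputed datasets whose estimates can be combined in the usual multiple-imputation fashion.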

Appendix 4: 2009 Entrance exam applicant data

In general, Fig. 7 shows that there are greater concentrations of exam applicants in northeastern areas, especially in richer districts. Moreover, central regions of the country with higher urban poverty rates contribute less to the applicant pool. Although this evidence is suggestive, we need to take into account the population "at risk" of taking the exam in each district to check whether certain district characteristics are related to a relatively higher proportion of high school graduates taking the entrance exam. We approximate the probability of taking the entrance exam in a given district as the number of 2009 applicants from a given PIN code divided by the number of high school students enrolled in grades 9 through 12 in 2006.Footnote 34 Even though the 2009 applicant data do not contain information about students' placement, we code as admitted all students who made it onto their respective merit list with a rank within the maximum number of seats available for their group. We can then proxy the district's probability of getting into college as the number of admitted students divided by the total number of applicants from the corresponding PIN code.
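Under the definitions above, each district's two measures reduce to simple ratios; the counts below are purely illustrative, not taken from the applicant data:

```python
# Hypothetical counts for one district's PIN codes (illustrative only)
district = {
    "applicants_2009": 1200,      # 2009 entrance exam applicants
    "enrolled_9_12_2006": 40000,  # high school enrollment, grades 9-12, 2006
    "admitted_2009": 30,          # applicants ranked within their group's seats
}

# P(take exam): applicants over the population "at risk" of taking the exam
p_take = district["applicants_2009"] / district["enrolled_9_12_2006"]

# P(admission): admitted students over applicants from the same PIN codes
p_admit = district["admitted_2009"] / district["applicants_2009"]
```

These per-district ratios are what Fig. 8 plots against poverty, rural share, and SC/ST share.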

Fig. 7

District-level poverty rate and number of exam applicants. Source: Centralized entrance exam, applicant data 2009. Poverty rates from Chakravarty and Somanathan (2008)

Using the results from non-parametric locally linear regressions in the sample of districts with exam applicants in 2009, Fig. 8 plots districts' probability of taking the exam and probability of getting into college as functions of urban poverty rates, share of rural population, and share of SC/ST population. The blue lines in panels (a) and (b) show that neither the probability of taking the college entrance exam nor the probability of getting in seems to be affected by the share of minority population in the district. This pattern is particularly interesting if one takes into account that areas with higher concentrations of SC/ST population tend to have lower average performance in the entrance exam (both in the aggregate and by subject), as shown in panel (a) of Fig. 9. In the absence of AA policies, the lower prospects of success in the exam in areas with higher concentrations of minority population would lead to a lower probability of taking the exam as well as lower chances of being admitted. The fact that we do not identify a relationship between the district's share of SC/ST population and the probabilities of writing the exam or getting in suggests that admission preferences particularly motivate minority students to take the exam and facilitate their admission into college.

When we order districts by their poverty rate, panel (a) in Fig. 8 shows a modest decline in the probability of taking the exam as the percentage of urban poor rises. However, the black line in panel (b) suggests that there is no relationship between the probability of getting in and poverty. This could be due to preferences for SC/ST students, who tend to be poorer than the GE group. The small differences in the proportion of people getting in may follow from the small differences in average performance between poor and rich districts exhibited in panel (b) of Fig. 9. Finally, the gray lines in panels (a) and (b) of Fig. 8 show that the probability of taking the exam and the probability of getting into college are both decreasing in the share of rural population, which reflects AA's lack of focus on rural students.

Fig. 8

Probability of taking the entrance exam and probability of admission. a Probability of taking the exam, b Probability of admission. Source: Centralized entrance exam, applicant data 2009. Note: Probabilities are locally mean-smoothed using a kernel-weighted local polynomial smoother

Fig. 9

District average total marks and marks by subject in the 2009 entrance exam. a SC/ST share, b Poverty. Source: Centralized entrance exam, applicant data 2009. Note: Average marks in panel (a) are locally mean-smoothed using a kernel-weighted local polynomial smoother
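The "locally mean-smoothed" curves described in the figure notes can be reproduced in spirit with a Nadaraya-Watson smoother, i.e., the degree-0 special case of a kernel-weighted local polynomial regression; higher degrees would replace the weighted mean with a weighted least-squares fit. The function and bandwidth choice here are our illustration, not the authors' code:

```python
import numpy as np

def kernel_smooth(x, y, grid, bandwidth):
    """Gaussian-kernel local-mean (Nadaraya-Watson) smoother: at each grid
    point, average y with weights that decay in the distance of x from it."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    out = np.empty(len(grid))
    for i, g in enumerate(grid):
        w = np.exp(-0.5 * ((x - g) / bandwidth) ** 2)
        out[i] = (w * y).sum() / w.sum()
    return out
```

Applied, for example, to district SC/ST shares (x) and admission probabilities (y) over a grid of shares, this traces out curves like the blue lines in Fig. 8.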

About this article

Cite this article

Frisancho, V., Krishna, K. Affirmative action in higher education in India: targeting, catch up, and mismatch. High Educ 71, 611–649 (2016). https://doi.org/10.1007/s10734-015-9927-1

Keywords

  • Affirmative action
  • Higher education
  • Targeting
  • Catch up
  • Mismatch

JEL Classification

  • I23
  • I24
  • J15
  • J31
  • J71