Abstract
This paper empirically evaluates Caplow and McGee’s (The academic marketplace, 1958) model of academia as a prestige value system (PVS) by testing several hypotheses about the relationship between prestige of faculty appointment and job satisfaction. Using logistic regression models to predict satisfaction with several job domains in a sample of more than 1,000 recent social science PhD graduates who hold tenure-track or tenured faculty positions, we find that the relationship between prestige of faculty appointment and job satisfaction is modified by PhD program prestige. Graduates of high prestige PhD programs value prestige more highly and graduates of low prestige programs value salary more highly. We explain our findings by incorporating reference group theory and a theory of taste formation into our model of the academic PVS, which identifies PhD programs as sites of socialization to different tastes for prestige (a process of cultural transmission) in addition to their well recognized role in transmission of human and social capital. We discuss practical and theoretical implications of our findings in relation to efforts to measure PhD program quality and to understand the structure of academic labor markets.
Notes
“The value of a position to its incumbent is determined… by the prestige of the whole organization in its external environment” (Caplow and McGee 1958, p. 75).
For example, across six national surveys conducted between 1969 and 1998, the share of faculty reporting satisfaction (combining "very satisfied" and "somewhat satisfied", as against "somewhat dissatisfied" and "very dissatisfied") ranges from 84 to 93% (Schuster and Finkelstein 2006). Faculty attrition rates are also rather low. Using large national samples from 1970 to 1989, Ehrenberg et al. (1990) find that annual within-institution retention rates are about 92% among full professors, also close to 92% among associate professors, and close to 85% among assistant professors.
The authors explicitly interpret reported job satisfaction to serve as a proxy for the concept of utility.
If it turns out that income and not prestige drives job satisfaction among academics, the relative deprivation model suggests that those trained at higher prestige programs will require more income for equal amounts of satisfaction than those who were trained at lower prestige programs.
See “Appendix 2” for a list of institutions and programs that participated in the SS5.
An argument may be advanced that the NRC rating of the educational quality of the graduate training program is a more salient measure of prestige. However, the measures of educational quality and faculty scholarly reputation are correlated at above 0.9, so that in practical terms it does not matter which measure is used.
Z-scores for the reputation of the graduate training program are calculated within field. In one of the six fields, Communications, the NRC does not report program reputations. In this field a surrogate measure of graduate program reputation was used: the undergraduate student selectivity of the degree-granting institution. For the five fields where both NRC program ratings and USNWR ratings are available, the correlation between the two measures is 0.69. Thus, institutional selectivity was deemed a reliable, albeit imperfect, surrogate measure of the prestige of the PhD-granting department in the field of Communications. As in the other five fields, scores for the reputation of the degree-granting program in Communications were converted to within-field z-scores.
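The within-field standardization described in this note can be sketched as follows; the field names and program ratings below are hypothetical illustrations, not data from the SS5 or the NRC.

```python
from statistics import mean, stdev

def within_field_z(ratings_by_field):
    """Standardize program ratings separately within each field so that
    scores are comparable across fields (ratings here are hypothetical)."""
    z = {}
    for field, ratings in ratings_by_field.items():
        vals = list(ratings.values())
        m, s = mean(vals), stdev(vals)
        z[field] = {prog: (r - m) / s for prog, r in ratings.items()}
    return z

# Hypothetical NRC-style reputation ratings for two fields
ratings = {
    "sociology": {"A": 4.5, "B": 3.0, "C": 2.1},
    "economics": {"D": 4.8, "E": 3.9, "F": 1.5},
}
z_scores = within_field_z(ratings)
```

Standardizing within field keeps a top-rated program in a low-variance field from being swamped by raw rating differences across fields.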
Z-scores for income and for PhD program prestige are computed relative to the set of cases in the SS5, and therefore have a mean of zero and a standard deviation of one. However, as described in "Appendix 1", z-scores for the prestige of the employing institution are calculated relative to the field of all institutions of higher education. Because institutional prestige is positively related to institutional size, respondents to the SS5 are concentrated in higher prestige institutions. (In other words, higher prestige institutions have more jobs than lower prestige institutions.) Within this analysis, the average institutional prestige score is 0.66. The standard deviation, however, fortuitously happens to be near one, so that coefficients from the statistical models remain comparable across variables.
The appropriate application of ordinal logistic regression depends on conformity to the parallel effects assumption. This assumption and empirical tests verifying conformity are presented in “Appendix 4”.
We also test for the possibility of multicollinearity in our models due to high correlations between prestige of employer and prestige of the institution where the respondent earned the doctorate. Results of this analysis are presented in “Appendix 3”.
References
Argyle, M. (2001). The psychology of happiness. New York: Routledge.
Baldi, S. (1994). Changes in the stratification structure of sociology, 1964–1992. American Sociologist, 25(4), 28–43.
Bisin, A., & Verdier, T. (1998). On the cultural transmission of preferences for social status. Journal of Public Economics, 70, 75–97.
Brooks, R., Morrison, E., & Nerad, M. (2005). Ph.D. career outcomes as a measure of doctoral program quality. Presented at the 2005 association of studies in higher education annual meeting, Philadelphia, PA.
Burke, D. L. (1988). The new academic marketplace. New York: Greenwood.
Burris, V. (2004). The academic caste system: Prestige hierarchies in PhD exchange networks. American Sociological Review, 69(2), 239–263.
Bygren, M. (2004). Pay reference standards and pay satisfaction: What do workers evaluate their pay against? Social Science Research, 33, 206–224.
Caplow, T., & McGee, R. (1958). The academic marketplace. Garden City, NY: Doubleday.
Clark, B. R. (1987). The academic life: Small worlds, different worlds. Princeton, NJ: The Carnegie Foundation for the Advancement of Teaching.
Clark, A. E., & Oswald, A. J. (1996). Satisfaction and comparison income. Journal of Public Economics, 61, 359–381.
D’Ambrosio, C., & Frick, J. R. (2007). Income satisfaction and relative deprivation: An empirical link. Social Indicators Research, 81, 497–519.
Debackere, K., & Rappa, M. A. (1995). Scientists at major and minor universities: Mobility along the prestige continuum. Research Policy, 24, 137–150.
Ehrenberg, R., Kasper, H., & Rees, D. (1990). Faculty turnover at American colleges and universities: Analyses of AAUP data. Working paper 3239. Cambridge, MA: National Bureau of Economic Research.
Falk, A., & Knell, M. (2004). Choosing the Joneses: Endogenous goals and reference standards. Scandinavian Journal of Economics, 106(3), 417–435.
Geiger, R. L. (2002). The competition for high ability students: Universities in a key marketplace. In S. Brint (Ed.), The future of the city of intellect: The changing American university. Stanford, CA: Stanford University Press.
Goldberger, M. L., Maher, B. A., & Flattau, P. E. (1995). Research doctorate programs in the United States: Continuity and change. Washington, DC: National Academies Press.
Hermanowicz, J. C. (2003). Scientists and satisfaction. Social Studies of Science, 33(1), 45–73.
Kerr, C. (1987). A critical age in the university world: Accumulated heritage versus modern imperatives. European Journal of Education, 22(2), 183–193.
Kerr, C. (2002). Shock wave II: An introduction to the twenty-first century. In S. Brint (Ed.), The future of the city of intellect: The changing American university. Stanford, CA: Stanford University Press.
Levy-Garboua, L., & Montmarquette, C. (2004). Reported job satisfaction: What does it mean? Journal of Socio-Economics, 33, 135–151.
Long, J. S., & Freese, J. (2003). Regression models for categorical dependent variables using Stata. College Station, TX: Stata Press Publication.
Massey, W. F. (2004). Markets in higher education: Do they promote internal efficiency? In P. Teixeira, B. Jongbloed, D. Dill, & A. Amarals (Eds.), Markets in higher education: Rhetoric or reality? Dordrecht: Springer.
Merton, R. K. (1957). Priorities in scientific discovery: A chapter in the sociology of science. American Sociological Review, 22(6), 635–659.
Merton, R. K. (1968). The Matthew effect in science: The reward and communication systems of science are considered. Science, 159(3810), 56–63.
Morse, R. J., Flannigan, S., & Yerkie, M. (2005). America’s best colleges. U.S. News and World Report, 139(7), 78.
Nerad, M., Rudd, E., Morrison, E., & Picciano, J. (2007). Social science PhDs—Five+ years out, a national survey of PhDs in six fields highlights report. Seattle: Center for Innovation and Research in Graduate Education, University of Washington.
Nevill, S. C., & Bradburn, E. M. (2006). Institutional policies and practices regarding postsecondary faculty: Fall 2003 (NCES 2007-157). Washington, DC: U.S. Department of Education, National Center for Education Statistics.
Rhode, D. L. (2006). In pursuit of knowledge: Scholars, status, and academic culture. Stanford, CA: Stanford University Press.
Royston, P. (2004). Multiple imputation of missing values. The Stata Journal, 4(3), 227–241.
Royston, P. (2005). Multiple imputation of missing values: Update. The Stata Journal, 5(2), 188–201.
Sanderson, A., Phua, V. C., & Herda, D. (2000). The American faculty poll. Chicago, IL: National Opinion Research Center.
Schuster, J. H., & Finkelstein, M. J. (2006). The American faculty: The restructuring of academic work and careers. Baltimore, MD: The Johns Hopkins University Press.
Seifert, T. A., & Umbach, P. D. (2008). The effects of faculty demographic characteristics and disciplinary context on dimensions of job satisfaction. Research in Higher Education, 49(4), 357–381.
Smelser, N. J., & Content, R. (1980). The changing academic market. Berkeley, CA: University of California Press.
Snyder, T. D., Dillow, S. A., & Hoffman, C. M. (2009). Digest of education statistics 2008 (NCES 2009-020). Washington, DC: National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education.
Stutzer, A. (2004). The role of income aspirations in individual happiness. Journal of Economic Behavior & Organization, 54, 89–109.
Wilson, L. (1966). Foreword. In A. M. Cartter, An assessment of quality in graduate education. Washington, DC: American Council on Education.
Acknowledgment
The authors wish to thank the Ford Foundation for its support of this project.
Appendices
Appendix 1: The Measure of Prestige of the Higher Education Institutions
Our measure of prestige of the institution where the faculty member works is not an exact replication of the U.S. News and World Report (USNWR) rankings. The USNWR rankings would be a valid measure of institutional prestige, insofar as the concept of prestige is merely the intersubjective perception of a status order by significant others (Wilson 1966). The USNWR rankings are an attempt to formalize such a concept. However, the USNWR rankings do not suffice for a measure that is comparable for all undergraduate degree granting institutions across the U.S. The problem with using U.S. News rankings to construct a unidimensional measure is that rankings are constructed and reported within region (and within college type). We cannot use the USNWR rankings to compare institutions classified in the southern region with those classified in the western region, nor can we compare rankings of selective colleges with those of national universities.
Instead of using the USNWR rankings directly, we estimated OLS models on the USNWR data to predict student test scores: the 75th and 25th percentile scores for both the SAT and the ACT. Test scores are the data element in the USNWR that is most strongly linked to the concept of student selectivity, which is at the heart of the dimension of prestige that Geiger (2002) singles out as a critical element of prestige. The independent variables used to predict test scores in these models included: classification (akin to Carnegie class), region, tier of ranking, peer assessment, graduation rate, acceptance rate, and percent of freshmen in the top quarter of their high school class. The models explained from 86% of the variation in 25th percentile SAT scores to 71% of the variation in 75th percentile ACT scores. Most importantly, the models enabled the prediction of test scores where none were reported (for example, 25th percentile ACT scores among freshmen at a university that relies exclusively on SAT scores for admission). Each of the four predicted data points was converted into a z-score, and the four z-scores were averaged. Thus each institution rated by USNWR in 2005 was assigned a single measure on a continuous scale reflecting the selectivity of its student body. These scores are used to measure the prestige of the employing institutions of the SS5 respondents.
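The final aggregation step, converting the four predicted percentile scores into one composite selectivity measure, can be sketched as follows. The institution names and predicted scores are hypothetical, not the actual USNWR data.

```python
from statistics import mean, stdev

MEASURES = ["sat25", "sat75", "act25", "act75"]

def selectivity_scores(predicted):
    """Convert each of the four predicted test-score measures into a
    z-score across institutions, then average the four z-scores into a
    single selectivity measure per institution."""
    z_by_measure = {}
    for m in MEASURES:
        vals = [inst[m] for inst in predicted.values()]
        mu, sd = mean(vals), stdev(vals)
        z_by_measure[m] = {name: (inst[m] - mu) / sd
                           for name, inst in predicted.items()}
    return {name: mean(z_by_measure[m][name] for m in MEASURES)
            for name in predicted}

# Hypothetical predicted percentile scores for three institutions
predicted = {
    "U1": {"sat25": 1250, "sat75": 1450, "act25": 27, "act75": 32},
    "U2": {"sat25": 1050, "sat75": 1250, "act25": 22, "act75": 27},
    "U3": {"sat25": 900,  "sat75": 1100, "act25": 18, "act75": 23},
}
scores = selectivity_scores(predicted)
```

Averaging z-scores rather than raw scores puts the SAT and ACT measures on a common scale before combining them.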
Appendix 2
See Table 5.
Appendix 3: Collinearity Diagnostics
We ran collinearity diagnostics for each of the six dependent variables in the analysis presented in Table 3. A variance inflation factor (VIF) above 10 is the conventional threshold indicating that collinearity threatens proper inference. Table 6 presents VIFs calculated for the model in which Satisfaction with Salary is the dependent variable. The VIFs for the models of the other dependent variables are not displayed because they varied by less than 0.03 from the estimates displayed below. None of the VIFs suggest that multicollinearity is a problem.
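A VIF can be computed by regressing each predictor on the remaining predictors and taking 1/(1 − R²). A minimal sketch with synthetic data (not the SS5 variables):

```python
import numpy as np

def vif(X, j):
    """VIF for column j of predictor matrix X: regress X[:, j] on the
    other columns (plus an intercept) and return 1 / (1 - R^2)."""
    y = X[:, j]
    A = np.column_stack([np.ones(len(y)), np.delete(X, j, axis=1)])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1.0 - resid.var() / y.var()
    return 1.0 / (1.0 - r2)

rng = np.random.default_rng(0)
x1 = rng.normal(size=500)
x2 = x1 + 0.1 * rng.normal(size=500)   # nearly collinear with x1
x3 = rng.normal(size=500)              # independent of x1 and x2
X = np.column_stack([x1, x2, x3])
```

Here `vif(X, 0)` and `vif(X, 1)` are large because x1 and x2 are nearly collinear, while `vif(X, 2)` stays near 1.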
Appendix 4: Tests for Parallel Effects Assumption
The appropriate application of ordinal logistic regression relies on the assumption that the estimated coefficients capture the effects of a unit increase in the independent variables on the likelihood of a one-unit increase in the dependent variable regardless of the value of the dependent variable. This assumption is known as the parallel effects assumption (Long and Freese 2003, pp. 165–168).
For example, in Model 1 of Table 3 we estimate an odds ratio of 1.92 for the effect of employing institution prestige on satisfaction with prestige. Under the parallel effects assumption, this means that a one standard deviation increase in employing institution prestige increases the odds of reporting 'very satisfied' rather than 'somewhat satisfied' by 92%, and likewise increases the odds of reporting 'somewhat satisfied' rather than 'somewhat dissatisfied' by 92%.
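The arithmetic behind this interpretation can be illustrated as follows; the baseline odds at each cut-point are hypothetical, and only the odds ratio of 1.92 comes from the model.

```python
odds_ratio = 1.92  # estimated effect of a one-SD increase in employer prestige

# Under the parallel effects assumption the same odds ratio applies at
# every cut-point of the ordinal outcome. Baseline odds are hypothetical.
baseline_odds = {
    "very vs somewhat satisfied": 0.8,
    "somewhat satisfied vs somewhat dissatisfied": 3.0,
}
new_odds = {cut: odds * odds_ratio for cut, odds in baseline_odds.items()}

# At every cut-point the odds rise by the same percentage
pct_increase = (odds_ratio - 1) * 100  # 92%
```

The cut-points start from different baseline odds, but each is multiplied by the same factor of 1.92; that multiplicative constancy is exactly what the Brant test in Appendix 4 evaluates.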
If the parallel effects assumption is violated, so that the effects of an independent variable on the ordinal dependent variable differ at each step of the dependent variable, then multinomial logistic regression is preferred over ordinal logistic regression. We apply a Brant test to evaluate whether the assumption has been violated. A sufficiently large and statistically significant Brant test statistic indicates a violation of the parallel effects assumption. Table 7 presents Brant test statistics for all variables in each of the six models presented in Table 3.
None of the model-level Brant test statistics are large enough to suggest a violation of the parallel effects assumption. Only eight of the ninety-six coefficient-level test statistics are significant at the p < 0.05 level, and all of these involve control variables, none having any bearing on the hypotheses. In sum, the Brant tests for violations of the parallel effects assumption suggest little reason to abandon the ordinal logistic regression model.
Morrison, E., Rudd, E., Picciano, J. et al. Are You Satisfied? PhD Education and Faculty Taste for Prestige: Limits of the Prestige Value System. Res High Educ 52, 24–46 (2011). https://doi.org/10.1007/s11162-010-9184-1