Careless responding in internet-based quality of life assessments
Quality of life (QoL) measurement relies on participants providing meaningful responses, yet not all respondents pay sufficient attention when completing self-reported QoL measures. This study examined the impact of careless responding on the reliability and validity of Internet-based QoL assessments.
Internet panelists (n = 2000) completed Patient-Reported Outcomes Measurement Information System (PROMIS®) short forms (depression, fatigue, pain impact, applied cognitive abilities) and single-item QoL measures (global health, pain intensity) as part of a larger survey that included multiple checks of whether participants paid attention to the items. Latent class analysis was used to identify groups of non-careless and careless responders from the attentiveness checks. Analyses compared the psychometric properties of the QoL measures (reliability of PROMIS short forms, correlations among QoL scores, "known-groups" validity) between the non-careless and careless responder groups. We also examined whether person-fit statistics derived from the PROMIS measures accurately discriminated between careless and non-careless responders.
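The person-fit analysis is not described in detail here, so as an illustration only, the sketch below implements the standardized log-likelihood person-fit statistic lz for dichotomous items under a two-parameter logistic (2PL) model with known item parameters. This is a simplification: the PROMIS short forms use polytomous items (for which a polytomous generalization such as lz* would apply), and all function names and parameter values are our own assumptions, not taken from the study.

```python
import numpy as np

def two_pl_prob(theta, a, b):
    """2PL item response probability P(endorse | theta),
    with discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def lz_person_fit(responses, theta, a, b):
    """Standardized log-likelihood person-fit statistic (lz)
    for one respondent's 0/1 response vector.

    lz standardizes the response-pattern log-likelihood by its
    model-implied mean and variance; large negative values flag
    aberrant (e.g., careless) response patterns.
    """
    p = two_pl_prob(theta, np.asarray(a, float), np.asarray(b, float))
    u = np.asarray(responses, dtype=float)
    # Observed log-likelihood of the response pattern
    l0 = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))
    # Expectation and variance of the log-likelihood under the model
    mean = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    var = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
    return (l0 - mean) / np.sqrt(var)

# A model-consistent pattern (endorsing items up to one's trait level)
# yields lz near or above zero; an inverted pattern yields a strongly
# negative lz.
a = [1.2, 0.8, 1.5, 1.0]
b = [-1.5, -0.5, 0.5, 1.5]
lz_consistent = lz_person_fit([1, 1, 1, 1], 1.5, a, b)
lz_aberrant = lz_person_fit([0, 0, 0, 1], 1.5, a, b)
```

In practice, lz (or its polytomous variant) is computed per respondent per measure and low values are flagged; the PerFit R package cited by the authors provides tested implementations.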
About 7.4% of participants were classified as careless responders. No substantial differences in the reliability of PROMIS measures between non-careless and careless responder groups were observed. However, careless responding meaningfully and significantly affected the correlations among QoL domains, as well as the magnitude of differences in QoL between medical and disability groups (presence or absence of disability, depression diagnosis, chronic pain diagnosis). Person-fit statistics significantly and moderately distinguished between non-careless and careless responders.
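The degree to which a continuous person-fit score separates the careless and non-careless classes can be quantified with the area under the ROC curve (AUC). The abstract does not state which discrimination index was used, so the sketch below is our illustration: it computes the AUC via the Mann-Whitney formulation, treating lower person-fit values as more aberrant.

```python
import numpy as np

def auc_discrimination(fit_careless, fit_attentive):
    """AUC for a person-fit statistic used as a careless-responding
    classifier (Mann-Whitney formulation).

    Lower person-fit values (e.g., lz) are treated as more aberrant,
    so the AUC is the probability that a randomly chosen attentive
    respondent scores higher than a randomly chosen careless one
    (0.5 = chance, 1.0 = perfect separation).
    """
    c = np.asarray(fit_careless, float)[:, None]
    a = np.asarray(fit_attentive, float)[None, :]
    wins = (a > c).mean()   # attentive respondent outscores careless one
    ties = (a == c).mean()  # ties count half
    return wins + 0.5 * ties

# Perfectly separated groups give AUC = 1.0; overlapping groups less.
perfect = auc_discrimination([-3.0, -2.0], [1.0, 2.0])
overlap = auc_discrimination([-1.0, 1.0], [0.0, 2.0])
```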
The results support the importance of identifying and screening out careless responders to ensure high-quality self-report data in Internet-based QoL research.
Keywords: Quality of life · Patient-reported outcomes · Careless responding · Inattentive responding · Person-fit statistics
We would like to thank Margaret Gatz, PhD, and Doerte U. Junghaenel, PhD, for their comments on the study design and helpful discussions in preparation of this manuscript.
Compliance with ethical standards
Conflict of interest
A.A.S. is a Senior Scientist with the Gallup Organization and a consultant with Adelphi Values, Inc. S.S. and M.M. declare that they have no conflict of interest.
The study was approved by the University of Southern California Institutional Review Board. All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.