Questionable, Objectionable or Criminal? Public Opinion on Data Fraud and Selective Reporting in Science

Abstract

Data fraud and selective reporting both present serious threats to the credibility of science. However, there remains considerable disagreement among scientists about how best to sanction data fraud, and about the ethicality of selective reporting. The public is arguably the largest stakeholder in the reproducibility of science; research is primarily paid for with public funds, and flawed science threatens the public’s welfare. Members of the public are able to make meaningful judgments about the morality of different behaviors using moral intuitions. Legal scholars emphasize that to maintain legitimacy, social control policies must be developed with some consideration given to the public’s moral intuitions. Although there is a large literature on popular attitudes toward science, there is no existing evidence about public opinion on data fraud or selective reporting. We conducted two studies—a survey experiment with a nationwide convenience sample (N = 821), and a follow-up survey with a representative sample of US adults (N = 964)—to explore community members’ judgments about the morality of data fraud and selective reporting in science. The findings show that community members make a moral distinction between data fraud and selective reporting, but overwhelmingly judge both behaviors to be immoral and deserving of punishment. Community members believe that scientists who commit data fraud or selective reporting should be fired and banned from receiving funding. For data fraud, most Americans support criminal penalties. Results from an ordered logistic regression analysis reveal few demographic and no significant partisan differences in punitiveness toward data fraud.

Notes

  1.

    We thank the journal editor for highlighting this point.

  2.

    Although omission of studies or outliers is typically understood in the academic community as a questionable research practice, the line between such behavior and data falsification is blurry. The federal definition of data falsification (42 CFR Part 93.103) includes “omitting data or results such that the research is not accurately represented in the research record.” In the current paper, however, in an effort to be consistent with the most common understanding of questionable research practices within the scientific community, we consider data omission a form of selective reporting.

  3.

    Community sanctions are legal punishments that do not involve incarceration.
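The blurry line described in footnote 2 is easy to demonstrate numerically. The sketch below is our own illustration, not an analysis from the paper: it assumes a simple two-sided z-test with known variance and an arbitrary sample size of 30, and shows that dropping the few observations that most contradict a hoped-for positive effect pushes the false-positive rate well above the nominal 5%, even though the true effect is zero.

```python
import math
import random

def z_p(sample):
    """Two-sided p-value for H0: mean = 0, treating the sd as known (= 1)."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

rng = random.Random(7)
trials, full_hits, trimmed_hits = 2000, 0, 0
for _ in range(trials):
    data = [rng.gauss(0, 1) for _ in range(30)]  # the true effect is zero
    full_hits += z_p(data) < 0.05
    # Post hoc "outlier removal": drop the four observations that most
    # contradict the hoped-for positive effect, then re-test.
    trimmed_hits += z_p(sorted(data)[4:]) < 0.05

full_rate = full_hits / trials        # stays near the nominal .05
trimmed_rate = trimmed_hits / trials  # inflates well above .05
```

Whether one calls this falsification (the research record no longer accurately represents the data) or a questionable research practice, the statistical consequence is the same.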


Author information

Corresponding author

Correspondence to Justin T. Pickett.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (DOC 112 kb)

Appendices

Appendix 1: Demographics for Mechanical Turk Sample (Study 1) and GfK Knowledge Networks Sample (Study 2)

Variables | MTurk (Study 1) | Unweighted GfK (Study 2) | Weighted GfK (Study 2)
Age (mean) 34.49 46.86 47.39
 18–30 (%) 45 21 23
 31–45 (%) 39 28 24
 46–60 (%) 12 29 27
 61 or older (%) 4 22 25
Male (%) 58 49 48
Race
 White, non-Hispanic (%) 79 76 65
 Black, non-Hispanic (%) 6 8 12
 Other, non-Hispanic (%)a 6 6 8
 Hispanic (%) 7 10 16
Education
 No HS diploma (%) 1 7 12
 HS diploma (%) 9 28 30
 Some college (%) 26 19 20
 Associate’s degree (%) 11 9 8
 Bachelor’s degree (%) 37 21 17
 Graduate degree (%) 16 16 13
Income
 Less than $25K (%) 12 17
 $25–49.9K (%) 19 21
 $50–74.9K (%) 22 18
 $75–99.9K (%) 15 15
 $100–149.9K (%) 22 18
 $150K or more (%) 11 10
Region
 Northeast (%) 19 18
 Midwest (%) 24 21
 South (%) 34 37
 West (%) 23 23
Political ideology
 Very liberal (%) 15
 Liberal (%) 38
 Middle of road (%) 28
 Conservative (%) 16
 Very conservative (%) 4
Party identification
 Strong democrat (%) 16 17
 Not strong democrat (%) 15 15
 Leans democrat (%) 20 19
 Independent/other (%) 3 4
 Leans republican (%) 18 17
 Not strong republican (%) 14 12
 Strong republican (%) 15 14
  a. “Other” includes respondents of two or more races

Appendix 2: Survey Questions

Study 1

Fabrication and falsification condition
Description of behavior: These next few questions are about SCIENTIFIC RESEARCH. Scientific research includes studies and experiments conducted in a wide range of fields, such as psychology, medicine, education, pharmacology, genetics, economics, and political science
FALSIFYING and FABRICATING data are practices that some scientists use to get desired results. FALSIFYING data is dishonestly changing the scores or values in a real data set in order to get a specific result that one wants. FABRICATING data is faking a study and simply making up the data from scratch
Manipulation check: Now we would like to tell you about two specific scenarios
Scenario 1: A medical researcher is writing an article testing a new drug for high blood pressure. The results suggest that the drug is ineffective. She changes the participants’ scores in the data to make the drug seem more effective. Specifically, she subtracts 10 from the participants’ actual blood pressure scores to make them seem lower than they really were
Scenario 2: A psychologist wants to write an article about how thinking about money makes people less willing to cooperate with coworkers. He never actually collects any real data. Instead, he decides to just make up a data set that shows the desired result
Based on the above information, which of the following statements is true?
 ○   Only Scenario 1 is an example of FABRICATING data
 ○   Only Scenario 2 is an example of FABRICATING data
 ○   Both of the scenarios are examples of FABRICATING data
 ○   Neither of the scenarios is an example of FABRICATING data
Outcome variables: In your view, how MORALLY acceptable or unacceptable is it for scientists to falsify or fabricate data in scientific research?
 ○   Acceptable
 ○   Slightly acceptable
 ○   Neither acceptable nor unacceptable
 ○   Slightly unacceptable
 ○   Unacceptable
Do you think that scientists working at colleges or universities who are found to have falsified or fabricated data in scientific research should be FIRED, or not?
 ○   Yes, they should be fired
 ○   No, they should not be fired
Do you think scientists who are found to have falsified or fabricated data in scientific research should or should not be BANNED from receiving government funds in the future to conduct their research?
 ○   Yes, they should be banned
 ○   No, they should not be banned
Do you think it should or should not be a CRIME to falsify or fabricate data in scientific research?
 ○   Yes, it should be a crime
 ○   No, it should not be a crime
[Asked only if respondent answered yes to previous question]
What do you think the punishment should be for someone convicted of falsifying or fabricating data in scientific research?
 ○   A fine and/or probation
 ○   Incarceration for up to 1 year
 ○   Incarceration for more than 1 year
Selective reporting condition
Description of behavior: These next few questions are about SCIENTIFIC RESEARCH. Scientific research includes studies and experiments conducted in a wide range of fields, such as psychology, medicine, education, pharmacology, genetics, economics, and political science
SELECTIVE REPORTING is a practice that some scientists use to get desired results. Selective reporting involves scientists making choices in studies to get the results they want, but then failing to tell others about those choices. We are interested in your views about this practice. Examples of selective reporting include:
 (1)  A scientist runs several experiments, but only reports those experiments that show what he/she wants to see
 (2)  A scientist analyzes several outcomes (e.g., lung cancer, liver cancer, throat cancer) in a single study, but only reports the outcome that shows the results that he/she wants to see
 (3)  A scientist tries several different methods for analyzing the data, but only reports the results for the method that show what he/she wants to see
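The inflation produced by practices like these can be shown with a small simulation. The sketch below is our own illustration, not part of the survey instrument: it assumes a two-sided z-test with known variance and a sample size of 30 per experiment, and compares an honest scientist who reports a pre-specified experiment with one who runs three experiments of a true-null effect and reports only the most favorable one.

```python
import math
import random

def z_p(sample):
    """Two-sided p-value for H0: mean = 0, treating the sd as known (= 1)."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

rng = random.Random(1)
studies, honest_hits, selective_hits = 2000, 0, 0
for _ in range(studies):
    # Three independent experiments; in every one the true effect is zero.
    ps = [z_p([rng.gauss(0, 1) for _ in range(30)]) for _ in range(3)]
    honest_hits += ps[0] < 0.05       # report a pre-specified experiment
    selective_hits += min(ps) < 0.05  # report only the "best" experiment

honest_rate = honest_hits / studies        # near the nominal .05
selective_rate = selective_hits / studies  # near 1 - .95**3, about .14
```

With three tries at a 5% threshold, the chance of at least one "significant" result is 1 − .95³ ≈ 14%, nearly triple the advertised error rate; the same arithmetic applies to selectively reported outcomes or analysis methods.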
Manipulation check: Now we would like to tell you about two specific scenarios
Scenario 1: A medical researcher is writing an article testing a new drug for high blood pressure. When she analyzes the data with either method A or B, the drug has zero effect on blood pressure, but when she uses method C, the drug seems to reduce blood pressure. She only reports the results of method C, which are the results that she wants to see
Scenario 2: A psychologist conducts three experiments testing whether thinking about money makes people less willing to cooperate with coworkers. The results are mixed. Two experiments show zero effect on cooperation, but one shows an effect. However, when he writes the paper for publication, he only reports the one experiment that shows the results he wants
Based on the above information, which of the following statements is true?
 ○   Only Scenario 1 is an example of selective reporting
 ○   Only Scenario 2 is an example of selective reporting
 ○   Both of the scenarios are examples of selective reporting
 ○   Neither of the scenarios is an example of selective reporting
Outcome variables: In your view, how MORALLY acceptable or unacceptable is it for scientists to use selective reporting in scientific research?
 ○   Acceptable
 ○   Slightly acceptable
 ○   Neither acceptable nor unacceptable
 ○   Slightly unacceptable
 ○   Unacceptable
Do you think that scientists working at colleges or universities who are found to have used selective reporting in scientific research should be FIRED, or not?
 ○   Yes, they should be fired
 ○   No, they should not be fired
Do you think scientists who are found to have used selective reporting in scientific research should or should not be BANNED from receiving government funds in the future to conduct their research?
 ○   Yes, they should be banned
 ○   No, they should not be banned
Do you think it should or should not be a CRIME to use selective reporting in scientific research?
 ○   Yes, it should be a crime
 ○   No, it should not be a crime
[Asked only if respondent answered yes to previous question]
What do you think the punishment should be for someone convicted of using selective reporting in scientific research?
 ○   A fine and/or probation
 ○   Incarceration for up to 1 year
 ○   Incarceration for more than 1 year
Attention check included at the end of the survey in both conditions
  Recent research on attitudes about science and crime shows that attitudes are affected by news exposure. News includes local TV news, national TV news, newspaper news, and online news. More importantly, we are interested in whether you actually take the time to read directions. To show that you have read the instructions, please ignore the question below about how often you watch the news and instead select the “other” option and type “finished.”
How often do you watch the news?
 ○   Never
 ○   Yearly
 ○   Monthly
 ○   Weekly
 ○   Daily
 ○   Other (please specify)

Study 2

Description of behavior: This next question is about SCIENTIFIC RESEARCH. Scientific research includes studies and experiments conducted in a wide range of fields, such as psychology, medicine, education, pharmacology, genetics, economics, and political science
FALSIFYING and FABRICATING data are practices that some scientists use to get desired results. FALSIFYING data is dishonestly changing the scores or values in a real data set in order to get a specific result that one wants. FABRICATING data is faking a study and simply making up the data from scratch
An example of FALSIFYING data would be: A medical researcher is writing an article testing a new drug for high blood pressure. The results suggest that the drug is ineffective. She changes the participants’ scores in the data to make the drug seem more effective. Specifically, she subtracts 10 from the participants’ actual blood pressure scores to make them seem lower than they really were
An example of FABRICATING data would be: A psychologist wants to write an article about how thinking about money makes people less willing to cooperate with coworkers. He never actually collects any real data. Instead, he decides to just make up a data set that shows the desired result
Outcome variables: Do you think it should or should not be a CRIME to falsify or fabricate data in scientific research?
    ○   Yes, it should be a crime
    ○   No, it should not be a crime
[Asked only if respondent answered yes to previous question]
What do you think the punishment should be for someone convicted of falsifying or fabricating data in scientific research?
    ○  A fine and/or probation
    ○  Incarceration for up to 1 year
    ○  Incarceration for more than 1 year

About this article

Cite this article

Pickett, J.T., Roche, S.P. Questionable, Objectionable or Criminal? Public Opinion on Data Fraud and Selective Reporting in Science. Sci Eng Ethics 24, 151–171 (2018). https://doi.org/10.1007/s11948-017-9886-2

Keywords

  • Research misconduct
  • Fabrication and falsification
  • Questionable research practices
  • Researcher degrees of freedom
  • Publication bias
  • False positives