President Ronald Reagan once opined that “the nine most terrifying words in the English language are ‘I’m from the government and I’m here to help’” [7]. Of course, he said this more than 30 years ago, long before nightly dinnertime interruptions by telemarketers and email spam. If the Gipper were still around, his nine most feared words today might be: “We are conducting a brief survey to better understand …”

You’ll find most of us hiding under our desks when these requests come our way, whether by phone or email.

But as editors at Clinical Orthopaedics and Related Research®, we cannot hide from the fact that we receive many research studies based on email surveys, postal surveys, surveys of large single- or multispecialty collaborative groups, and surveys of society members. While some of these may be interesting, only a much smaller number are important and robust enough to justify the attention of CORR’s readers.

We assess studies of this design with the needs of those readers in mind. The studies we publish will, in general, share three traits:

First, these studies should tell readers something important that they did not know before. Simply summarizing what some group of experts (or community practitioners) prefers is, generally speaking, not of sufficient interest to publish here. Most of the time, practitioners are aware of the available options, and they usually also know when multiple options are in common use. The goal of a high-quality general-interest journal like CORR® should be to determine which option the best evidence supports; practice-pattern surveys and reports of provider preferences are at best Level-V evidence, and, as such, represent a poor basis for choosing a therapeutic approach. But we can imagine—and have published—exceptions to this. We recently published a practice-pattern survey demonstrating that an important element of fracture care in practice deviates from solid clinical evidence [5]; in the future, we might also consider practice-pattern surveys that present unexpected or counterintuitive findings, but by definition such studies are likely to be rare. By contrast, we especially enjoy survey studies that cause us to second-guess what we thought we knew, and we have published a number of these lately; a few recent examples are “Do Surgeons Treat Their Patients Like They Would Treat Themselves?” [3], “High Rates of Interest in Sex in Patients with Hip Arthritis” [4], “New Total Knee Arthroplasty Designs: Do Young Patients Notice?” [6], and “Do Orthopaedic Surgeons Acknowledge Uncertainty?” [8].

Second, the group surveyed must represent some well-defined larger group of interest. While the availability of free online survey tools like SurveyMonkey (www.surveymonkey.com) has made it easier to conduct anything from an intradepartmental questionnaire to an international assessment of expert opinion, these tools do not change the fact that high-quality social-science research generally is conducted by qualified social scientists. We would be surprised if a sociologist could develop and evaluate a surgical approach to the shoulder; it is no more reasonable to assume that a shoulder surgeon can conduct a valid survey study without expert guidance. A key element of survey-study design is defining the group of interest and finding a representative cohort within that group to query; to do this, it often helps to have at least one member of the research team with particular expertise in survey design. CORR is an international journal, and so we assess whether the surveys we publish address a need of a large enough subset of our readers. This will always be a judgment call, and so we depend on authors to make a strong case for why the population of interest and the population surveyed are large or representative enough to matter to our readership.

Finally, the proportion of those surveyed who responded must be large enough (and those who responded must be similar enough to the underlying group) to give the reader confidence that the responses are representative. We note that in the era of Internet surveys, the concept of “response rate” may mean different things in different settings [2]; for instance, the proportions of individuals viewing, starting, and completing a survey all may differ. For web surveys, it can be difficult or impossible to know how many individuals saw the invitation, and so it may be impossible to calculate what proportion of those exposed to the survey actually completed it. Differential response by individuals with greater interest in particular topics can severely bias the conclusions a survey draws. There is no “minimum” response rate that ensures accuracy, but higher is better; the lower the proportion of those invited who respond, the greater the need for the authors either to raise that proportion (reminders are one good way to do this, though not the only one [1]) or to convince readers that respondents did not differ from nonrespondents in important ways, which usually is a tall order.
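To illustrate how much damage differential response can do, consider a hypothetical example (the numbers here are ours, purely for illustration): Suppose 1000 surgeons are invited to comment on a new implant, 300 respond, and 60% of respondents favor it. If enthusiasts were more likely to reply, and only 30% of the 700 nonrespondents actually favor the implant, then the true proportion in the full group is (300 × 0.60 + 700 × 0.30)/1000, or 39%, far from the 60% the survey would report.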

We note that orthopaedic survey research is just a subcategory of orthopaedic research. Survey studies therefore are more similar to than different from the other studies we publish in CORR. That being so, many of the same principles apply: We look for a sound rationale for the study, testable research questions, justification for all major methodological decisions, clear reporting of results (including effect-size estimates and confidence intervals, where appropriate), and thoughtful discussion of key limitations, main findings, and take-home messages. But because there are important differences between survey studies and other clinical-research efforts, we recommend that those who design surveys avail themselves of one of the several in-depth checklists available for this kind of research [1, 2]. We encourage, but do not require, the use of these checklists. We will, though, consider the importance of the question, the group of interest (and the relationship of the group surveyed to the group of interest), and the response proportion as we assess survey research submitted to CORR, since these are the standards our readers apply as they read this kind of work.