Collecting Child Victimization Information from Youth and Parents: Ethical and Methodological Considerations

  • Heather A. Turner
Living reference work entry


Collecting up-to-date and accurate information on children’s exposure to violence is critical to understanding this important source of risk, tracking trends, and informing public policy on how to protect and promote the health and well-being of young people. This chapter reviews literature addressing a variety of ethical and methodological considerations when conducting research on child victimization. Ethical considerations include the potential for harm to participants, strategies for reducing risk of participation, obtaining informed consent, issues of confidentiality and mandatory reporting, and the use of participant incentives. Methodological issues addressed include unit and item nonresponse and data quality, the potential for recall problems, survey mode and social desirability bias, and the use of parent proxy versus self-reports of victimization. Recommendations that emerge from this literature and the need for additional research are discussed.


Child victimization research · Informed consent · Confidentiality · Participation risks · Survey mode · Data quality · Nonresponse · Children’s exposure to violence · Research

Ethical Considerations

Can Children Be Harmed by Asking About Child Abuse and Exposure to Violence?

Concerns about whether children might be harmed by participating in surveys on child victimization have generally focused on two issues: (1) psychological distress from the survey content and (2) harm to the child or child’s interests from others who might learn about the child’s participation or responses.

Risk of Generating Participant Distress

The possibility of psychological distress has generally been viewed as the result of two main mechanisms. One is that the child will be reminded of an upsetting or traumatic life event and will not be able to deal with the emotions that the memories provoke. Another is the possibility that the survey subject matter will be troubling to a sensitive child or will broach issues that the child is not developmentally prepared for, particularly concerning sex or sexual violence.

Existing evidence suggests that psychological distress among child participants in victimization surveys is unusual, but may be higher among some groups of youth. A systematic review of literature on conducting research on sensitive topics among adults and adolescents found that the percentage of adolescent participants reporting any level of “upset,” “distress,” or “discomfort” was typically low, with a median percentage across studies of 5.7% (McClinton Appollis et al. 2015). Similarly, a meta-analysis of 70 trauma-related studies on adults revealed low to moderate mean levels of distress across samples (mean of 2.3 on a scale of 1–5) (Jaffe et al. 2015).

Although most studies on this issue have found distress levels to be relatively low overall, there is some evidence that youth with trauma histories may more often be upset as a result of participating in victimization research than those without such histories. For example, in a national sample of 3,614 adolescents aged 12–17, Zajac et al. (2011) found that adolescents reporting traumatic experiences or mental health problems were significantly more likely to report distress compared to those who did not report such problems. Only 5.7% reported distress overall, but between 11.5% and 16.1% of youth who disclosed physical assault, physical abuse, or witnessing parental violence reported distress, and the distress rate was 20% among those who indicated sexual victimizations (Zajac et al. 2011). Langhinrichsen-Rohling et al. (2006), in an adolescent survey asking about drug use, suicidal behavior, and physical and sexual abuse, also found that youth with these sensitive experiences reported more frequent upset when completing the survey, although these experiences explained less than 7% of variance in upset ratings.

Two Finnish surveys with similar methodologies (Ellonen and Pösö 2011; Fagerlund and Ellonen 2016) examined child and adolescent feelings about participating in a computer-based self-report victimization study by analyzing free-text comments to the question “How did you feel about answering the questions?” Analyses in the first survey, conducted in 2008, focused on children’s experiences of nonsexual violent victimization (Ellonen and Pösö 2011), while the second (conducted in 2013) focused on sexually victimized children (Fagerlund and Ellonen 2016). In both studies, although most responses were neutral or positive, victimized youth were significantly more likely to describe negative feelings than nonvictimized youth. However, victimized youth in both surveys were also more likely to report positive feelings, such as feelings of relief, suggesting that victimized respondents experience more feelings in general in regard to the survey.

Age may be one factor associated with the likelihood of experiencing distress in victimization surveys. In an online study about violence, Ybarra et al. (2009) found a substantial number of the 10–15-year-old participants (23%) indicated that they were upset about the survey content. In this study, there were no significant differences between victims and nonvictims in upset for most types of victimization. However, Ybarra and colleagues found that youth who were upset by the survey were more likely to be younger, suggesting some content may be more appropriate for older teens. The Finnish surveys discussed above also found that younger youth (sixth grade students) were more likely than adolescents (ninth grade students) to report negative feelings. However, younger respondents were also more likely to report feelings of relief (Fagerlund and Ellonen 2016).

Notably, evidence suggests that discomfort is transient when it occurs. For example, Zajac et al. (2011) reported that only 0.8% of the distressed youth remained distressed at the end of the interview. The National Society for the Prevention of Cruelty to Children in the United Kingdom conducted perhaps the largest survey ever on youth victimization, with more than 6,000 participants, including 2,275 youth participants (aged 11–17) (Radford et al. 2011). Negative feelings were assessed at the close of the survey, and findings were reported for a subgroup of 191 participants (3%) whose cases were “red-flagged” for possible follow-up because of potentially serious reports or because they asked to speak to a counselor; 17% of this subgroup (33 youth) indicated they had been upset by the study. Eighty-two percent of these upset youth (27 of 33) nonetheless said that participation had been worthwhile.

The National Survey of Children’s Exposure to Violence (NatSCEV) also asked respondents at the end of the survey whether answering questions had upset them (Finkelhor et al. 2013). In NatSCEV, 4.5% of youth reported being at all upset and 1% reported being “pretty” or “a lot” upset. However, only a minority of those upset, 0.3% of the total sample, said they would not participate again had they known about the content. Even in this group, the regret about participation was mostly due to the length of the survey, not the types of questions being asked. Edwards et al. (2016) found that while 6% of youth participants in a small mixed-method dating violence study reported being upset by their participation, only 1.5% regretted participating in the study. Moreover, most of those reporting upset attributed it to peers’ opinions and the awkwardness of questions asked in a focus group setting. In contrast, a larger proportion of respondents indicated some level of upset in the National Survey of Youth in Custody (NSYC), which used ACASI methodology to assess sexual victimization among youth in state juvenile facilities (Smith and Sedlak 2011). Sensitive questions about sexual assault were asked of all youth 10 and over, and particularly explicit and detailed sexual questions were asked of youth 15 and older. In answering end-of-survey questions, 24% of all youth respondents indicated that some questions were upsetting and 15% said they would not do the survey again. However, in this same study, only 1% of youth requested a referral to a counselor outside their facility and less than 0.5% wanted to see a counselor within their facility.

It is also important to note that studies on psychological distress seldom distinguish between minor discomfort and the triggering of more severe psychological symptoms. The latter appear to be especially rare. The level of distress reported is generally mild and transitory and falls within the range of emotional distress that is considered an acceptable risk (Carter-Visscher et al. 2007; Ybarra et al. 2009).

Even when distress about survey content does occur, studies on children’s perceptions generally find a positive cost-benefit assessment, or a high percentage of children deeming the research useful (Chu et al. 2008; Kuyper et al. 2010; Widom and Czaja 2006). Similar to trauma research on adults which shows a variety of perceived benefits of participation (Jorm et al. 2007), research on sensitive topics among youth has found largely positive reactions (Chu et al. 2008; Kuyper et al. 2010; Widom and Czaja 2006), including feeling empowered when they believed their input would be used to help others (Cooper Robbins et al. 2011; McCarry 2012). Kuyper et al. (2010) found that victims of sexual coercion reported more distress and need for help due to their participation, but also reported more positive feelings about their participation than those with no sexual victimization experiences. This is consistent with the Fagerlund and Ellonen (2016) study discussed above, which also found sexual assault victims reported both more negative and more positive feelings about participation.

Some participants find the disclosure process itself beneficial and are glad to be able to talk to someone about something they cannot ordinarily discuss. Cromer et al. (2006) compared young adults’ reactions to being asked personal questions that were not trauma-related and questions specific to trauma histories. Participants reported that answering trauma-related questions was not more distressing than answering other personal questions and rated the trauma-related questions as more important and beneficial. Edwards et al. (2016) found that almost 50% of youth reported that they personally benefited from the study, such as feeling like they could better help friends in situations of dating violence. Youth may appreciate knowing that the problems they face are important to society and to adults and that adults are actively working on improving the lives of children.

Risk of Retaliation from a Third Party

In addition to psychological distress, the other often-discussed potential harm is harm to the child or the child’s interests as a result of participation. Parents or peers might attack or intimidate the child for a reason connected with participating in the study, for example, because they fear what the child may have disclosed. This kind of harm could also occur if information about the child (such as the fact of being victimized) somehow became known or suspected, exposing the child to stigma or ostracism. Such harm has sometimes been termed “informational” or “social” harm or risk.

Little has been written about the risk of retaliation, but existing evidence suggests that this risk is quite small when adequate safety mechanisms are in place. The most important safety mechanism is ensuring that the interview is conducted in privacy. Tens of thousands of youth have participated in surveys on victimization. These include thousands who have multiple contacts with researchers in longitudinal or panel designs, and who could potentially offer feedback about prior consequences of participation. Although we are not aware of any systematic attempt to assess the safety of interviews on youth victimization, we also do not know of any anecdotal reports whereby survey participation led to negative consequences for youth.

Many types of disclosure seem unlikely to lead to retaliation. These include disclosures by caregivers acting as proxies, as for children aged 2–9 in the NatSCEV protocol (Finkelhor et al. 2005b). Repeat disclosures, which include virtually everyone in clinical and law enforcement samples, as well as many in community surveys, seem unlikely to present a high risk of retaliation. Depending on the methodology, many perpetrators, particularly stranger or peer perpetrators, would have no way of knowing that a child had participated in a youth victimization study. Parents who give consent for participation will know about the interview, but it seems likely that a highly suspicious or guilty parent will refuse consent. Although hard to assess empirically, it is generally thought that the most severe cases of violence become refusals in community surveys. The worst violence reported on large national surveys seems unlikely to equal in intensity the worst violence known in clinical samples.

General Principles for Minimizing the Risks of Survey Participation

Over the years, researchers have developed a number of standards for minimizing the risk of surveying youth about adverse events (see, for example, Meinck et al. 2016).

Introducing the Survey

Surveys on sensitive topics can be made less potentially harmful or distressing by introducing the survey in a way that alerts the respondent to the types of questions that will be asked and sets the tone of the survey as nonjudgmental and confidential (also see section on survey mode and anonymity/confidentiality). As will typically be required by the institution’s IRB, information is generally offered up front that explains, in language appropriate to the audience (youth or caregiver, for example), the purpose of the interview and what to expect in terms of content. This should include information about the sensitive nature of some of the questions. It should indicate that there are no right or wrong answers, and remind participants that they can skip a question if they choose.

Modes of data collection that involve interviewing, whether in-person or by phone, require interviewers to be skilled and sensitive. They should be trained in basic interview skills such as building rapport, avoiding judgmental responses or expressions, clarifying individual questions, and allowing room for participants to decline to answer. Interviewers can practice so that their comments are positive and nonjudgmental. Interviewers can also be trained in how to recognize and deal with distress and be familiar with whatever steps are in place to offer further assistance or help to participants (see below).

Resources to Minimize Distress

Another common strategy to minimize possible distress is to offer information about sources of help to participants. Typically, this information is offered to all participants, and not just those who show signs of distress. It can be seen as a benefit offered by the research that respondents can use now or in the future.

Depending on the location of the survey, a variety of resources might be offered to participants. Many places have hotlines for distressed youth, mental health services, or child abuse helplines, such as the national Girls and Boys Town hotline. The names, addresses, or phone numbers of agencies that offer help can be provided, including community mental health centers, student health centers, or other local agencies. Respondents can also be directed to reputable self-help and support websites, which can be reached from any computer with internet access.

Steps to Minimize Risk of Retaliation and Informational or Social Risks

Preventing retaliation and informational/social risk is closely related to the ethical practice of maintaining confidentiality. The issues in this area overlap with the material discussed in that section, but a few of the main points are highlighted below.

One of the most important safety measures is providing or verifying that a youth respondent is in a safe and private place, where they can speak comfortably and confidentially during the survey. This should be asked explicitly in phone or internet surveys. This helps minimize harm by reducing the risk of retaliation from someone finding out what the child disclosed. In interview studies, this confidentiality can be increased by using self-administered questionnaires or computer-assisted self-interviews.

In classroom group administrations, another concern has been that victimized children might be inadvertently identified because they take longer to complete self-administered questionnaires. One possible strategy is to add unrelated questionnaire material to the end of the survey so that everyone is working until the allotted time has expired. In general, it may be useful to prepare or debrief participants in such a way that they can minimize unwanted exposure about their participation, for example, by discouraging them from talking about their participation with others.

The language used to obtain parental consent can also help ensure safety. Although caregivers must always be fully informed about the sensitive nature of the questions, it can be important to avoid language that might enhance their suspiciousness.

It is also often recommended that care be taken to avoid collecting so much demographic data that it could be used to identify specific individuals, even if names are not attached to the data. This is especially important in smaller samples. For example, if there is only one 7-year-old Filipino female with 5 siblings, the combination of age, ethnicity, gender, and number of siblings would allow someone to identify her data even without her name. Depending on the needs of the study, ways to handle this are to gather information at a more general level (for example, by using age ranges instead of specific ages), omit demographic information that is not needed to address research questions, or to ensure that the sample size is large enough so that any combination of demographic factors is likely to apply to many participants.
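The check described above can be automated before a dataset is shared. The sketch below is a minimal, hypothetical illustration (the function names, the threshold `k=5`, and the sample records are illustrative assumptions, not part of any cited study protocol): it counts how many respondents share each combination of quasi-identifying demographics, flags combinations held by fewer than `k` respondents, and shows one way to coarsen exact ages into ranges.

```python
from collections import Counter

def flag_identifiable(records, quasi_identifiers, k=5):
    """Return records whose combination of quasi-identifying fields
    is shared by fewer than k respondents (a k-anonymity-style check)."""
    key = lambda r: tuple(r[q] for q in quasi_identifiers)
    combo_counts = Counter(key(r) for r in records)
    return [r for r in records if combo_counts[key(r)] < k]

def generalize_age(record, width=5):
    """Coarsen an exact age into a range, one way to reduce identifiability."""
    lo = (record["age"] // width) * width
    return {**record, "age": f"{lo}-{lo + width - 1}"}

# Fabricated sample: one respondent has a unique demographic combination.
sample = (
    [{"age": 12, "gender": "F", "ethnicity": "Filipino", "siblings": 5}]
    + [{"age": 12, "gender": "M", "ethnicity": "White", "siblings": 1}] * 6
)

risky = flag_identifiable(sample, ["age", "gender", "ethnicity", "siblings"], k=5)
print(len(risky))  # prints 1: the single unique respondent is flagged
```

In practice, a flagged combination would prompt one of the remedies named above: generalizing a field (as `generalize_age` does), dropping it, or collecting more coarsely in the first place.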

Consent Procedures When Collecting Sensitive Data on Children and Youth

What Is Involved in Obtaining Informed Consent?

Information provided to parents and children. There are several pieces of information that are usually provided in order for parents and/or children to give informed consent for research (see, for example, American Psychological Association 2017; Council for International Organizations of Medical Sciences (CIOMS) and World Health Organization (WHO) 2008; Vitiello 2008):
  • The purpose of the research

  • How the participant was chosen for involvement

  • The expected duration of the research and what, if any, compensation they will receive for their participation

  • That participation is completely voluntary and refusal to participate will not result in any penalty or loss of benefits to which the respondent would be otherwise entitled

  • That, if they choose to participate, participants may decide to stop participation at any time and/or refuse to answer any question

  • That the information that they provide will be held confidential, and what (if any) exceptions to confidentiality apply. For example, if the researcher will report suspected incidents of child abuse to authorities (see above discussion), the principle of informed consent requires that this be disclosed in the informed consent procedure (American Association for Public Opinion Research 2014)

  • Whether there are direct benefits to the participant, including compensation. The participants should be told of the broader benefits of the study. For example, researchers might indicate that the findings from the study will help professionals and policy makers better understand the experiences of young people and develop better ways to help youth avoid or cope with violence

  • Any potential risks associated with participation. The researcher, for example, might state something like the following: “Although many youths enjoy participating in surveys of this type, some people may find certain questions upsetting or difficult to talk about”

One important consideration for informed consent is how detailed the study description should be. Although the information provided should not be misleading, most researchers try to avoid highly charged language (Hill 2005). For example, content on “child abuse” may be adequately described in a study introduction as “children’s exposure to violence, crime, and family conflict.” Surveys asking about exposure to sexual abuse or date rape, for example, might indicate that “questions will include some sensitive issues such as whether you have experienced unwanted sexual advances.”

Not every topic covered in a survey can be outlined in the consent process. Researchers generally agree that the most sensitive and potentially distressing content should be explicitly outlined, but it is not always obvious which questions respondents will perceive as most sensitive, and these perceptions may vary from individual to individual. Indeed, if measured by refusal rates, the survey question that often elicits the greatest concern on the part of participants, even in studies involving highly sensitive crime and abuse questions, is income (Tourangeau and Yan 2007). On the whole, using accurate but more general content descriptors may often be the best strategy.

How to make sure children understand. It is important in the consent process that the child understands the purpose of the research and what is involved in participating. This means that researchers should use age-appropriate language and avoid jargon and legal terminology. To confirm that the child understands, the researcher may want to ask the child, after hearing the consent statement, to describe his/her understanding of the study and its procedures. This strategy can help to establish the child’s competence to give consent/assent when respondents are younger children and/or when the researcher is concerned about the child’s level of comprehension. Few studies, however, have examined children’s perceptions of research participation and understanding of informed consent. Chu et al. (2008) did explore this issue with children aged 7–12 to assess whether they understood consent. The vast majority (87%) generally understood their rights as research participants (for example, their freedom to skip questions, stop at any time, and take a break). Furthermore, understanding of informed consent did not vary across trauma exposure groups (no trauma, noninterpersonal violence, and interpersonal violence). Other research suggests that consent procedures were at least moderately effective in explaining research rights to children aged 8, 10, and 12 (Hurley and Underwood 2002). However, understanding of consent/assent information and the voluntary nature of participation may differ somewhat for youth of different ages. Bruzzese and Fisher (2003) found that seventh graders, when compared to older youth, were less likely to fully understand their veto power over adult permission and their rights as research subjects. Tenth graders’ comprehension of these issues, however, was similar to that of adults.

Obtaining consent in self-administered formats, as opposed to through an interviewer, requires that the respondent read the consent/assent language rather than having it read to them (although ACASI format can allow the consent to be heard while the respondent reads along). Moreover, when an interviewer is not present, such as with online surveys, there is less opportunity to ensure that the consent was understood or even read at all (Simon Rosser et al. 2009).

A study addressing online assent processes among youth examined the effect of including questions about the assent information on youth’s willingness to complete the assent and their understanding of its content (Friedman et al. 2016). The researchers compared three randomly assigned groups: (1) youth only asked to read the assent information and indicate their willingness to participate, (2) youth who were required to answer two questions about study risks and the voluntary nature of participating as part of the assent process, and (3) youth who were required to answer seven questions about the assent content. A significantly greater percentage of participants from the two-question group (32.6%) and the seven-question group (40.5%) dropped out prior to completing the assent process, relative to the no-question group (13.4%). However, participants in the no-question group were significantly less likely to read and understand key study information. Study results are consistent with online studies of adults showing that “quizzing” about consent content increases the informed nature of consent, but at the cost of reduced participation rates (Kraut et al. 2004; O’Neil et al. 2003).

How to ensure that children are volunteering freely. Because of developmental immaturities and unequal power between children and adults, it can be more difficult to ensure that children are choosing freely to participate (Clacherty and Donald 2007; Powell et al. 2011). Children may want to avoid disappointing the researcher who may be viewed as an authority figure or parents who may have already given consent. Special efforts should be made to ensure that the child participant perceives the research as voluntary and that there will be no negative consequences in refusing participation. Researchers might say things like, “Although your experiences and opinions are important to us, it is completely OK if you do not want to participate in the study,” “No one will be angry or disappointed with you if you decide not to participate,” and “Remember that if you do decide to participate, you can still change your mind at any time, you can choose not to answer any question that you don’t want to answer, and you can stop at any time.” Interviewers should also be trained to monitor the child’s verbal and non-verbal cues throughout the interview. If the child displays hesitancy or discomfort, the interviewer can then ask the child if they wish to continue.

School-administered studies can pose particular problems for voluntary consent. When teachers are present or are administering the study, students may be concerned that refusing to participate could affect their grade or reputation with the teacher. They may also be concerned that nonparticipation could single them out for scrutiny, ridicule, or retaliation from peers. If no arrangement is made for children to have some alternative activity when a group administration is being carried out, the school may put pressure on children to participate. Little is known about the actual consequences of different school survey administration practices, but among those concerned about minimizing pressures on students, best practice is generally thought to involve having outsiders introduce, administer, and explain the study; emphasizing that participation will not affect grades or reputation; providing alternative activities for nonparticipating students; and allowing students to complete the survey in as much privacy as possible.

Consent Procedures and Mandatory Reporting of Research Participants

Mandatory reporting promotes a social good: protecting children from caregiver maltreatment. Unfortunately, not all children who are maltreated come to the attention of authorities in a timely manner. In theory, including researchers among mandatory reporters may be a way to increase the protection of children. Reporting possible maltreatment may also reduce the liability of universities, clinics, and other institutions.

Mandatory reporting can, however, conflict with other social goods: accurately documenting the extent of child maltreatment and minimizing the risks of research participation. Good data quality also promotes the safety of children by encouraging policy makers to dedicate resources to the issue, and by promoting awareness among child professionals.

Web-based and CASI/ACASI modes of data collection (discussed later) can allow anonymity of research responses, so that even if a researcher is present, he/she is blind to the content of interview responses. However, there are ways that researchers have implemented these anonymous survey protocols that also allow children who indicate wanting help, or who verbally disclose instances of abuse outside of the interview protocol, to receive referrals. For example, the National Survey of Youth in Custody (NSYC-1 and NSYC-2) used an ACASI mode of data collection that maintained anonymity of survey responses. However, all survey staff in direct contact with youth had to comply with state and local mandatory reporting requirements when a youth made a verbal statement suggesting abuse or neglect (Beck et al. 2013). Similar protocols were implemented for the Survey of Youth in Residential Placement (SYRP) (Sedlak et al. 2012) and the LongSCAN Study (Knight et al. 2000). Moreover, as discussed earlier, most surveys offer debriefing materials to all respondents, allowing any participant access to appropriate resources for assistance.

Do Disclosures Drop When Consent Forms Mention Mandatory Reporting or Limited Confidentiality?

Some research has demonstrated that warning participants that they may be reported to authorities substantially decreases disclosures, diminishing the benefits of the study. This substantially affects the risk-benefit assessments that most Institutional Review Boards (IRBs) conduct as part of their ethical evaluations (Penslar 1993).

In the area of parental maltreatment, there has been one experimental comparison of a consent form that limited confidentiality in cases of suspected abuse versus one that promised total confidentiality or anonymity (Ondersma and Chase 2006). In that study, the anonymous, fully confidential condition produced disclosure rates more than three times higher than the limited-confidentiality condition. Almost half of a sample of new mothers (48%) endorsed at least one of five Conflict Tactics Scale–Parent-Child (CTSPC) items (swearing, slapping, pinching, shaking, and insulting) in the total anonymity and confidentiality condition, whereas only 14% reported using any of these tactics in a condition that listed exceptions to confidentiality, including the potential to report abusive behaviors to Child Protective Services (CPS). These effects are particularly striking because none of the CTSPC items in the Ondersma and Chase study would normally be considered reportable abuse or neglect. The authors suggested that the typical research participant will be unsure as to where the line is drawn between acceptable discipline and abuse, and so will disclose less across the board.

There have been numerous other experimental comparisons of limits to confidentiality that measure what happens when participants are told their responses may be passed on to various authorities. These include studies of suicidal ideation in adolescents, contraceptive health care in adolescents, HIV partner notification, and depression (Dolbear et al. 2002; Lothen-Kline et al. 2003; Reddy et al. 2002). Completely confidential or anonymous data collection results in far more disclosures than consent protocols that warn about the possibility of reporting to some external authority or person. For example, in a natural experiment that required a change in research methodology during a randomized, longitudinal trial of an adolescent alcohol use prevention program (Lothen-Kline et al. 2003), results indicated that the prevalence of disclosing suicidal thoughts dropped significantly after the participants were informed that researchers would tell parents and professionals about any adolescent who endorsed any suicidal thoughts.

Langhinrichsen-Rohling et al. (2006) found significant differences in reported rates of suicidal ideation, suicide attempts, physical abuse, sexual abuse, and illicit drug use between adjudicated youth who completed the survey anonymously and similar adjudicated youth who completed it without anonymity. These latter participants were told, as required by their reporting guidelines, that responses indicating risk of suicidality or experiences of abuse would be disclosed to their probation officer. Results indicated that adjudicated youth experiencing this procedure had substantially lower rates than those in the anonymous survey condition, suggesting that mandatory reporting language discourages adolescents from disclosing important risk information and may reduce the validity of the obtained data. Moreover, lower disclosure rates were also associated with a small increase in the prevalence of youth who indicated they were “often” upset while completing the survey, suggesting that there may be a link between concerns about confidentiality and increased distress about participation.

The importance of confidentiality for obtaining accurate data is recognized by laws granting researcher-subject privilege to some agencies. In the United States, Title 13 grants privilege to census researchers, and similar protections cover Department of Justice researchers (Sieber 2001). As Sieber points out, census researchers may simply provide phone numbers for voluntary referrals. In Canada, Statistics Canada researchers are similarly protected from mandatory reporting obligations by the Statistics Act (Palys and Lowman 1999). U.S. federal Certificates of Confidentiality likewise recognize the importance of confidentiality for obtaining accurate data on sensitive topics.

One of the main benefits of child maltreatment and other victimization research is to help in crafting public policy by identifying the numbers of children in need and accurately accounting for the effects of victimization. When disclosures are suppressed, we vastly underestimate the extent of youth victimization. By including so many actual victims in the “nonvictim” group in our research (false negatives), we minimize the differences between victimized and nonvictimized youth on all of the variables we study. Inaccurate data can have an adverse effect on the availability of funding and services to address the problem of maltreatment, with the result that fewer children in need will be helped.
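The attenuation of group differences by false negatives can be illustrated with a toy simulation (all parameters below are invented for illustration, not estimates from any study): victims average higher distress scores, but half of them fail to disclose and are analyzed as nonvictims, which shrinks the observed victim-nonvictim gap.

```python
import random

random.seed(1)

def observed_vs_true_gap(n=10_000, prevalence=0.30, nondisclosure=0.50):
    """Compare the victim-nonvictim gap in a distress score when
    classification is accurate versus when half of victims do not disclose."""
    scores, is_victim, is_disclosed = [], [], []
    for _ in range(n):
        victim = random.random() < prevalence
        # Illustrative distress scores: victims average 1 point higher.
        scores.append(random.gauss(1.0 if victim else 0.0, 1.0))
        is_victim.append(victim)
        is_disclosed.append(victim and random.random() >= nondisclosure)

    def gap(flags):
        yes = [s for f, s in zip(flags, scores) if f]
        no = [s for f, s in zip(flags, scores) if not f]
        return sum(yes) / len(yes) - sum(no) / len(no)

    return gap(is_victim), gap(is_disclosed)

true_gap, observed_gap = observed_vs_true_gap()
print(f"gap with accurate classification: {true_gap:.2f}")
print(f"gap with nondisclosing victims misclassified: {observed_gap:.2f}")
```

The observed gap shrinks because the “nonvictim” comparison group is contaminated with nondisclosing victims whose distress scores match those of disclosed victims.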

Methodological Considerations

Nonresponse and Data Quality in Child Victimization Surveys

Nonresponse and data quality, whether at the unit level or item level, are influenced by a number of factors. These include: (1) respondents’ reactions to the survey mode, such as whether data collection involves the presence of an interviewer; (2) respondent motivation to complete the survey, influenced by the length of the survey, the respondents’ interest in the survey content, and the offering of an incentive; and (3) respondents’ ability to provide accurate answers, reflecting both their comprehension of the questions and their ability to recall the information being asked for. A number of these issues are discussed below as they apply to collecting sensitive data from youth.

Unit-Level Nonresponse: Refusals and Noncontacts

Unit-level nonresponse in survey research results largely from refusals and noncontacts. Refusal may come about due to lack of interest in the survey topic, perceptions that the survey is too long or that respondents do not have enough time to complete it, distrust of the survey’s purpose, doubts regarding anonymity or confidentiality, and language barriers (UN Office on Drugs and Crime 2010). Nonresponse is even more complicated for youth surveys, as they typically require several steps of contact and consent before an interview can begin: making contact with a parent/guardian, consent by the parent/guardian for the child to participate, making contact with the youth, and assent by the youth.

There is a limited amount of research on factors that predict parent consent and youth assent in surveys. Past studies have found that white, more highly educated parents and mothers are more likely to give consent for adolescent research participation than are minority, low-SES parents and fathers (Anderman et al. 1995; Esbensen et al. 1999). A recent study examined the correlates of parents’ decisions to consent to their child’s participation in an intergenerational survey (the Youth Development Study) (Hussemann et al. 2016). The researchers found that parents of biological children and parents who were offered a $75 incentive were significantly more likely to consent to their child’s participation than were stepparents or adoptive parents, or parents who were not offered an incentive. Another recent study examining reasons for parental consent or refusal for adolescents’ participation in sexuality research found that parental consent was largely motivated by perceptions of potential benefits and limited risks of participating in the study (Moilanen 2016). Parents indicating that they would be unlikely to consent viewed sexual topics as private matters and/or inappropriate for adolescents who were sexually inexperienced or immature. Those who indicated being likely to consent were more comfortable with the subject matter, viewed the decision as the adolescent’s choice, saw benefits in participation, and viewed research as valuable.

A recent meta-analysis of 15 studies demonstrated that active parental consent procedures not only reduce response rates (relative to passive consent) but also underrepresent black youth, males, and those engaging in higher levels of substance use (Liu et al. 2017). Although this suggests that differences between parents who provide consent and those who do not may over-represent participation by lower risk youth, it is unclear if and how this applies to victimization-related surveys.

Survey Mode and Disclosure of Sensitive Information

Mode of data collection has been found to be an important factor influencing respondent disclosure in sensitive surveys (Holbrook et al. 2003; Metzger et al. 2000; Tourangeau and Yan 2007). The main distinction among the different modes is whether the questions are interviewer-administered or self-administered. Interviewer-administered modes include paper-and-pencil personal interviews (PAPI), computer-assisted personal interviews (CAPI), and computer-assisted telephone interviews (CATI). Self-administered modes include paper-and-pencil self-administered questionnaires (SAQ), computer-assisted self-administered interviews (CASI), audio computer-assisted self-interviewing (ACASI), interactive voice response (IVR), and web surveys.

Research has provided relatively consistent evidence that the presence of an interviewer reduces the likelihood of eliciting unbiased information from respondents about sensitive topics. Tourangeau et al. (2000), for example, reviewed several mode comparison studies of adults and found a significant increase in reporting of sensitive information including drug use, sexual partners, and abortion when using any self-administered modes of data collection relative to any interview mode.

The presumed basis of disclosure differences in interviewer-administered versus self-administered modes is related to social desirability bias, the tendency for respondents to answer questions in ways that would be viewed more favorably by others. In other words, respondents may answer untruthfully in order not to be seen in a negative light by the interviewer. Self-administered modes offer the ability for respondents to answer questions without face-to-face interactions, presumably allowing them to express socially undesirable opinions or feelings or disclose stigmatizing behaviors that would make them uncomfortable in the presence of others.

A study (PEW 2015) comparing phone administration and online survey modes found substantial differences, with the Web-based survey less likely to produce socially desirable answers than telephone interviews. For example, questions asking respondents to rate the quality of their family and social life produced differences of 18 and 14 percentage points, respectively, with those interviewed on the phone reporting higher levels of satisfaction than those who completed the survey on the Web. Questions about societal discrimination against several different groups also produced large differences, with telephone respondents substantially more likely to agree that gays and lesbians, Hispanics, and blacks face a lot of discrimination. Web respondents were substantially more likely than those interviewed on the phone to give various political figures a “very unfavorable” rating.

The same processes appear to operate when collecting sensitive data on youth. For example, in a national US survey, 16% of 15-year-old boys reported in a personal interview that they had engaged in vaginal intercourse, but 25% said they had done so when CASI was used (Mosher et al. 2005).

Although researchers now largely agree that self-administered formats may lead to more candid and potentially more socially undesirable responses, relative to modes in which an interviewer is involved, fewer studies have addressed variations in socially desirable responding across different types of self-administered formats (Kreuter et al. 2008). Relevant to this issue, Kreuter et al. (2008) found that respondents (randomly assigned to interactive voice response (IVR), Web, and traditional CATI administration) had the highest level of reporting of sensitive information and the greatest reporting accuracy under the Web administration mode, followed by IVR, and finally CATI. However, they point out that no mode of data collection dominated the other two with respect to all outcomes. Each of the three main outcome variables – unit nonresponse, item nonresponse, and reporting accuracy – yielded a different ranking of the modes: CATI had the best response rate and the Web the lowest; CATI had the highest rate of item missing data and the Web the lowest; and the Web had the highest levels of reporting accuracy and CATI the lowest.

Gnambs and Kaspar (2015), in a meta-analysis of survey experiments, compared self-administered paper-and-pencil surveys (PAPI) versus computerized surveys (CASI and ACASI) on rates of several sensitive behaviors for which misreporting has been frequently observed. The results revealed that computerized surveys led to significantly more reporting of socially undesirable behaviors (about 1.5 times higher) than comparable surveys administered on paper. This effect was strongest for highly sensitive behaviors and surveys administered individually to respondents.

On the other hand, it has been suggested that candor in Web-based surveys may be declining because individuals have become increasingly concerned about problems with privacy on the internet (Fogel and Nehmad 2009; Young and Quan-Haase 2009). As a result, their responses might show greater social desirability bias as part of an effort to control personal information. In a mixed methods study, Wallace et al. (2014) found that online surveys resulted in reduced rates of socially undesirable responses to open-ended questions, relative to a paper-and-pencil version of the self-administered survey.

All in all, although there have been fewer methodological studies comparing mode differences in surveys involving youth, it appears that self-administered surveys may yield more accurate data when the topic is sensitive in nature. According to Krumpal (2013), “methods of self-administration, minimizing the presence of the interviewer, seem to increase respondents’ privacy, to reduce feelings of jeopardy and to decrease subjective probabilities of painful emotions like shame and embarrassment associated with the presence of an interviewer thus generating more honest answers to sensitive questions” (p. 2034). Of course, there are potential tradeoffs in collecting data from youth without the presence of an interviewer, such as the lost opportunity to provide clarification if needed, the lack of interaction that may keep youth motivated to complete the interview, and a reduced ability to assess response fatigue. It seems likely that the benefits and costs of these tradeoffs will vary by the age of the child.

The choice of mode can also importantly influence the content of consent language discussed earlier. If IRBs require researchers to report disclosures of particular types of victimization, such as physical maltreatment, then modes in which the researcher has access to identifying information of respondents may necessitate that consent language outline circumstances of limited confidentiality. However, survey modes that allow completely anonymous data collection, such as certain types of Web-based administrations, can potentially allow the researcher to avoid reporting requirements (as discussed earlier) and reassure respondents, in the study introduction and consent form, that the information collected will not be disclosed to authorities.

Using Incentives in Child Victimization Research

Incentives have a motivating effect on participation. “Incentives generally help mediate or overcome the many different disincentives or contextual reasons influencing the decisions of young people and the adults around them about whether or not to participate in research. Incentives may operate extrinsically, intrinsically, or in a mixture of both of these modes…extrinsic (or external) incentives operate when rewards such as payments are offered to subjects for participating. Intrinsic (or internal) incentives, by contrast, operate when the research participation is motivated by the subject’s own values or commitment to the research topic” (Seymour 2012, p. 52). Although efforts to bolster “intrinsic” incentives by trying to convince respondents of the importance and value of their participation can sometimes be helpful, also providing material compensation, typically in the form of payment or gifts, further contributes to the quality of research by helping to maximize participation and reduce the likelihood of sample bias. Moreover, compensation places value on the participant’s time and effort and communicates that they are appreciated (Council for International Organizations of Medical Sciences (CIOMS) and World Health Organization (WHO) 2008). Indeed, it can be argued that since researchers, interviewers, and others involved in conducting studies get paid for their work, lack of compensation for participants’ time and effort is exploitative.

However, the use of incentives in research involving youth (and adults, for that matter) remains a subject of debate. Some researchers have worried that “any payments, however fair, may still bribe or coerce people into taking part” (Alderson and Morrow 2011). Compensation that represents a “token of appreciation” is typically seen as more appropriate than large remunerations that may make it difficult for respondents to think clearly about their interests and needs, and perhaps undermine voluntary participation. What is considered appropriate compensation will differ by the amount of time and effort that respondents must commit to participate and by the economic context of the population from which participants are recruited (Clacherty and Donald 2007; Powell et al. 2011).

Singer and Couper (2008) argued that, in order to exert undue influence, larger incentives must induce respondents to accept risks they would not accept with smaller ones. None of the published experiments, including their own, has found evidence to this effect. Larger incentives induce greater participation than smaller ones, but they do so equally whether the perceived risks are small or large. Similarly, larger risks induce less participation than smaller ones, regardless of the size of the incentive. More important than the size of the incentive, according to these researchers, are ethical considerations concerning informed consent and protections against harm. “Respondents must understand the benefits as well as the risk of harm of participation… researchers and IRBs have a responsibility to eliminate unnecessary risks (e.g., to institute adequate disclosure protections for sensitive data) and to reduce those that remain to a minimum (e.g., arrange for interviews in settings that will not expose respondents to the view of potentially dangerous others)” (pp. 7–8).

Little research has directly addressed the impact of incentives on youths’ (and their parents’) motivations to participate. Henderson et al. (2010) found, in a longitudinal study of adolescents, that direct monetary rewards were associated with substantially better retention rates across waves than offering no incentive or lottery incentives of varying amounts. Martinson et al. (2000) found that both monetary and lottery-style incentives increased the response rate to postal questionnaires about smoking among respondents aged 14 to 17, with the greatest effects on response rates seen for monetary rewards. Datta et al. (2001) found, in an analysis of incentive use in the National Longitudinal Survey of Youth (NLSY), that monetary incentives particularly increased the response rates of harder-to-reach young people, with the size of the incentive being an important factor. In a study of young adults, Collins et al. (2000) also found that the size of the monetary incentive was particularly important, with a 25% increase in payment resulting in a 7% increase in response rate.

Seymour (2012) concludes, “It is important to take a balanced view that does not reject all extrinsic incentives as unsuitable in research with young people. Ethically used and sensitively developed, extrinsic incentives can complement powerful altruistic intrinsic motivations to improve the research experience for young people and researchers.”

Issues of Victimization Recall and Data Quality

In surveys that rely on reports from youth and parents about victimization, questions have been raised about the quality of the reports. Most studies assessing reliability of childhood victimization and other forms of adversity have been conducted on adults. These generally suggest the most common type of misreporting is underreporting – that is, individuals who experienced childhood abuse, for example, report not having been abused in childhood (Fergusson et al. 2000; Hardt and Rutter 2004).

Although some researchers have suggested underreporting may arise from traumatic dissociation (Williams and Finkelhor 1995), another explanation is that adult respondents may have simply forgotten since most respondents are asked to recall experiences that occurred many years, often decades, ago (Widom and Shepherd 1996). For this reason, researchers have suggested that studies asking about events that occurred in more time-proximate periods will be more accurate. Therefore, direct surveys of children and youth are preferable when trying to determine incidence and prevalence rates of childhood events (Saunders and Adams 2014).

That being said, there have been a number of studies that address the reliability of reports of major childhood events in surveys of adults. Reliability studies generally examine the consistency of self-reported experiences, such as abuse, by asking about childhood exposure in the same manner on two or more occasions. A study by McKinney et al. (2009) examined the reliability of child physical abuse (CPA) reports at two time points, 5 years apart, among a sample of adults. They found fair to moderate reliability of adult self-reported CPA for most act-specific questions about CPA, with kappas ranging from 0.37 to 0.46. Fergusson et al. (2000) studied the stability of child abuse reports from a longitudinal birth cohort study of New Zealand young adults, who were questioned at the ages of 18 and 21 about their childhood exposure to physical punishment and sexual abuse. The researchers also found that reports had fair to moderate agreement, with kappa values for test–retests of abuse around 0.45. Another study, by Pinto et al. (2014), evaluated the reliability of self-reports of young adults who were identified in childhood by Child Protective Services (CPS). Comparing reports of a variety of adverse childhood experiences across two evaluations 6 months apart, they found good to excellent agreement (ICC values were greater than or equal to 0.65 across 10 categories of adversity, with physical abuse having the highest ICC value).
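Cohen’s kappa, the agreement statistic reported in these test–retest studies, discounts raw agreement by the agreement expected by chance alone. A minimal sketch of the computation (the 2×2 counts below are invented for illustration, not taken from any of the cited studies):

```python
def cohens_kappa(yes_yes, yes_no, no_yes, no_no):
    """Cohen's kappa for a 2x2 table of paired yes/no reports
    (e.g., the same abuse item asked on two occasions)."""
    n = yes_yes + yes_no + no_yes + no_no
    observed = (yes_yes + no_no) / n
    # Chance agreement from the marginal "yes" rates of each occasion.
    p1 = (yes_yes + yes_no) / n
    p2 = (yes_yes + no_yes) / n
    expected = p1 * p2 + (1 - p1) * (1 - p2)
    return (observed - expected) / (1 - expected)

# Invented example: 100 respondents asked about the same event twice.
kappa = cohens_kappa(yes_yes=14, yes_no=11, no_yes=9, no_no=66)
print(round(kappa, 2))  # 0.45: "fair to moderate" agreement
```

Note that 80% raw agreement here reduces to a kappa of about 0.45 once chance agreement (driven by the large “no/no” cell) is removed, which is why the studies above report kappa rather than simple percent agreement.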

Findings from several studies of adults suggest that more serious or traumatic childhood events are more consistently reported than less serious ones. For example, McKinney et al. (2009) found that respondents who reported moderate (e.g., hitting with hand by caregiver) or only one type of child physical abuse were more likely to be inconsistent reporters of child physical abuse than respondents reporting severe (e.g., hitting with an object) or multiple types of child physical abuse. These results are also consistent with Aalsma et al.’s (2002) study, which found that respondents who endorsed more than one item on a four-item child sexual abuse measure were more than five times as likely to be consistent reporters of child sexual abuse as those who endorsed only one item. Costello et al. (1998) likewise found that more serious or traumatic childhood events, such as physical violence exposure, were more consistently reported than less serious ones. This is consistent with neurobiological research on memory showing that stress is involved in regulating various memory processes, often having memory-enhancing effects that help individuals retain information (de Quervain and McGaugh 2014). Thus, stress induced just prior to encoding has been shown to preserve or enhance memory for negative emotional events, relative to neutral events (Payne et al. 2007).

One additional concern regarding recall problems is recall bias. Widom et al. (2004), for example, argued that current health status can influence the recall of prior experiences, such that currently distressed respondents are more likely to recall negative events in the past. This can be a serious problem when assessing impact, since recall bias can inflate the dose-response relationship between stress exposure and a psychological outcome. These types of state-dependent or mood-congruent recall processes have been suggested by a number of studies (Kihlstrom et al. 2000; King et al. 2000; Schraedley et al. 2002). This type of bias has not been supported by all studies, however. Pinto et al. (2014), for example, did not find a significant correlation between changes in self-reported experiences and changes in self-reported symptoms. This is consistent with other studies that found reports of adverse childhood experiences to be unrelated to health state at the time of the report (Brewin et al. 1993; Fergusson et al. 2000; Monteiro and Maia 2010).

The problem of unreliable and/or inaccurate recall is reduced when questions are unambiguous and behaviorally specific. Focusing on specific behaviors avoids the need for respondents to make judgments about their experiences. Ambiguous questions are more likely to lead to different interpretations at different moments in time. This is consistent with Dohrenwend’s (2006) critique of life events measurement, arguing that event items often represent broad categories of events that contribute to unreliability of recall, since respondents can have a variety of experiences in mind when responding. The author suggests that such “intra-category” variability, and resulting recall problems, are reduced substantially when life event items are less ambiguous by specifying particular inclusion and/or exclusion criteria.

The issue of “telescoping” events in victimization surveys is also related to recall and can importantly influence the quality of prevalence estimates. Telescoping occurs when respondents recall an event but incorrectly date it as having happened earlier or later than it actually did (Daigle et al. 2016; Gaskell et al. 2000). There are two types of telescoping: “forward” and “backward.” Forward telescoping occurs when an event is erroneously remembered as having occurred more recently than it did; in other words, the respondent pulls more distant events into the time frame being asked about (e.g., past year). A backward-telescoped event is erroneously remembered as having occurred earlier than its actual date. In general, empirical data show that forward telescoping is more likely to occur than backward telescoping (Zineil 2008). Typical strategies to address telescoping include a “life calendar” approach, where landmark events that are highly salient in the respondent’s life are used to mark the beginning of the reference period and increase the accuracy of event timing. Another common strategy, used primarily in panel studies, is “bounding,” whereby a prior interview is used as a temporal point of reference for the respondent. This latter technique has been used in the NCVS (National Criminal Justice 2014), and some research suggests that bounding in the NCVS reduced measurement error by helping to guard against overestimating incidents of victimization (Planty 2003). Unfortunately, these strategies are time consuming (i.e., the life calendar) and/or expensive (multiple interviews) and are less feasible for self-administered formats.
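The way forward telescoping inflates a past-year prevalence estimate can be sketched with a toy simulation (every rate below is invented for illustration): a fraction of respondents who experienced only an older event misdate it into the past-year reference period, so the estimated rate exceeds the true rate.

```python
import random

random.seed(7)

def estimated_past_year_rate(n=100_000, true_rate=0.10,
                             older_event_rate=0.15, telescope_prob=0.20):
    """Share of respondents reporting a past-year victimization when some
    older events are misdated forward into the one-year reference period."""
    reports = 0
    for _ in range(n):
        had_recent = random.random() < true_rate
        had_older = random.random() < older_event_rate
        # Forward telescoping: an older event is recalled as past-year.
        telescoped = had_older and random.random() < telescope_prob
        if had_recent or telescoped:
            reports += 1
    return reports / n

estimate = estimated_past_year_rate()
print("true past-year rate: 0.100")
print(f"estimated rate with forward telescoping: {estimate:.3f}")
```

Bounding with a prior interview works, in these terms, by screening out the telescoped events at the start of the reference period, pulling the estimate back toward the true rate.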

There are also developmental issues that need to be considered when surveying youth. Data quality in youth surveys will inevitably depend on age-related cognitive development. Borgers et al. (2000) summarized literature on the stages of cognitive development that characterize different age groups and outlined their implications for data quality. According to the authors, when children reach the stage of “concrete operations” (ages 8–10), they can be successfully surveyed. They are beginning to understand temporal relations and classification but are still very literal in their interpretation of questions, so question wording needs to be simple, unambiguous, and free of negatively phrased items. They are also prone to losing interest and concentration and are particularly susceptible to response sets, especially when they lose motivation or do not understand a question. The researchers suggest that the use of CASI formats for this age group may be helpful in reducing item nonresponse and increasing interest in participating. The researchers labeled the developmental stage from ages 11 to 15 “formal thought.” Cognitive functioning is well developed at this stage, respondents can give consistent answers, and standardized questionnaires similar to those used with adults can be successfully employed. However, they point out that this age group is very context sensitive, and differences in study location and the presence of others can influence data quality. Lack of motivation and boredom are particular problems in this age group. According to the researchers, 16- and 17-year-olds can more or less be treated as adults in surveys, but again, the presence of others can importantly influence data quality, especially when asking sensitive questions.
Finally, the researchers point out that reading ability will affect data quality in all age groups; children with lower reading scores tend to produce more missing data and internal consistency of multi-item scales tends to increase with the age (education) of the child, although age-related effects are small (Borgers et al. 2000).

Although little research has addressed the reliability of recall of child victimization events among youth, the literature reviewed above suggests some implications. First, developmental research on the cognitive capacities of children suggests that youth from around the age of 10 can consistently understand and report on events that happen to them, provided that item wording is unambiguous and the reading level appropriate (Borgers et al. 2000). Second, victimization measures that are comprised of behaviorally specific items, like the Juvenile Victimization Questionnaire (JVQ), will be associated with better and more accurate recall of events, relative to measures comprised of broader or more ambiguous items. Third, more serious victimizations, such as sexual assault, and victimizations that are chronic or repeated, such as bullying, may yield more accurate lifetime recall than less serious or isolated, single-occurrence victimizations. Finally, while recall problems may lead to underestimates of the prevalence of victimization, false positives (reporting events that did not occur) are probably not a major concern.

Utilizing Caregiver Proxy Reports of Violence Exposure among Younger Children

Research on the accuracy of parents as proxy reporters of children’s victimization is limited. Most studies relevant to this issue have assessed the level of correspondence between parents and children on victimization reports.

In general, studies suggest that agreement across different reporters is higher when the questions being asked are objective and observable, such as when asking about the occurrence of an event, rather than more subjective, such as inquiring about quality of life (Rajmil et al. 2013). Similar to their benefits for improving recall discussed earlier, event measures that are behaviorally specific and unambiguous are more likely to yield greater consistency across reporters. High agreement, however, will depend on all parties’ knowledge of the event. In terms of proxy reports in violence research, there is some research suggesting that parents generally report fewer victimization events that occur at school (Harper et al. 2012; Holt et al. 2009) or in the neighborhood (Ceballo et al. 2001; Hill and Jones 1997), such as witnessing community violence (Lewis et al. 2010). However, this appears to be less of a problem with younger children (Ceballo et al. 2001), because parents of younger children spend more time directly caregiving and supervise their activities more closely, and because younger children disclose more to their parents than older children do. In contrast to school and community violence, there is some evidence that parents provide as many or more disclosures of family-perpetrated violence, or violence that occurs at home, relative to youth (Grych 1998; Jouriles and Norwood 1995; Raviv et al. 2001; Thomson et al. 2002).

A recent study (Compier-de Block et al. 2017) found that, although correspondence between parent and youth pairs was modest, parents and children on average reported an equal level of emotional and physical abuse. The researchers also found that there was more reporting convergence between parents and younger children on emotional abuse than between parents and older children (adolescents). However, in comparison to their children, parents reported somewhat less emotional neglect. This latter finding is consistent with research showing greater agreement for more objective, behaviorally specific items. As Compier-de Block et al. (2017) point out, emotional neglect is a less tangible subject than acts of abuse since it encompasses acts of omission (e.g., the absence of expressions of warmth), which may make it more difficult for parents to recognize and report.

Another way that this issue has been addressed is to compare rates of victimization among youth of similar ages, when one age group’s rates were based on youth self-reports and the other’s on parent proxy reports. Specifically, Finkelhor et al. (2005a) compared caregiver proxy respondents describing past-year victimizations of their 8- and 9-year-olds with the self-reports of children ages 10 and 11 describing their own experiences. Only peer or sibling victimization and assault showed significant differences, with caregiver proxies reporting more incidents. This may simply reflect actual developmental differences in peer/sibling-perpetrated exposures. Of particular note were the equivalent levels of parental maltreatment reported by both caregiver proxies and self-reporting children, helping to dispel concern about caregiver reticence to report on this topic.

Summary and Key Points

The following summarizes the key points of this review of ethical and methodological issues involved in collecting child victimization data from youth and parents. Although fewer studies directly address these issues with respect to conducting youth victimization surveys, compared with the much larger literature on general surveys of adults, several conclusions and recommendations can be derived from the literature reviewed in this chapter.

Participant distress. Research on the impact of asking youth about (and having them disclose) victimization events in surveys shows that distress is relatively rare and, when it occurs, generally mild and short-lived. However, youth who disclose victimization events often report more distress than those who were not exposed to such events, and distress appears to be more likely among younger children (e.g., ages 10–12) who disclose victimization. Even youth who report some level of upset usually indicate that they do not regret participating, and they often report both positive and negative feelings about the survey. As a whole, the literature suggests that youth victimization surveys pose relatively little risk to participants.

Minimizing risk. Several strategies, most of which reflect standard IRB guidelines and protocols, have been employed in youth victimization research to minimize risk to participants. These include ensuring that consent/assent is informed and participation is voluntary, that anonymity or confidentiality of survey data is maintained, and that relevant resources are made available to participants who are distressed or desire information or assistance. It is crucial that consent/assent language be simple and developmentally appropriate and that youth understand the voluntary nature of their participation. The limited research that has been conducted suggests that youth generally understand their rights as research participants and that such understanding does not differ by trauma exposure, although younger youth may be less likely to fully understand their right to refuse when parents have given permission.

Mandatory reporting. Informing respondents about mandatory reporting reduces their willingness to disclose sensitive information. As such, consent language that includes statements about mandatory reporting of child abuse, for example, is likely to lead to nonparticipation by high-risk respondents and/or underreporting of such incidents. This can create serious problems, since a crucial goal of child victimization research is to help craft public policy by identifying the numbers of children affected. Web-based and CASI/ACASI modes of data collection can preserve the anonymity of responses so that, even if a researcher is present, he or she is blind to the content of interview responses. Thus, in these self-administered formats, mandatory reporting is not an issue when disclosures are made in response to survey questions.

Survey mode and disclosure. Most research comparing survey modes finds that, when collecting sensitive information, self-administered surveys yield significantly more disclosures than modes involving the presence of an interviewer (whether on the phone or in person). However, while web-based survey formats are often associated with the greatest disclosure of sensitive information (and the least social desirability bias), they also typically yield the lowest response rates.

Report reliability. Research shows that victimization event measures that are unambiguous and behaviorally specific increase reliability and reduce recall problems. Developmental research on the cognitive capacities of children suggests that youth from around the age of 10 can understand and consistently report on events that happen to them, provided that item wording is unambiguous and appropriate to their reading level. Although research on this issue is limited, there do not appear to be any major impediments to gathering self-report information from children as young as age 10.

Incentives. Studies on the use of incentives in research, among both adults and youth, have generally found that participation is significantly increased when incentives are offered, especially monetary incentives. However, the use of incentives, especially with youth, remains controversial. Some have expressed concern that monetary compensation will exert undue influence on youth's decision to participate, while others have suggested that ethically administered extrinsic incentives are fully appropriate. Although research on this topic is extremely limited and provides little guidance, we are aware of no research to date demonstrating harmful outcomes associated with providing monetary incentives to youth (or parent) participants in victimization surveys. The child victimization field would benefit from additional studies that assess youths' perceptions of incentives and how these relate to participation patterns.

Parental proxy reports. Several studies show moderate concordance between parent and child reports of victimization, with children reporting somewhat more victimization events that occur at school and in the neighborhood, and parents reporting somewhat more events that occur at home. Analyses specific to NatSCEV are encouraging, with similar rates for nearly all forms of victimization among 8- and 9-year-olds (the oldest group using parent proxy reports) and 10- and 11-year-olds (the youngest self-report group). Although the literature is limited, it does not signal serious concern about parental proxy reporting for younger children.


A crucial benefit of child maltreatment and other victimization research is its role in crafting public policy by identifying the numbers of children in need and accurately accounting for the effects of victimization. As such, it is critical to collect data on these sensitive issues with as much accuracy as possible while also reducing risks to participants. As this literature review reveals, the research process for obtaining this important information involves numerous challenges. The broader trend of declining response rates in all types of survey research, together with the particular difficulties of assessing the victimization experiences of children and youth, makes the research process especially challenging. At the same time, much of the methodological research reviewed, while not extensive, suggests promising directions. At a minimum, given the substantial importance of obtaining up-to-date and accurate information from children and parents on violence exposure, it is clear that more research is needed. An urgent priority should be studies that explicitly address the methodological and ethical challenges of obtaining data on child victimization in both clinical and community settings.



  1. Aalsma, M. C., Zimet, G. D., Fortenberry, J. D., Blythe, M., & Orr, D. P. (2002). Reports of childhood sexual abuse by adolescents and young adults: Stability over time. Journal of Sex Research, 39(4), 259–263.
  2. Alderson, P., & Morrow, V. (2011). The ethics of research with children and young people: A practical handbook. Los Angeles: Sage.
  3. American Association for Public Opinion Research. (2014). AAPOR guidance for IRBs and survey researchers.
  4. American Psychological Association. (2017, January 1). Ethical principles of psychologists and code of conduct.
  5. Anderman, C., Cheadle, A., Curry, S., Diehr, P., Shultz, L., & Wagner, E. (1995). Selection bias related to parental consent in school-based survey research. Evaluation Review, 19(6), 663–674.
  6. Beck, A. J., Cantor, D., Hartge, J., & Smith, T. (2013). Sexual victimization in juvenile facilities reported by youth, 2012. Washington, DC: U.S. Department of Justice, Office of Justice Programs, Bureau of Justice Statistics.
  7. Borgers, N., De Leeuw, E., & Hox, J. (2000). Children as respondents in survey research: Cognitive development and response quality. Bulletin de Méthodologie Sociologique, 66(1), 60–75.
  8. Brewin, C. R., Andrews, B., & Gotlib, I. H. (1993). Psychopathology and early experience: A reappraisal of retrospective reports. Psychological Bulletin, 113(1), 82–98.
  9. Bruzzese, J.-M., & Fisher, C. B. (2003). Assessing and enhancing the research consent capacity of children and youth. Applied Developmental Science, 7(1), 13–26.
  10. Carter-Visscher, R. A., Naugle, A. E., Bell, K. M., & Suvak, M. K. (2007). Ethics of asking trauma-related questions and exposing participants to arousal-inducing stimuli. Journal of Trauma & Dissociation, 8(3), 27–55.
  11. Ceballo, R., Dahl, T. A., Aretakis, M. T., & Ramirez, C. (2001). Inner-city children's exposure to community violence: How much do parents know? Journal of Marriage and Family, 63(4), 927–940.
  12. Chu, A. T., DePrince, A. P., & Weinzierl, K. M. (2008). Children's perception of research participation: Examining trauma exposure and distress. Journal of Empirical Research on Human Ethics: An International Journal, 3(1), 49–58.
  13. Clacherty, G., & Donald, D. (2007). Child participation in research: Reflections on ethical challenges in the southern African context. African Journal of AIDS Research, 6(2), 147–156.
  14. Collins, W. A., Maccoby, E. E., Steinberg, L., Hetherington, E. M., & Bornstein, M. H. (2000). Contemporary research on parenting: The case for nature and nurture. American Psychologist, 55(2), 218–232.
  15. Compier-de Block, L. H. C. G., Alink, L. R. A., Linting, M., van den Berg, L. J. M., Elzinga, B. M., Voorthuis, A., … Bakermans-Kranenburg, M. J. (2017). Parent-child agreement on parent-to-child maltreatment. Journal of Family Violence, 32(2), 207–217.
  16. Cooper Robbins, S. C., Rawsthorne, M., Paxton, K., Hawke, C., Rachel Skinner, S., & Steinbeck, K. (2011). "You can help people": Adolescents' views on engaging young people in longitudinal research. Journal of Research on Adolescence, 22(1), 8–13.
  17. Costello, E. J., Angold, A., March, J., & Fairbank, J. (1998). Life events and post-traumatic stress: The development of a new measure for children and adolescents. Psychological Medicine, 28(6), 1275–1288.
  18. Council for International Organizations of Medical Sciences (CIOMS) & World Health Organization (WHO). (2008). International ethical guidelines for epidemiological studies. Geneva: CIOMS.
  19. Cromer, L. D., Freyd, J. J., Binder, A. K., DePrince, A. P., & Becker-Blease, K. A. (2006). What's the risk in asking? Participant reaction to trauma history questions compared with reaction to other personal questions. Ethics & Behavior, 16(4), 347–362.
  20. Daigle, L. E., Snyder, J. A., & Fisher, B. S. (2016). Measuring victimization: Issues and new directions. In B. M. Huebner & T. S. Bynum (Eds.), The handbook of measurement issues in criminology and criminal justice (pp. 249–276). Oxford, UK: Wiley Blackwell Publishers.
  21. Datta, A. R., Horrigan, M. W., & Walker, J. R. (2001). Evaluation of a monetary incentive payment experiment in the National Longitudinal Survey of Youth, 1997 cohort. Paper presented at the Federal Committee on Statistical Methodology conference.
  22. de Quervain, D. J. F., & McGaugh, J. L. (2014). Stress and the regulation of memory: From basic mechanisms to clinical implications [special issue]. Neurobiology of Learning and Memory, 112, 1.
  23. Dohrenwend, B. P. (2006). Inventorying stressful life events as risk factors for psychopathology: Toward resolution of the problem of intracategory variability. Psychological Bulletin, 132(3), 477–495.
  24. Dolbear, G. L., Wojtowycz, M., & Newell, L. T. (2002). Named reporting and mandatory partner notification in New York state: The effect on consent for perinatal HIV testing. Journal of Urban Health, 79(2), 238–244.
  25. Edwards, K. M., Haynes, E. E., & Rodenhizer-Stämpfli, K. A. (2016). High school youth's reactions to participating in mixed-methodological dating violence research. Journal of Empirical Research on Human Research Ethics, 11(3), 220–230.
  26. Ellonen, N., & Pösö, T. (2011). Children's experiences of completing a computer-based violence survey: Ethical implications. Children & Society, 25(6), 470–481.
  27. Esbensen, F.-A., Miller, M. H., Taylor, T., He, N., & Freng, A. (1999). Differential attrition rates and active parental consent. Evaluation Review, 23(3), 316–335.
  28. Fagerlund, M., & Ellonen, N. (2016). Children's experiences of completing a computer-based violence survey: Finnish child victim survey revisited. Journal of Child Sexual Abuse, 25(5), 556–576.
  29. Fergusson, D. M., Horwood, L. J., & Woodward, L. J. (2000). The stability of child abuse reports: A longitudinal study of the reporting behaviour of young adults. Psychological Medicine, 30(3), 529–544.
  30. Finkelhor, D., Hamby, S. L., Ormrod, R. K., & Turner, H. A. (2005a). The JVQ: Reliability, validity, and national norms. Child Abuse & Neglect, 29(4), 383–412.
  31. Finkelhor, D., Ormrod, R. K., Turner, H. A., & Hamby, S. L. (2005b). The victimization of children and youth: A comprehensive, national survey. Child Maltreatment, 10(1), 5–25.
  32. Finkelhor, D., Vanderminden, J., Turner, H., Hamby, S., & Shattuck, A. (2013). Upset among youth in response to questions about exposure to violence, sexual assault and family maltreatment. Child Abuse & Neglect, 38(2), 217–223.
  33. Fogel, J., & Nehmad, E. (2009). Internet social network communities: Risk taking, trust, and privacy concerns. Computers in Human Behavior, 25(1), 153–160.
  34. Friedman, M. S., Chiu, C. J., Croft, C., Guadamuz, T. E., Stall, R., & Marshal, M. P. (2016). Ethics of online assent: Comparing strategies to ensure informed assent among youth. Journal of Empirical Research on Human Research Ethics, 11(1), 15–20.
  35. Gaskell, G. D., Wright, D. B., & O'Muircheartaigh, C. A. (2000). Telescoping of landmark events: Implications for survey research. The Public Opinion Quarterly, 64(1), 77–89.
  36. Gnambs, T., & Kaspar, K. (2015). Disclosure of sensitive behaviors across self-administered survey modes: A meta-analysis. Behavior Research Methods, 47(4), 1237–1259.
  37. Grych, J. H. (1998). Children's appraisals of interparental conflict: Situational and contextual influences. Journal of Family Psychology, 12(3), 437–453.
  38. Hardt, J., & Rutter, M. (2004). Validity of adult retrospective reports of adverse childhood experiences: Review of the evidence. Journal of Child Psychology & Psychiatry, 45(2), 260–273.
  39. Harper, C. R., Parris, L. N., Henrich, C. C., Varjas, K., & Meyers, J. (2012). Peer victimization and school safety: The role of coping effectiveness. Journal of School Violence, 11(4), 267–287.
  40. Henderson, M., Wight, D., Nixon, C., & Hart, G. (2010). Retaining young people in a longitudinal sexual health survey: A trial of strategies to maintain participation. BMC Medical Research Methodology, 10(1), 9.
  41. Hill, M. (2005). Ethical considerations in researching children's experiences. In S. Greene & D. Hogan (Eds.), Researching children's experience (pp. 61–86). London: Sage.
  42. Hill, H. M., & Jones, L. P. (1997). Children's and parents' perceptions of children's exposure to violence in urban neighborhoods. Journal of the National Medical Association, 89(4), 270–276.
  43. Holbrook, A. L., Green, M. C., & Krosnick, J. A. (2003). Telephone versus face-to-face interviewing of national probability samples with long questionnaires: Comparisons of respondent satisficing and social desirability response bias. Public Opinion Quarterly, 67(1), 79–125.
  44. Holt, M. A., Kaufman Kantor, G., & Finkelhor, D. (2009). Parent/child concordance about bullying involvement & family characteristics related to bullying & peer victimization. Journal of School Violence, 8(1), 42–63.
  45. Hurley, J. C., & Underwood, M. K. (2002). Children's understanding of their research rights before and after debriefing: Informed assent, confidentiality, and stopping participation. Child Development, 73(1), 132–143.
  46. Hussemann, J. M., Mortimer, J. T., & Zhang, L. (2016). Exploring the correlates of parental consent for children's participation in surveys: An intergenerational longitudinal study. Public Opinion Quarterly, 80(3), 642–665.
  47. Jaffe, A. E., DiLillo, D., Hoffman, L., Haikalis, M., & Dykstra, R. E. (2015). Does it hurt to ask? A meta-analysis of participant reactions to trauma research. Clinical Psychology Review, 40, 40–56.
  48. Jorm, A. F., Kelly, C. M., & Morgan, A. J. (2007). Participant distress in psychiatric research: A systematic review. Psychological Medicine, 37, 917–926.
  49. Jouriles, E. N., & Norwood, W. D. (1995). Physical aggression toward boys and girls in families characterized by the battering of women. Journal of Family Psychology, 9(1), 69–78.
  50. Kihlstrom, J. F., Eich, E., Sandbrand, D., & Tobias, B. A. (2000). Emotion and memory: Implications for self-report. In The science of self-report: Implications for research and practice (pp. 81–99). Mahwah: Lawrence Erlbaum.
  51. King, M., Coxell, A., & Mezey, G. C. (2000). The prevalence and characteristics of male sexual assault. In G. C. Mezey & M. B. King (Eds.), Male victims of sexual assault (2nd ed., pp. 1–15). Oxford: Oxford University Press.
  52. Knight, E. D., Runyan, D. K., Dubowitz, H., Brandford, C., Kotch, J., Litrownik, A., & Hunter, W. (2000). Methodological and ethical challenges associated with child self-report of maltreatment: Solutions implemented by the LONGSCAN consortium. Journal of Interpersonal Violence, 15(7), 760–775.
  53. Kraut, R., Olson, J., Banaji, M., Bruckman, A., Cohen, J., & Couper, M. (2004). Psychological research online: Report of Board of Scientific Affairs' advisory group on the conduct of research on the internet. American Psychologist, 59(2), 105–117.
  54. Kreuter, F., Presser, S., & Tourangeau, R. (2008). Social desirability bias in CATI, IVR, and web surveys: The effects of mode and question sensitivity. Public Opinion Quarterly, 72(5), 847–865.
  55. Krumpal, I. (2013). Determinants of social desirability bias in sensitive surveys: A literature review. Quality & Quantity, 47(4), 2025–2047.
  56. Kuyper, L., de Wit, J., Adam, P., & Woertman, L. (2010). Doing more good than harm? The effects of participation in sex research on young people in the Netherlands. Archives of Sexual Behavior.
  57. Langhinrichsen-Rohling, J., Arata, C. M., O'Brien, N., Bowers, D., & Klibert, J. (2006). Sensitive research with adolescents: Just how upsetting are self-report surveys anyways? Violence & Victims, 21(4), 425–444.
  58. Lewis, T., Kotch, J., Thompson, R., Litrownik, A. J., English, D. J., Proctor, L. J., … Dubowitz, H. (2010). Witnessed violence and youth behavior problems: A multi-informant study. American Journal of Orthopsychiatry, 80(4), 443–450.
  59. Liu, C., Cox, R. B., Washburn, I. J., Croff, J. M., & Crethar, H. C. (2017). The effects of requiring parental consent for research on adolescents' risk behaviors: A meta-analysis. Journal of Adolescent Health, 61(1), 45–52.
  60. Lothen-Kline, C., Howard, D. E., Hamburger, E. K., Worrell, K. D., & Boekeloo, B. O. (2003). Truth and consequences: Ethics, confidentiality, and disclosure in adolescent longitudinal prevention research. Journal of Adolescent Health, 33(5), 385–394.
  61. Martinson, B. C., Lazovich, D., Lando, H. A., Perry, C. L., McGovern, P. G., & Boyle, R. G. (2000). Effectiveness of monetary incentives for recruiting adolescents to an intervention trial to reduce smoking. Preventive Medicine, 31(6), 706–713.
  62. McCarry, M. (2012). Who benefits? A critical reflection of children and young people's participation in sensitive research. International Journal of Social Research Methodology, 15(1), 55–68.
  63. McClinton Appollis, T., Lund, C., de Vries, P. J., & Mathews, C. (2015). Adolescents' and adults' experiences of being surveyed about violence and abuse: A systematic review of harms, benefits, and regrets. American Journal of Public Health, 105(2), e31–e45.
  64. McKinney, C. M., Harris, T. R., & Caetano, R. (2009). Reliability of self-reported childhood physical abuse by adults and factors predictive of inconsistent reporting. Violence and Victims, 24(5), 653–668.
  65. Meinck, F., Steinert, J. I., Sethi, D., Gilbert, R., Bellis, M. A., Mikton, C., … Baban, A. (2016). Measuring and monitoring national prevalence of child maltreatment: A practical handbook. Copenhagen: World Health Organization, Regional Office for Europe.
  66. Metzger, D. S., Koblin, B., Turner, C., Navaline, H., Valenti, F., Holts, S., … Seage, G. R. (2000). Randomized controlled trial of audio computer-assisted self-interviewing: Utility and acceptability in longitudinal studies. American Journal of Epidemiology, 152(2), 99–106.
  67. Moilanen, K. L. (2016). Why do parents grant or deny consent for adolescent participation in sexuality research? Journal of Youth and Adolescence, 45(5), 1020–1036.
  68. Monteiro, I. S., & Maia, A. (2010). Family childhood experiences reports in depressed patients: Comparison between 2 time points. Procedia – Social and Behavioral Sciences, 5, 541–547.
  69. Mosher, W. D., Chandra, A., & Jones, J. (2005). Sexual behavior and selected health measures: Men and women 15-44 years of age, United States, 2002 (Vol. 362). Hyattsville: US Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Health Statistics.
  70. Bureau of Justice Statistics. (2014, September). National Crime Victimization Survey (NCVS) technical documentation. Washington, DC: U.S. Department of Justice, Office of Justice Programs.
  71. O'Neil, K. M., Penrod, S. D., & Bornstein, B. H. (2003). Web-based research: Methodological variables' effects on dropout and sample characteristics. Behavior Research Methods, Instrumentation, & Computers, 35(2), 217–226.
  72. Ondersma, S. J., & Chase, S. K. (2006). A novel methodology for longitudinal research in child maltreatment: Can quasi-anonymity yield better data and better participant protection? Paper presented at the American Professional Society on the Abuse of Children, Nashville.
  73. Palys, T., & Lowman, J. (1999). Informed consent, confidentiality and the law: Implications for the Tri-Council policy statement. Burnaby: Simon Fraser University.
  74. Payne, J. D., Jackson, E. D., Hoscheidt, S., Ryan, L., Jacobs, W. J., & Nadel, L. (2007). Stress administered prior to encoding impairs neutral but enhances emotional long-term episodic memories. Learning & Memory, 14(12), 861–868.
  75. Penslar, R. L., & National Institutes of Health (U.S.), Office for Protection from Research Risks. (1993). Protecting human research subjects: Institutional review board guidebook (2nd ed.). Bethesda, MD: U.S. Department of Health and Human Services, Public Health Service, National Institutes of Health, Office of Extramural Research, Office for Protection from Research Risks.
  76. Pew Research Center. (2015). From telephone to the web: The challenge of mode of interview effects in public opinion polls.
  77. Pinto, R., Correia, L., & Maia, Â. (2014). Assessing the reliability of retrospective reports of adverse childhood experiences among adolescents with documented childhood maltreatment. Journal of Family Violence, 29(4), 431–438.
  78. Planty, M. (2003, May 15–18). An examination of adolescent telescoping: Evidence from the National Crime Victimization Survey. Paper presented at the 58th annual AAPOR conference, Nashville.
  79. Powell, M. A., Graham, A., Taylor, N. J., Newell, S., & Fitzgerald, R. (2011). Building capacity for ethical research with children and young people: An international research project to examine the ethical issues and challenges in understanding research with and for children in different majority world contexts. Dunedin: Research report for the Childwatch International Research Network.
  80. Radford, L., Corral, S., Bradley, C., Fisher, H., Bassett, C., Howat, N., & Collishaw, S. (2011). Child abuse and neglect in the UK today. London: National Society for the Prevention of Cruelty to Children.
  81. Rajmil, L., López, A. R., López-Aguilà, S., & Alonso, J. (2013). Parent–child agreement on health-related quality of life (HRQOL): A longitudinal study. Health and Quality of Life Outcomes, 11(1), 101.
  82. Raviv, A., Erel, O., Fox, N. A., Leavitt, L. A., Raviv, A., Dar, I., … Greenbaum, C. W. (2001). Individual measurement of exposure to everyday violence among elementary schoolchildren across various settings. Journal of Community Psychology, 29(2), 117–140.
  83. Reddy, D. M., Fleming, R., & Swain, C. (2002). Effect of mandatory parental notification on adolescent girls' use of sexual health care services. JAMA, 288(6), 710–714.
  84. Saunders, B. E., & Adams, Z. W. (2014). Epidemiology of traumatic experiences in childhood. Child and Adolescent Psychiatric Clinics of North America, 23(2), 167–184.
  85. Schraedley, P. K., Turner, R. J., & Gotlib, I. H. (2002). Stability of retrospective reports in depression: Traumatic events, past depressive episodes, and parental psychopathology. Journal of Health and Social Behavior, 43(3), 307–316.
  86. Sedlak, A. J., Bruce, C., Cantor, D., Ditton, P., Hartge, J., Krawchuk, S., … Shapiro, G. (2012). Survey of youth in residential placement: Technical report. SYRP report. Rockville: Westat.
  87. Seymour, K. (2012). Using incentives: Encouraging and recognising participation in youth research. Youth Studies Australia, 31(3), 51.
  88. Sieber, J. E. (2001). Summary of human subjects protection issues related to large sample surveys (NCJ 187692). Washington, DC.
  89. Simon Rosser, B., Gurak, L., Horvath, K. J., Michael Oakes, J., Konstan, J., & Danilenko, G. P. (2009). The challenges of ensuring participant consent in internet-based sex studies: A case study of the Men's INTernet Sex (MINTS-I and II) studies. Journal of Computer-Mediated Communication, 14(3), 602–626.
  90. Singer, E., & Couper, M. P. (2008). Do incentives exert undue influence on survey participation? Experimental evidence. Journal of Empirical Research on Human Research Ethics, 3(3), 49–56.
  91. Smith, T., & Sedlak, A. J. (2011). Addressing human subjects issues on the National Survey of Youth in Custody. Paper presented at the 66th annual conference of the American Association for Public Opinion Research, Phoenix.
  92. Thomson, C. C., Roberts, K., Curran, A., Ryan, L., & Wright, R. J. (2002). Caretaker-child concordance for child's exposure to violence in a preadolescent inner-city population. Archives of Pediatrics & Adolescent Medicine, 156(8), 818–823.
  93. Tourangeau, R., & Yan, T. (2007). Sensitive questions in surveys. Psychological Bulletin, 133(5), 859–883.
  94. Tourangeau, R., Rips, L. J., & Rasinski, K. (2000). The psychology of survey response. Cambridge: Cambridge University Press.
  95. UN Office on Drugs and Crime. (2010). World drug report 2010 (United Nations Publication, Sales No. E.10.XI.13).
  96. Vitiello, B. (2008). Effectively obtaining informed consent for child and adolescent participation in mental health research. Ethics & Behavior, 18(2–3), 182–198.
  97. Wallace, D., Hedberg, E., & Cesar, G. (2014). The effect of survey mode on socially undesirable responses to open-ended questions: A mixed method approach. Chicago: NORC at the University of Chicago.
  98. Widom, C. S., & Czaja, S. J. (2006). Reactions to research participation in vulnerable subgroups. Accountability in Research, 12(2), 115–138.
  99. Widom, C. S., & Shepherd, J. R. (1996). Accuracy of adult recollections of childhood victimization: Part 1. Childhood physical abuse. Psychological Assessment, 8(4), 412–421.
  100. Widom, C. S., Raphael, K. G., & DuMont, K. A. (2004). The case for prospective longitudinal studies in child maltreatment research: Commentary on Dube, Williamson, Thompson, Felitti, and Anda (2004). Child Abuse & Neglect, 28(7), 715–722.
  101. Williams, L. M., & Finkelhor, D. (1995). Paternal caregiving and incest: Test of a biosocial model. American Journal of Orthopsychiatry, 65(1), 101–113.
  102. Ybarra, M. L., Langhinrichsen-Rohling, J., Friend, J., & Diener-West, M. (2009). Impact of asking sensitive questions about violence to children & adolescents. Journal of Adolescent Health, 45, 499–507.
  103. Young, A. L., & Quan-Haase, A. (2009). Information revelation and internet privacy concerns on social network sites: A case study of Facebook. Paper presented at the proceedings of the fourth international conference on communities and technologies.
  104. Zajac, K., Ruggiero, K. J., Smith, D. W., Saunders, B. E., & Kilpatrick, D. G. (2011). Adolescent distress in traumatic stress research: Data from the National Survey of Adolescents-Replication. Journal of Traumatic Stress, 24(2), 226–229.
  105. Zineil, S. (2008). Telescoping. In P. J. Lavrakas (Ed.), Encyclopedia of survey research methods. London: Sage.

Copyright information

© The Author(s) 2020

Authors and Affiliations

  1. Crimes against Children Research Center (CCRC) & Department of Sociology, University of New Hampshire, Durham, USA

Section editors and affiliations

  1. Ernestine Briggs, Associate Professor of Psychiatry and Behavioral Sciences, Duke University, Durham, USA
  2. Javonda Williams, School of Social Work, University of Alabama, Tuscaloosa, USA
  3. Michelle Clayton, Associate Professor of Pediatrics, Eastern Virginia Medical School/Children's Hospital of The King's Daughters, Norfolk, USA
  4. Stacie LeBlanc, New Orleans Child Advocacy Center, New Orleans, USA
  5. Viola Vaughan-Eden, School of Social Work PhD Program, Norfolk State University, Norfolk, USA
  6. Amy Russell, Owner/Principal Consultant & Trainer, Russell Consulting Specialists, LLC, Vancouver, USA; Executive Director, Arthur D. Curtis Children's Justice Center
