New research sheds light on why reported rates of rape victimization vary so widely across existing studies. The findings highlight factors that can affect research results, including the specific ways that rape is defined and measured. The study was published in the journal Aggression and Violent Behavior.
The existing research literature on this topic contains substantial variation in reported victimization rates. This variability could be attributed to differences in measurement and methodologies used across studies. In order to inform policy and practice effectively, the researchers behind the new study said it was crucial to obtain clear and specific prevalence data on rape.
The researchers conducted a meta-analysis to examine how methodological factors predict the variation observed in the prevalence of rape among women in the United States. By pooling a wide range of studies and samples, a meta-analysis enables a more comprehensive exploration of the impact of methodological differences than any single study could provide.
“Research is so often thought of as a ‘black box’ in which something secret happens inside and then ‘poof!’ results come out,” explained lead author Rachael Goodman-Williams, an assistant professor at Wichita State University.
“I am interested in shining light inside this black box whenever possible so that we can more clearly see how what we do, as researchers, impacts what we find. In the case of sexual assault victimization research, it can seem very straightforward — you ask participants whether they’ve experienced sexual assault, total the number of participants who say yes, divide by your total participants, and there’s your victimization prevalence rate.”
“In reality, there’s so much more going on! My goal was to elucidate some of those behind-the-scenes factors so that we can more clearly understand the experiences of those who choose to participate in our research,” Goodman-Williams told PsyPost.
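Her description of the “straightforward” version of that calculation is simple arithmetic: count the participants who report victimization and divide by the sample size. A minimal sketch in Python, using hypothetical responses rather than data from the study:

```python
# Minimal sketch of the naive prevalence calculation described above.
# The responses below are hypothetical, not data from the study.
responses = [True, False, False, True, False, False, False, True, False, False]

victims = sum(responses)          # participants who answered "yes"
total = len(responses)            # all participants surveyed
prevalence = victims / total      # proportion reporting victimization

print(f"Victimization prevalence: {prevalence:.1%}")  # -> 30.0%
```

As she notes, this surface simplicity hides a series of upstream decisions, such as how the question is worded and who is asked, that shape the number that comes out.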
The researchers conducted a systematic review of the published literature using the ProQuest platform, specifically the PsycINFO and PsycARTICLES databases, which were chosen for their coverage of research in psychology and related fields.
The inclusion criteria for the articles required that they were written in English, published after January 1, 1980, peer-reviewed, classified as empirical studies by ProQuest, and met the study’s operational definition of rape. Duplicate articles were removed, resulting in a sample of 5,289 articles to screen for inclusion.
The data had to be collected in the United States, involve adult women who did not require guardian consent, and exclude participants recruited based on victimization history or membership in high-risk groups. Studies that did not use behaviorally specific questions to screen for rape were also excluded, as non-specific questions tend to underestimate the prevalence. The inclusion criteria resulted in a final sample of 84 articles reporting data from 89 independent samples.
The results showed that between 4.6% and 48.9% of participants reported experiencing completed rape, depending on the study. The pooled estimate across studies was 17.0%, indicating that, on average, 17% of participants reported being victims of rape. The analysis also revealed substantial variation between studies, suggesting that factors specific to each study influence the prevalence rates reported.
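The article does not describe the exact pooling computation, but a common way to combine prevalence estimates like these is a random-effects model on logit-transformed proportions, with the DerSimonian-Laird estimator capturing between-study variation. The sketch below is illustrative only; the study counts and the specific model are assumptions, not taken from the published meta-analysis:

```python
import numpy as np

# Hypothetical per-study data (victim counts and sample sizes), for
# illustration only; these are not the 89 samples from the meta-analysis.
events = np.array([12, 45, 30, 90, 18])
n = np.array([150, 300, 120, 400, 250])

# Logit-transform each study's proportion; the variance of a logit-transformed
# proportion is approximately 1/events + 1/(n - events).
p = events / n
y = np.log(p / (1 - p))
v = 1 / events + 1 / (n - events)

# DerSimonian-Laird estimate of the between-study variance (tau^2).
w = 1 / v                                      # fixed-effect (inverse-variance) weights
y_fixed = np.sum(w * y) / np.sum(w)
q = np.sum(w * (y - y_fixed) ** 2)             # Cochran's Q heterogeneity statistic
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(y) - 1)) / c)

# Random-effects weights and pooled prevalence, back-transformed to a proportion.
w_re = 1 / (v + tau2)
y_pooled = np.sum(w_re * y) / np.sum(w_re)
pooled_prevalence = 1 / (1 + np.exp(-y_pooled))

print(f"Pooled prevalence: {pooled_prevalence:.1%}, tau^2 = {tau2:.3f}")
```

In a framework like this, the between-study variance term (tau-squared) quantifies exactly the kind of heterogeneity the authors report, and a large value is what motivates examining study-level moderators such as how rape is defined and what population is sampled.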
The researchers found that studies that included incapacitation as a perpetration tactic and had a higher average participant age tended to report a larger proportion of rape victims. Military samples also had a significantly higher proportion of victims compared to college samples.
“Variation in research findings doesn’t necessarily mean that research is unreliable, but rather that there are reliable differences in your findings when you study something in different ways,” Goodman-Williams explained. “In a time where we, as a society, are struggling with misinformation, I want people to know that if Study A finds one thing and Study B finds another, that doesn’t mean we should throw up our hands and say ‘Well, I guess we’ll never know!'”
“Rather, we can ask what Study A did differently than Study B and try to understand where the differences come from and what they mean. More specific to the findings in my particular study, it’s important to understand how ‘rape’ or ‘sexual assault’ is being defined when we talk about its prevalence. Studies that defined ‘rape’ as penetration through force or threat of force found significantly lower prevalence rates than studies that defined ‘rape’ as penetration through force, threat of force, or intoxication/incapacitation.”
“This makes complete sense, but it’s easy to miss unless you look at a lot of studies together like a meta-analysis can do,” Goodman-Williams told PsyPost. “What’s more, those differences can depend on other study factors. In my study, this meant that defining rape in a way that didn’t include intoxication/incapacitation affected studies conducted with college students more than studies conducted with community samples. If you’re trying to compare rates of rape in college versus community samples, you’ll likely find something different depending on whether or not you include intoxication/incapacitation in your definition of rape.”
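The study’s actual moderator analyses are more involved, but the core idea of testing whether a study-level characteristic predicts prevalence can be sketched as a weighted regression of logit-transformed prevalence on an indicator variable. Everything below, including the numbers and the simple fixed-effect weighting, is a hypothetical illustration rather than the authors’ model:

```python
import numpy as np

# Hypothetical illustration of a simple meta-regression: does including
# incapacitation in the definition of rape predict higher prevalence?
# Neither the data nor this exact model comes from the published study.
p = np.array([0.08, 0.11, 0.19, 0.23, 0.15, 0.26])   # per-study prevalences
n = np.array([300, 250, 400, 180, 220, 350])          # per-study sample sizes
incap = np.array([0, 0, 1, 1, 0, 1])                  # 1 = definition includes incapacitation

y = np.log(p / (1 - p))          # logit-transformed prevalence
w = n * p * (1 - p)              # inverse-variance weights on the logit scale

X = np.column_stack([np.ones_like(y), incap])         # intercept + moderator
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)      # weighted least squares

print(f"Moderator effect on the logit scale: {beta[1]:.2f}")  # positive here
```

A positive coefficient on the indicator corresponds to the pattern the article describes: studies whose definitions include intoxication/incapacitation tend to report higher prevalence.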
Interestingly, the researchers found that the recruitment method did not predict significant variation in the proportion of victims identified. In other words, the way participants were recruited did not seem to affect the results in terms of identifying rape victims.
This finding supports previous research that suggested knowing a study was about sexual assault did not affect the participation of victims and non-victims differently. It indicates that participants were willing to disclose their experiences of rape regardless of the recruitment method used.
“Not necessarily in my findings, but in my process of conducting the research I was very surprised by how little detail about some of these factors was included in published articles,” Goodman-Williams remarked. “I originally wanted to include recruitment language rather than recruitment method, but studies so rarely included this information in their articles that I couldn’t include it as a variable without contacting hundreds of authors directly. It highlights how important it is to not just think about what you want to share when publishing your research, but also what others may want to learn, and to include as much detail as is feasible to support future study.”
The study provided important insights into the prevalence of rape and the factors that influence variation in prevalence rates. The findings have implications for research, policy, and practice, emphasizing the need for accurate measurement and consideration of subgroup differences. But as with all research, there are some caveats.
“The biggest caveat is that there are so many other variables we could have included but couldn’t due either to statistical power or the infrequency with which the information was included in published articles,” Goodman-Williams explained. “I would like to conduct a similar study in the future but with a smaller starting pool of articles so that it would be feasible to contact study authors and include some variables that we were not able to include in this iteration.”
“Whenever you’re doing research, keep more detailed notes than you think will be necessary!” she added. “For a meta-analysis in particular, detailed exclusion codes were key. Also, don’t rush the initial steps. It’s tempting to jump right in to get started, but many of the initial decisions you make in a study, like how you define your sample or operationalize your key constructs, cannot be undone, so it is very important to take your time thinking those decisions through.”
The study, “Why do rape victimization rates vary across studies? A meta-analysis examining moderating variables”, was authored by Rachael Goodman-Williams, Emily Dworkin, and MacKenzie Hetfield.