A paper published in the sixth 2011 issue of Psychotherapy and Psychosomatics analyzes the growing difficulty clinical researchers face in recruiting patients and argues that the interpretation of results should take this phenomenon into account. The accompanying analysis of standard research texts also suggests several ways to address these recruitment problems.
Participation rates in research studies, whether measured as the percentage of eligible people from a defined population who are recruited or as the percentage of recruited people who participate in prespecified follow-ups, are dropping in the USA and may be, or soon will be, dropping elsewhere. With that drop, confidence in research findings drops, too. The phenomenon is not limited to observational studies and surveys; it is also occurring in experimental studies and clinical trials. With a perfect participation rate (100%) at recruitment, we can be confident that the results (assuming they have internal validity) are generalizable. But a perfect participation rate is rarely achievable for a reasonable sample size of a public health or clinically important target population.
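To make the two rates concrete, the following minimal sketch (in Python, with purely hypothetical counts) computes the recruitment rate and the follow-up retention rate as defined above:

```python
def participation_rate(participating: int, eligible: int) -> float:
    """Participation rate, as a percentage of the eligible group."""
    return 100.0 * participating / eligible

# Hypothetical counts, for illustration only.
eligible_population = 1_200   # people eligible from the defined population
recruited = 540               # people who actually enrolled
completed_followup = 432      # recruits completing the prespecified follow-up

recruitment_rate = participation_rate(recruited, eligible_population)  # 45.0
retention_rate = participation_rate(completed_followup, recruited)     # 80.0

print(f"Recruitment: {recruitment_rate:.1f}%; follow-up retention: {retention_rate:.1f}%")
```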
A quick review of some standard research texts did not reveal a number that the participation rate had to exceed to be scientifically acceptable. Researchers readily mentioned ‘above 90%’, ‘80%’ or ‘60%’ as the target participation rate for recruitment. From this admittedly convenience sample, it appears that informal standards are in use. Any selected cutoff has the same drawback as using p = 0.05 to determine statistical significance: with 80% as the arbitrary target, 80.1% would be deemed scientifically acceptable and 79.9% would be rejected. Picking such a magical number would mean that people would invent, and indeed have invented, ways to inflate their participation rate to reach an ‘acceptable’ level. Because of this creativity, it is critical that the calculation of the participation rates for recruitment and, when applicable, for follow-up is reported. This transparency is reflected in the CONSORT requirement to present the flow of participants throughout a clinical trial.
For studies with follow-up, the analysis includes contrasting those who were lost to follow-up (and why) with those for whom the researcher has data. For recruitment, the comparison is more difficult. In the USA, obtaining this information is increasingly difficult because of regulations such as the Health Insurance Portability and Accountability Act and other confidentiality policies or measures. However, if characteristics of the target population are known, the researcher can compare them with the characteristics of the recruited sample.
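One way to operationalize this comparison, assuming published population proportions are available (e.g. from census or registry data), is a goodness-of-fit test; the sketch below uses scipy and hypothetical age-group counts:

```python
from scipy.stats import chisquare

# Hypothetical age-group counts in the recruited sample (18-34, 35-54, 55+).
sample_counts = [120, 210, 170]

# Assumed known proportions of the same age groups in the target population.
population_props = [0.30, 0.40, 0.30]

n = sum(sample_counts)
expected = [p * n for p in population_props]

# Does the recruited sample's age distribution differ from the population's?
stat, p_value = chisquare(f_obs=sample_counts, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p_value:.3f}")
```

A nonsignificant result does not prove the sample is representative, but a large discrepancy flags characteristics on which generalizability may fail.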
The researcher can also embed a substudy with minimal data collection within the larger study to characterize people who declined the full study. It has been argued that participation rates are dropping because of the sheer number of studies being performed, especially over the Internet. The push for solid numbers to inform policy decisions, efforts to train clinicians to be research literate, administrators' use of surveys to communicate with college students and, at least in the USA, marketing disguised as survey research all leave people feeling overwhelmed by, and less trustful of, ‘research’. It has also been argued that willingness to spend time in other community activities (‘volunteerism’) is declining, as is overall free time, in the USA and other western countries.
Another barrier or deterrent to participation is the time commitment and the formalized consent process required in the USA. Some studies have a longer consent process than the intervention being studied. Potential participants often say they do not trust consent sheets because, in the words of the people approached, they are written by lawyers to protect the university. At one university, an online survey's information sheet may require scrolling down multiple screens to reach the ‘agree’ button; at another university implementing the same survey, the information sheet is condensed to a single screen. This is not to say that consent is bad; it is to say that we need to look at the process from the research participant's perspective.
There can be differences in the quality of the data collected by mode of data collection, especially in the ability to clarify or follow up on questions and to recognize fatigue or deception in the participant. For online surveys, researchers have recognized the challenges and developed guidelines, well described in the Journal of Medical Internet Research. Because of their ease of construction and low cost of administration (no postage, no research assistants contacting people), online surveys are particularly appealing to researchers with low budgets and are multiplying. Online surveys must still address who the target population is and what sample size is needed. If mass e-mailing is used (e.g. to all medical students in the USA), the researcher still needs to stop and think about who, and how many people, are realistically going to respond.
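As a back-of-the-envelope planning aid, and assuming a response rate can be estimated from prior, comparable surveys, the required number of invitations follows directly (the numbers below are hypothetical):

```python
import math

target_completed = 400           # analyzable responses needed for the planned analysis
expected_response_rate = 0.08    # assumed from prior, comparable online surveys

invitations_needed = math.ceil(target_completed / expected_response_rate)
print(f"Invitations to send: {invitations_needed}")  # 5000
```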
The researcher should strongly consider using mixed modes of data collection to improve participation, such as online surveys followed by postal surveys, or other changes to the protocol. Repeated follow-up attempts, regardless of method, can increase the participation rate. As technology changes (e.g. online speeds increase, more people use the Internet, landlines become more expensive), readers need to consider when and where studies were conducted when interpreting the effectiveness and biases of different strategies. One approach to enhancing participation requires resources, but it does not rely on simply increasing incentives. It means hiring and training staff who are personable, flexible and receptive to ongoing training and monitoring. It also relies heavily on knowing the population and how people are potentially lost to follow-up, and on proactively arranging institutional agreements and obtaining consent at the time of recruitment to contact hospitals, jails, etc.
The need for follow-up and the methods are explained to participants at recruitment and at every contact. Importantly, staff do not wait until a follow-up assessment is due before beginning to contact the person, drawing on comprehensive information collected at recruitment about the individual and his or her current and past social network. Finally, the system relies on active monitoring so that changes can be made quickly as needed. Other recommendations for increasing the participation rate are discussed.
To conclude, participation rates are definitely lower in the USA than they used to be. Lessons from other countries may inform ways to achieve and maintain high participation rates. This decline means that we as readers must be suspicious, especially of findings with small effect sizes, and must examine the recruitment and retention processes for potential biases. If the authors do not present sensitivity analyses, then we need to be analytically agile enough to perform them ourselves (a minimal sketch follows below). The decline means that we as researchers must be transparent about the recruitment process, devote resources to maximizing participation and probe for biases with additional analyses. Importantly, statistical techniques will not solve the problem. It means that we as reviewers must demand that the participation rate be honestly and prominently presented.
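As one example of such a sensitivity analysis, a simple best-case/worst-case bound imputes everyone lost to follow-up first as a failure and then as a success; if the study's conclusion survives both extremes, dropout cannot explain it away (the counts below are hypothetical):

```python
def dropout_bounds(successes: int, completers: int, randomized: int):
    """Bounds on the success proportion when dropouts are imputed
    as all failures (worst case) or all successes (best case)."""
    lost = randomized - completers
    worst = successes / randomized           # dropouts counted as failures
    best = (successes + lost) / randomized   # dropouts counted as successes
    return worst, best

# Hypothetical trial arm: 100 randomized, 80 completed, 48 observed successes.
worst, best = dropout_bounds(successes=48, completers=80, randomized=100)
print(f"Completers: {48/80:.0%}; bounds under dropout: {worst:.0%} to {best:.0%}")
```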
It does not mean that we reward studies with high participation but meager contributions, or discard innovative studies on the basis of an arbitrary participation-rate threshold. It means we must continue to be intelligent and critical consumers of research findings. If findings are robust across studies with different biases, then it is more likely that they can be generalized. Finally, although it is not often stated, it means that we as funders are paying more, because lower participation rates mean that more effort must be expended to obtain the target sample size.