Closing the chasm between research and practice: evidence of and for change
Lawrence W. Green, Department of Epidemiology and Biostatistics, School of Medicine, Helen Diller Comprehensive Cancer Center and Center for Tobacco Research and Education, 66 Santa Paula Avenue, San Francisco, CA 94127, USA. Email: lwgreen@comcast.net
Health Promotion Journal of Australia 25(1) 25-29 https://doi.org/10.1071/HE13101
Submitted: 12 November 2013 Accepted: 25 November 2013 Published: 26 March 2014
Journal Compilation © Australian Health Promotion Association 2014
Abstract
The usual remedy suggested for bridging the science-to-practice gap is to improve the efficiency of disseminating the evidence-based practices to practitioners. This reflection on the gap takes the position that it is the relevance and fit of the evidence with the majority of practices that limit its applicability and application in health promotion and related behavioural, community and population-level interventions where variations in context, values and norms make uniform interventions inappropriate. To make the evidence more relevant and actionable to practice settings and populations will require reforms at many points in the research-to-practice pipeline. These points in the pipeline are described and remedies for them suggested.
Introduction
The usual answer to how to bridge the gap between research and practice or policy is to disseminate scientific findings more efficiently. Perhaps the question should not be how to get more and better dissemination and implementation of the existing science to practitioners and policymakers but, instead, how to ask the right questions in the first place and, in turn, how to achieve better adaptation of research-based practices to the real world.
Jonathan Lomas, former Director of the Canadian Health Services Research Foundation, illustrates the divide between the two world views of science and policy with the following scenario, a brief exchange between a group in a hot air balloon and a man on the ground.1
‘Where am I?’ the people in the balloon asked. ‘You’re 30 meters above the ground in a balloon’ was the reply from below. ‘You must be a researcher?’ ‘Yes, how did you know?’ ‘Because what you told me is absolutely correct but completely useless.’ From the ground: ‘You must be a policymaker.’ ‘Yes, how did you know?’ ‘Because you don’t know where you are, you don’t know where you’re going, and now you’re blaming me.’
The dialogue could be applied equally if those in the balloon were health promotion program planners or practitioners, because the last phrase captures what practitioners often feel: that researchers do not know where they are going with their research, yet blame the practitioners for not faithfully applying its products.
Challenges and opportunities
Among the various challenges and opportunities, the overarching challenge is to close the gap between the evidence for implementation that policymakers, practitioners and communities need and what they are getting from researchers. The overarching opportunity is for the academic community to reform some of the peer review, editorial, and impact-factor scoring tendencies that have distorted the criteria for grant making, publishing, systematic reviews for practice guidelines, and for academic appointments, promotion and tenure in the health professional schools of most universities.
Opportunities in the conduct of research include extending participatory research principles to work with policymakers, program planners and practitioners in setting priorities on research questions relevant to the community,2 and making greater use of natural experiments that incorporate surveillance, monitoring and evaluation of programs with quasi-experimental and continuous quality improvement methods.3 The ultimate stretch in blending rigour and reality – internal validity and external validity – would be to combine participatory research with multisite randomised controlled trials (RCTs), which would expand the external validity of the results of those trials while retaining internal validity.4
Dissemination of research (peer review tendencies)
The most frequently quoted statistic on the gap or ‘chasm’ between researchers and practitioners is that ‘it takes 17 years to turn 14% of original [clinical] research to the benefit of patients’.5 These two estimates were derived from a series of estimated points of attrition in the flow of original research through the pipeline of publication and subsequent vetting in systematic reviews that produce guidelines for practice. For example, studies found that between 76 and 80% of clinical research findings are ultimately lost to attrition over time: 18% of original research never gets submitted for publication, because the results are negative and investigators know that editors have a well documented bias against publishing negative results.6 Then, as Balas et al. reconstructed the data from previous reviews, 46% of submitted manuscripts are lost between submission and acceptance.5 This is where the peer review process comes particularly into play, along with editorial tendencies, with their biases against negative results, non-randomised trials, and small or qualitative studies; rejection is typically attributed to methodological limitations. Little or nothing is lost between acceptance and publication. We were still losing ~35% between publication and indexing in bibliographic databases by the mid-1990s, although this has probably improved since then with electronic publishing, indexing and cataloguing. Another 50% is then lost between getting the evidence into databases and its systematic review or meta-analysis that would lead to guidelines for evidence-based practice and eventually to textbooks. What might surprise those who have railed about practitioners not adopting evidence-based practices is that nothing is lost between the publication of guidelines and textbook summaries of systematic reviews and practitioner implementation of them. It is simply a matter of the time it takes for 100% of such recommended practices to be applied, probably because recommended practices based on controlled research require various adaptations to fit the variety of practice circumstances, populations and needs. So practitioners should not be blamed for slow uptake; they do implement the recommendations, eventually.
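To make the compounding of these stage-by-stage losses concrete, the short calculation below (an illustrative reconstruction in Python, not taken from Balas et al.’s own analysis) multiplies out the attrition rates quoted above and arrives at roughly the 14% figure:

stage_losses = [
    ("never submitted for publication", 0.18),
    ("rejected between submission and acceptance", 0.46),
    ("not indexed in bibliographic databases", 0.35),
    ("not captured in systematic reviews and guidelines", 0.50),
]

remaining = 1.0
for stage, loss in stage_losses:
    remaining *= (1.0 - loss)
    print(f"after losses at stage '{stage}': {remaining:.1%} of original research remains")

# Prints roughly 82%, 44%, 29% and finally about 14%: the compounded survival
# (0.82 * 0.54 * 0.65 * 0.50 = 0.144) is the source of the widely quoted 14%.

In other words, no single stage of the pipeline discards most of the evidence; it is the multiplication of modest losses at each stage that leaves only about one study in seven to reach guidelines and textbooks.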
The cumulative time lost between the phases of vetting evidence from original research to eventual application is the other part of the quotable ‘17 years to apply 14% of original research.’ Balas et al. further compiled from various studies the time lapses between the foregoing phases.5 It takes ~6 months between submission and acceptance of a manuscript by a journal, another 6 months between acceptance and publication, ~3 months between publication and indexing, and about 6–13 years between accumulating and cataloguing electronic databases of published evidence and their eventual systematic review by one of the review commissions (e.g. Cochrane or the US Preventive Services Task Force). The estimate Balas accepted from the early 1990s was 9.3 years for ultimate implementation. We may reasonably assume an improved efficiency of online publication, cataloguing and systematic reviews since these estimates were computed. Yet, we might also expect growing scepticism among policy makers and practitioners about the appropriate fit of much of the research evidence for their varied populations, settings and circumstances, especially with the growing recognition of disparities in the health advances of populations.
The pipeline fallacy
The ‘Pipeline Fallacy’ refers to an academic and bureaucratic notion of funding, producing and vetting research that can be delivered to policymakers and practitioners as evidence-based ‘best practice’ guidelines, or even as requirements.7 A pipeline that flows one way, from theory to basic and highly controlled experimental research, inhibits practical processes of informing research from the experience of practitioners and patients or communities who live with the studied problem, in all its varieties, on a daily basis. The pipeline process begins with identifying priorities for research funding, then squeezes those ideas into requests for proposals and review of research grants that meet the criteria or personal biases of peer reviewers. Outcomes from the research are then further squeezed through publication priorities and peer review, and further shaped into systematic reviews of the published studies that qualify for inclusion and their synthesis into guidelines for ‘evidence-based practice’. Eventually, what comes out the end of the pipe is a relatively small, highly distilled subproduct of the original research that does not take into consideration the implementation problems of the practitioner or policymaker with respect to funding, population needs, time and work demands, local practice or policy circumstances, professional discretion in adjusting interventions to individual needs, and the credibility of the results to those who practise in very different contexts from those in which the evidence was generated.
One driver of this directive approach has been the evidence-based medicine movement, which has been, thankfully, successful in ‘clearing the decks’ of medical practices that did not have adequate evidence to justify them. It seems, however, that as the evidence-based medicine approach is applied to public health and health promotion, it encounters the contradictory practicalities of intervening on behaviour, communities, policies and environments, and the methodological differences in conducting randomised trials and in inferring and generalising results from controlled experiments in the context of these interventions.
Addressing the challenge
A few government agencies have recognised this gap, have attempted to strengthen the relationship between researchers and practitioners or policymakers, and have addressed the challenge of translating research into practice. These include the National Health and Medical Research Council8 of Australia and the Agency for Healthcare Research and Quality9 in the US. These organisations often see the gap as a pothole that needs to be filled, a rut in the road, as it were. So they seek to fill it with their Translation of Research into Practice initiatives, with annual conferences, education programs and more varied communications, as they attempt to move the evidence through the pipeline faster and more efficiently. They put some emphasis on clearing the way for practitioners so that they are able, and supported, to adopt and implement evidence-based practice guidelines, as well as appreciating the need for some innovation and adaptation in all of that, rather than simply applying every guideline with slavish ‘fidelity’.10
This conceptualisation of the problem is reminiscent of another fallacy, distinct from the pipeline metaphor, that anthropologists have dubbed the ‘Fallacy of the Empty Vessel’.11 The concept was exemplified by some health education efforts in the mid-20th century, when many mass media messages were based on the assumption that the public was an empty vessel, implicitly devoid of prior attitudes, values or behavioural constraints. The idea was that if enough information was poured into that empty vessel, it would fill up and good practices would spill over. Communities and practitioners, however, are not empty vessels. They come with funding constraints, population needs and demands, practice circumstances and issues of professional discretion. To support change, this complexity needs to be recognised, along with the many benefits of practitioners, policymakers and researchers working together.
The Broad Street pump story
John Snow is usually given credit for the first success in controlling a cholera epidemic by establishing the evidence of a link between cholera incidence and the Broad Street pump in London, and recommending the removal of the pump handle. Crediting John Snow alone with this public health success, however, underplays the contributions of several other key figures and their collaborative and parallel approaches to identifying the problem, gathering evidence, and translating that evidence into policy and practice, as noted by Hanson et al.12 Sir Edwin Chadwick, a politician who did not like smells, wanted to clean up all of those neighbourhoods, and his campaigns through Parliament were a very large part of the story of policymaker and practitioner receptivity to the evidence John Snow brought on the Broad Street problem. Reverend Henry Whitehead was a community activist whose role is written about increasingly as people try to understand the dynamics that led to the changes. William Farr was a bureaucrat and statistician who kept the records in what passed for a ministry of health at that time.13 It was the combination of these several individuals that gave momentum and closure to what we now call the London case study in epidemiology and community intervention, a milestone and benchmark for public health reform12 and for the importance of practice-based and community-based evidence.14,15
Alternatives to and enhancements of RCT
The prevailing standard of evidence is the RCT, but this has its limitations in community-based programs. The overriding limitation is that it decontextualises most of the evidence it produces. An RCT starts with an intervention tested by comparing average effects with those of a control condition, i.e. no intervention or an alternative intervention. As shown in Fig. 1, it builds on previous evidence and theory about the mediating and moderating variables expected to change. It then looks for change and outcome variables measured and compared between experimental and control groups. The generalisation of such interventions is problematic, however, because when an intervention is taken out into the real world it must be destandardised: tailored to the population or various subpopulations, and to the circumstances in which it will be applied. The interventions in the experimental trial are reduced to a simplified form to minimise confounding. Everything else is held constant, which is not the way it is in the real world. The interventionists have no discretion to adjust the intervention to individual cases or circumstances; they are highly trained and supervised to adhere to strict protocols. The analysis of subgroups is discouraged and often censored by editors because the subgroups are not randomised.16,17 This is most unfortunate, because subgroup differences in response to the intervention would be the most useful evidence to program planners and practitioners in community health promotion as they seek to implement and evaluate the intervention in question.
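To illustrate why this matters, the brief simulation below (hypothetical numbers and subgroup labels, not drawn from any cited trial) shows how an averaged treatment effect can mask opposite responses in two subgroups, which is precisely the information that subgroup analysis would give planners:

import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical moderator and random assignment (illustrative labels only)
subgroup = rng.choice(np.array(["urban", "rural"]), size=n)
treated = rng.integers(0, 2, size=n)

# Opposite true responses to the same standardised intervention (arbitrary units)
true_effect = np.where(subgroup == "urban", 4.0, -2.0)
outcome = 50 + treated * true_effect + rng.normal(0, 5, size=n)

def mean_effect(mask):
    """Treated-minus-control difference in mean outcome within the masked group."""
    return outcome[mask & (treated == 1)].mean() - outcome[mask & (treated == 0)].mean()

everyone = np.ones(n, dtype=bool)
print(f"Average effect, all participants: {mean_effect(everyone):+.2f}")
print(f"Effect in 'urban' subgroup:       {mean_effect(subgroup == 'urban'):+.2f}")
print(f"Effect in 'rural' subgroup:       {mean_effect(subgroup == 'rural'):+.2f}")

# The overall average (about +1) hides a benefit of about +4 in one subgroup and
# a harm of about -2 in the other, which is exactly the information a planner
# adapting the intervention to a local population would need.

A trial reporting only the pooled effect would suggest a modest benefit for everyone, when in this constructed example the intervention helps one population and harms another.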
Many alternatives to and variations on the RCT methodology exist for evaluating health promotion interventions in real time and in living communities;18–20 the logic of one such design, the multiple baseline,18 is sketched below. Opportunities also exist to extend participatory research principles21,22 beyond community-based participatory research to participatory research at all levels, working with policymakers and practitioners (all the players) in the conduct of natural experiments. The collection of data from surveillance, monitoring and evaluation of programs and from continuous quality improvement efforts can be made much more relevant, and in turn more useful, to policymakers, program planners and practitioners.3 Finally, combining participatory research with multisite RCTs would expand the external validity of the results of those trials.4 The often-exclusive preoccupation with internal validity in academically controlled and published research makes external validity the fundamental problem in many, if not most, research translation issues.
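A minimal sketch of that multiple baseline logic, using simulated surveillance data (the communities, indicator levels and effect size are all assumptions for illustration, not results from the cited studies), introduces the intervention at staggered times so that each community serves as its own control:

import numpy as np

rng = np.random.default_rng(1)
months = np.arange(36)  # three years of monthly surveillance data

# Hypothetical communities with staggered intervention start dates (months 12, 18, 24)
start_month = {"Community A": 12, "Community B": 18, "Community C": 24}
true_drop = 8.0  # assumed reduction in the surveillance indicator after the intervention

for name, start in start_month.items():
    baseline = 60 + rng.normal(0, 3, size=months.size)   # noisy pre-existing level
    observed = baseline - true_drop * (months >= start)  # level shift only after the start date
    pre = observed[months < start]
    post = observed[months >= start]
    print(f"{name}: pre-intervention mean {pre.mean():.1f}, "
          f"post-intervention mean {post.mean():.1f}, change {post.mean() - pre.mean():+.1f}")

# Because the downward shift appears in each community only after its own staggered
# start date, the replicated pre-post changes provide the evidence of effect, with
# each community serving as its own control rather than relying on a randomised
# comparison group.

The design trades the randomised comparison group of the RCT for replication across sites and time, drawing on routinely collected surveillance data rather than a purpose-built trial apparatus.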
Community-based research
Fig. 2 suggests four overlapping spheres of research for community health promotion: community-based research,23 academic-based research,24 participatory research21,25 and community-based participatory research. Highly controlled academic research overlaps with community-based research to the extent that it is conducted in communities. Participatory research overlaps with community-based participatory research and with other community-based and practice-based research. These are not distinct categories, but overlapping ones. The process of planning and conducting research needs to strike the right balance for the right questions, matched to a community’s needs.
Striking the right balance involves reconciling several paradoxes. The internal validity–external validity paradox suggests that the more rigorously controlled a study is, the less reality-based it becomes; it cannot then be taken to a larger scale or generalised to other settings or populations, because it has little to do with those other worlds. An RCT usually tests the efficacy of an intervention – whether it works under more or less ideal circumstances – not its effectiveness under real-world conditions.
The specificity–generalisability paradox suggests that the more relevant and particular a study is made to the local context, the less generalisable it may be to other contexts. This often makes community-based participatory research less generalisable. The counter, however, is that when results from a study in one community are taken to another community, they will at least have greater credibility to practitioners in the second community, because the study was done in a real setting more like their own, under circumstances more similar to their own, than the average RCT conducted in, or under the control of, academic settings. The homophily–social influence paradox arises when community health workers, such as indigenous aides who communicate well in the community, are advanced up the professional hierarchy of the agencies that hire them and thereby become more socially distanced from their community, which in turn may undermine their effectiveness. Career-ladder advancement is their right if they are effective, but it may erode some of the very effectiveness that earned it.
The number-one complaint from practitioners about the evidence represented in ‘best practice’ guidelines based on systematic reviews of RCT evidence, according to a study of practitioners’ problems with evidence-based practice, was its perceived lack of external validity.16 The practitioners did not use the term ‘external validity’, but that is essentially what they were saying, which raises the issue of how objective and subjective evidence are weighed.
Weight versus strength of evidence
Academic scientists, and usually professional practitioners, view health and evidence about it through different lenses than the lay public in most communities, as suggested by Fig. 3. Scientists and professionals tend to have greater visual acuity (with their validated instruments) for, and place more weight on, objective indicators of health. The layperson’s acuity lies in visual and other senses, which place more credence on subjective indicators.26 If behaviours are the main point of the intervention, then subjective indicators are often more important than objective ones. At the very least, more credence should be given to the subjective indicators of health.
How does this play out in relation to blending science, policy and practice? The tendency to favour internal validity over external validity in the funding, conduct and publication of research, and in the systematic review of research to produce guidelines for practice, means that the strength of evidence is favoured over the weight of evidence.17 Giving greater credence to a wider range of evidence – from evaluation of community-based practice as well as academically controlled experiments – should produce a more balanced weighing of evidence than the single-minded internal validity criteria favoured in the current funding, conduct and systematic reviews of research evidence.
Conclusion
There were great advances in public health and health promotion in the 20th century: reductions in cardiovascular and stroke deaths, tobacco control, immunisation, injury control (especially automobile injuries), and reductions in occupational injuries and deaths. Many of these advances were achieved without randomised controlled trials preceding their implementation in policy and practice. If these successes are to continue and to gain traction in other areas such as alcohol, HIV and obesity control, there needs to be continued recognition of the value of RCTs, but also greater appreciation of, and blending with, other sources of data, particularly practice-based evidence from surveillance, monitoring and evaluation, so that the evidence has greater relevance to practice and can be better adapted to the real world.
Acknowledgements
This paper is based partly on presentations made at the National Dissemination & Implementation Conference, the Medical Research Council Conference of Investigators, and a Monash University public lecture, Melbourne, Australia, November 2012; also at the Cancer Council of New South Wales, Sydney, October 2013, and at the Australian Regional Conference on Translational Research, University of Newcastle, NSW, November 2013. I am indebted to Jonine Jancey, Editor of the journal, for arranging the transcription of a draft of the manuscript from an online video version of the lecture.
References
[1] Lomas J. Keynote presentation. European Public Health Conference, Amsterdam; 2009.
[2] Green LW, Glasgow RE, Atkins D, Stange K (2009) Making evidence from research more relevant, useful, and actionable in policy, program planning, and practice: Slips “Twixt Cup and Lip”. Am J Prev Med 37, S187–91.
[3] Institute of Medicine. Evaluating obesity prevention efforts: a plan for measuring progress. Washington, DC: National Academy Press, 2013.
[4] Katz DL, Murimi M, Gonzalez A, Nijike V, Green LW (2011) From controlled trial to community adoption: the multisite translational community trial. Am J Public Health 101, e17–27.
[5] Balas EA, Weingarten S, Garb CT, Blumenthal D, Boren SA, Brown GD (2000) Improving preventive care by prompting physicians. Arch Intern Med 160, 301–8.
[6] Dickersin K, Min Y (1993) Publication bias: the problem that won’t go away. Ann N Y Acad Sci 703, 135–146.
[7] Green LW (2008) Making research relevant: if it is an evidence-based practice, where’s the practice-based evidence? Fam Pract 25, i20–4.
[8] National Health and Medical Research Council. How to put the evidence into practice: implementation and dissemination strategies. Handbook series on preparing clinical practice guidelines. Canberra: NHMRC; 2000.
[9] Clancy C. Keynote presentation at Translating Research into Practice conference. Washington, DC: Agency for Healthcare Research and Quality; 2003.
[10] Cohen DJ, Crabtree BF, Etz RS, Balasubramanian BA, Donahue KE, Leviton LC, et al (2008) Fidelity versus flexibility: translating evidence-based research into practice. Am J Prev Med 35, S381–9.
[11] Brown PJ, Barrett R. Understanding and applying medical anthropology. 2nd edn. New York: McGraw-Hill Higher Education; 2010.
[12] Hanson DW, Finch CF, Allegrante JP, Sleet DA (2012) Closing the gap between injury prevention research and community safety promotion – revising the public health model. Public Health Rep 127, 147–55.
[13] Newsom SWB (2006) Pioneers in infection control: John Snow, Henry Whitehead, the Broad Street pump, and the beginnings of geographical epidemiology. J Hosp Infect 64, 210–6.
[14] Green LW, Ottoson JM. From efficacy to effectiveness to community and back: evidence-based practice vs practice-based evidence. In: Green L, Hiss R, Glasgow R, et al., eds. From Clinical Trials to Community: The Science of Translating Diabetes and Obesity Research. Bethesda: National Institutes of Health; 2004, pp. 15–18.
[15] Green LW (2006) Public health asks of systems science: To advance our evidence-based practice, can you help us get more practice-based evidence? Am J Public Health 96, 406–9.
[16] Rothwell PM (2005) Subgroup analysis in randomised controlled trials: importance, indications, and interpretation. Lancet 365, 176–86.
[17] Green LW, Glasgow R (2006) Evaluating the relevance, generalization, and applicability of research: Issues in external validation and translation methodology. Eval Health Prof 29, 126–53.
[18] Hawkins NG, Sanson-Fisher RW, Shakeshaft A, D’Este C, Green LW (2007) The multiple baseline design for evaluating population-based research. Am J Prev Med 33, 162–8.
[19] Mercer SL, DeVinney BJ, Fine LJ, Green LW, Dougherty D (2007) Study designs for effectiveness and translation research: Identifying trade-offs. Am J Prev Med 33, 139–54.
[20] Sanson-Fisher RW, Bonevski B, Green LW, D’Este C (2007) Limitations of the randomized controlled trial in evaluating population-based health interventions. Am J Prev Med 33, 155–61.
[21] Green LW, George A, Daniel M, Frankish CJ, Herbert CH, Bowie W, et al. Study of Participatory Research in Health Promotion: Review and Recommendations for the Development of Participatory Research in Health Promotion in Canada. Ottawa: Royal Society of Canada. ISBN 092006455. 1995.
[22] Minkler M, Wallerstein N. Community-based participatory research for health: from process to outcomes. 2nd edn. San Francisco: Jossey-Bass; 2010.
[23] Israel BA, Schulz AJ, Parker EA, Becker AB (1998) Review of community-based research: assessing partnership approaches to improve public health. Annu Rev Public Health 19, 173–202.
[24] Nyden P (2003) Academic Incentives for Faculty Participation in Community-based Participatory Research. J Gen Intern Med 18, 576–85.
[25] Cornwall A, Jewkes R (1995) What is participatory research? Soc Sci Med 41, 1667–76.
[26] Green LW, Kreuter MW. Health program planning. 4th edn. New York: McGraw-Hill; 2005.