
GUEST COLUMN.


PLUGGING THE LEAKS IN THE EVIDENCE PIPELINE

By Gary VanLandingham, director of the Reubin O'D. Askew School of Public Administration & Policy at Florida State University

While the use of evidence-informed policymaking has grown substantially in recent years, it faces several challenges that have slowed its progress and could even reverse it. Some of these issues are largely beyond the control of public managers, including growing political polarization and the rejection of empirical facts by some political factions. Other challenges are more fixable, however, particularly the barriers that limit our ability to learn which programs are effective and which are not.


To meet this need, a growing number of research clearinghouses have been established in the U.S. and other countries. These entities curate evaluation studies and assign evidence and/or impact ratings to individual programs, typically posting the results on searchable websites. In theory, the tens of thousands of program evaluations conducted since the 1960s should have enabled us to identify which interventions are effective in addressing many social problems. Alas, this is not the case, because there are many leaks in the evidence pipeline that produces this knowledge.


A major leak occurs because many evaluation reports cannot be accessed by the clearinghouses, which scan academic journals, government websites, and research funders to identify studies that have assessed program impacts. This is largely because many evaluation sponsors do not make the studies they have funded available to outside stakeholders. Additionally, leading journals tend to favor submissions that test academic theories or examine new initiatives, rejecting those that evaluate existing programs or report negative findings. As a result, many important evaluation findings may be lost to the field.


Another leak occurs when evaluators use inadequate research designs to test programs. The Washington State Institute for Public Policy (WSIPP) has scanned the evaluation literature for more than 20 years to inform its assessments of program impacts. Over this period, it has reviewed more than 30,000 studies but found that fewer than 5% were of sufficient quality to be incorporated into its assessments of program results. While the growing availability of administrative databases holds the promise of making randomized controlled trials of program outcomes much easier and cheaper to conduct, such high-quality studies are still relatively rare. As a result, only a small fraction of the program evaluations conducted each year contribute to determining “what works.”


A third leak occurs because the clearinghouses themselves can be difficult to locate and use. There is no centralized listing of these entities, and recent studies have identified more than 70 of them, most focused on a single policy area such as K-12 education, substance abuse, or reentry from prison. As a result, a government that operates programs across many policy areas must search out the clearinghouses that cover each one.


Further, once these clearinghouses are found, it can be difficult to interpret their results because they report them in very different ways. For example, clearinghouses use different nomenclature for programs with the highest level of effectiveness evidence, designating such programs as “Effective” (CrimeSolutions.gov), “Model Plus” (Blueprints for Healthy Youth Development), “1” on a scale of 1 to 5 (California Evidence-Based Clearinghouse for Child Welfare), “Strong” (What Works Clearinghouse), and “Positive Impacts” (Teen Pregnancy Prevention Evidence Review).


The Results First Clearinghouse Database, maintained by the Penn State University Evidence to Impact Collaborative, has attempted to make such comparisons easier by using a traffic-light system to display programs rated by ten research clearinghouses. However, while the database covers more than 4,150 programs across eight policy areas (as of February 2024), potential users must still find the resource on their own, and it excludes ratings issued by other clearinghouses.


These challenges greatly hinder our ability to identify programs that work and to use this knowledge to guide policy and budget choices. To address this problem, governments, evaluation sponsors, and the research clearinghouses should take steps to plug the leaks in the evidence pipeline. These steps should include:


· Establishing requirements that all publicly funded evaluation studies be posted on public websites and/or submitted to a centralized portal, making their findings accessible to the research clearinghouses and to others interested in learning whether currently funded programs are helping to resolve critical social problems.

 

· Creating a working group of evaluation sponsors, including federal and state governments and prominent foundations, to craft uniform standards for the research designs used in outcome evaluations. These standards should ensure that such studies generate high-quality findings that contribute to our knowledge of “what works.”


· Building a collaborative network of research clearinghouses that agree to standardize their processes for compiling and reporting program impact findings. At a minimum, the clearinghouses should provide links on their websites to other clearinghouses that publish “what works” ratings in relevant policy areas.

 

· Finally, expanding efforts such as the Results First Clearinghouse Database that aggregate “what works” information across multiple clearinghouses, making it much easier for policymakers and program managers to access and use this information.


There is a critical need for governments to operate programs that can effectively address the many wicked problems facing nations around the world. Efforts to better create, aggregate, and disseminate knowledge that identifies such programs would go a long way toward achieving this goal.

 

The contents of this guest column are those of the author and not necessarily those of Barrett and Greene, Inc.

 

#StateandLocalPerformanceManagement #EvidenceBasedResearch #StateandLocalGovernmentManagement #PublicSectorEvidenceChallenge #ResultsFirstClearinghouse #WhatWorks #ProgramEvaluation #EvidenceInformedPolicyMaking #EvidenceInformedManagement #PoliticalPolarization #WashingtonStateInstituteForPublicPolicy #WSIPP #InaccessibleProgramEvaluation #PennStateEvidenceToImpactCollaborative #BarriersToEvidenceResearch #StateandLocalEvidencePipeline #AssessingStateProgramImpact #AssessingLocalProgramImpact

