What is Evidence?

Policymakers at every level of government use research to guide policy and funding decisions. Research findings, combined with context from local practitioners, give policymakers valuable information about program effectiveness. Commonly, these studies are evaluations that answer the following questions about a program or service:

  • Process evaluations: Has the program been implemented as intended? What types of implementation issues have emerged and how can they be addressed? What new ideas are emerging that can be tried out and tested?
  • Outcome evaluations: Did participants report desired changes after completing the program? For whom did the program work best?
  • Impact evaluations: Is the program effective at achieving desired outcomes? How did participants do in achieving desired outcomes relative to comparable nonparticipants?

The Results First Initiative rates programs and services in Minnesota using impact evaluations only. This type of evaluation is designed to identify cause-and-effect relationships between the program or service and desired outcomes.

Impact on outcomes: definitions

Proven Effective: A Proven Effective service or practice has a high level of research demonstrating effectiveness for at least one outcome of interest. This is determined through multiple qualifying evaluations outside of Minnesota or one or more qualifying local evaluations. Qualifying evaluations use rigorously implemented experimental or quasi-experimental designs.

Promising: A Promising service or practice has some research demonstrating effectiveness for at least one outcome of interest. This may be a single qualifying evaluation that is not contradicted by other such studies but does not meet the full criteria for the Proven Effective designation. Qualifying evaluations use rigorously implemented experimental or quasi-experimental designs.

Theory Based: A Theory Based service or practice has either no research on effectiveness or research designs that do not meet the above standards. These services and practices may have a well-constructed logic model or theory of change. This rating is neutral: services may move up to Promising or Proven Effective after research establishes their causal impact on measured outcomes.

Mixed Effects: A Mixed Effects service or practice has a high level of research on its effectiveness for multiple outcomes, but the effects on those outcomes are contradictory. This is determined through multiple qualifying evaluations outside of Minnesota or one or more qualifying local evaluations. Qualifying evaluations use rigorously implemented experimental or quasi-experimental designs.

No Effect: A service or practice rated No Effect has been shown to have no impact on the measured outcome or outcomes of interest. Qualifying evaluations use rigorously implemented experimental or quasi-experimental designs.

Proven Harmful: A Proven Harmful service or practice has a high level of research showing that program participation adversely affects outcomes of interest. This is determined through multiple qualifying evaluations outside of Minnesota or one or more qualifying local evaluations. Qualifying evaluations use rigorously implemented experimental or quasi-experimental designs.

[Rating] (Category of Services): These services represent groupings of settings, assessments, tools, and processes that a client may receive depending on need. If the parent rating is Theory Based, some of the services within the category may be evidence-based, but the services have not been studied holistically. If the parent rating is anything other than Theory Based, at least one qualifying study has assessed the effectiveness of the services holistically.

[Rating] (Culturally-informed intervention): Research shows that evidence-based policies may not be equally effective for all communities. Moreover, many communities have built their own programs, imbued with culturally specific context. These programs often have practice-based evidence of effectiveness, but that evidence does not yet come from qualifying research designs. We have attempted to note these programs and the evidence behind them.
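
To make these decision rules concrete, here is a minimal sketch in Python of how the rating logic above could be encoded. It is an illustrative restatement, not the initiative's actual methodology or software: the EvidenceSummary fields, the "multiple national or at least one local study" threshold, and the assign_rating function are all assumptions drawn from the definitions above.

    # Illustrative sketch only: a simplified encoding of the rating
    # definitions above. Field names, thresholds, and logic are
    # assumptions, not the Results First Initiative's actual tooling.
    from dataclasses import dataclass, field

    @dataclass
    class EvidenceSummary:
        national_studies: int = 0   # qualifying RCT/QED evaluations outside Minnesota
        local_studies: int = 0      # qualifying RCT/QED evaluations in Minnesota
        effects: list = field(default_factory=list)  # "positive", "null", or "negative" per outcome

    def assign_rating(e: EvidenceSummary) -> str:
        """Map an evidence summary to one of the ratings defined above."""
        if (e.national_studies + e.local_studies) == 0 or not e.effects:
            return "Theory Based"        # no qualifying impact evaluations
        high_level = e.national_studies >= 2 or e.local_studies >= 1
        if high_level and "positive" in e.effects and "negative" in e.effects:
            return "Mixed Effects"       # contradictory effects across outcomes
        if high_level and all(eff == "negative" for eff in e.effects):
            return "Proven Harmful"
        if all(eff == "null" for eff in e.effects):
            return "No Effect"
        if "positive" in e.effects:
            return "Proven Effective" if high_level else "Promising"
        return "Theory Based"            # designs that do not meet the standards

    # Example: one qualifying local evaluation with a positive effect.
    print(assign_rating(EvidenceSummary(local_studies=1, effects=["positive"])))
    # -> Proven Effective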


The Results First team leverages existing impact evaluations from national clearinghouses in the Results First Clearinghouse database and from the Washington State Institute for Public Policy meta-analyses. We match programs and services delivered in Minnesota to previously evaluated programs and services that closely resemble them (with respect to nature, length, frequency, and target population) and that are featured in a national clearinghouse or meta-analysis. In some instances, programs or services in Minnesota have been rigorously evaluated with their own impact evaluation; in those cases, we use that study to assign a rating.
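
As a rough illustration of the matching step (the field names and exact-equality criterion here are hypothetical assumptions, not the team's actual procedure), a close match might be checked like this:

    # Hypothetical sketch of the program-matching step; the field names
    # and exact-equality criterion are illustrative assumptions only.
    def is_close_match(mn_program: dict, evaluated_program: dict) -> bool:
        """True when a Minnesota program closely resembles a previously
        evaluated one in nature, length, frequency, and target population."""
        keys = ("nature", "length", "frequency", "target_population")
        return all(mn_program.get(k) == evaluated_program.get(k) for k in keys)

    mn = {"nature": "mentoring", "length": "12 weeks",
          "frequency": "weekly", "target_population": "youth ages 12-17"}
    print(is_close_match(mn, dict(mn)))  # -> True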

What are qualifying evaluations?

Qualifying evaluations are impact evaluations that use either a randomized controlled trial (RCT) design or a quasi-experimental design (QED) to rigorously assess the effectiveness of a program or service on desired outcomes. Both RCTs and QEDs use an evaluation design that includes a treatment group and a treatment-as-usual group.

  • Randomized controlled trial (RCT): Researchers use random assignment to place individuals into the treatment and treatment-as-usual groups. As a result, participants in the two groups have similar characteristics, except for the treatment they receive, so the difference in outcomes at the end of the study is attributed to the treatment.
  • Quasi-experimental design (QED): Researchers do not use random assignment to place individuals into the treatment and treatment-as-usual groups. Instead, QEDs use statistical matching strategies or leverage existing policies to create the two groups; for example, a program or service that uses a wait list can offer a natural opportunity for a QED. Researchers verify that both groups have similar characteristics at the "starting point" of the evaluation, so the difference in outcomes at the end of the study can be attributed to the program or service. A simple numeric sketch of this comparison appears below.
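
Both designs ultimately compare mean outcomes between the two groups. The following toy sketch, with invented numbers rather than data from any actual evaluation, illustrates the basic arithmetic:

    # Toy illustration with invented data: the estimated impact is the
    # difference in mean outcomes between the treatment group and the
    # treatment-as-usual group. Under an RCT (or a well-matched QED),
    # that difference is attributed to the program.
    from statistics import mean

    def estimated_impact(treatment, treatment_as_usual):
        return mean(treatment) - mean(treatment_as_usual)

    # Hypothetical outcome indicators (1 = achieved the desired outcome).
    treated  = [1, 1, 0, 1, 1, 0, 1, 1]   # 75.0% achieved the outcome
    as_usual = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% achieved the outcome
    print(f"Estimated impact: {estimated_impact(treated, as_usual):+.3f}")
    # -> Estimated impact: +0.375

In a real evaluation, this difference would be accompanied by statistical tests and, for a QED, a check that the two groups were comparable at baseline.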