By ETR | July 19, 2016
Note: We're posting about some of the presentations ETR researchers and professional development specialists are offering at the Office of Adolescent Health Teen Pregnancy Prevention Grantee Conference July 19-21.
Here’s a challenge facing anyone delivering evidence-based teen pregnancy prevention programs. Educators usually want to adapt programs to boost their relevance for the teens they’re working with. Program managers need to be sure any adaptations are made in ways that maintain the fidelity and effectiveness of a program. If the project includes an evaluation component, managers also need to be sure that adaptations have been documented and are taken into consideration when data are analyzed and reported.
How do you feel about fidelity monitoring of your teen pregnancy prevention programs? Have you faced challenges balancing adaptation and program fidelity?
BA Laris, MPH. Evaluations: Adaptations/Fidelity. Tuesday 7/19/16 1:00-2:00 p.m. Topical Roundtable in the Evaluation Section, Key-3.
By Pam Drake, PhD | August 21, 2013
When we want to evaluate how well an evidence-based program (EBP) works, one of the important variables we need to measure accurately is implementation fidelity. This variable helps confirm that the program is being presented as intended, and that different educators are doing essentially the same things in teaching the program.
With good implementation fidelity, there’s a better chance others can replicate the program’s outcomes. Schools and communities that show good implementation fidelity for a program can affirm they’re taking the correct steps to reach health goals.
Implementation fidelity also helps us interpret outcomes—for example, why an intervention did or didn’t work. We can assess how practical the program activities are, or refine programs by determining which components lead to the outcomes we want.