Program Evaluation Toolkit for Harm Reduction Organizations

Voices from the Field

Sam Armbruster

Sam Armbruster is an advocate and provider of harm reduction services, serving as the Education and Data Manager at Choice Health Network. With extensive experience providing harm reduction in both rural and urban communities, Sam draws on their expertise in public health and multidisciplinary approaches to integrate quantitative metrics with rich qualitative data.

“It’s always been really challenging for me to get the ‘full picture’ from numbers alone.”

Sam describes how the importance of qualitative and narrative information is not always reflected in the traditionally numbers-focused evaluation models used by funders and public health agencies:

“I think that in a lot of evaluation of public health programs more broadly it’s very number-oriented, quantitative data collection – what percentage change in X health outcome took place, how many people attended X educational program, etc. – which is valuable information, but unfortunately, I think that a lot of narrative and personhood gets lost in focusing on outcomes and not context.”

“One of the challenges related to sharing evaluation data with funders is that sometimes some really important information I would like to communicate is lost in translating the data. Most funders have a really explicit formula or set of details they want to receive, and, while I understand that, it can mean that some of the important context I discussed previously is lost.”

“Gathering qualitative information from program participants provides a chance to learn meaningful information that might otherwise be lost.”

For Sam, elevating the qualitative experiences of program participants is not only a core part of the political dimension of harm reduction, but is also essential for individual programs to provide impactful and relevant services and to amplify the voices of their communities in more formal evaluation dialogues.

“Since harm reduction is a social justice movement at its core, I think it’s important to include the context of systems and peoples’ lived experiences in any evaluation that takes place…. It’s simultaneously helpful for our program directly to shape the services we provide, but it has more far-reaching implications as well.

“Our organization is often invited to tables that people who use drugs and/or are otherwise marginalized don’t have access to, and having deeper, contextualizing qualitative data means that we are able to share those folks’ stories and knowledge with people who wouldn’t necessarily hear it any other way.”

“I try to include as much detail and context as possible in the narrative sections of reports within the bounds provided.”

Sam shares how it is possible, and ultimately beneficial, to push back on quantitative evaluation demands that lack qualitative variables, whether that means changing the evaluation questions before they are implemented or ensuring that the resulting data is placed in its necessary context in later reports.

“Our team has also been able to have ongoing conversations with funders that request specific quantitative data (most often the measures: syringes distributed, syringes returned, total participant visits, total unique participants) to at least provide further data that paints a more complete picture of the services that people are interested in accessing. It feels important to me that the interpersonal connections that are built with program participants are visible to people viewing data, especially if it is limited to quantitative measures.”

“Our program uses a combination of quantitative and qualitative data collection to inform pretty much any change that takes place in our program.”

Well-designed evaluation and monitoring can inform and help shape every aspect of a program, from the services it offers and the locations it reaches to the way the organization is staffed and trained. As Sam describes, functional evaluation infrastructure provides a feedback and assessment loop that can extend to external audiences and serve as a natural aid for delivering effective harm reduction for years to come.

“We collect quarterly satisfaction surveys that measure satisfaction with staff, program space, and service time; gauge syringe coverage; and provide opportunities for additional feedback. We have also conducted formal smaller-scale surveys to assess supplies participants need, both at the beginning of the program and prior to beginning to offer gender-affirming hormone therapy injection supplies.

“In addition to those formal surveys, we have more informally taken feedback from participants to offer alternative injection supplies that participants don’t regularly ask for but that meet their specific needs. This information has also been used to shape the program’s hours, caps on syringes, types of supplies, and the places that we refer to, as well as the information we give participants to prepare them for the experience they will likely have at a given organization.

“Surveys, stories, and informal data collection strategies have also shaped some of the advocacy efforts that we engage in, such as engaging community members in discussions about how local medical and hospital policies ultimately exacerbate harm among folks who use drugs.”

Now that we have covered how to carry out your harm reduction program evaluation, in the next and final module we will explore how to use the results of your evaluation to address reporting and communication needs.