
When Your Participants Aren't Who You Think They Are

  • Writer: Michaela Rawsthorn
  • Mar 13
  • 3 min read

Every program starts with a clear mental picture of who it's for. The original design, the funding proposal, the logic model—all of it is built around a specific person. A specific set of needs. A specific context.


But programs don't stay still. Communities shift. Referral patterns change. Word spreads in unexpected directions. The person who walks through the door in year four may be quite different from the person the program was designed for in year one.


This is a normal, human thing. It's also a problem that most evaluation systems aren't built to catch.


The gap between assumed and actual

Most evaluation frameworks are designed around an intended population. Outcome measures, success definitions, benchmarks—they're calibrated to a particular kind of participant, with a particular starting point and a particular goal.


When the actual population drifts from the assumed one, the evaluation doesn't always signal the change. Outcome numbers may stay roughly the same. Participation rates may hold. On paper, everything looks fine.


But underneath, the program may be quietly misaligned — serving people it wasn't designed for, measuring change it wasn't meant to produce, and missing both the wins and the gaps that actually matter.


Drift happens slowly, then all at once

Population drift rarely announces itself. It tends to accumulate through small, individually reasonable decisions: a referral partner expands their criteria, a program removes a prerequisite to increase access, a community need shifts and staff adapt informally, a funder asks the organization to broaden its reach.


Each of those decisions may be entirely justified. Cumulatively, they can move an organization far from its original design without anyone formally choosing to go there.


By the time the drift becomes visible — when outcomes plateau, or funders start asking harder questions, or staff notice that the work feels harder than it used to — it can be difficult to trace back to a single cause.


What evaluation can do about it

The first step is simply to look. A periodic, honest review of who is actually being served — their demographics, their starting circumstances, their needs relative to the program's design — can reveal drift before it becomes a crisis.


This isn't about policing eligibility. It's about alignment. If the population has changed, the question is whether the program has changed with it — intentionally, with the right supports and outcomes in place — or whether it's operating on assumptions that no longer match reality.


Building demographic tracking into routine evaluation, and reviewing it with the same attention given to outcome data, makes this kind of drift visible. Surfacing it early creates options. Missing it until it's significant usually means harder choices later.
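For teams that already keep intake records in a spreadsheet or database, even a rough quantitative check can make that review routine rather than occasional. The sketch below is illustrative only, not a method this post prescribes: it uses the Population Stability Index, a common drift metric, to compare this year's participants against a baseline year on a single categorical field. The field name, the categories, the sample data, and the 0.10/0.25 thresholds are all assumptions standing in for whatever a real program tracks.

```python
# A minimal drift check, assuming intake data with one categorical field
# per participant (here, a hypothetical "referral source"). The Population
# Stability Index (PSI) quantifies how far the current distribution has
# moved from a baseline; 0.10 (watch) and 0.25 (investigate) are
# conventional rules of thumb, not program-specific benchmarks.
import math
from collections import Counter

def distribution(values, categories):
    """Share of participants in each category, floored to avoid zeros."""
    counts = Counter(values)
    total = len(values)
    # A small floor keeps log() defined when a category is empty in one period.
    return {c: max(counts.get(c, 0) / total, 1e-4) for c in categories}

def psi(baseline, current, categories):
    """Population Stability Index between two categorical distributions."""
    b = distribution(baseline, categories)
    c = distribution(current, categories)
    return sum((c[k] - b[k]) * math.log(c[k] / b[k]) for k in categories)

# Hypothetical example: referral sources at intake, year one vs. this year.
categories = ["self", "partner_agency", "school", "court"]
year_one  = ["self"] * 60 + ["partner_agency"] * 30 + ["school"] * 10
this_year = ["self"] * 25 + ["partner_agency"] * 35 + ["school"] * 15 + ["court"] * 25

score = psi(year_one, this_year, categories)
if score >= 0.25:
    print(f"PSI {score:.2f}: population has shifted noticeably; review alignment.")
elif score >= 0.10:
    print(f"PSI {score:.2f}: early drift; keep watching.")
else:
    print(f"PSI {score:.2f}: stable relative to baseline.")
```

The same comparison works for any field an intake form already captures; the point is less the specific metric than having a baseline and a schedule for looking at it.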


The harder question

Sometimes the drift reveals something more fundamental: that the people now being served have needs the program was never designed to address. That the outcomes being measured don't reflect what success looks like for them. That the program, as currently structured, is a partial fit at best.


That's uncomfortable information. It's also exactly the kind of insight that evaluation exists to provide.


Organizations that can hold that information clearly — and use it to make deliberate decisions about design, population, and scope — are the ones that stay meaningfully aligned to their mission over time. The ones that can't tend to drift further, until the gap between intended and actual becomes too wide to close quietly.
