Time may change me, but I can’t trace time.
When you work in summative evaluation – you know, the evaluation meant to answer whether something worked – someone, somewhere is going to say you need pre/post data.
It’s not crazy. And if your outcome statement includes the word change? Well, you've started moving in a pre/post direction:
Where I Am – Where I Was = My Change
But I’ve been in a bunch of situations where I heard insistence on pre/post methods, and it just… wasn’t a good idea. In the abstract, it sounds great. But in reality, it sometimes just doesn’t make sense.
Here are 4 red flags that a pre/post measure is not your solution.
1: When you don’t really know what’s happening.
Does this sound familiar? You have a program. You know it’s a cool idea, a crowd favorite, or that good stuff happens. But you can't say exactly what that stuff is or looks like. Or the good stuff covers a lot of territory.
With pre/post measures, you have to know exactly what you’re looking for before you get started. Because you're going to have to measure it the same way each time.
For a lot of programs in early stages of development or evaluation, you simply don’t know enough yet to hypothesize exactly what will change.
Slow down. Take time to explore your program outcomes first. Make sure you know there's a there there before you try and measure how much it has changed.
2: When your audience doesn’t know what they don’t know.
This happens when your outcome asks people to assign themselves to a level of knowledge, skill, or confidence.
Have you ever had a transformative learning experience? One where it felt like someone pulled back the curtain on a part of the world that, until now, you thought was straightforward? Where suddenly you see how complicated it actually is?
If we’d asked you to rate your knowledge beforehand, you’d have said 4 out of 5. Because you read things. You know stuff. It's straightforward, right?
After the experience, the world has shifted. There’s so much more information than you’d imagined. And you’ve only scratched the surface. Now? You’d give yourself maybe a 3 out of 5.
Your knowledge didn’t decrease. You clearly know more than you did before.
But your "mental yardstick" shifted. Your view of the goalpost of what it means to have level 5 knowledge moved much further away. The issue is that your pre-rating was inflated because you didn't know how big the ruler actually was.
This happens all the time with professional development programs. People with professional expertise are justified in coming into a program feeling confident in their knowledge and skills. If your PD reveals a topic they only considered at a surface level before, that confidence may get a little shaken.
We’ve also found naive confidence to be a factor with teenagers. Bless their hearts. (How hard can it be? Oh, wait, that hard.)
This is when something called a retrospective-pre/post measure can come in handy. That’s when you ask both questions after the learning experience: “How much did you know before?” “How much do you know now?” You still get at change, and you're more sure they’re using the same mental yardstick.
3: When you don’t want it to feel like a test.
People aren’t dummies. They know a test when they see one. There’s no claiming that this is program “feedback” when you hand out a pre-test.
And that can be fine!
But sometimes, you’re working in a setting where you really don’t want people to feel like they are being tested. Maybe it’s antithetical to the relationship you’re building with them. Or it would destroy the vibe of the experience.
Can you imagine getting a pre/post test when you go to your next Broadway show because they want to see how it changed you? Sure, I'd be fascinated to see those results. But that's a quick way to kill the joy.
The measurement approach impacts the experience. You can’t get around it. And sometimes that will take pre/post measurement off the table.
4: When it’s just not practical.
You know that I place a premium on evaluation being practical. If you are coming up with convoluted ways to make a method work, you’re in the danger zone.
(And not in a Maverick in Top Gun way. In a Whoopi warning Demi in Ghost way.)
Inherently, a pre/post measure requires more logistical moves. You need audience access and time to get data before the learning experience. It’s not a small ask.
It means educators carve time out of their program twice. Or capture people before they show up. Or some other convoluted way to reach them beforehand. In museum and informal spaces, that is often impractical.
A related red flag? When the duration of the experience is relatively short. If people will spend more time answering questions than they spend in the learning experience? Reader, your evaluation has jumped the shark.
So, when does a pre/post measure make sense?
I don't deny that pre/post data can offer really powerful evidence. There are times you will want to greenlight a study that looks for change.
Here are a few hallmarks to look for (or ask your evaluator about):
- Predictable Outcomes: You have a solid idea of what is going to change. Maybe you’ve done some exploratory work. Or on-the-ground staff have been listening closely and can help dial in your measurement tools.
- Outcomes aren't Self-Report: If you’re giving a pop quiz or skill assessment to gauge what people actually know or do, you’re probably good! It's self-assessment of "how much" that gets dicey.
- Duration: A) The experience is long enough to measurably change something in a human. B) The experience lasts longer than the time it takes for people to give data twice.
- School Day Vibes: The more the setting feels like school or work, the less weird it will feel to test people twice. It saddens me to say that. But it’s often true.
Real World Example:
I’m going to call out some work we did a bunch of years back with a very cool teacher PD program, the Teacher Innovator Institute at the National Air and Space Museum. They wanted to know how teachers changed because of the program.
Ch-ch-changes! Did that mean we needed a pre-/post-test?
In this case, the answer was yes... and no. Much like the nation's beleaguered financial advisors keep saying, we needed to diversify our measurement portfolio.
We followed the rules of thumb to decide when and how to look for evidence.
Pre/Post Questions: Focused on actual classroom practices – how often they did X, Y, or Z. Teachers could objectively report these before and after the experience.
Retro-Pre/Post Questions: Change in confidence or comfort with the super-new practices TII threw at them – like using museums, collections, objects, and tech in new ways. We knew this was ripe for inflated “I don’t know what I don’t know” pre-program confidence.
Post Questions: We were certain cool stuff would happen for these teachers that we simply could not predict. So, we also used exploratory conversations at the end of the first year. What could they teach us about what they learned? (Answer: Lots.)
Have you ever felt the pre/post pressure? Reply and tell me when it worked for you... or when it really hasn't. (I'll tell you my story, if you tell me yours.)