
The Evaluation Therapy Newsletter

Measuring Ch-ch-changes: When Pre/Post is not the answer


Hi Reader,

You know that old saying about the weather in New England? Or Michigan? Or Pittsburgh? Or, well, every Northeast/Midwest city or region has a version of it.

Don't like the weather? Wait 15 minutes. It'll change.

It's been nothing but ch-ch-changes in the weather around here. 75 yesterday. 50 today. Maybe Mother Nature is on the collective Emotional Roller Coaster we are all riding these days.

As an evaluator, the word change makes my ears perk up. Measuring change presents some challenges. And it has led to a rampant assumption that you need a pre/post assessment. But the truth is, there are times when pre/post measures are just not a productive choice. Let's unpack why and free you from that automatic assumption.

Also included:

  • Semi-related resource on how we measured the impact of a storytelling show -- one of the places where a pre/post test is a tremendously bad idea.
  • Upcoming ASTC Webinar about a Community Science rubric tool

Hang in there,

Jessica

Time may change me, but I can’t trace time.

When you work in summative evaluation – you know, the evaluation meant to answer whether something worked – someone, somewhere is going to say you need pre/post data.

It’s not crazy. And have an outcome statement that includes the word change? Well, you've started moving in a pre/post direction:

Where I Am – Where I Was = My Change

But I’ve been in a bunch of situations where I heard insistence on pre/post methods, and it just… wasn’t a good idea. In the abstract, it sounds great. But in reality, it sometimes just doesn’t make sense.

Here are 4 red flags that a pre/post measure is not your solution.

1: When you don’t really know what’s happening.

Does this sound familiar? You have a program. You know it’s a cool idea, a crowd favorite, or that good stuff happens. But you can't say exactly what that stuff is or looks like. Or the good stuff covers a lot of territory.

With pre/post measures, you have to know exactly what you’re looking for before you get started. Because you're going to have to measure it the same way each time.

For a lot of programs in early stages of development or evaluation, you simply don’t know enough yet to hypothesize exactly what will change.

Slow down. Take time to explore your program outcomes first. Make sure you know there's a there there before you try to measure how much it has changed.

2: When your audience doesn’t know what they don’t know.

This happens when your outcome asks people to assign themselves to a level of knowledge, skill, or confidence.

Have you ever had a transformative learning experience? One where it felt like someone pulled back the curtain on a part of the world that, until now, you thought was straightforward? Where suddenly you see how complicated it actually is?

If we’d asked you to rate your knowledge beforehand, you’d say 4 out of 5. Because you read things. You know stuff. It's straightforward, right?

After the experience, the world has shifted. There’s so much more information than you’d imagined. And you’ve only scratched the surface. Now? You’d give yourself maybe a 3 out of 5.

Your knowledge didn’t decrease. You clearly know more than you did before.

But your "mental yardstick" shifted. Your view of the goalpost of what it means to have level 5 knowledge moved much further away. The issue is that your pre-rating was inflated because you didn't know how big the ruler actually was.

This happens all the time with professional development programs. People with professional expertise are justified in coming into a program feeling confident in their knowledge and skills. If your PD reveals a topic they only considered at a surface level before, that confidence may get a little shaken.

We’ve also found naive confidence to be a factor with teenagers. Bless their hearts. (How hard can it be? Oh, wait, that hard.)

This is when something called a retrospective-pre/post measure can come in handy. That’s when you ask both questions after a learning experience: “How much did you know before?” “How much do you know now?” You get at change, and you're more sure they’re using the same mental yardstick.
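
If a worked example helps, here's a minimal sketch of why the same change arithmetic gives such different answers depending on when the "before" rating is collected. The scores and names are made up for illustration (echoing the 4-out-of-5 story above), not from any real study:

```python
# A minimal sketch (hypothetical scores) of how change is tallied for a
# traditional pre/post versus a retrospective-pre/post measure.
# The arithmetic is identical; what differs is WHEN the "before" rating
# is collected, and therefore which mental yardstick it uses.

def change(before: int, after: int) -> int:
    """Where I Am - Where I Was = My Change."""
    return after - before

# Traditional pre/post: "before" collected before the program,
# when the respondent doesn't yet know what they don't know.
inflated_pre, post = 4, 3
print(change(inflated_pre, post))   # -1: looks like knowledge went down

# Retrospective-pre/post: both ratings collected after the program,
# so both use the same (post-experience) yardstick.
retro_pre = 2
print(change(retro_pre, post))      # +1: the growth actually shows up
```

Same formula either way. The retro-pre version just makes sure both numbers come from the same yardstick.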

3: When you don’t want it to feel like a test.

People aren’t dummies. They know a test when they see one. There’s no claiming that this is program “feedback” when you hand out a pre-test.

And that can be fine!

But sometimes, you’re working in a setting where you really don’t want people to feel like they are being tested. Maybe it’s antithetical to the relationship you’re building with them. Or it would destroy the vibe of the experience.

Can you imagine getting a pre/post test when you go to your next Broadway show because they want to see how it changed you? Sure, I'd be fascinated to see those results. But that's a quick way to kill the joy.

The measurement approach impacts the experience. You can’t get around it. And sometimes that will take pre/post measurement off the table.

4: When it’s just not practical.

You know that I place a premium on evaluation being practical. If you are coming up with convoluted ways to make a method work, you’re in the danger zone.

(And not in a Maverick in Top Gun way. In a Whoopi warning Demi in Ghost way.)

Inherently, a pre/post measure requires more logistical moves. You need audience access and time to get data before the learning experience. It’s not a small ask.

It means educators carve time out of their program twice. Or capture people before they show up. Or some other convoluted way to reach them beforehand. In museum and informal spaces, that is often impractical.

A related red flag? When the duration of the experience is relatively short. If people will spend more time answering questions than they spend in the learning experience? Reader, your evaluation has jumped the shark.

So, when does a pre/post measure make sense?

I don't deny that pre/post data can offer really powerful evidence. There are times you will want to greenlight a study that looks for change.

Here are a few hallmarks to look for (or ask your evaluator about):

  • Predictable Outcomes: You have a solid idea of what is going to change. Maybe you’ve done some exploratory work. Or on-the-ground staff have been listening closely and can help dial in your measurement tools.
  • Outcomes aren't Self-Report: If you’re giving a pop quiz or skill assessment to gauge what people actually know or do, you’re probably good! It's self-assessment of "how much" that gets dicey.
  • Duration: A) The experience is long enough to measurably change something in a human. B) The experience lasts longer than the time it takes for people to give data twice.
  • School Day Vibes: The more the setting feels like school or work, the less weird it will feel to test people twice. It saddens me to say that. But it’s often true.

Real World Example:

I’m going to call out some work we did a bunch of years back with the Teacher Innovator Institute, a very cool teacher PD program at the National Air and Space Museum. They wanted to know how teachers changed due to the program.

Ch-ch-changes! Did that mean we needed a pre/post test?

In this case, the answer is yes... and no. Much like the nation's beleaguered financial advisors keep saying, the answer was to diversify our measurement portfolio.

We followed the rules of thumb to decide when and how to look for evidence.

Pre/Post Questions: Focused on actual classroom practices – how often they did X, Y, or Z. Teachers could objectively report these before and after the experience.

Retro-Pre/Post Questions: Change in confidence or comfort with the super-new practices TII threw at them – like using museums, collections, objects, and tech in new ways. We knew this was ripe for inflated “I don’t know what I don’t know” pre-program confidence.

Post Questions: We were certain cool stuff would happen for these teachers that we simply could not predict. So, we also used exploratory conversations at the end of the first year. What could they teach us about what they learned? (Answer: Lots.)

Have you ever felt the pre/post pressure? Reply and tell me when it worked for you... or when it really hasn't. (I'll tell you my story, if you tell me yours.)

Don't kill the vibe, evaluators.

Years back, Michelle and I worked with the awesome team at The Story Collider. Do you know them?

Among other things, they produce live shows where people tell true, personal stories about science. I describe them as: "Imagine The Moth storytelling show on NPR had a baby with a science museum."

The first time I met Erin Barker, the current Executive Director of The Story Collider, she was... let's say, skeptical of me and everything about my profession. And that reaction was entirely fair.

She's an expert in storytelling. It's a performing art. My work measured science learning with interviews, surveys, focus groups, and even pre/post tests. Her every instinct said: "These two things will not mesh."

Over time, we worked with Erin and the team to find a method to dig into the impact of a live, personal science storytelling show – in a way that didn't interfere with the show experience and still let us explore the subjectivity of that impact.

We learned a lot. And the vibe of zero shows was harmed by our study.

Want to see what we learned about audience impact at live science storytelling shows?

(And how we learned about it?)

Free Webinar: Community Science Attributes Rubric

They had me at the word rubric.

Our friends at the ASTC Community Science Initiative are hosting a free webinar to introduce a new tool -- a rubric -- to help science institutions analyze their Community Science work. Based on the five Attributes of Community Science, the rubric can be used for planning, reflection, or evaluation of community science work.

Angie and I worked with them to develop this tool a couple of years ago. (We're actually applying it to another project now!) And ASTC is thinking about all sorts of creative ways to apply this tool to help their members' work!

April 30 @ 1:00 - 1:30 ET


P.S. Got a question you'd like us to answer in an upcoming newsletter? Hit reply and tell me what's on your mind!

P.P.S. Get this email from a colleague? Sign up to get your very own copy every month.

Why the "Evaluation Therapy" Newsletter?

The moniker is light-hearted. But the origin is real. I have often seen moments when evaluation causes low-key anxiety and dread, even among evaluation enthusiasts. Maybe it feels like a black-box process sent to judge your work. Maybe it’s worry that the thing to be evaluated is complicated, not going to plan, or politically fraught. Maybe pressures abound for a "significant" study. Maybe evaluation gets tossed in your "other duties as assigned" with no support. And so much more.

Evaluation can be energizing! But the reality of the process, methods, and results means it can also feel messy, risky, or overwhelming.

I've found that straightforward conversation about the realities of evaluation and practical solutions can do wonders. Let's demystify the jargon, dial down the pressure, reveal (and get past) barriers, and ultimately create a spirit of learning (not judging) through data. This newsletter is one resource for frank talk and learning together, one step at a time.

Learn more about JSC and our team of evaluators. Or connect with us on LinkedIn.

Copyright © 2025 J. Sickler Consulting, All Rights Reserved

You are receiving this email because you signed up for our newsletters somewhere along the line. Changed your mind? No hard feelings. Unsubscribe anytime.

Wanna send us some snail mail? J. Sickler Consulting, 100 S. Commons, Suite 102, Pittsburgh, PA 15212

The Evaluation Therapy Newsletter

Our monthly Evaluation Therapy Newsletter shares strategies, ideas, and lessons learned from our decades of evaluating learning in non-school spaces - museums, zoos, gardens, and after-school programs. Jessica is a learning researcher who is an educator at heart. She loves helping education teams really understand and build insights from data that they can use immediately – even those who are a bit wary of evaluation.
