
The Evaluation Therapy Newsletter

How do you make measurement meaningful? It's all about grain size.


Hi Reader,

A few weeks ago, Rob and I were meeting with a new client. We were introduced around the office as “the new evaluators.”

One young woman was casually sitting on a low filing cabinet, chatting with colleagues. At this introduction, she leapt to her feet, shook my hand, and said, “Whoops!”

I was like, “Yeah, no, we’re not that kind of evaluator.”

It was a stark reminder of the baggage this work can carry. People assume they're about to be judged against some impossible benchmark. That, when an evaluator walks into the room, there's no room for flexibility or personality -- or sitting on the filing cabinet.

That's not how things work around here. This month, we're talking about how to approach measurement so you set up realistic, not impossible, standards.

Also included:

  • A just-published article about supporting teen leadership
  • Upcoming conference appearances

Cheers,

Jessica

How do you measure, measure a year?

Measurement is a word that gets used a lot in evaluation circles.

It's also a word that gives me the icks. It conjures a mental image of someone with a stern expression, carrying a yardstick, who either says you're up to snuff or raps your knuckles.

That is not who I wanted to be when I grew up.

But measurement is central to our work. If you want to know how much, how far, how long, or how strongly something is happening because of your work... well, you need some form of measurement.

The good news? It doesn't need to feel so intimidating or arbitrary. It just takes a little extra time in the planning stage.

One of the biggest things that separates an arbitrary yardstick from a targeted one -- and determines whether measurement works for you -- is grain size.

Let's break it down.

What do we mean by grain size?

You know that expression about seeing the forest versus the trees? That's grain size. I'm referring to the degree of specificity in what you're trying to measure.

A lot of times, educators are dealing with outcome language that is grand and sweeping. It's that language from grant proposals. It's big. It's impressive. It brings tears to your Program Officer's eyes. It gets the big fat check in the door.

And that is usually a really big grain size.

It's a great place to start.

It's a terrible thing to try and measure.

Once you (and your evaluator) are ready to think about measurement, you pretty quickly realize that sweeping statement is way too big. Trying to measure at that grain size is a recipe for disappointment.

You need a smaller – and more specific – grain size.

The tricky part here is getting beyond hand-wavy outcome platitudes. Now you have to get serious about what you really do well within that big outcome's zone.

Let’s imagine a hierarchy of related outcomes you could articulate for an environmental education program. It could influence:

  • Environmental sensitivity
  • Caring for nature
  • Caring about this habitat
  • Perceptions of this place we just explored

See how each one gets more specific as you move down the list?

The more specific you can get about what change looks like in your learners, the better off you will be in the long run.

How do you figure out the grain size of a measurement tool?

When measurement time arrives, Google Scholar is a place many of us turn. There are plenty of pre-existing tools, and many are well-researched. My hat is off to the scholars who develop these tools. We thank you.

But a lot of the tools you find in the literature operate at a giant grain size. They measure Big Psychological Concept (TM).

Michelle recently located a tool for measuring awe that we were considering for a project. Its developers had to break awe into six pieces, because awe is complicated. And even the pieces are big picture. (Think: sense of vastness, self-loss, etc.)

Just because it's a well-researched tool doesn't mean it's the right tool for you.

Especially if you’re in an environment with pressure to prove outcomes, this is not the time to shrug and say, “any survey will do.” You have to measure the measurement tool.

This is my smell test: Imagine yourself in the shoes of your learners. Read the survey questions carefully. Do they speak to learners' experience in your program? Do they use their language? Do they sound like things learners would say after working with you -- and would not have already been saying before they met you?

If the tool doesn't pass the smell test, it's often operating at too big a grain size to align with what you do.

But aren't those valid measures??

Yes, they are. But if a tool failed the smell test, be prepared for the results to feel really… meh. And you'll have two options for interpreting "meh" results.

Interpretation Option 1: You failed. You need to overhaul the program to pass the test you selected.

I’m not gonna lie, that option makes me gag a little. I don't like informal educators teaching to a damn test.

Interpretation Option 2: You measured an outcome over here, when the real outcome is over there. Or, more likely, your outcomes are happening at a smaller grain size than whatever you tried to measure.

So, how do you dig deeper?

I advise starting with the ears closest to the ground -- the educators who work with learners. Gather examples of what they hear or see that indicate good learning is happening.

These anecdotes are likely at the tiniest grain size.

Now you can look for patterns to pull them up a notch or two in size. What commonalities do you find? What is a slightly more general way of saying it? Is there a way to translate that into a tool that captures those takeaways? Or to find an existing tool that speaks to that?

Admittedly, this is not always easy. But the closer you get to the reality of what change looks like in your setting, the more satisfying your data is going to be. And the better positioned you will be to tell your impact story.

What if we just don’t know?!?

That tells me you shouldn’t jump to measure anything yet. Take a beat and explore. Spend time prying open the black box, until you get a sense of what you need to measure.

A little exploration is a worthwhile investment if you are otherwise grasping at straws when it comes to digging into your outcomes.


Real World Example:

We were evaluating an environmental education field trip. The team was interested in whether they had boosted "environmental sensitivity."

I will be honest: I was not 100% clear what that meant in the context of a half-day field trip and 12-year-olds. I asked a lot of questions about what they did, what they heard, what they aimed for -- really -- with their kids.

In the final approach, we tried measuring things two different ways:

  • An environmental attitudes scale from the literature that aligned with big-picture Care for Nature (general);
  • A question we piloted that focused on associations with This Environment (specific).

We measured pre-program and post-program.

Environmental attitudes? *sad trombone* It wasn't that kids didn't care about nature. They were fairly pro-Nature to begin with. And one field trip didn't make them any bigger tree-huggers than they were when they started.

Perceptions of this environment? *happy trumpet* We saw shifts in certain perceptions of this environment -- ones that aligned with the educators' aims. Positive perceptions went up. Negative went down.

There were two ways we shrank the grain size: We looked at perceptions, rather than attitudes. And we looked at this particular environment, rather than the environment as a global concept.
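
For the data-curious, here is a minimal sketch of what the number-crunching behind a matched pre/post comparison like this can look like. Everything in it is hypothetical -- the data, column names, and 1-5 rating scale are stand-ins, and a paired t-test is just one common way to check a matched shift, not necessarily the analysis we ran:

    # A minimal, hypothetical sketch of a matched pre/post comparison.
    # The data, column names, and 1-5 rating scale are made up for
    # illustration; this is not the project's actual instrument or code.
    import pandas as pd
    from scipy import stats

    # One row per student, with matched pre/post ratings of a single
    # perception item (1 = strongly disagree, 5 = strongly agree).
    df = pd.DataFrame({
        "student": [1, 2, 3, 4, 5, 6],
        "pre":  [3, 2, 4, 3, 2, 3],
        "post": [4, 4, 4, 5, 3, 4],
    })

    # Paired t-test: did the same students rate the perception
    # differently after the field trip than before?
    result = stats.ttest_rel(df["post"], df["pre"])
    print(f"mean shift: {(df['post'] - df['pre']).mean():+.2f}")
    print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")

The point of the sketch is less the statistics than the structure: the same learners, the same specific perception item, before and after.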

Anyone else ever struggled to find the perfect grain size to measure your story? Reply and share!

New Reading: Building Teen Leaders

Through our work with the Teen Science Cafe Network, I'm routinely floored by the Adult Leaders who have the passion to help teens find their voice, skills, and strengths to become leaders among their peers.

It's not an easy job. And, from what we've heard, the pandemic aftermath made nurturing young leaders difficult in new ways.

In a brand-new article published with our collaborators at STEM Next, who manage the TSC Network, we share a strategy the Network used to support Teen and Adult Leaders in helping more teens build leadership skills.

Read about the Teen Catalyst Program

(including our Iceberg Diagram, because we love a visual metaphor)

On the Road Again

Where you can find JSC projects and team members out in the wild:

ASTC Annual Meeting (September 7, Poster-Palooza): Check out our poster at the Day 2 Poster-Palooza! It brings together convergent findings from three different projects centered on creating high-value Teacher PD.*

*Due to 2025 continuing to be the absolute worst, it's looking like we may miss in-person ASTC this year. We are very bummed about it. But if you'd like to chat about the poster, we can set up a time to connect virtually!

NAAEE Pre-Conference Workshop (October 20, Virtual): Jessica & Angie will be leading a half-day workshop on a practical approach to planning your evaluation. Registration is open now!

P.S. Got a question you'd like us to answer in an upcoming newsletter? Hit reply and tell me what's on your mind!

P.P.S. Get this email from a colleague? Sign up to get your very own copy every month.

Why the "Evaluation Therapy" Newsletter?

The moniker is light-hearted. But the origin is real. I have often seen moments when evaluation causes low-key anxiety and dread, even among evaluation enthusiasts. Maybe it feels like a black-box process sent to judge your work. Maybe it’s worry that the thing to be evaluated is complicated, not going to plan, or politically fraught. Maybe pressures abound for a "significant" study. Maybe evaluation gets tossed in your "other duties as assigned" with no support. And so much more.

Evaluation can be energizing! But the reality of the process, methods, and results means it can also feel messy, risky, or overwhelming.

I've found that straightforward conversation about the realities of evaluation and practical solutions can do wonders. Let's demystify the jargon, dial down the pressure, reveal (and get past) barriers, and ultimately create a spirit of learning (not judging) through data. This newsletter is one resource for frank talk and learning together, one step at a time.

Learn more about JSC and our team of evaluators. Or connect with us on LinkedIn.

Copyright © 2025 J. Sickler Consulting, All Rights Reserved

You are receiving this email because you signed up for our newsletters somewhere along the line. Changed your mind? No hard feelings. Unsubscribe anytime.

Wanna send us some snail mail? J. Sickler Consulting, 100 S. Commons, Suite 102, Pittsburgh, PA 15212

The Evaluation Therapy Newsletter

Our monthly Evaluation Therapy Newsletter shares strategies, ideas, and lessons learned from our decades of evaluating learning in non-school spaces - museums, zoos, gardens, and after-school programs. Jessica is a learning researcher who is an educator at heart. She loves helping education teams really understand and build insights from data that they can use immediately – even those who are a bit wary of evaluation.
