
The Evaluation Therapy Newsletter

Maybe you DON'T need a logic model.


Hi Reader,

I often wonder if every evaluator is, like me, a card-carrying fanatic of The Container Store. (Was I invited to the VIP opening of Pittsburgh’s very first Container Store? Yes. Was that because they knew I'd been driving to Ohio to get my fix for years? Also yes.)

The reason I wonder is that evaluators love organizing ideas. Putting things in boxes. Connecting those boxes with lines to show order within a system. To give it logic.

For us, this process is second nature. It’s comforting.

For some folks we work with, this process is torture.

This month, I take the heretical position that there are many times when educators and program people don’t need a logic model. Or at least not yet.

Also in this issue:

  • Final publication of Research Briefs on our art-science-data research project! (Michelle and I are a mix of proud, excited, and exhausted.)
  • Where to find us at virtual VSA, which starts today!

Hang in there,

Jessica

Logic Models: One thing leads to another.

I believe strongly that sometimes you don't need a logic model.

*record scratch*

Wait, is that a mob of evaluators approaching with pitchforks and torches?

Look, I know this is evaluation heresy. In countless textbooks, courses, and grant RFPs, we are told, "You must first create a logic model." And that is because logic models are incredibly useful tools.

Except when they’re not.

Sidebar: If you are like, "WTF is a logic model??" Get our one-page overview of the parts and the purpose of this nifty (but sometimes troublesome) tool.

The dream is a facilitated, constructive process. That the team shares a vision about the what, how, who, and why of their work. That the pieces fall into place. That you end with a succinct, logical flow.

But that dream is not always reality.

I've also seen logic models thrust upon program folks as a requirement. Where they face some template full of jargon. And they spend hours fitting the right pithy words into the right little boxes.

And after arguing about outputs versus activities versus outcomes, they file the damn thing away, never to be seen again.

That sounds like a waste of time, resources, and patience.

Today, I want to free you from "shoulds" around logic models, and give you options to think about whether (and when) you need one.

Option 1: Just Skip It

Sometimes a project just needs an outcome evaluation. They need to know if the thing they did worked. And the thing they did is straightforward. And possibly happening next week.

When you are clear on your outcomes and ready to know if it worked, I grant you permission to just skip the logic model.

A logic model tells the story of how. How you take a pile of resources and expertise and turn it into a specific experience that creates outcomes for specific people.

If you are focused on “what happened,” and not “how you got there,” you might be frustrated by the logic model process.

This can also be a cost-benefit question. The logic model process takes time, energy, and resources. Only you can say if it feels like a value-add.

Could it be beneficial to create a logic model? Absolutely!!

Could you do a robust evaluation if you skipped it? Yup, probably.

I’m in the KISS camp: Don’t overcomplicate things just because some textbook said you should.

Option 2: Evaluate First, Logic Model Second

This is one of my favorite ways to get to a really useful logic model. We do the logic model last. After we've evaluated.

We work with a lot of programs that trust their know-how. They are improvising and making some magic. They know there's a zone of good stuff happening, but heck if they could describe it with any specificity or say precisely how they get there.

Personally, I love a program that is built on educator instinct. Because experienced educators have good instincts.

But the traditional logic model process isn't about instinct. It pushes you to articulate a logical vision: How X leads to Y leads to Z.

Sometimes, projects don't have that vision yet. Or it's implicit, rather than explicit. The logic model exercise can leave them grasping at straws and using hand-wavy buzzwords.

In these cases, I like to be a little more constructivist. Instead of letting the logic model start the evaluation plan, let the evaluation kick-start the logic model.

When we do a study first – poke under a few rocks to see what's going on – those findings give you the aha moments. You get insight about what you're achieving, with whom, how, and why. The evaluation gives you the language and specificity you need to make a logic model that speaks to you.

Option 3: Diagram As You Go

Buckle up, my fellow Container Store fans, this one might hurt a little.

When you are dealing with a complicated program, you typically do need a logic model. Complexity benefits from connecting some dots.

But sometimes a project is complicated and emergent in its strategy. They are figuring it out as they go along. As they interact with communities. They need a period to live in the messy middle.

When you push a program that needs to be in a messy or creative zone to immediately put things in boxes and connect the dots, it can be stifling.

I’ve done it. I’ve pushed past the side-eye. (Hi, Story Collider team! Thanks for hanging in there anyway!) I've held folks' hands. I've worked hard to get to a decent model. But, as I look back on our thorniest projects, I might not do it that way again.

Sometimes, the program’s strategy can emerge over time. In this case, you could put your logic model together slowly, as you go along.

You stay open to documenting what actually happens and how it works, instead of starting from what is planned to happen. It's more Bob Ross capturing happy little accidents than Paint-by-Numbers revealing a pre-defined image.

Now, this piece isn't as easy as a single sit-down session. It requires having someone with their eye on the ball continuously. Someone to notice and capture the strategies and details. Someone who is paying attention and says, “Hey, that’s an important input!” or “Boy, that sounds like a key activity!”

(This is where I’m going to shout out Angie on our team. She is an ace at capturing these kinds of program details through long and winding roads of development. Not to mention her wizardry at visualizing it all on a Mural post-it wall.)

I feel like I just spent 1,000 words hating on logic models. The truth is that I love a good logic model. But I also recognize that they are an acquired taste, and I'm done force-feeding people metaphorical lima beans.


Real-World Example:

We worked with a program that was undertaking their first evaluation. And they wanted to do a logic model.

After talking with them, I realized they'd been 100% running on instinct. They knew their zone. They had a strong sense that they were creating good stuff. But the details of how, what, or why were hand-wavy at best.

Think: A program delivering PD to professionals who are fairly proprietary and self-sufficient. The program gave the PD. They went off and... did stuff.

This program had been thriving on vibes and anecdotes.

So, I proposed we take the standard process and flip it. We'd start with a study to understand their audiences and clarify their actual outcomes. We'd get some cold, hard data about what was really going on.

After the findings brought clarity to how their audience actually benefited from all that they did, we were ready to logic model it up. Throughout the "putting words in boxes" process, we kept calling back to the evaluation report:

Hey, don't forget that whole "Impressing my boss" outcome! How does that fit into the picture?

The program folks were also better able to articulate the most important why and how of their strategy. Because the experience was grounded in their audience's words. The logical links became explicit, at last.

Anyone else suffered through a logic model process that went nowhere? Found a cheat code to make it work creatively? Reply and share!

Art-Science-Data: Research Results

Last fall, Michelle and I were excitedly telling everyone we could about our cool research project that explored how arts-based teaching methods build data literacy skills in middle schoolers.

Remember the time we talked about the incredible power of slowing the f*** down?

Well, finally, we’ve published 4 Research Briefs with our findings from all of that work!

We have results about:

  • Classroom Implementation: The ways teachers implemented the art-science-data framework – what worked, and what didn’t.
  • Shifts in Teachers: How the PD process impacted teachers’ mindsets, values, confidence, and practices.
  • PD Secret Sauce: The critical elements of the entire PD process that led to professional change.
  • Student Impacts: How arts-based teaching shifted students’ data skills – better than standard science teaching.

We’re super excited to share these. We opted to do Research Briefs to get results directly to end-users as quickly as possible. And to include way more visuals than we get in a journal.

Michelle and I love to talk about this project, so hit reply if you have questions or want to learn more!

Get your copy of any of these Research Briefs!

And one Not-So-Brief.

(There was a lot to say about the students. Sue us.)

VSA Conference Starts Today!

If you’re at the virtual Visitor Studies Association Annual Conference, come talk with us!

The full JSC team is there! Look for Jessica, Angie, Michelle, and Rob at sessions and mingling events and say hello!

And check out our two sessions on the program:

Let's Data Party! Collaboratively Crafting Evaluation Insights and Inspiring Action: Tuesday, 7/15, 2:45 ET (Michelle presenting)

How Arts-Based Practices Improve Data Literacy Teaching & Learning: Wednesday, 7/16, 2:00 ET (Jessica & Michelle presenting)

P.S. Got a question you'd like us to answer in an upcoming newsletter? Hit reply and tell me what's on your mind!

P.P.S. Get this email from a colleague? Sign up to get your very own copy every month.

Why the "Evaluation Therapy" Newsletter?

The moniker is light-hearted. But the origin is real. I have often seen moments when evaluation causes low-key anxiety and dread, even among evaluation enthusiasts. Maybe it feels like a black-box process sent to judge your work. Maybe it’s worry that the thing to be evaluated is complicated, not going to plan, or politically fraught. Maybe pressures abound for a "significant" study. Maybe evaluation gets tossed in your "other duties as assigned" with no support. And so much more.

Evaluation can be energizing! But the reality of the process, methods, and results means it can also feel messy, risky, or overwhelming.

I've found that straightforward conversation about the realities of evaluation and practical solutions can do wonders. Let's demystify the jargon, dial down the pressure, reveal (and get past) barriers, and ultimately create a spirit of learning (not judging) through data. This newsletter is one resource for frank talk and learning together, one step at a time.

Learn more about JSC and our team of evaluators. Or connect with us on LinkedIn.

Copyright © 2025 J. Sickler Consulting, All Rights Reserved

You are receiving this email because you signed up for our newsletters somewhere along the line. Changed your mind? No hard feelings. Unsubscribe anytime.

Wanna send us some snail mail? J. Sickler Consulting, 100 S. Commons, Suite 102, Pittsburgh, PA 15212

The Evaluation Therapy Newsletter

Our monthly Evaluation Therapy Newsletter shares strategies, ideas, and lessons learned from our decades of evaluating learning in non-school spaces – museums, zoos, gardens, and after-school programs. Jessica is a learning researcher who is an educator at heart. She loves helping education teams really understand and build insights from data that they can use immediately – even those who are a bit wary of evaluation.
