Logic Models: One thing leads to another.
I believe strongly that sometimes you don't need a logic model.
*record scratch*
Wait, is that a mob of evaluators approaching with pitchforks and torches?
Look, I know this is evaluation heresy. In countless textbooks, courses, and grant RFPs, we are told, "You must first create a logic model." And that is because logic models are incredibly useful tools.
Except when they’re not.
Sidebar: If you are like, "WTF is a logic model??" Get our one-page overview of the parts and the purpose of this nifty (but sometimes troublesome) tool.
The dream is a facilitated, constructive process. That the team shares a vision about the what, how, who, and why of their work. That the pieces fall into place. That you end with a succinct, logical flow.
But that dream is not always reality.
I've also seen logic models thrust upon program folks as a requirement. Where they face some template full of jargon. And they spend hours fitting the right pithy words into the right little boxes.
And after arguing about outputs versus activities versus outcomes, they file the damn thing away, never to be seen again.
That sounds like a waste of time, resources, and patience.
Today, I want to free you from "shoulds" around logic models, and give you options to think about whether (and when) you need one.
Option 1: Just Skip It
Sometimes a project just needs an outcome evaluation. The team needs to know whether the thing they did worked. And the thing they did is straightforward. And possibly happening next week.
When you are clear on your outcomes and ready to know if it worked, I grant you permission to just skip the logic model.
A logic model tells the story of how. How you take a pile of resources and expertise and turn it into a specific experience that creates outcomes for specific people.
If you are focused on “what happened,” and not “how you got there,” you might be frustrated by the logic model process.
This can also be a cost-benefit question. The logic model process takes time, energy, and resources. Only you can say if it feels like a value-add.
Could it be beneficial to create a logic model? Absolutely!!
Could you do a robust evaluation if you skipped it? Yup, probably.
I’m in the KISS camp: Don’t overcomplicate things just because some textbook said you should.
Option 2: Evaluate First, Logic Model Second
This is one of my favorite ways to get to a really useful logic model. We do the logic model last. After we've evaluated.
We work with a lot of programs that trust their know-how. They are improvising and making some magic. They know there's a zone of good stuff happening, but heck if they could describe it with any specificity or precisely how they get there.
Personally, I love a program that is built on educator instinct. Because experienced educators have good instincts.
But the traditional logic model process isn't about instinct. It pushes you to articulate a logical vision: How X leads to Y leads to Z.
Sometimes, projects don't have that vision yet. Or it's implicit, rather than explicit. The logic model exercise can leave them grasping at straws and using hand-wavy buzzwords.
In these cases, I like to be a little more constructivist. Instead of letting the logic model start the evaluation plan, let the evaluation kick-start the logic model.
When we do a study first -- poke under a few rocks to see what's going on -- those findings give you the aha moments. You get insight into what you're achieving, with whom, how, and why. The evaluation gives you the language and specificity you need to make a logic model that speaks to you.
Option 3: Diagram As You Go
Buckle up, my fellow Container Store fans, this one might hurt a little.
When you are dealing with a complicated program, you typically do need a logic model. Complexity benefits from connecting some dots.
But sometimes a project is complicated and emergent in its strategy. They are figuring it out as they go along. As they interact with communities. They need a period to live in the messy middle.
When you push a program that needs to be in a messy or creative zone to immediately put things in boxes and connect the dots, it can be stifling.
I’ve done it. I’ve pushed past the side-eye. (Hi, Story Collider team! Thanks for hanging in there anyway!) I've held folks' hands. I've worked hard to get to a decent model. But, as I look back on our thorniest projects, I might not do it that way again.
Sometimes, the program’s strategy can emerge over time. In this case, you could put your logic model together slowly, as you go along.
You stay open to documenting what actually happens and how it works, instead of starting from what is planned to happen. It's more Bob Ross capturing happy little accidents than Paint-by-Numbers revealing a pre-defined image.
Now, this piece isn't as easy as a single sit-down session. It requires having someone with their eye on the ball continuously. Someone to notice and capture the strategies and details. Someone who is paying attention and says, “Hey, that’s an important input!” or “Boy, that sounds like a key activity!”
(This is where I’m going to shout out Angie on our team. She is an ace at capturing these kinds of program details through long and winding roads of development. Not to mention her wizardry at visualizing it all on a Mural post-it wall.)
I feel like I just spent 1,000 words hating on logic models. The truth is that I love a good logic model. But I also recognize that they are an acquired taste, and I'm done force-feeding people metaphorical lima beans.
Real World Example:
We worked with a program that was undertaking their first evaluation. And they wanted to do a logic model.
After talking with them, I realized they'd been 100% running on instinct. They knew their zone. They had a strong sense that they were creating good stuff. But the details of how, what, or why were hand-wavy at best.
Think: A program delivering PD to professionals who are fairly proprietary and self-sufficient. The program gave the PD. They went off and... did stuff.
This program had been thriving on vibes and anecdotes.
So, I proposed we take the standard process and flip it. We'd start with a study to understand their audiences and clarify their actual outcomes. We'd get some cold, hard data about what was really going on.
After the findings brought clarity to how their audience actually benefited from all that they did, we were ready to logic model it up. Throughout the "putting words in boxes" process, we kept calling back to the evaluation report:
Hey, don't forget that whole "Impressing my boss" outcome! How does that fit into the picture?
The program folks were also better able to articulate the most important why and how of their strategy. Because the experience was grounded in their audience's words. The logical links became explicit, at last.
Anyone else suffered through a logic model process that went nowhere? Found a cheat code to make it work creatively? Reply and share!