It’s gonna take patience and time to do it right, child.
A reader posed a big question to me recently:
“How do you evaluate community engagement and impact? Specifically when engaging community in collaborative research? What characterizes the relationships between researchers and communities beyond individual project timelines?”
There’s a lot we could unpack here, so I'll offer a few points of advice for evaluating community engagement initiatives – whether they’re focused on research or other formats.
1: Define Community Engagement
“Community engagement” is a term that can mean many different things when people toss it around. The very first step is to get clear on what you mean in your setting.
Some organizations use “community engagement” really broadly, referring to almost any kind of "outreach." It might include one-way communication, education events, or listening sessions. And those are all great!
But those kinds of experiences are very different from research conducted collaboratively with communities. As always, I want us to step back from buzzwords. Instead, get specific about the actual activities, experiences, and objectives of whatever it is you’re doing.
2: Identify Your Type of Engagement
As you define your community engagement, it can be helpful to recognize the level of engagement you are using. Different forms of community engagement lend themselves to different processes and different outcomes.
Fortunately, there are some really useful frameworks to identify your level of engagement with community. The one that has always stuck with me came out of a CAISE report way back in 2009 (RIP, CAISE).
The authors offered a framework to distinguish three major types of Public Participation in Scientific Research. They articulated the difference in terms of the balance of roles between community and researchers. I find this simple framework useful to this day:
- Contributory Projects: Researchers design and lead the study. Community contributes to the study (usually data they’ve collected).
- Collaborative Projects: Researchers design and lead the study. Community contributes in multiple ways, going beyond just data collection. (But the Researchers are still in the driver’s seat.)
- Co-Created Projects: Researchers and Community do everything together, including conceiving of and designing the study.
On this continuum, with each level, community members have a more meaningful stake and role in the process and results. But that also requires more of everyone involved. More training, working together, negotiation of roles, time, patience, and money.
And, in turn, fewer people can be involved at that deep level in a co-created project. There's more depth, less breadth.
Co-Created is not better than Contributory. It’s just different.
There is value in a scientist-designed study where the community makes it possible to understand more places and answer more questions. But you can't expect the same outcomes. A different process has different outcomes.
Owning the level you are operating at – and not trying to be something you’re not – is a critical first step to finding an evaluation strategy that will be successful.
3: Who Defines “Impact”?
If you want to evaluate "impact," there's an important discussion to be had first: who defines what you're looking for?
The way it often plays out points to divergent perspectives.
A lot of times, it’s the Big Institution that wants to “evaluate impact of community engagement.” And when that perspective sets the priorities, the framing of “community impact” takes on a decidedly... benevolent tone.
The language around evaluation often assumes a Transformation Machine. Where communities enter the Engagement Portal and exit Changed For The Better.
It's not intentionally paternalistic. I think it's simply a consequence of centuries of paternalistic BS driving the structures and language of the Non-Profit Industrial Complex. We lack ways of talking about impact beyond “changed others for the better.” Or when we talk about impacts differently, it somehow doesn't feel "enough" because it sounds so different.
The alternative is to let the community define "impact" on their terms, and measure from there. This requires an approach that allows the community to define the point of it all.
In this view, I ditch the word "impact" (which I kind of hate) and reframe it around what the benefit was, or what was enabled or enhanced. I really want to hear what matters in their own words.
The fact is: the Community may not care about what the Big Institution cares about. Which means you may not end up measuring the knowledge and skills you know they gained – because what they valued was having their lived experience validated. Those are two wildly different outcomes, from a measurement perspective.
Who gets to define what we care about and what we evaluate? Decide early.
4: Consider Evaluating Process Over Impacts
Evaluating community engagement work can use an impact lens or it can take a process lens.
I'm going to advocate for the under-valued process evaluation lens.
If Big Institution is aiming for deep community engagement, evaluating process-as-outcome can be critical. If they claim to be working collaboratively, you need to make sure teams actually "walked the walk."
A process lens will let you find out how the community experienced the collaboration. This is where you start to see where there were shared decision-making processes and where things became more of a one-way street. And I've often found that the deepest and longest-lasting outcomes emerge from talking about process. Because community collaboration should be thought of as a long game.
Again, this doesn’t feel like the sexy impact statements you may want to headline your grant report. But it can be the real evidence of having built sustained, respectful partnerships – which is a lasting outcome.
5: What We’ve Learned & What You Might Look For
Community engagement cannot happen "at scale." If your funder is only interested in big "butts in seats" type numbers? They are telling you they don't want to fund community engagement without telling you that they don't want to fund community engagement.
Even if your funders are 100% on board, your evaluation will need to find different evidence of success of these partnerships, with indicators that reflect meaningful aspects of the work.
Over the years, we’ve found strong signals that tell us partnerships are working:
- Big Institution leverages their grant-writing prowess to keep the dollars rolling into the partnership (and includes the community in the funding).
- Big Institution keeps devoting staff time to work with community, even during funding lapses or droughts.
- Community members or organizations feel comfortable picking up the phone and calling Big Institution about whatever comes up.
- Both partners think of the other for opportunities when they arise.
- Both partners can see how they fit together and complement one another.
- Each partner feels they've been able to do something more by working together than they could have achieved on their own.
Although community engagement projects never look the same twice, we have seen a number of cross-cutting themes that emerge (in slightly different forms):
- It takes time. More than you anticipate.
- It needs investment in planning. Before action.
- It cannot begin or end with funding.
- It needs trust. And trust only comes after it is earned.
- It benefits from organizational partnerships. Community-based organizations already have trust among the people of a community.
- It is ultimately human-driven, not institution-driven. When the humans change, you need extra attention and time to cultivate transitions and relationships.
- Everyone brings a mindset of "doing together" – not "doing for." No one is blessing the community with their presence.
- Neither solutions nor outcomes will be cookie cutter from one project to the next.
Real World Example:
When we worked with ASTC on evaluation of their community science partnership grantees, one of the partnerships proved to be a really enlightening example.
It stuck with me because, by the end of their work, nothing about their final product looked like a traditional community science project. But it was exactly right for its community.
I'm talking about the creation of the Chequamegon Northwoods Food Coalition, a partnership of Cable Community Farm and the Cable Museum of Natural History.
Cable, Wisconsin is a small town in a very rural, far-northern region of the state. In a snapshot, these partners started working together to improve resiliency in their community's food systems. Science is absolutely foundational. But it is not a problem where a new research study would do diddly squat to solve things.
Ultimately, they aimed to build a coalition of individuals and organizations working toward this common goal. Engagement strategies looked like:
- Creating an advisory committee structure
- Building relationships to fill out the advisory committee
- Slowly ceding control to that committee
- Identifying partnerships region-wide and "selling" the initiative
- Distributing mini-grants to expand capacity of local people growing, distributing, and educating about food systems and agriculture.
When it came time to evaluate outcomes, there was angst that they hadn't done "enough" or that all of the infrastructure and relationship building would be seen as background.
They didn't have a flotilla of public programs, where they could hand out surveys and get an artificial feeling that "we did a good job."
Instead, they had built the relationships, structures, and people necessary to start a coalition – which is a long-term proposition. How do you evaluate that?
- Examine the immediate results of the mini-grant process: Identify changes that led to more locally-sourced food in the hands of more people.
- Document the networks built through these relationships: How many organizations got involved? How many types (growers, distributors, consumers, etc.) are represented? How are they intersecting?
- Look at shifts in leadership: Consider expansion of leadership, shifts of ownership in process, and the commitment of a committee to carry the work forward.
- Examine the partnership process: Focusing on equity in process and power to measure degree of authentic co-ownership.
When it comes to community engagement in your organization — who defines "impact"? Or who defines the "impact" that people care about? Reply and let me know!