We've got a long way to go and a short time to get there.
A reader shared a challenge they're facing, and I know they are not alone:
“We're in an informal setting where we see students very briefly (4-hour field trips; groups of 75 K-12 students). What strategies/tools exist apart from regular ol' "fill in the bubble sheet of rating your agreement with the following statements" for surveying? Any particular websites that you've found helpful for surveying that we could use to interface with teachers? Any golden-ticket drawing prompts that you've found cut to the heart of a question? We always run into the problem of not having very much time.”
Sadly, there is no Golden-Ticket-Magic-Prompt that works for every situation. But the way to get close is to dial in on the opportunities and limitations of your particular situation. When you don't have much time, you need to make every evaluation second count.
Let’s tackle this one problem – and solution – at a time.
Problem 1: Short-Duration Programs
Solution: Define Realistic Outcomes
Reader, did you see "4-hour field trips" and think that sounded downright luxurious? Me too.
In the world of informal learning, we can face programs that get as little as 15 minutes of visitor facetime. And every museum educator can tell you how often their "45-minute" school program gets cut to 25 minutes for reasons completely out of their control.
When time is short, define outcomes that are realistic for the timeframe you have. This can be surprisingly tricky. It pushes you to zoom in on what actually happens in the moment, rather than on a lofty wish-list of what could happen.
Your outcomes will drive literally every other decision you make, so do this first. Calibrate your goals to the reality of the program.
Problem 2: Doing the Evaluation Takes Too Long
Solution: Prioritize. Ruthlessly.
With short-duration programs, it takes very little for evaluation activities to overwhelm the experience.
My rule of thumb to gauge if you're over-evaluating? The amount of time participants spend doing evaluation activities should be less than 10% of overall program time. For a 45-minute program, that caps evaluation at under five minutes; for a 4-hour field trip, about 24 minutes.
This often means you have to cut stuff from your plan. The more stuff you try to measure, the more tools you need. The more tools you use, the more time it takes.
And time, dear Reader, is the thing you do not have.
Figuring out what to prioritize comes from looking for the "sweet spot" in a kind of Venn diagram:
- What YOU most want to document about this program
- What you suspect PARTICIPANTS most strongly experience
The one or two ideas that sit in the sweet spot of these two questions are where to invest your – and their – evaluation energy.
Problem 3: Response Rate
Solution: Leverage the captive audience.
If you are already crunched for time, it can sound reasonable to send an email survey “after the fact.” Or distribute a form “to do on the bus.” That avoids wasting program time on data collection, right?
The truth is, getting feedback after they leave is a crapshoot. Response rate will be low and likely biased toward those with Strong Opinions.
Easy email surveys are not easy when the data you get from them kinda sucks.
I cannot stress enough how important it is to get the data while they’re with you. Let this constraint help you simplify, by finding a plan that can realistically happen on-site.
If collecting data at every session on-site is logistically too much to handle, then don't! Create a sampling plan for your programs that is doable – say, every fifth field trip, or one full week each season. A Small, Systematic Sample is better than a set of Self-Selected Strong Opinions.
Problem 4: We want student feedback, but it's complicated.
Solution: Make sure conditions allow you to get useful student data.
Especially for field trips, students feel like the right audience because that's who we aim to impact. But that can get tricky when you are short on time.
Viable scenarios for collecting data from students:
- You can commit time to observations.
- Field trip educators could integrate an activity (for data purposes) into their agenda.
- You have a strong relationship with teachers who would facilitate something in the classroom.
- Your students are ~12 and older and you really want to do a survey.
Some methods that work best for students need more time and facilitation support than you might have staffing to deploy.
And surveys? Well, they often aren't a great solution for a lot of students. The truth is, it is really hard to write age-appropriate survey questions. And do your students have the literacy skills to read, interpret, and respond in this format? Is that skill equitably distributed?
The younger the students, the more ridiculous this starts to seem.
There are other options. (Skip ahead to Problem 6.)
Problem 5: Teachers feel doable, but like a fallback.
Solution: Focus on what teachers are uniquely able to see.
Did you laugh at every bullet point I listed above? Then hear me out: getting data from teachers is a solid option. And it shouldn't feel like a second choice!
Teachers can't tell you what happened in kids' minds and hearts. But if you switch your focus, they offer a unique and valuable point of view. Ask yourself: what salient outcomes would a teacher see, through their eyes?
A few reasons I love what I can learn from teacher feedback:
- They see experiences with an educator's eye. They notice behaviors and talk that you really care about but that students could not articulate.
- They know their kids in a way you don't. They can reflect on what's different in their students, compared to a "normal" day.
- They are gatekeepers. Teachers make a lot of decisions that affect students' learning. Knowing what they value, see, and think about your program can be very informative for your decisions.
Teachers can reveal a lot about a program's success. Just remember that your prompts will be very different from the ones you'd use with students.
Problem 6: Surveys Feel Kinda Lame
Solution: Creative and embedded methods. (But not drawings.)
You don’t want to be the place that gives kids on a cool field trip some lame test-like survey, right?
Creative methods are ways of collecting evidence of outcomes directly with students, outside of conventional surveys and interviews. They offer latitude to make it fun, playful, or aligned with the vibe of your program.
Key to this is making it feel like a natural part of the program that educators facilitate. It is active and participatory. What makes it different from the curriculum activities? The focus is on reflection, rather than instruction.
And it has some form of documentation that you can use as a data source: a post-it wall, exit tickets, group notes taken by the educator, a voting wall, etc.
(One exception: Don't use student drawings as your evidence. They're difficult to analyze without a verbal or written follow-up that puts the student's ideas into words.)
Real-World Examples:
We've had to solve this problem in many different circumstances, and the solution is always a customized plan that balances each program's opportunities and limitations.
Let's look at two different museum examples to compare:
Program A:
- Elementary school kiddos
- Museum-facilitated field trip programs, lasting 45 minutes (hopefully)
- Several topics available; consistent format
- One-time; same museum space
- Goal: Help elementary students build STEM practices and thinking; help teachers integrate STEM practices into curriculum
Solution: Teacher survey collected at the end of the program (before they leave the room). Focused on the teacher's view of the value-add to their curriculum, the learning behaviors they observed in students, and the educational strengths and limitations of the program.
Program B:
- Elementary school kiddos
- Co-facilitated field trip programs, varied duration (2-3 hours)
- Customized content for each teacher
- Multi-visit; classes come 3+ times over the year
- Goal: Help elementary students feel more belonging in the museum; help teachers integrate more STEM content into curriculum
Solution: A discussion activity about perceptions of the museum, facilitated by the teacher back at school, before the first visit and after the last visit. Plus follow-up interviews with teachers.
Program B could leverage the existing relationship with teachers (given the customized support they were getting). If we'd tried to use Program B's solution with Program A, it would have fallen flat. The relationship with the teachers and the depth of the experience for kids meant the two programs needed different evaluation plans.
So, which monkey-wrench gets thrown into your evaluation plans more often: too little time with participants, or too few resources to collect data the way you'd want? Reply and let me know.