
The Evaluation Therapy Newsletter

Evaluating Short-Duration Experiences


Hi Reader,

February is the shortest month, which feels fitting for this month's topic: What do you do when you don't have much time?

A reader wrote in December with a question that sounded so familiar. In informal education, we do a lot of programs that are short in duration – field trips, outreach sessions, floor programs. Everything goes by in a flash.

If you also want to squeeze some evaluation into the mix (and why wouldn't you?), there's a special mix of challenges that you will face. You have goals. You want data. But you've got maybe 45 minutes before the buses pull away.

So, let's dig into the problems and find solutions to make every evaluation second count, when there is literally no time to waste.

Also in this issue:

  • Related reading, from the archives
  • March 6 Webinar: Slowing Down to See More

Cheers,

Jessica

We've got a long way to go and a short time to get there.

A reader shared a challenge they're facing, and I know they are not alone:

“We're in an informal setting where we see students very briefly. (4-hour field trips; groups of 75 x K-12 students.) What strategies/tools exist apart from regular ol' "fill in the bubble sheet of rating your agreement with the following statements" for surveying? Any particular websites that you've found helpful for surveying that we could use to interface with teachers? Any golden-ticket drawing prompts that you've found cut to the heart of a question? We always run into the problem of not having very much time.”

Sadly, there is no Golden-Ticket-Magic-Prompt that works for every situation. But the way to get close is dialing in on the opportunities and limitations of your situation. When you don't have much time, you need to make every evaluation second count.

Let’s tackle this one problem – and solution – at a time.

Problem 1: Short-Duration Programs

Solution: Define Realistic Outcomes

Reader, did you see "4-hour field trips" and think that sounded downright luxurious? Me too.

In the world of informal learning, we can face programs that get as little as 15 minutes of visitor facetime. And every museum educator can tell you how often their "45-minute" school program ends up shrinking to 25 minutes for reasons completely out of their control.

When time is short, define outcomes that are realistic for the timeframe you have. This can be surprisingly tricky. It really pushes you to zoom in on what is happening in the moment, rather than on a lofty wish-list of what could happen.

Your outcomes will drive literally every other decision you make, so do this first. Calibrate goals that align with your program's reality.

Problem 2: Doing the Evaluation Takes Too Long

Solution: Prioritize. Ruthlessly.

With short-duration programs, it takes very little for evaluation activities to overwhelm the experience.

My rule of thumb to gauge if you're over-evaluating? The amount of time participants spend doing evaluation activities should be less than 10% of overall program time. (For a 45-minute program, that's under five minutes.)

This often means you have to cut stuff from your plan. The more stuff you try to measure, the more tools you need. The more tools you use, the more time it takes.

And time, dear Reader, is the thing you do not have.

Figuring out what to prioritize comes from looking for the "sweet spot" in a kind of Venn diagram:

  • What YOU most want to document about this program
  • What you suspect PARTICIPANTS most strongly experience

The one or two ideas that sit in the sweet spot of these two questions are where to invest your – and their – evaluation energy.

Problem 3: Response Rate

Solution: Leverage the captive audience.

If you are already crunched for time, it can sound reasonable to send an email survey “after the fact.” Or distribute a form “to do on the bus.” This avoids wasting program time on data, right?

The truth is, getting feedback after they leave is a crapshoot. Response rate will be low and likely biased toward those with Strong Opinions.

Easy email surveys are not easy when the data you get from them kinda sucks.

I cannot stress enough how important it is to get the data while they’re with you. Let this constraint help you simplify, by finding a plan that can realistically happen on-site.

If collecting data at every session on-site is logistically too much to handle, then don't! Create a doable plan to sample from your programs (say, one session a week, or one full week each quarter). A Small, Systematic Sample is better than a set of Self-Selected Strong Opinions.

Problem 4: We want student feedback, but it's complicated.

Solution: Make sure conditions allow you to get useful student data.

Especially for field trips, students feel like the right audience because that's who we aim to impact. But that can get tricky when you are short on time.

Viable scenarios for collecting data from students:

  • You can commit time to observations.
  • Field Trip Educators could integrate an activity (for data purposes) into their agenda.
  • You have a strong relationship with teachers who would facilitate something in the classroom.
  • Your students are ~12 and older and you really want to do a survey.

Some methods that work best for students need more time and facilitation support than you might have staffing to deploy.

And surveys? Well, they often aren't a great solution for a lot of students. The truth is, it is really hard to write age-appropriate survey questions. And do your students have the literacy skills to read, interpret, and respond in this format? Is that skill equitably distributed?

The younger the students, the more ridiculous this starts to seem.

There are other options. (Skip ahead to Problem 6.)

Problem 5: Teacher feedback feels doable, but like a fallback.

Solution: Focus on what teachers are uniquely able to see.

Did you laugh at every bullet point I listed above? Then getting data from teachers is a solid option. And it shouldn't feel like a second choice!

Teachers can't tell you what happened in kids' minds and hearts. But, if you switch your focus, they offer a unique and valuable point of view. Ask yourself: what salient outcomes would a teacher see, through their eyes?

A few reasons that I love what I can learn from teacher feedback include:

  1. They see experiences with an educator's eye. They notice behaviors or talk that you really care about, but that students could not articulate.
  2. They know their kids in a way you don't. They can reflect on what's different in their students, compared to their "normal" day-to-day.
  3. They are gatekeepers. Teachers make a lot of decisions that affect students' learning. Knowing what they value, see, and think about your program can be very informative for your decisions.

Teachers can reveal a lot about a program's success. Just remember that the prompts will be very different from those you'd use with students.

Problem 6: Surveys Feel Kinda Lame

Solution: Creative and embedded methods. (But not drawings.)

You don’t want to be the place that gives kids on a cool field trip some lame test-like survey, right?

Creative methods are ways of collecting evidence of outcomes directly with students, outside of conventional surveys and interviews. They offer latitude to make it fun, playful, or aligned with the vibe of your program.

Key to this is making it feel like a natural part of the program that educators facilitate. It is active and participatory. What makes it different from the curriculum activities? The focus is on reflection, rather than instruction.

And it has some form of documentation that you can use as a data source. These can be post-it walls, exit tickets, group notes by the educator, voting walls, etc.

(One exception: Don't use student drawings as your evidence. They're difficult to analyze without a verbal or written follow-up to articulate their ideas into words.)


Real-World Examples:

We've had to solve this problem in many different circumstances, and the solution is always a customized adjustment that balances each program's opportunities and limitations.

Let's look at two different museum examples to compare:

Program A:

  • Elementary school kiddos
  • Museum-facilitated field trip programs, lasting 45 minutes (hopefully)
  • Several topics available; consistent format
  • One-time; same museum space
  • Goal: Help elementary students build STEM practices and thinking; help teachers integrate STEM practices into curriculum

Solution: Teacher survey collected at the end of the program (before they leave the room). Focused on teachers' views of the value-add to curriculum, observed learning behaviors in students, and the educational strengths and limitations of what they observed.

Program B:

  • Elementary school kiddos
  • Co-facilitated field trip programs, varied duration (2-3 hours)
  • Customized content for each teacher
  • Multi-visit; classes come 3+ times over the year
  • Goal: Help elementary students feel more belonging in the museum; help teachers integrate more STEM content into curriculum

Solution: Activity, facilitated by the teacher back at school, before the first visit and after the last visit. Discussion activity about perceptions of the museum. Follow-up interviews with teachers.

Program B was able to leverage the existing relationship with teachers (given the customized support they were getting). If we'd tried to use the solution for Program B with Program A, it would have fallen flat. The relationship with the teachers and the depth of experience for kids meant they needed different evaluation plans.

So, which monkey-wrench gets thrown into your evaluation plans more often: too little time with participants, or too few resources to collect data the way you'd want? Reply and let me know.

Going Deeper

Want to go deeper on any of this? Here are a few related topics from the archives:

And here's an example of an institution steering into the playful side of measurement: Space Center Houston's internal evaluators created a STEM Identity BINGO card, testing it as a tool for exploring a complex construct in young learners.

Webinar: Slowing Down to See More

I'm giving a public talk on March 6 at 11 a.m. ET as part of the University of Arizona's Water Whys Visual SciComm Speaker Series. The session is free and open to the public.

I'll be talking about research we did into arts-based approaches to teaching data literacy. If you want to learn more (without reading a bunch of Research Briefs), come hang out.

Slowing Down to See More: Using the Arts to Improve Scientific Data Literacy

We are bombarded with data visualizations in everyday life. But how do people learn to critically interpret what data mean – and what they don’t? This talk shares a paradigm shift for data literacy education: instead of centering the mechanics of scientific data analysis, we turned to the expertise of arts educators. Jessica will present research showing how a visual arts-based model transformed students’ relationship with geospatial data. You’ll see how slowing down, dissecting symbols, and inviting divergent views enables people to “look under the hood” of complex data visualizations, turning data literacy from an act of rote decoding into generative, evidence-based sensemaking.

P.S. Know someone who'd benefit from these ideas? Forward it along! (Sharing is caring.)

P.P.S. Get this from a colleague? Sign up to get practical insights like these every month.

Why the "Evaluation Therapy" Newsletter?

The moniker is light-hearted. But the origin is real. I have often seen moments when evaluation causes low-key anxiety and dread, even among evaluation enthusiasts. Maybe it feels like a black-box process sent to judge your work. Maybe it’s worry that the thing to be evaluated is complicated, not going to plan, or politically fraught. Maybe pressures abound for a "significant" study. Maybe evaluation gets tossed in your "other duties as assigned" with no support. And so much more.

Evaluation can be energizing! But the reality of the process, methods, and results means it can also feel messy, risky, or overwhelming.

I've found that straightforward conversation about the realities of evaluation and practical solutions can do wonders. Let's demystify the jargon, dial down the pressure, reveal (and get past) barriers, and ultimately create a spirit of learning (not judging) through data.

This newsletter is one resource for frank talk and learning together, one step at a time.

Learn more about JSC and our team of evaluators. Or connect with us on LinkedIn.

Copyright © 2026 J. Sickler Consulting, All Rights Reserved

You are receiving this email because you signed up for our newsletters somewhere along the line. Changed your mind? No hard feelings. Unsubscribe anytime.

Wanna send us some snail mail? J. Sickler Consulting, 100 S. Commons, Suite 102, Pittsburgh, PA 15212
