
The Evaluation Therapy Newsletter

Observing Humans is Hard


Hi Reader,

Back in my teacher training days at Bank Street, "Observation & Recording of Children" was a foundational course. On the first day, the instructor showed us a 2-minute video of a kid playing. She asked us to observe and take notes on the child's behavior.

It was a humbling exercise. My class learned really fast how complicated it is to observe human behavior. You miss details. You infer intent or emotion (rather than describe behavior). You get distracted. Your hand cramps and your notes are chicken scratch. So much nuance happens in 2 minutes of play. Where do you focus?

Around the JSC team, we believe in the power of observational data. But we've also learned -- and had to teach others -- that it can be more complicated than you expect. This month, let's talk about a few decisions to help you work within that complexity.

Also in this issue:

  • Some related reading, if this topic speaks to you.
  • Where to find us at summer conferences (without flying through Newark)!

Hang in there,

Jessica

There's definitely, definitely, definitely no logic to human behaviour.

If I ever write a 10 Commandments of evaluation, one of them will be “Don’t ask people to tell you something that you can track, record, or observe in another way.” People can be unreliable narrators because we aren't mentally cataloguing every move we make, every moment of our lives.

That's why observation can be a revealing method. It gives you the unvarnished truth of what people actually do. That’s how you discover that spending 30+ seconds at one object in a museum is, in fact, a really long time.

Observation is powerful.

But it can be really hard to do.

We’ve often encountered people who think collecting observational data will be easy-peasy. There's no recruiting. You don’t have to walk up to a stranger and say, “Hey, wanna talk to me?” No sending survey invitations into the email void. No scheduling focus group dates, times, rooms, food, and incentives.

But systematically observing and recording behavior is really complicated. Because humans are complicated.

Whenever you're dealing with complicated measurement issues, the solution is to make good choices about where you focus. Below are 5 of the most common choices you'll need to make to get to valuable observational data without driving yourself nuts.

1) Zoomed In or Zoomed Out?

I’m here to tell you a cold, hard truth. One observational tool cannot do both of these things.

A Zoomed Out view observes to understand patterns across an entire experience. What happens in general? Across a whole exhibit? Across all school programs?

For that, you need a large grain size. Behaviors can't be situation-specific. Think: “Are our interactives getting used?”

A Zoomed In view observes to understand how one strategy works specifically. Think: "Which of the three possible ways of interacting did they use?" or "At which step did they get stuck?"

For that, you need focus. It needs to be specific and customized to what was offered and how people responded.

Do you want to understand the whole? Or one part in depth?

2) One Person or Whole Group?

To achieve precision in observations, you have to focus on just one person for an entire session. You can’t record a bunch of people all at once and get precise observations about everything all of them do.

But observing one human at a time means you need a lot more observations, because each person is a unique, weird actor. Evaluating a program that doesn’t happen very often? This isn't doable.

Good news: There are ways to observe groups and get useful information! You just have to acknowledge tradeoffs that come from a strategy like scan sampling and come up with a systematic way to deploy it.

It's all trade-offs. What's important and what's feasible? Find the balance.
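If it helps to see the mechanics, here's a tiny sketch of what scan sampling looks like as a system: instead of following one person continuously, you scan the whole group at fixed intervals and tally who is doing what at that moment. (The behavior names and counts below are invented for illustration; this is not a real instrument.)

```python
# Illustrative sketch of scan sampling (hypothetical data, not a real tool):
# at each interval, scan the whole group once and tally how many people
# show each target behavior, rather than tracking one person continuously.

from collections import Counter

def scan_sample(scans):
    """Aggregate interval-by-interval scans into total counts per behavior.

    `scans` is a list of dicts, one per scan interval, mapping a behavior
    name to the number of people showing it at that moment.
    """
    totals = Counter()
    for scan in scans:
        totals.update(scan)  # adds counts from this scan to the running totals
    return dict(totals)

# Three scans of the same group, e.g. taken every 5 minutes:
scans = [
    {"touching_interactive": 4, "reading_label": 2},
    {"touching_interactive": 3, "talking_to_partner": 5},
    {"reading_label": 1, "talking_to_partner": 2},
]
print(scan_sample(scans))
# {'touching_interactive': 7, 'reading_label': 3, 'talking_to_partner': 7}
```

The trade-off shows up right in the data structure: you learn how often behaviors appear across the group, but you lose the ability to say anything about what one particular person did over time.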

3) Thick Boundaries or Subtle Nuance?

If you know the educational experience well, you can probably envision loads of nuance in behaviors you would hope to see. Say, the difference between a parent facilitating an activity and a parent facilitating learning.

You are not wrong! Those are different things.

But when you get out in the field, classroom, or floor, you realize that in order to discern the difference, you need to see and hear every detail of what's going on. Or you find that, half the time, two behaviors overlap a lot. Or people play jump rope around the line between those behaviors.

The more subtlety you want to discern, the more intensive (in time, resources, and skill) your collection and analysis will need to be.

What level of nuance do you need in order to make actionable decisions?

4) Open or Structured?

This one is easy. If you’re new at this, you want structure.

There is a lot to be learned from open, running record, ethnographic-style note-taking. And it is hard AF to do well. (Ask an anthropologist. I'll give you Michelle's number.)

You have to record what you see, not what you infer it means.

You have to quickly know the types of details that are important to capture, and what you can let pass.

You will face a gargantuan analysis task.

When in doubt, give yourself more structure around what you're looking for and how you will record it. You'll thank me later.

5) Less > More

I know, I know. You are curious about so many things! You can list 12 different behaviors you really want to know about: whether they happen, and how. Right? (I get it. And that's why you're my people.)

But if you try to observe and record all of those things simultaneously, in the complicated chaos of humans interacting in the real world, you won't capture any of it accurately. And you'll feel really frustrated in the process.

Recording fewer behaviors more accurately and consistently is always going to give you higher value data. Focus on what is most important to you.


Real World Example:

Observation in museum exhibits is a foundational tool. But observations are also really useful in education programs -- they will just look a bit different.

We worked with an institution that tried and failed to get meaningful insights about school programs from teacher surveys. The data were Blah City.

While planning, we realized the educators were extremely clear about their target skills. They could describe what it looked and sounded like for students to apply science skills within their programs.

That's all we needed! Using that info, we created an observational tool that an extra educator could use to record students demonstrating science skills during school programs.

In the process, we made some choices:

Zoomed Out: With a suite of programs, we needed an approach general enough to apply regardless of content, lesson plan, or grade level.

Whole Group: This was a feasibility decision. A) Following one kid per class can get creepy quickly. B) That would take way more time than they have. Group-level patterns would be less quantitatively precise, but work for our goals.

Structured: We focused on specific behaviors to look for and a system to count them. We gave the process structure. There was space for notes and examples, but we considered those "bonus" data.

Less is More: We capped the number of behaviors at 6.

Sharp Lines: A behavior needed to clearly fit in one place – not “a little this, a little that.” (Pilot testing was an important step here!)
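To make those choices concrete, here's a minimal sketch of what a structured tally tool shaped by them might look like. The behavior names are hypothetical placeholders, not the actual instrument we built, but the constraints are the same: a capped list of behaviors, and sharp lines so every mark fits exactly one category.

```python
# A minimal sketch of a structured tally tool (hypothetical behavior names;
# the real instrument isn't shown here).

TARGET_BEHAVIORS = [  # "Less is More": the list is capped at 6 behaviors
    "asks_question", "makes_prediction", "records_observation",
    "uses_evidence", "compares_results", "shares_finding",
]

def make_tally():
    """Start a fresh tally sheet with every target behavior at zero."""
    return {b: 0 for b in TARGET_BEHAVIORS}

def record(tally, behavior):
    """Count one occurrence. "Sharp Lines": a mark must match exactly one
    predefined behavior; anything ambiguous is rejected, not fudged."""
    if behavior not in tally:
        raise ValueError(f"Not a target behavior: {behavior!r}")
    tally[behavior] += 1

tally = make_tally()
for seen in ["asks_question", "uses_evidence", "asks_question"]:
    record(tally, seen)
print(tally["asks_question"])  # 2
```

The structure is the point: because the categories are fixed and mutually exclusive, two different observers filling out the same sheet should produce comparable counts.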

Now, they regularly use their tool to evaluate their school programs. And by using it consistently, they:

  1. Have data that speak to the strengths of their programs collectively;
  2. See patterns about unique strengths of particular programs;
  3. Apply insights to drive revisions, new facilitation strategies, and updated programs.

Because it doesn't try to do everything, it does what it needs to do well.

And perhaps the best advice for observing behavior is this: prepare to be flexible. Humans are weird. We do weird stuff. All the time. Be ready to roll with it.

On that note: Would anyone be interested in our team telling you the weirdest and heartwarming-est stuff we’ve seen and heard while collecting data with the public? (Does your answer change if you know that one of my stories involves a psychic?)

Reply and let me know. If it's of interest, I'll wrangle tales from the team's memory vaults for a future issue.

If that got you thinking...

Here are a few other topics we've tackled in the past that you might want to revisit:

Every Step They Take, Every Move They Make: 3 Things We've Learned from Watching Visitors in Museums

The Value-Feasibility Balancing Act: Stop Being Polite and Start Getting Real

I Saw the Sign: The Simplicity & Complexity of ID Labels

On the Road Again

Where you can find JSC team members out in the wild:

NASEM Convocation on the Status of Informal Science Education (TODAY, Washington, DC): If you are reading this the moment it lands in your inbox and you're at the meeting in DC, find Jessica and say hello!

Visitor Studies Association Annual Conference (July 15-17, virtual): Our whole team will be attending VSA this year -- Angie, Michelle, Rob, and Jessica! If you wanted to learn more about fighting for your right to Data Party, this is the conference for you.

(It's a virtual conference, so there's zero risk of having to fly in or out of Newark. Worth it.)

P.S. Got a question you'd like us to answer in an upcoming newsletter? Hit reply and tell me what's on your mind!

P.P.S. Get this email from a colleague? Sign up to get your very own copy every month.

Why the "Evaluation Therapy" Newsletter?

The moniker is light-hearted. But the origin is real. I have often seen moments when evaluation causes low-key anxiety and dread, even among evaluation enthusiasts. Maybe it feels like a black-box process sent to judge your work. Maybe it’s worry that the thing to be evaluated is complicated, not going to plan, or politically fraught. Maybe pressures abound for a "significant" study. Maybe evaluation gets tossed in your "other duties as assigned" with no support. And so much more.

Evaluation can be energizing! But the reality of the process, methods, and results means it can also feel messy, risky, or overwhelming.

I've found that straightforward conversation about the realities of evaluation and practical solutions can do wonders. Let's demystify the jargon, dial down the pressure, reveal (and get past) barriers, and ultimately create a spirit of learning (not judging) through data. This newsletter is one resource for frank talk and learning together, one step at a time.

Learn more about JSC and our team of evaluators. Or connect with us on LinkedIn.

Copyright © 2025 J. Sickler Consulting, All Rights Reserved

You are receiving this email because you signed up for our newsletters somewhere along the line. Changed your mind? No hard feelings. Unsubscribe anytime.

Wanna send us some snail mail? J. Sickler Consulting, 100 S. Commons, Suite 102, Pittsburgh, PA 15212


Our monthly Evaluation Therapy Newsletter shares strategies, ideas, and lessons learned from our decades of evaluating learning in non-school spaces - museums, zoos, gardens, and after-school programs. Jessica is a learning researcher who is an educator at heart. She loves helping education teams really understand and build insights from data that they can use immediately – even those who are a bit wary of evaluation.
