Step By Step: There's so much we can do.
For those of you working in the long, winding road of developing exhibits, it can be a zany process. I’ve long suspected that exhibit development is the Origin Story of the phrase “hurry up and wait.”
The process entails flurries of activity, where lots of decisions are made quickly, followed by long periods of creative people busily working to get things from “Vague Notion” to “Built Environment.”
In the midst of that kind of process – long periods of development, punctuated by rapid, high-stakes decisions – the idea of inserting evaluation can feel overwhelming.
When is it the right time? What can it accomplish? Is it worth doing?
Let’s do a step-by-step walkthrough of the process and what evaluation can (and can’t) do at each stage.
Sidebar: What if I don’t work in exhibits?
We have plenty of readers who do not work in exhibit development. So, you could skip this issue, and I’d get it. But….
If your work has a development process to move from idea to real-world thing – whether it’s a curriculum, training, internship, or anything else – you go through very similar stages with similar evaluation needs. (You just have fewer architectural schematics to deal with.)
Exhibit development is a concrete example, but you can see how it mirrors the stages of program development.
Step 1: Concept Development = Exploring Ideas
Phase 1 is when you flesh out the overall exhibit concept: What’s it about? Why? What types of objects/animals/plants/artifacts/stories might it include? What story does it tell?
This is when you might craft the Big Idea. One simple sentence (painstakingly crafted by your team) that becomes the North Star through the design process to come.
At this stage, there are a lot of competing interests. Lots of folks have opinions about what story an exhibit could or should address.
Evaluation Role? Make visitors one of those competing interests.
When you invest in evaluation at this stage, the main benefit is making visitors a more prominent voice as you make decisions about concept, focus, and Big Idea.
Internal voices have expert blind spots. We hold assumptions about what “the public” knows, thinks, likes, or wonders about. (And you know that old saying about “when you assume…” right?)
At this very early stage, evaluation should get perspectives from a key audience that help shape the direction of your project.
Explore ideas with them. This can be: Core associations with your topic. Or questions. Or existing knowledge. As long as it’s something that will let you see your topic through their eyes. That’s what will help you center a narrative that makes other people perk up and say, “Oooh, fascinating.”
Step 2: Schematic Design = Testing Concepts
At this phase, ideas get focused. For the design team, the rubber is meeting the road about what physically fits in the space (and what it all costs). This is where some dreams will die. (Or be saved for a future exhibit.)
All of this logistical and financial scoping needs to stay aligned with that Big Idea. We want design decisions to be driven by what elements advance that overarching story.
Evaluation Role? Help the team edit and focus direction.
Because this stage is about finding focus and making tough decisions, evaluation can give you an outside perspective to help edit out ideas that don’t serve the Big Idea.
Concept testing is a useful activity at this stage. This approach gives visitors something to react to, getting their impressions, reactions, and feelings. What piques their interest? What feels like “old hat”? What sparks a personal connection?
At this still-early stage, I've used drawings of a space, titles & taglines, brief descriptions of the experience, and/or images of objects. Ideas are taking shape, but may still be a little fuzzy. This phase of evaluation gathers reactions that help you dial in on which ideas spark the imagination in the way you want.
In a stage full of tough decisions, this can help you spot easy cuts and must-keeps.
Step 3: Design Development = Prototype Testing
I’m going to say this first: If you have a really limited evaluation budget, in most cases, prototype testing will give you the most bang for your buck. Let’s talk about why.
The design development stage is usually a long phase when all ideas must become real. Not some hand-wavy “we’ll have an interactive to explain how a bill becomes a law,” but a defined thing with specs, a format, how it works, what it covers, etc. etc. etc.
This is where a whole bevy of creative geniuses – from graphic designers to writers to architects to digital designers to educators and more – turn rough ideas into step-by-step learning experiences that an average person will be able to facilitate for themselves.
Evaluation Role? Test Prototypes of Key Pieces
This stage gets you the most bang for your buck because learning in exhibits is kind of a leap of faith. You design this whole experience, and then… let it go. Unlike facilitated programs, you can’t adjust on the fly if someone gets confused.
Without a human intermediary, that design has to do a LOT of MF-ing work.
That is why it can be extremely cost-effective to spend time testing the most critical strategies to see if they work as you intend. And to do that while they are made out of cheap foam core, photos, tablets, and copy paper. Test your first draft of strategies before you pay for the expensive materials and tech builds. (Even if you design interactive-free exhibits, labels are expensive. Test your interpretive approach!)
Sometimes you see visitors completely misconstrue a takeaway because some innocuous design element conveys something wildly off-base.
Sometimes you find that a minor adjustment (to language or design) could lead to a huge improvement in clarity or learning.
Sometimes you see a design that completely hits its mark.
Sometimes you find that an idea just completely misses the mark. Sometimes it reveals that something is much more complex and needs a total rethink.
No matter what, adjusting designs to increase usability or takeaways before you drop tens-of-thousands of dollars to build something? That’s worth it.
Step 4: Exhibit Soft Opening = Test the Fixable
In the movie version of this newsletter, here’s where we insert the wavy lines, the harp music, and a title card that says: “Many Months in the Future.”
Because building all that stuff takes time. And we must be patient until then.
I've found that having a true soft opening is a bit rare. But if you built a timeline with an opportunity to open to the public AND saved money to make adjustments – this is where you have one last opportunity to tweak your design.
Evaluation Role? Test What You Can Change
This stage is called “remedial” evaluation (honestly, a horrible name), when you take advantage of the fully built environment to find things that are not going as planned, figure out why, and adjust.
For this to be productive, rather than frustrating, focus on things that are materially fixable with the time and resources you have left. For instance, you may not be able to change the technical elements of an interactive, but can you tweak signage to make it easier to use? To convey the purpose?
This can involve formal study. But I find this stage can often happen with informal observation by designers, educators, and real visitors. If it's a glaring problem, you'll probably see it.
But key to all of this? Funds to make changes based on what you learn.
Step 5: Final Exhibit = Summative Outcome Testing
When the exhibit is finally a real thing, the exhibit development team should celebrate!
But, inevitably, developers also start to feel anxious in a new way. They put so much work into this product, and they can’t help but wonder: Did it work?
And that’s where we come in.
Evaluation Role? Did it work? And what can we learn for next time?
Step 5 is where most projects actually invest in their evaluation. Often because there’s a funder who needs this answer. But also, of course you want to know if the final product worked the way you planned it!
If formative testing prototypes is the cost-effective phase, summative evaluation of outcomes is the emotionally satisfying (and emotionally fraught) phase.
This is where we look at the whole: Do visitors get the Big Idea? How do they use it? What does that tell us?
There is value in taking time to understand what this exhibit did well in its final form. Because most exhibits do something well. And whatever that is, it’s useful to understand and learn from.
It is also a chance to learn from what didn’t happen as expected. Those findings usually point to universal lessons about visitor engagement with ideas or materials.
Either way, findings shouldn't just tell us about this exhibit, but give us “lessons learned” to carry forward into the next design process.
Real World Example:
"Fewer words, and fewer big words."
That was a big “lesson learned” a client team synthesized from our evaluation results during prototype testing.
This was a case where we tested very lo-fi mock-ups of exhibit elements. We had visitors explore them and gauged what landed – and what did not. When I say lo-fi, I'm talking:
- iPad with an interactive quiz game (that we roughed up using survey software)
- Paper card-based version of an interactive game
- iPads with mocked up digital “go deeper” labels, where people could select how much to keep reading on a given story (and when they X out)
- Object images with draft labels, all on foam core
- Renderings of the envisioned final exhibit (far more beautiful than foam core)
No one would mistake it for a final, polished experience. But it was enough for people to test the ideas. And for us to see what they chose to do and how they made sense of it. We learned:
- Just one counter-intuitive direction in an interactive can confuse the whole thing.
- Many people have way less familiarity with fundamental pieces of history than we might hope.
- Those who are familiar appreciate that it might not be common knowledge, and are glad to be reminded of it.
- Familiar stories can be viewed as just as compelling as novel ones.
- And, of course, to use “fewer words, and fewer big words.”
I’ve considered having that advice printed on a t-shirt. Anyone else want one?
Where in this process does evaluation actually happen at your institution? And where do you wish it did? Reply and let me know!