Evaluating Digital Public History
When I was in middle school (at a private school), we had to do these self-evaluations at the end of every quarter, for every class. I think each one was about three pages long, and they asked how we thought we were progressing, what project we liked best, and what we thought our strengths and weaknesses were. No one I knew liked doing them. It wasn’t that we were 10- to 14-year-olds being asked to evaluate our own performance; it was that we neither saw nor knew how the evaluations were used. As far as we knew, they just went and sat in a file, maybe emerging for parent-teacher conferences, but we could not see any impact they had on our daily lives at school.
I mention this because one of the main impressions I got from this week’s readings was that evaluation processes need to have, and be driven by, Purpose. Evaluations should contribute to the ultimate goals of a project in some way and not just be another hoop to jump through. The feeling of throwing data down a hole is discouraging; coming back with useful data that can improve your project, or show that it’s being used, is rewarding. Evaluation should be a means as well as (or instead of?) an end.
That said, I don’t know that I’m ready to run complex evaluations on my own. One of the projects on which I’ve been working at RRCHNM has included periodic testing, which I count as a sort of ‘formative’ evaluation (to use a term from Birchall et al.). Periodic in-process evaluation helps us make sure we’re heading in the right direction and correct course as needed. Watching the evaluation process, and sometimes participating in it, is helpful for me, at least. Think of it as the lab half of a science course: conducting experiments while still reading the texts.
From this week’s readings, I have an idea of what makes for a good evaluation, both in its development and its use. Keep the goals of the project in mind, think about outcomes rather than outputs, and look for your strengths as well as your weaknesses when reading the data (Preskill). Bear in mind the variety of user experiences before, during, and after interacting with your project (Kirchberg and Tröndle). Consider how you plan to use the evaluation findings when deciding what sort of evaluation(s) to conduct (Birchall et al.). Make sure the findings are readable and accessible to the people who need them, up to and including training people in how to read them (Villaespesa and Tasich). These are only soundbites, obviously, but they capture some of the ideas I’m storing for future use.
- Elena Villaespesa and Tijana Tasich, “Making Sense of Numbers: A Journey of Spreading the Analytics Culture at Tate,” Museums and the Web 2012.
- Danny Birchall, et al., “Levelling Up: Towards Best Practice in Evaluating Museum Games,” Museums and the Web 2012.
- Hallie Preskill, “Museum Evaluation without Borders: Four Imperatives for Making Museum Evaluation More Relevant, Credible, and Useful,” Curator: The Museum Journal 54:1 (Jan. 2011): 93–100.
- Volker Kirchberg and Martin Tröndle, “Experiencing Exhibitions: A Review of Studies on Visitor Experiences in Museums,” Curator: The Museum Journal 55:4 (Oct. 2012): 435–452.
- IMLS’s Evaluation Resources