
Evaluating Digital Public History

When I was in middle school (at a private school), we had to do these self-evaluations at the end of every quarter, for every class. I think each one was about 3 pages long, and they asked how we thought we were progressing, what project we liked best, and what we thought our strengths and weaknesses were. No one I knew liked doing them. It wasn’t that we were 10-14 year olds being asked to evaluate our own performance; it was that we neither saw nor knew how the evaluations were used. As far as we knew, they just went and sat in a file, maybe emerging for parent-teacher conferences, but we could not see any impact the evaluations had on our daily lives at school.

I mention this because one of the main impressions I got from this week’s readings was that evaluation processes need to have, and be driven by, Purpose. Evaluations should contribute to the ultimate goals of a project in some way and not just be another hoop to jump through. The feeling of throwing data down a hole is discouraging; coming back with useful data that can improve your project or show that it’s being used is rewarding. Evaluation should be a means as well as (or instead of?) an end.

That said, I don’t know that I’m ready to run complex evaluations on my own. One of the projects on which I’ve been working at RRCHNM has included periodic testing, which I count as a sort of ‘formative’ evaluation (to use a term from Birchall et al.). Periodic in-process evaluation helps us make sure we’re heading in the right direction and readjust course as needed. Watching the evaluation process, and sometimes participating in it, is helpful for me, at least. Think of it as the lab half of a science course: conducting experiments while still reading the texts.

From this week’s readings, I have an idea of what makes for good evaluations, both in development and in use. Keep the goals of the project in mind, think about outcomes rather than outputs, and look for your strengths as well as your weaknesses when reading the data (Preskill). Bear in mind the variety of user experiences before, during, and after interacting with your project (Kirchberg and Tröndle). Consider how you’re planning to use the evaluation findings when deciding what sort of evaluation(s) to conduct (Birchall et al.). Make sure the findings are readable and accessible to the people who need them, up to and including training people in how to read them (Villaespesa and Tasich). These are only soundbites, obviously, but they capture some of the ideas I’m storing for future use.


One Comment

  1. With the exception of the testing you mentioned at CHNM, I have only been involved in one user-testing scenario. This one was for a new youth program. When testing was completed, the responses broke out into three groups of about equal size: too easy, too hard, just right. I took this to mean the program was about right, with some necessary tweaking to strengthen the most popular activities and to rework some of the least popular, most difficult activities to bring more students from “too hard” to “just right.” But others I worked with saw the presence of anyone who felt that the program was too hard to mean that it should be scrapped entirely. Unlike other weeks’ readings, this week’s set was more about how and why to user test, instead of how to implement user testing’s outcomes at an institution. I think that could have been a helpful addition, because people can interpret data in a multitude of ways. You’ll have to learn not only to compile data, but also to interpret its meaning.
