This week my colleague Jacqui McDowell (who has led our support for the cooking skills study group in planning the evaluation of their cooking courses) reflects on the challenge of finding the perfect methods or tools to find out what difference your course is making.
‘We all know that we have to evaluate our cooking skills classes, both to account to funders for the impact we are making with their money and to learn how to improve them. What folk often struggle with is how best to do it. If I had £1 for every time I’ve been asked about tools to gather the evidence needed, and about that “perfect one”, I’d be a lot better off and a lot less grey. The simple and perhaps unsatisfying answer is that we need to consider each piece of work outcome by outcome (e.g. participants have improved their cooking skills, participants have increased their knowledge of food labelling) to find evidence of impact.
It is about looking at each of our outcome indicators to work out how to capture evidence of whether a change is happening, and tailoring how we gather that evidence to suit its source. Of course we also need to think about whether we can get evidence of what things were like before and after the course (or at its end), and follow up with people later to check whether any positive changes have been sustained or built on. To make the case more compelling, it’s even better to gather evidence from different sources. This is often the trickiest bit of the puzzle: in our validated questionnaire (see the Cookwell project, an example of a ‘validated’ questionnaire, in our blog about questionnaires) participants may self-report positive changes that we, or interested third parties, just haven’t observed, and whose accuracy we can’t test or back up with more objective proof.
All this means evaluating our impact can feel hard, yet we all know that sense of exhilaration and delight when a participant who couldn’t chop an onion without cutting themselves, or was unwilling to try everyday veg or follow a recipe, goes and makes a pot of soup in class that everyone else raves about. And when we meet them months later in the street, their partner tells us unprompted that they’ve been taught how to make the soup from the course, or their kids start squabbling over which of the recipes they like best.
So evaluating our impact is a bit like building a jigsaw, just one where we have to work out what the pieces might look like and how to make them before we can start to put them together. Of course there are likely to be pieces we won’t be able to find, or ones that turn out differently to what we expected. And that is OK. Why? Because some evidence is just too darn hard to find, or we haven’t made the difference we expected to, which is where our best learning often comes from!’