Impact measurement, evaluation, benchmarking, non-profit analysis: what works?
Barry Knight and Lisa Jordan can generally be counted on to present a lively session and this was no exception. Proceedings opened with a short video ‘skit’ on foundations’ frequently chaotic approach to evaluation which played to a packed house (standing room only) and was gratefully received by an audience more used to spoken presentations than entertaining videos.
As the Bertelsmann Foundation’s Bettina Windau, an able moderator of the session, pointed out, measurement is a key part of foundation accountability, yet foundations are not overfond of it. It’s a bit like dental check-ups – we know they’re good for us, but we still don’t like them. However, as Lisa Jordan remarked, measurement is critical to answering the key question of whether foundation resources are being efficiently used to create public benefit.
But, as a number of participants pointed out, there are so many methods and tools, how do you choose the one that’s going to work for you? Barry Knight acknowledged the danger of creating a large, expensive industry that did no real good and offered four criteria for choosing a method: it needs to be owned by the organisation; it needs to be useful; it needs to be robust – that is, it delivers valid and reliable results; and it needs to be simple.
Moreover, said Lisa Jordan, evaluation needs to be embedded in the organisational culture. Foundations, remarked one participant, are often more ready to evaluate their programmes than themselves, but understanding our impact as an institution can help us win the political battle. We've been bad at explaining to the public what we do. If we don't explain and justify our privileged status, we have a real problem. On the other side, foundations have freedom and can, at their best, provide what he called 'venture capital for a new world'. But they need to tell reliable stories and be honest about what works.
A number of difficulties were raised by the audience: how can you measure advocacy? How can you measure your contribution to social change when such change demands the efforts of a whole constellation of forces? Is it possible to produce one consolidated method from the plethora of means and tools now available? Barry and Lisa grappled with these questions. Both agreed that evaluation needed to be part of the planning stage – begin with baseline data. If you want to find out what you've done, you need to know where you're starting from. Foundations are, of course, bit players when it comes to moving societies in one direction or another, but they can still play crucial influential roles. They can assess the effectiveness of those roles by asking themselves carefully and honestly what success would look like for a given social change, and which actors they would need to influence or work with in order to bring it about.
As to having a composite method, Lisa Jordan felt it would be both impossible and undesirable. Social change is far too complex for any 'one size fits all' method.
So work out what it is you want to learn from an evaluation, build it into the planning of a programme where possible, keep it as simple as you can and choose your means to suit the task (don't use a randomised controlled trial, for example, to evaluate a hearts and minds campaign). To judge by the looks and comments around the room, participants took a lot away from this session – where Barry and Lisa couldn't answer a question immediately, they were happy to refer participants to useful websites. However – and of course – they couldn't provide pat answers to some of the perennially vexed questions of the field, notably attribution: how do you know that it's your hand on the lever that has wrought a particular change, when the world doesn't work like a machine? Barry and Lisa pointed to the work they were doing on measuring social justice philanthropy and referred participants to it. As they conceded, it is still a work in progress.
Perhaps the moral is that the difficulty of attribution where large effects are sought shouldn't mean that you stop trying to measure them. As Atallah Kuttab pointed out from the floor, quoting Einstein: 'absence of evidence is not evidence of absence.'