How I learned to stop worrying and love evaluation

Andrew Milner


Impact measurement, evaluation, benchmarking, non-profit analysis: what works?

Barry Knight and Lisa Jordan can generally be counted on to present a lively session and this was no exception.  Proceedings opened with a short video ‘skit’ on foundations’ frequently chaotic approach to evaluation which played to a packed house (standing room only) and was gratefully received by an audience more used to spoken presentations than entertaining videos.

As the Bertelsmann Foundation’s Bettina Windau, an able moderator of the session, pointed out, measurement is a key part of foundation accountability, yet foundations are not overfond of it. It’s a bit like dental check-ups – we know they’re good for us, but we still don’t like them. However, as Lisa Jordan remarked, measurement is critical to answering the key question of whether foundation resources are being efficiently used to create public benefit.

But, as a number of participants pointed out, with so many methods and tools on offer, how do you choose the one that's going to work for you? Barry Knight acknowledged the danger of creating a large, expensive industry that did no real good and offered four criteria for choosing a method: it needs to be owned by the organisation; it needs to be useful; it needs to be robust – that is, it delivers valid and reliable results; and it needs to be simple.

Moreover, said Lisa Jordan, evaluation needs to be embedded in the organisational culture. Foundations, remarked someone, are often more ready to evaluate their programmes than themselves, but understanding our impact as an institution can help us win the political battle. We've been bad at explaining to the public what we do. If we don't explain what we do and justify our privileged status, we have a real problem. On the other side, foundations have freedom and can, at their best, provide what he called 'venture capital for a new world'. But they need to tell reliable stories and be honest about what works.

A number of difficulties were raised by the audience: how can you measure advocacy? How can you measure your contribution to social change when such change demands the efforts of a whole constellation of forces? Is it possible to produce one consolidated method from the plethora of means and tools now available? Barry and Lisa grappled with these questions. Both agreed that evaluation needed to be part of the planning stage – begin with baseline data. If you want to find out what you've done, you need to know where you're starting from. Foundations are, of course, bit players when it comes to moving societies in one direction or another, but they can still play crucial influential roles, and they can assess the effectiveness of those roles by asking themselves carefully and honestly what success would look like in regard to this or that social change and which actors they need to influence or work with in order to bring it about.

As to having a composite method, Lisa Jordan felt it would be both impossible and undesirable. Social change was far too complex for any 'one size fits all' method.

So work out what it is you want to learn from an evaluation, build it into the planning of a programme where possible, keep it as simple as you can, and choose your means to suit the task (don't use a randomised controlled trial, for example, to evaluate a hearts-and-minds campaign). Looking around and listening, it was clear the group took a lot away from this session – where Barry and Lisa couldn't answer a question immediately, they were happy to refer participants to useful websites. However – and of course – they couldn't provide pat answers to some of the perennially vexed questions of the field, notably attribution: how do you know that it's your hand on the lever that has wrought a particular change, when the world doesn't work like a machine? Barry and Lisa pointed to the work they were doing on measuring social justice philanthropy and referred participants to it. As they conceded, it is still a work in progress.

Perhaps the moral is that the difficulty of attribution where large effects are sought shouldn't mean that you stop trying to measure them. As Atallah Kuttab pointed out from the floor, quoting Einstein: 'absence of evidence is not evidence of absence.'


3 responses to “How I learned to stop worrying and love evaluation”

  1. Great overview, Andrew, of an engaging session. The area of evaluation, impact measurement and non-profit analysis is bewildering, especially with the sudden proliferation of tools – the Foundation Center website, for example, lists 150 different approaches under its Tools and Resources for Assessing Social Impact (TRASI) page. So how do you choose?
    Barry provided four criteria for choosing a tool:
    1. Owned – the system must be “owned” by everyone involved; it is there to be used by everyone, every day.
    2. Useful – it must generate results you can use.
    3. Robust – it must deliver results that are valid and reliable.
    4. Simple – not too many moving parts!

    Evaluation and measurement start before you make a grant! Start with the baseline data and then decide where you want to finish up. Ask yourself:
    a. What does success look like (write a story – make it tangible)?
    b. What do you need to do to get there?
    c. Who do you need to involve?
    d. When can you expect to see the results (build a timeline)?
    And at the end of the project, share what you have learned; publish the results so others can build on the knowledge.

  2. Excellent post on one of the most arcane aspects of foundations. The notion of owned, useful, robust and simple is particularly helpful as I develop a social ROI to report to my larger multinational funders.

  3. Andrew, thanks for your post on this inspiring session.

    I was also intrigued by the substance of the session. However, although the introductory movie was striking, funny and sharpened the constructs and level of analysis in a meaningful way, the session didn't fully follow through on the distinctions it drew, and the questions – from my point of view (sitting in one corner of a packed room) – still jumped around too much.

    In addition, I wanted to mention that the discussion missed a very important strategic distinction by not drawing clear boundaries between institutional strategy and resource-based strategy: gaining legitimacy through improved disclosure and accountability was frequently referred to as an important outcome of measuring impact. But the argument brought forward by Judy – that creating a learning culture must be a strategic choice when applying impact assessment tools – was neglected during the session and, likewise, in this blog…

    Impact assessment should be seen not only as an institutional response but also as a learning strategy: boosting the organizational effectiveness of foundations in order to diffuse best practices, provide meaningful capacity building among grantees and enable replication through smart philanthropy.

    The debate must go beyond a response strategy to improving foundations and the learning they are capable of diffusing – something that is impossible in the for-profit sector (or is arranged more professionally, in more distinct and formal organizational forms of collaboration), since diffusion undermines competitive advantage in the marketplace.
