The Advocacy Iceberg

February 26, 2013

Glories and triumphs shrunk


Show Notes

The ‘results agenda’ – in its various manifestations – continues to gain momentum.

‘Results agenda’ is an umbrella term that means different things in different contexts but I think it’s fair to say there’s a common basic idea. It’s about gathering robust evidence about results and then focusing on ‘what works’.

So far, so good you might say.

In theory at least, it’s hard to see why anyone could object to that. But this deceptively simple concept gets a bit more problematic when applied to real situations.

And that’s why critiques are burgeoning – within international development, in relation to payment by results, and social impact measurement more generally.

I’m mainly interested in looking at it from a campaigning perspective. Some of the issues overlap with those arising in other fields – especially in situations that are complex and not straightforward (i.e. life generally). Some, I think, may be more specific to campaigning.

‘Results’ could of course be anything – but typically they are narrowly defined as a specific, measurable end product. And this reductionist focus carries a whole set of potential disadvantages.

(Seven by my calculation.)

For a start, as well argued elsewhere, it can create perverse incentives, in that it can:

1)     Constrain ambition

If the main or only thing you are being judged against is the ‘result’, there’s an incentive to identify goals that will be relatively easily achieved, and that can be relatively easily measured. This moves campaigning away from seeking transformational, systemic change, and reinforces the trend towards looking for thin solutions to increasingly thick problems.

2)     Encourage overclaiming, about both the result and the influence brought to bear.

It exacerbates the tendency for campaigning organisations to be too ready to proclaim ‘we won’ where both ‘we’ and ‘won’ are highly questionable.

3)     Reinforce a focus on upward accountability

It seems pretty clear that the dynamics around reporting to funders and to others mainly interested in what results have been achieved can correspondingly limit (a) accountability to partners and (b) strategic learning.

These things are all difficult enough to get right in campaigning, relying on a delicate balancing act. Disproportionate attention to ensuring and showing results can tip all this over.

I also think that in campaigning terms the results agenda is predicated on, and then further encourages, a misreading of the nature of reality (no less).

The desire to focus on ‘results’ in campaigning implies levels of certainty and objectivity that don’t in fact exist, in that:

4)     The value of the result can always be contested.

The same result can be (and often is) regarded as a victory by one party and a shameful sell-out by another. Value is defined in terms of (perceptions of) what it could have been. Because history only runs once, this is necessarily a judgement call that will be made differently by different people.

5)     How the result came about is also not objectively decipherable.

Even campaign targets themselves – assuming they would want to be totally transparent about their motivations, which they generally have no reason to be – could only ever give a partial and patchy account of why they acted in certain ways. Decisions are based on all kinds of influences, and in complex contexts it’s not realistically feasible even to self-identify the range and balance of factors driving every decision.

It’s possible – and valuable – to have a go at untangling the forces at work. But what emerges will only ever be one possible version of how change came about (or not), rather than an unambiguously correct version. There’s no definitively ‘true version’ of events waiting to be uncovered.

6)     Results are not static.

In campaigning, more so than in other disciplines, progress can be non-linear. The same campaign could be widely judged a failure at one point and a great success some time shortly after (and then possibly a failure again, some time after that).

7)     A result may not tell you that much of practical use anyway.

A result isn’t in any case necessarily a sign of ‘effective’ campaigning. Given the plethora of other influences, a mixture of chance and extraneous factors can lead to all sorts of outcomes that have very little to do with the effectiveness, or not, of a particular campaign.

Nor is past performance – even if you could set it out in those terms – necessarily any guide to future prospects. Identified ‘success’ is unlikely to be replicable in the future. So as a tool for decision making it may not be that reliable or helpful.

The overarching point is that, in campaigning, everything (not just the result) is contestable.

In science – by experimenting or through testing the veracity of a theory – opinion can be identified as right or wrong. In campaigning, that’s possible at an operational level (e.g. in testing responses to different messages). But beyond that, in Karl Popper’s terms, nothing is falsifiable.

No amount of evidence will ever definitively close down debates about optimum strategy for example, because the big questions lie in the understanding.

And a lot of that revolves around power: who has it, as well as different people’s understanding of how it plays out in driving or constraining social and political change.

It’s not in question that seeking to explore and interrogate campaigning effectiveness is a worthwhile and fruitful exercise. And that there needs to be some way to go beyond leaving everything to intuition.

But measuring things is not necessarily the best means of untangling the dynamics of change, or understanding what’s going on. What matters more is the quality of the interpretation, the dialogue around it, and the thinking it stimulates about future strategy.

Taking a good look at outcomes would always be a critical part of any assessment. But a predominant focus on, and interest in, results risks squeezing out the space for more useful analysis.