The history of our knowledge about, and understanding of, the world can be summarised (amongst other ways) as follows:
Speculation. Plausibility. Certainty. Uncertainty. Unknowability.
Aristotle suggested that stones fall to earth because they are trying to get back to their home. Fire travels upwards because it lives in the heavens. He invented logic and was a major catalyst for the Renaissance, so we can forgive him these misreadings. The point anyway is that he was speculating.
A few centuries later, Francis Bacon stuffed a chicken with snow and experimental science was born. Processes of data collection and systematic observation opened the door for making more plausible, evidenced assertions.
The apotheosis of the era of (putative) certainty came with Newton’s laws of motion, their universality holding out the hope that everything was predictable if only there could be enough data. In response, philosophers such as Hobbes sought to identify rules of society equivalent to these physical laws.
It’s all been slowly unravelling ever since.
Philosophers like Hume looked for areas of certainty but found only pockets (just because something has always happened in the past doesn’t mean it will continue to happen, etc). In science, Einstein showed that there is no objective vantage point from which to view reality. And quantum physics introduced so much quirkiness that all bets were off.
In the social sciences, alongside post-modernist assaults on grand narratives, the case that reality is socially constructed has been widely made.
And sociologists such as Duncan Watts are now arguing that network effects (including those enabled by new technologies) multiply the realms of uncertainty to the extent that outcomes are not just uncertain but are fundamentally unknowable.
Watts’ argument is that, in trying to draw lessons from the past, knowing the outcome, we inevitably and naturally build a case around that, and imagine causes because they seem relevant to where we ended up.
But in fact there is so much unknowable along the way – about motivations and factors influencing behaviour for example – and so much chance built in – particularly when there are multiple players and multiple interactions – that any explanation can amount to no more than a partial description: some things happened and others didn’t (but could just as well have).
In any case, such explanations can give little or no guide to what will happen next, given massive unpredictability and the fact that no two situations can ever be the same.
History only happens once. And you can only know what will happen in the future by ‘running the system’ (i.e. by watching – or better making – it happen).
Twenty years ago when I was at Oxfam, evaluation/social change guru (then and still) Chris Roche shared with me a copy of an academic paper on chaos and complexity theory. The notion that complexity thinking could usefully be applied to understanding how social change happens was, I think, pretty new in the sector at the time; it has become relatively commonplace since.
Individually and collectively, concepts such as emergence, hypersensitivity to initial conditions, self-organisation, system bifurcations, etc, carry powerful implications for the degree of volatility and unpredictability in any system, and for the nature of the possibilities of, and blockages to, change.
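Hypersensitivity to initial conditions can be made concrete with a toy model. Below is a minimal sketch (my illustration, not from the post) using the logistic map, a standard example from chaos theory: two starting points that differ by one part in two million soon produce trajectories that bear no resemblance to each other.

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r * x * (1 - x) and return the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two nearly identical starting conditions...
a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2000001)  # differs by 0.0000001

# ...stay close at first, then diverge completely: after a few dozen
# iterations the gap between the two trajectories is of order 1,
# even though the initial difference was of order 1e-7.
print(abs(a[5] - b[5]))    # still tiny
print(abs(a[30] - b[30]))  # no longer tiny
```

No amount of extra precision rescues prediction here: shrinking the initial error only delays, rather than prevents, the divergence.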
In recognition of this, there has been a fair amount of thinking about how models of campaign planning and evaluation – and how we go about changing the world – should be adapted to incorporate these new realities.
But I’m wondering if this thinking goes far enough. Ultimately the temptation is always to shoehorn the approach into something recognisable, something that gives the impression that there is some kind of robust underlying strategy, and the possibility of learning lessons in a systematic way.
‘Theories of change’ approaches might be said to be a classic example of this partial approach. Rejecting the usefulness of developing detailed operational plans and focusing more on a strategic overview is all good. The argument for ‘theories of change’ is sound and has been well made. But the approach still assumes a degree of controllability and probability that may not actually accord with reality.
I’m speculating here, so could be as wrong as Aristotle, but I suspect a systematic look back at past plans, to see the extent to which expectations as set out initially came to pass, would invariably reveal major discrepancies between plan and reality. (Evidence of [lack of] success in prediction across all kinds of social scientific areas would certainly point to that conclusion.)
If that were true, then arguably the element of planning that tends to be most valued – the output, a physical plan arising from the process – is perhaps the least useful part of it.
What can be more useful is the process itself, insofar as it allows space for critical discussion, enhances conceptual understanding, and creates a forum where assumptions can be challenged, differences can be aired, and perceptions and intelligence shared.
And if we move towards accepting the idea of unknowability, an alternative way to think about campaign planning would instead be that you have (a) a goal, (b) a good idea of the power dynamics surrounding the issue, (c) some robust thinking around multiple possible routes to the goal, and (d) a couple of islands of certainty (known external influencing points for example) in the sea of fog.
And then from there you just take it day by day. As the campaign unfolds, multiple possibilities continually collapse into reality, and you feel your way along the paths that reveal themselves as leading towards the goal.
In campaign evaluation terms, the implication is that what can be learnt from past experience may typically be exaggerated. The argument would be that a review of the past can only tell you that in a particular and unique, and unrepeatable, set of circumstances, there’s a case that such and such contribution was made.
This couldn’t convey any probability of replication; nor does such an analysis necessarily even validate (or invalidate) the strategy, given that something other than what actually happened could just as easily have happened instead.
On that basis, evaluation could still be useful if focused, for example, on how well a particular organisation or group marshals and mobilises its resources in ways that generate, exploit and leverage advantage, and how internal processes are fit or not for purpose. And it’s always useful to explore how others perceive you.
But expecting any more than that may be just as sensible as Aristotle’s suggestion that the function of the brain was to cool the blood.
blog by Jim Coe
I’m on twitter at @jim_coe
NB these thoughts draw (somewhat loosely) on: Philip Ball, ‘Critical Mass’; Vlatko Vedral, ‘Decoding Reality’; David Orrell, ‘The Future of Everything’; and in particular Duncan Watts, ‘Everything is Obvious’