Three Considerations when Measuring “Success” in Development Cooperation: A Conversation with Marcus Jenal

BY: Zenebe B. Uraguchi, Marcus Jenal – 4 May 2019

Two years ago, Zenebe Uraguchi of Helvetas had a conversation with Rubaiyath Sarwar of Innovision Consulting on how a fixation on chasing targets leads development programmes to miss out on contributing to long-term, large-scale change. In March 2019, Zenebe met Marcus Jenal in Moldova. Marcus works on how complexity thinking can improve development practice.

This blog summarises their dialogue on three considerations for development practitioners who apply a systemic approach and want to measure success in terms of contributing to systemic change.

By systemic change, we mean changes in the dynamic structures of a system – rules, norms, customs, habits, relationships, sensemaking mechanisms or, more generally, institutions and world views – that shape the behaviours or practices of people: individuals, businesses, public sector agencies, civil society organisations, etc.

ZU: Programmes that apply a systemic approach in development cooperation often struggle to measure the systemic change they effect. A couple of years ago, Michael Kleinman argued in The Guardian that “the obsession with measuring impact is paralysing [development practitioners].” Without obsessing over measurement, I believe development programmes still need a measurement system that is right-sized – appropriate in scope and timeframe – for effectively measuring impacts.

MJ: For me, the challenge is how to find a broader way of effectively showing success when attempting to stimulate systemic change. This means not reducing our success factors to the measures we already know how to measure. We need to keep an eye on how our role contributes to broader change, for example by using different methodologies and appreciating the perspectives they provide. This will certainly help demonstrate how a programme contributed to meaningful change in a system. A programme will need to weave different sources and types of evidence into a coherent story. Of course, the story we tell also needs to make clear that there are other factors influencing the change programmes have contributed to.

ZU: In my recent reflection on the Market Systems Symposium in Cape Town, I emphasised the concern that the evidence on the impact of programmes that apply a systemic approach is thin. One of the key challenges is the tension between short-term and long-term results. Can such a tension be managed or reconciled?

MJ: This tension exists in most programmes that apply a systemic approach. On the one hand, there’s a requirement to show results within a given time frame (e.g. partners have taken up new ways of working and are showing successes in terms of investment and job creation). This often pushes programmes towards incremental interventions with no transformational effect. On the other hand, programmes also need to invest in more fundamental, long-term systemic changes (e.g. changes in how different institutions interact, improved participation in labour markets).

The key point here is that whenever we design interventions or prepare sector strategies, we need to explain how we expect changes to happen and in what sequence. In other words, we need to explicitly state which changes we expect in the short term, the medium term and the long term. By categorising the effects of our interventions in this way, I think it’s possible to come up with different types of indicators appropriate for the different stages. In using such ways of measuring change, programmes should work with donors and their head offices to manage expectations and tell the narrative of how they expect changes to unfold over time.

ZU: Many development programmes operate in complex and dynamic contexts. I’m aware that adaptive management can sometimes be viewed as an excuse for “making things up as programmes go along”. Yet, the point I’m trying to make is that the context can shift quickly, and strategies need to be adapted continuously. This means that monitoring and results measurement must keep pace with such changes – for example, by providing access to reliable and timely information through an agile monitoring and results measurement system.

MJ: I agree with your point. Development practitioners still evaluate programmes that work towards facilitating systemic change using methods that haven’t been adjusted to this more systemic way of doing development. The evaluation methods follow a “traditional” model built to show direct effects (clear, discernible cause-effect links, linear effects, no feedback loops). For me, this is to a certain extent unfair towards programmes that take a systemic approach. So, we need to ask ourselves two questions: what does success mean, and how do we measure it accordingly, for programmes that work towards systemic change? Only then is it reasonably possible to show whether an initiative has been successful or not. An immediate follow-up question is: how can this be done? There are good examples of methodologies able to capture systemic effects in the evaluation community and, to a certain extent, also in the social sciences.

ZU: Systemic approaches aren’t entirely new. They put lessons from decades of development work into a set of principles and frameworks to guide development programmes in their design, implementation and results measurement. If this is the case, why are we still struggling to figure out how to effectively measure success (at the system level) in development cooperation? Or is it the case that “development isn’t a science and cannot be measured”?

MJ: As I said above, perhaps it isn’t due to a lack of ability to show these changes but to the lack of adoption of appropriate methods in our field. Oftentimes we start development initiatives with the good intention to change or improve a system. We’re then soon confronted with the question: “how are we going to measure such a change?” Since we naturally default to the good practices and standards used in our field (or are even forced to use them by our donors), which are still predominantly based on a linear logic, we automatically measure only the effects that such methods can capture: direct effects. This, in turn, shapes the way we design our interventions and the way we work with partners to stimulate systemic change.

It’s a circular logic, you see: our focus will be on achieving targets defined through the measures we intend to use to gauge success – and if these measures aren’t systemic, our focus will not be on systemic change. This is what I call the “self-fulfilling prophecy” of measuring change in development cooperation.

ZU: Great points, Marcus. So, what do we make of our conversation? I see three key messages on measuring success: first, the measures we choose define, or at least influence, the way we work; second, we need to choose ways of measuring success that are in line with the kind of approach we use; and third, we should learn from recent experiences of evaluating success in the wider evaluation community.

MJ: That’s a good summary. Let me explain these three takeaways a bit more.