
‘Make it Happen’ in Development: Sensemaking and Portfolios of Options in the Governance Space. In conversation with Emilia Lischke

The design and implementation of development initiatives in support of policy and governance reforms often contend with systems of interlinked problems and non-linear, unpredictable change processes (Harry Jones 2011). Over the last decade or so, there has been growing recognition that the development sector needs to move away from linear, results-based logic and adopt more politically aware, experimental, and adaptive approaches to address complex governance challenges.

But what does this mean in practice?

Duncan Green has written that most of the contributions around adaptive programming come from ‘academics and think tankers’. They often offer useful principles but do not always give concrete steps to the people who design and implement programmes and projects.

In January 2020, I participated in a portfolio sensemaking workshop organised in Helsinki by the Finnish Innovation Fund (SITRA). Gina Belle from the Chôra Foundation was one of the presenters. She introduced the foundation’s work on designing portfolios of strategic options and on portfolio sensemaking.

To me, that presentation opened the door to a method and process that, on the one hand, spoke the language of systems and of the complexity of development issues and, on the other, provided a concrete way to focus, learn, reflect, and inform the strategic decisions and adaptation of a programme or project.

So, I started to read more about it and got in touch with Chôra to learn more about their work. Today I am speaking with Emilia Lischke, a director at the Chôra Foundation, to hear about the origin of the method and about her experiences introducing it to the United Nations Development Programme (UNDP) in Malawi.

Exploring social systems transformation, Emilia Lischke

Emilia, when did you come across portfolios and sensemaking and why did you find these concepts and ideas interesting?

I came across Chôra’s Portfolio Approach in 2015 at Suncorp, an Australian bank and insurance company. I was on a year-long fintech fellowship at the time, exploring how customer-focused disruptions in financial services create new opportunity spaces to rethink the relationship between financial services and corporate responsibility. For three months at Suncorp, I joined the Risk and Innovation Division, led by Kirsten Dunlop. This is when I first met Gina Belle and Luca Gatti and became acquainted with their deep understanding and practice of designing and managing Strategic Portfolios. Since then, we have spread the idea and application of a strategic portfolio approach to the non-profit and development space, co-designing and managing a variety of Portfolios around cities, trust, governance, and tourism, to mention just a few.

How do you define a portfolio? What is sensemaking? And what happens when we bring them together?

Portfolios remind us that there is no single magic bullet for solving problems; it is the combination of trying out many things and approaches, in a way that enables learning from experience and adapting, that matters. No one knows what can and will ultimately catch a system’s attention or shift people’s thinking and behaviour. Portfolios give us a range of experiences and, literally, options to choose from when deciding on the next step. What makes Chôra’s approach to portfolios unique is that we link the value of portfolios to strategic decision-making, adaptation, and commitment to action. Here sensemaking plays a key role: it is the process by which we make sense of the experience of evolving portfolios and build them up as genuine solution-discovery systems.

You are testing the method with the UNDP country office in Malawi. Tell me more about this.

We have been working with the UNDP country office in Malawi since 2019. We started with Sensemaking to help them strategically reposition their governance portfolio. In the course of that work, we designed new interventions that are being implemented with the government of the country, and we are continuing with Sensemaking as a method to dynamically manage their Portfolio. What we have been observing over time, having just recently completed our third sensemaking experience, is how learning and meaning are huge catalysers in human systems. Sensemaking unsettles some of our most hard-wired limitations on agency, hierarchy, and voice in the corporate and organisational world. It unlocks potential in teams and individuals and creates new impetus for collaboration, change, bold thoughtfulness, and exploratory endeavours even in the least likely places, and I think it is exactly these least likely places that need it the most.

A key concept underpinning the Chôra Foundation’s Portfolio Sensemaking method is that when we address complex social and policy challenges, we must have the freedom to experiment and, within it, to fail. That seems a difficult thing to do, or to suggest to development organisations aligned with results-based logic.

In a controlled lab environment, I can see how this notion of failure and experimentation can lead to incremental problem solving and eventually to great discovery. The scientist designs and performs an experiment and measures its effects. If unsatisfied, she can always stop, return to the starting point, and begin again with the intent of making a deliberate small adjustment to her setup. But the more interesting question is whether that notion of experimentation and failure is useful for those doing the hard job of addressing complex and evolving social and policy challenges out in the world. They cannot go back in time and repeat the same intervention, nor can they control the environment to single out the thing that worked or did not. I think such is the nature of policy and social challenges. They need an operating model built on principles of adaptation, dynamic exploration, and constant reframing. There is no end point and therefore no failing or winning. Every action will have some sort of effect: positive and negative, intended and unintended. It is not the effects per se, but how we build on them and what we make of them, that will make a difference. This is where experimentation falls short of the messy and interesting complexity of time and of changing contexts.

I was recently in a conversation with Nora Bateson. She made a point that made me think quite a bit. Problems do not appear out of thin air. They are the result of the behaviour of a system. Thus, I think that in order to address a problem, one has to find a way to influence and change the way the systems behave. How do you support development initiatives to shift the focus from trying to address problems head-on to influencing how systems behave? 

Imagine a darts game where the problem and the solution are clearly defined. The player has several shots, and the result is objectively observable, measurable, and reliable; here success and failure are clearly defined. If you see the challenges and interventions of the development world in this way, I can see how one might entertain the idea of experimentation and failure. However, others would argue that society and life are complex adaptive systems. Constantly changing, they do not follow a predetermined principle of points increasing towards the centre of a board. On the contrary, their centre shifts and jumps, and areas and boundaries move, fade, and reappear somewhere else. And if you start to look at it this way, then you start to realise that no matter how many arrows you have, they won’t help you discover where that fleeting central point went and where it might appear next. You literally need to play the game differently: not shooting at the board, but becoming part of it. You play and dance with it, immerse yourself in it, and learn to sense your way into its behaviour, its needs, and its possibilities, and derive from there a deeply informed sense of action and resolution.

Emilia Lischke, thank you very much.

Originally posted on Systems Change Finland blog

Photo credit: Alina Grubnyak on Unsplash


A shift in perception: exploring ways to work with complexity in international development

I really enjoyed going back this week to this 2016 paper by ODI’s Anne Buffardi: When Theory Meets Reality.

Anne Buffardi, exploring ways to work with complexity

I went back to Anne’s paper because I am doing some work on defining principles that can help design and set up monitoring, evaluation and learning (MEL) systems for development projects and programmes that tackle complex social and governance problems.

I have been quite involved over the last few years with the discussions in various communities: PDIA, Doing Development Differently, adaptive programming, and to some extent thinking and working politically.

There are a lot of discussions and conversations about principles that can help shift the way development initiatives are designed, managed, and implemented (PDIA is a bit different: in addition to principles, it also proposes concrete tools to manage uncertainty and complexity). These principles speak to the shift of perception needed in the international development community: moving from a mindset that sees problems and possible solutions in isolation from the contexts and systems that enable them, towards one that dances with the systems in which initiatives operate.

This shift is important and necessary, but translating these principles into practical actions and ways of working is difficult. Anne mentions some of these challenges in her Methods Lab paper.

I face the same challenge in my work on problem-driven political economy and on MEL systems design. I am on board with the principles. I agree with and believe in them, but then struggle to apply them to, for example, coming up with concrete outcome indicators that speak to the complexity and uncertainty the project is trying to influence.

Anyhow, the journey continues, and the paper Anne wrote in 2016 has many insights that resonate with me. I am pasting the key messages of Anne’s paper below. Go online and download it. It is a great read.

  • Over the last 50 years, repeated calls for adaptive learning in development suggest two things: many practitioners are working in complex situations that may benefit from flexible approaches, and such approaches can be difficult to apply in practice.
  • Complexity thinking can offer useful recommendations on how to take advantage of distributed capacities, joint interpretation of problems and learning through experimentation in complex development programmes.
  • However, these recommendations rely on underlying assumptions about relationships, power and flexibility that may not hold true in practice, particularly for programmes operating in a risk averse, results-driven environment.
  • This paper poses guiding questions to assess the fit and feasibility of integrating complexity-informed practices into development programmes.

Here is an extract from the conclusions of the paper:

“Complexity thinking can offer useful insights on how to approach situations and challenges faced by development practitioners. However, these recommendations rely upon underlying assumptions about relationships, power and flexibility that may not always hold true in practice, where organisations have established internal and external ways of working. By posing guiding questions to assess the fit and feasibility of integrating complexity-informed practices, this paper aims to help development programmes identify which dimensions of complexity are most appropriate and realistic to address; and, how they can work within their operating constraints to take advantage of distributed capacities, joint interpretation of problems and experiential learning.

Classifying which practices are more and less feasible enables decision-making and management styles to be explicit, rather than couched in aspirational rhetoric about sharing and collaboration, learning and adaptation, which is not realistic in practice. If undertaken before programme implementation, it can help identify decision-making and communication mechanisms that can facilitate interaction among dispersed actors, create incentive structures that reward learning as well as delivery, and allocate sufficient time and resources for data collection, analysis and interpretation to enable monitoring to inform decision-making.”

Photo credit:  Unsplash


From Cold Data to Warm Data

Warm Data are contextual and relational information about complex systems. In other words, warm data involve transcontextual information about the interrelationships that integrate a complex system, as well as interwoven complex systems.
Warm Data Lab

I wrote to a colleague yesterday. During a meeting, he had asked a question about a research paper he had written with others. He was rightly worried about making general claims without making explicit the limitations of the work.

His question stayed with me. Yesterday evening, I spoke about it with my wife at the end of our dinner. About six months ago, she participated in one of the Warm Data Lab trainings organised by Nora Bateson, and the points she made during our conversation were inspired by that training.

Every piece of research has limitations. Every piece of research is like a sketch of a tiny part of the complex reality in which we live. As a sketch, it can only give a glimpse into the reality and systems that we observe and live in. These sketches help draw (or begin to draw) maps of reality, but, as Nora Bateson says, maps are not the territory.

Research findings are just data and information; they are not knowledge. As such, they are cold data. To gain meaning and influence systems, they need to become warm data. To become warm data, they need to come alive as part of conversations with the people who are affected by the problems the research has identified. They have to become almost like the fuel of dialogues and of new links and relationships between individuals. Warm data help surface the hidden knowledge of the people affected by the problems we research. Warm data can change systems because they become part of the web of co-development of those systems and part of our collective ability to perceive, discuss, and try to address complex issues.

The journey continues.

Photo credit:  Sveta Fedarava on Unsplash


Three questions about MEL for projects dealing with complexity

Last week I wrote to the Peregrine Discussion Group on Better Evaluation with a couple of questions on M&E, or MEL, of initiatives that try to address complex social and development problems. I have received many very interesting replies, which I am posting below together with my questions.

My questions

1) What do you think are the key principles that we need to have in front of us when designing MEL systems for complex and adaptive (and uncertain) projects and programmes?

2) Do we need a different MEL system and tools?

3) What do you suggest I should read on this topic and question?

The answers

Russell Gasser

1) What do you think are the key principles that we need to have in front of us when designing MEL systems for complex and adaptive (and uncertain) projects and programmes?

Complexity is not cleanly separated from and unrelated to the causal and chaotic domains. The boundaries are not clear lines much of the time, and trying to put clean separators between what is complex and what is not is itself a causal/linear approach to a complex issue. Trying to define a fixed set of MEL principles for the complex domain is a causal/linear approach that is necessarily doomed to failure, because (a) it assumes that it is possible to predict what will be needed, and (b) it assumes that a finite range of methods can cover every case. Both of these assumptions are false in the complex domain. The failure of these assumptions is one way to define what complexity actually is, and to separate it from “complicated”, “difficult to understand”, and “nothing like what I have experienced before”. Neither of the first two is complex (other than in the everyday conversational use of “complex”), and the third may be complex but actually describes the previous experience of the person, not the complexity of the situation.

Causality is a one-way street in the complex domain: it can only be seen with hindsight. If a situation is truly “complex”, then measurement of predicted results can only really measure the degree of luck or skill in guessing the outcome, not the value or success of the intervention.

Perhaps the most important question to answer for the real world is: how can we create the link from funders who are locked into the linear/causal model to funding and using an evaluation that is based on complexity? Until complex-domain evaluation (or MEL) is funded, and until there is the possibility that organizations change what they are doing as a result of evaluation in the complex domain, this remains academic research that lacks clear application. Nothing wrong with academic research, but this forum isn’t likely to be the best place to address it.

2) Do we need a different MEL system and tools?

MEL is not a single process or system. The toolkit is already huge. Some of the processes and tools are suitable for use in the complex domain; some are not. Answering this question has the same issues as mentioned above: (a) it assumes that it is possible to predict what will be needed, and (b) it assumes that a finite range of methods can cover every case.

A better approach might be to ask: “What do we need to know and understand in order to decide which tools and methods are likely to be useful in the specific complex intervention we are looking at?” The specific nature of the complex domain means that we can only really consider one intervention at a time. A really important feature of complexity is that solutions are NOT transferable. The MEL that works on one complex intervention is unlikely to be useful without changes for other complex interventions (that is the nature of complexity), and we can be certain that it will not be useful for all possible interventions in the complex domain.

Monitoring is about the measurement of what is happening. There may be no reason to significantly change this from previous practice, beyond being far more obsessive about knowing what is going on right now and being able to respond quickly to build on success and damp down failure. If monitoring is not providing useful information in the causal domain, then that won’t change when moving to the complex domain.

Evaluation in the complex domain can’t ask “did we achieve what we expected to achieve?”, as that is largely meaningless. But evaluation can still be any one of a range of approaches, and parts of the OECD DAC criteria are still useful (e.g. “unforeseen consequences” under impacts). Evaluation in the complex domain can usefully ask questions about what changes resulted from what aspects of the intervention, what was cost-effective and what was a waste of resources, how the intervention approach could be improved, etc.
Techniques like Outcome Harvesting and various story-telling techniques that are capable of identifying what really made a difference, what changed as a result of an intervention, and for whom, may be able to capture outcomes and impacts if scope and focus are correctly set.

Learning is still much the same. It involves many of the same core skills: people appropriating new information, internalizing analysis and consequences, reinforcement to embed the knowledge, and a willingness to change beliefs and practices in the light of new information and experience. If a causal/linear system is not supporting learning and producing new insights and approaches within an organization, then transferring it to the complex domain won’t change the lack of learning. Selecting who might learn what, and for what purpose, needs even more detailed attention as complexity increases. Politics and social interaction are examples of interactions that are mostly in the complex domain; insight into “learning” in political and social contexts may provide useful guidance.

3) What do you suggest I should read on this topic and question?

Dave Snowden’s Cynefin Framework publications and videos, and the Cognitive Edge website. If you are interested in the theory behind the Cynefin view of complexity, then Max Boisot (a tough read…). Also:

  • Jonny Morell’s blog
  • John Mayne, specifically on Attribution by Contribution
  • The Outcome Harvesting website and blogs
  • The Better Evaluation website on “What is evaluation”

Any web search engine will turn up plenty of other content on complexity as you search for these, and will probably lead you to other useful sources. If you find anything really useful, then maybe you could share it on Peregrine?

Silva Ferretti 

Current RBM approaches are not fit for purpose. They allow very little space to understand the meaning of results within context and the processes through which they are achieved. Unfortunately, changing approach requires rewiring our way of thinking: from an idea of linear change (if I do A, then B happens) and control (I plan, I achieve) towards an idea of complexity (many diverse forces drive change, and they can play out differently) and adaptiveness (we have a purpose, we navigate options).

It also requires us to put our principles first, and to understand and integrate different worldviews. 

As some students in a training course told me: “we had really to stretch and twist our thinking. we feel we, ourselves, changed”.

If done properly, principled M&E that explores complex and adaptive systems is not only about setting up a different way to collect or process data.

It is about asking the whole organization to think and work differently in relation to change. To decolonize our thinking. 

And this is the real challenge. 

Bruce Boyes

Hi Arnaldo,

Following on from Russell’s good advice, I would suggest exploring iterative impact-oriented monitoring, as recommended in one of the Overseas Development Institute (ODI) complexity publications, see

Because of the uncertainty and unpredictability that occurs in complex environments, continuous and iterative M&E and the related adaptive management readjustment of actions aimed at achieving the original impacts is much more effective than hoping for the best and then doing MEL at the end. For an example of where I’ve successfully used this iterative MEL approach see

For an analysis of RBM in the context of complexity see Box 2 in

Thomas Winderl

I agree: RBM hasn’t turned out to be the revolution in adaptive management that it was supposed to be. But my feeling is: it’s not the idea that is flawed, but the implementation. In most cases, RBM turned into a fill-out-the-boxes, “do the same but give it a different name”, increasingly bureaucratic exercise.

That’s a shame. I think there IS value to the basic idea of RBM. My proposal to ‘rescue’ RBM: Follow five simple rules. These are:

1. A relentless focus on outcomes, outcomes, and outcomes
2. Keep adapting what you do based on what you learn from outcome monitoring
3. Involve stakeholders at every step to create quick feedback loops
4. Budget for outcomes and outputs, not for activities
5. And overall: keep RBM simple, but not simplistic

I wrote a longer post about this a few months ago. If you want to see a more detailed argument, check out

John Hoven

Bruce, have you investigated qualitative methods for iterative, evidence-based investigation of cause and effect in a one-of-a-kind situation? The dominant method, process tracing, is used almost exclusively for backward-looking causal inference. However, it is easily adapted to forward-looking design and implementation of a context-specific development / peacebuilding project. Iterative updating of the causal theory proceeds at the pace of learning (every week or two, not every 6 or 12 months). See

Alix Tiernan

Hi Arnaldo

This is one of our favourite questions 😊.

Here are some practical thoughts on your three questions:

1) Key principles

a) Design the system around processes that repeat regularly and often, and expect to review project documentation at least twice a year, allowing for new strategies, approaches, and activities to creep in, and other originally planned ones to fall out – that’s the essence of adaptation

b) Try to avoid using targets if you possibly can – or if you can’t, then try to use the targets as a comparison to where you are at but NOT as a performance management tool

c) Find creative ways of triangulating and validating your data to ensure that you are taking account of all the different and changing variables and voices (ie. complement surveys with research studies, focus group discussions in communities with meetings with ministers, etc.)

2) Different MEL system and tools

a) You can adapt results frameworks to an adaptive approach, but you need to identify some very good qualitative data collection mechanisms that allow you to take into account the effect of the complex context on your project/programme (we have been using Outcome Harvesting for that)

b) Make sure you don’t use targets as a performance management tool, because not hitting your target could be the best possible indicator that you are being adaptive!

c) We like using theories of change instead of logframes or results frameworks, because they don’t have to be linear and don’t prescribe specific outcomes – whatever tool you use, make sure you have a way of picking up unexpected results of the work that relate to the expected outcomes, and that you analyse why they happened. Ideally that analysis would then lead to adaptation of further implementation, in an iterative manner.

3) What to read

I will just point you to this document, which we co-authored, but there are many more from ODI exploring adaptive programming if you google it (it’s sometimes called adaptive programming, sometimes adaptive management).

Hope these (rather hands on) thoughts are helpful.

Bob Williams

I’m sorry if I appear to be trolling everyone on this list. Maybe I’ve been around too long and have seen some of the histories and backstories of the ideas being discussed here. My understanding is that adaptive management was developed and adopted because of the widespread failure of RBM, rather than being part of the RBM story.

RBM has been criticised as a really bad idea, both in theory and in practice, at least since the mid-1990s. I recall a hugely negative evaluation of the use of RBM in the UN agencies written at least 15 years ago. It concluded that RBM was a very bad idea because it wasn’t sufficiently adaptive for the kinds of circumstances in which UN agencies work. The formal response from the UN was that RBM’s problems lay in implementation, not in the basic concept, without giving a single piece of evidence. Hence my dismay to find myself working last year with a very large UN agency that was just introducing RBM at enormous cost.

None of which necessarily undermines the advice given by Thomas.  Just that a bit of history (or at least my version of it) might help a little.

Leslie Fox

… and yet, here we are, with most of the major multi- and bilateral donor agencies, not to mention a good number of international and local CSOs – those who pay our consultant fees – very much committed to an RBM approach. I’ve worked with most of these organizations, and while evaluations have required an RBM approach (achieving results vis-à-vis measuring change in indicators), none of them has been limited to a single methodology to arrive at a full picture of whether change has taken or is taking place, or why. I believe it’s called mixed methods.

Frank Page

Hi Arnaldo,

I am going to take a bit of a different tack than others who have answered this question more from the technical side. I am going to look at the institutional side. 

The simple answer I would give is to turn MEL systems into MIS – because, ultimately, M&E data/information is management information. 

For organizations to adapt quickly and effectively in complex environments, a number of organizational practices are required. These include (the list is not complete) combining authority and responsibility in each position, as far down the chain of command as possible (in other words, and I like this term, every position has the agency to do its job and improve its performance), and then giving people access to the information they need to make the best decisions.

In other words, all staff from field workers, to project managers, to program managers, to VP’s and CEOs should be monitoring and evaluating performance in relation to the vision and mission and making adjustments themselves – as low down the ladder as possible. The M&E function of analysis (at least) needs to be integrated in their jobs, not in a separate department. In addition, doing this also moves much of the learning function into the chain of command, and the parts of the learning function that don’t go into the chain of command could very well go under R&D.

The current practice of separating programme and MEL functions introduces many coordination problems, some of them serious, that are difficult to overcome.

Thus, when M&E becomes MIS, it focuses on collecting original data, pulling data from the organization and its stakeholders, pushing data and information to where they are needed, and allowing those in the chain of command to pull the data and information they need. Then, those who are responsible for actually achieving goals in a complex environment have the information to make changes and adapt quickly.

Photo credit: Tim Johnson on Unsplash


Solving problems versus changing systems

I went for a run last week and listened to an interview with Nora Bateson on the New Books in Systems and Cybernetics podcast.

Nora Bateson is the President of the International Bateson Institute, and her web bio mentions that “she is an award-winning filmmaker, writer and educator based in Sweden. Her work asks the question: How can we improve our perception of the complexity we live within, so we may improve our interaction with the world?”

The interview was about the book she published in 2016, Small Arcs of Larger Circles, a collection of essays, reflections, and poems in which she writes about systems and ecosystems and applies her own insights, and those of her team at the International Bateson Institute, to education, organisations, complexity, academia, and the way society organises itself.

Several points from the interview struck me. One is that solving problems requires seeing the system in which those problems emerge. This goes beyond understanding the context and circumstances that produce a certain problem. The solution to a problem requires looking at the contexts and systems that contribute to it.

Here is an example. Say I want to get or stay fit and decide to go for a run every second day. That is good for my body (system A). However, if the neighbourhood where I live (system B) has no green areas, just asphalt and cement, and the air is polluted by the factory on the outskirts of the city, that will affect how fit I can become. And if I struggle to find a job due to a recession, or maybe to robots taking over my job (system C), I might put running and getting fit on the back burner. So, in this example there are three interacting systems of which my running will be part. In reality, the systems would be many more.

I need to change my behaviour to get fit and stay fit through running. But the systems around me, or better, the systems in which I exist, determine how fit I can be. They can help me become fitter.

Photo credit: Daniel Fazio on Unsplash