I was in Manila last week to attend the practitioners’ forum organised by the Asia Foundation on Adaptive Programming and Monitoring, Evaluation, and Learning.
A really good workshop for sharing experiences about tailoring MEL systems to fit the accountability demands of funders while supporting the adaptation and iteration of programmes. In attendance were large and small development programmes from the region, funders, implementing organisations, advisors, and researchers.
I gave a presentation at the beginning of day one where I shared my personal experience in this area as well as findings from past and ongoing projects implemented by colleagues at ODI such as David Booth, Alina Rocha Menocal, Tiina Pasanen, Anne Buffardi and others.
One point I made was that the natural sequencing in the day-to-day of a programme is actually Monitoring, Learning and, more intermittently, Evaluation. So, MLE rather than MEL.
Jaime Faustino presented the six simple tools used by TAF’s Coalition for Change project, showing that it is possible to design monitoring and learning tools that are both useful and simple. There was an interesting panel on ToCs and their use through an evolving programme, and some of the presentations focused on how programmes design feedback loops to inform the re-design of activities. There was also a very interesting and open discussion about what counts as a contribution to higher-level outcomes for an adaptive programme.
The Asia Foundation is working on a report of the workshop.
Many interesting points that I am still processing. Here are my takeaways:
- I have participated in a few of the adaptive programming and DDD meetings over the last couple of years. I found that this workshop, with its focus on MLE, was a step forward in the discussion. Earlier meetings were more general, which was to be expected as the discussion around adaptive programming was just starting. But now we are beginning to look at specific elements of adaptive programmes.
- Large programmes with large budgets and large teams, tackling wicked problems of governance capability, face specific challenges and opportunities. They cannot change activities or work streams very quickly but, at the same time, they have sufficient budget to develop experimental and adaptive components and MLE systems as part of their design.
- There was a discussion about the percentage of their budget that programmes allocate to MLE. The range was between 3% and 20%, with the average at around 10%. This is probably too low to support adaptive implementation. Without adequate MLE budgets, the risk is that funders will ask programmes hard questions about their contribution to outcomes which they will struggle to answer.
- Large programmes have to set things up quickly, and they often develop ToCs and M&E frameworks as one of their first deliverables without much information about problems, possible solutions, and approach. One way to manage this tension, which was shared by some programmes, is to design MLE systems around Key Evaluation Questions, letting indicators and means of verification emerge as the programme evolves.
- Some programmes shared their work with mobile apps and data analytics (TAF Myanmar / Open Data Labs Jakarta) to collect monitoring data. These are interesting and innovative solutions but, overall, current MLE systems do not seem to take full advantage of digital technologies. Excel spreadsheets are still going strong, as are Word templates, with a lot of staff time spent on both the input and output sides.
- Linked to the previous point, one of the breakout sessions discussed what MLE might look like 10-15 years from now. We imagined AI that will give us new ways to generate scenarios and ToCs. Monitoring data will be collected through videos and audio recordings, and software will automatically code, analyse, and synthesise the information. MLE systems will be better at integrating data analysis and AI analysis of changes in the social and political context in which a programme operates. Science fiction? Maybe not.
I learned a lot and left with some new questions. It is important to remember that adaptive programming is a means to an end, not an end in itself. In the same way, an MLE system is a means to support the adaptation and iteration of a programme and to provide the funder with the information they require. Being adaptive does not mean trying out solutions without a sense of direction.
The question I have is the following: let’s take the Coalition for Change project and some of the policy reforms it has contributed to, such as the property rights reform. The policy change objective has been achieved and is well documented, and it is an important reform. Is that sufficient, or does Coalition for Change (or any similar project) always need to demonstrate a contribution at a higher outcome level, answering the “so what?” or “what does this all add up to?” question that funders ask? I am in two minds about this.