
War crashes higher education systems – countries like Somalia need a system-wide reboot

[I am reposting here the blogpost I wrote for ODI, which was published in June 2020 on ODI Insight]

War and conflict have a systemic impact on higher education. Universities are left damaged by attacks or occupation by armed groups. Staff and students are killed or face forced displacement, while institutions are weakened as post-conflict financial resources are allocated to basic services first. Although this is slowly changing, higher education systems are often not a priority during post-conflict recovery.

My recent report on research cooperation in Somalia finds that the system-wide damage to higher education caused by conflict can only be addressed through systems thinking and collective effort. This is even more relevant during the Covid-19 pandemic, which is testing governments’ ability to respond through a whole-of-government approach. Lessons from Somalia could also be applied to other conflict-affected countries looking to rebuild their higher education systems.

The politics of higher education in Somalia

Higher education was in a state of near collapse following the Somali Civil War (1988-2004), and it is still affected by insecurity in parts of the country. Yet, unusually for a conflict-affected country, the number of universities in Somalia has boomed since the mid-2000s.

The exact number of universities in the country is unknown. Seventy-six institutions are officially registered with the Ministry of Education, Culture and Higher Education in Mogadishu, while other sources point to more than 100. Our own study compiles an unofficial list of 126 private universities, far outstripping Somalia’s larger neighbours: Kenya has 58 and Ethiopia 36.

The fact that higher education is not regulated in Somalia is key to explaining this boom. The Higher Education Act designed to remedy this has been stuck in Parliament for over two years. This means that local state governments overseeing higher education institutions do so without an overarching legal framework.

Our study finds that Somaliland is the only state-level government to have passed a law spelling out the requirements for university registration. This near-total absence of regulation means that it is relatively easy to open a ‘university’, while national and local authorities have limited powers and capacity to monitor and enforce quality standards in teaching and research.

This has all taken place while the security situation remains very unstable in parts of the country. New parliamentary elections (the first in 50 years) were due to take place in 2020, but may be postponed due to the unfolding Covid-19 pandemic.

Only two out of Somalia’s six federal states (Somaliland and Puntland) have carried on with ambitious plans for higher education with support from development partners and INGOs. The system continues to face multiple, complex challenges.

The challenges facing Somalia’s higher education system

As Somalia’s higher education system re-emerged, some (mainly private) universities began investing in research and research training. These institutions face several challenges, including very few or no staff with PhDs (our study finds just 9% of academic staff hold PhDs), and research publications that fail to contribute to career progression.

Female researchers face heightened barriers linked to insecurity on campuses, while culturally a career in research is often deemed inappropriate for women.

Government agencies also fail to prioritise policy research, and as a result almost all research is driven by development partners.

This is also reflected in the main challenge facing the system – financial resources. Money is scarce and the sector is heavily dependent on international donors, who contributed 45% of the total federal budget in 2019.

‘Wicked hard’ problems and ways forward

The problems detailed above and the many others facing Somalia’s higher education system are considered ‘wicked hard’ problems. This means they are logistically complex, interlinked, politically contentious, and most have no known solution.

Single-point solutions have limited impact on a system dogged by wicked hard problems. They provide temporary, isolated answers in one area of the system, which can cause problems to pop up or persist elsewhere. Wicked problems require an approach that tests innovative solutions in parallel across different parts of the system and at different administrative levels.

NGOs like the Somali-Swedish Researchers’ Association organise student exchange programmes, research collaboration and mentoring support. These are worthwhile initiatives that have been shown to improve the research skills of the students involved. However, they address only a subset of the problems in the higher education system.

Such initiatives could be complemented by other experiments on interlinked problems elsewhere in the system. For example:

  • Strengthening the capabilities of state-level Higher Education Commissions;
  • Addressing regulatory inconsistencies on university accreditation and teaching quality;
  • At the same time, overcoming political blockages to the Higher Education Act and encouraging policy-makers to use research to inform decisions.

While no single development partner has the resources to address all of the system’s many problems, development partners and the federal and state governments could discuss ways to shift from single projects to a portfolio of connected innovations and experiments that learn from each other and that together contribute to transforming Somalia’s higher education system.

Just as war and conflict have a systemic impact on higher education, the effort to rebuild post-conflict higher education systems in Somalia (and elsewhere) requires the system-wide reboot that students and academics deserve.

Photo credit: AU/UN IST Photo / Ilyas A. Abukar


The brave new world of evaluating innovation. In conversation with Petri Uusikylä

In a recent blog, I discussed with Jordi del Bas what it means to evaluate innovation. My main takeaway from the conversation with Jordi was that when evaluating innovation it is important “to generate evidence of whether innovative solutions work, how and why, so that they can go beyond a pilot and be scaled up to maximize their positive impact”. The OECD Development Assistance Committee (DAC) evaluation criteria are not well suited to initiatives that test innovation, as they fit initiatives built on a reasonable degree of certainty, predictability and control.

Talking with Jordi helped me to better understand the gap between the established OECD DAC evaluation criteria of efficiency, effectiveness and impact, and criteria better suited to evaluating the experimentation of innovative solutions to social problems: feasibility, desirability, viability, acceptability, usability and scalability. When evaluating innovation, it is first and foremost important to assess the learning generated by designing, testing and adjusting the innovation, rather than whether the innovation achieved its planned objectives and goals.

I wanted to continue exploring this gap. What interests me is that this gap is not simply about evaluative tools and methods, but it is almost philosophical; it is about the principles, values and viewpoints, as noted by Gunnar Myrdal, that shape our idea of development.

Exploring complexity and innovation, Petri Uusikylä

I met Petri Uusikylä last winter at a presentation on monitoring, evaluation and learning, organized by Fingo in Helsinki. My Overseas Development Institute colleague Tiina Pasanen was presenting her work on monitoring and evaluating adaptive programmes. During the Q&A session I learned that Petri had worked in the area of evaluation for several years and that some years ago he took a strong interest in complexity and systems thinking. I reached out to him to learn more about how his interest in complexity had evolved and how it influences his view of evaluating development and social change.

Arnaldo Pellini – Tell me about your background and your evaluation work in international development

Petri Uusikylä – I have been doing evaluations, performance audits and capacity building in the international development context for 15 years. I have a background in political science and experience in teaching political theory, research methods and European policymaking at the University of Helsinki in the late 1990s. After that I worked at the Ministry of Finance as a senior advisor, and for the last 10 years I have been working as a public policy consultant, EU Twinning advisor and government public policy advisor. I was one of the founders of the Finnish Evaluation Society in 1999, and I am one of its board members. In the 2000s I worked on traditional policy and programme evaluations for about 10 years. In most cases, I worked with the OECD DAC evaluation criteria which I know very well.

AP – Tell me about your interest in complexity and systems thinking, and your research

PU – I have been interested in social network analysis for a long time. It started when I worked at the Political Science Department at the University of Helsinki between 1991 and 1997. That was during the early days of the Internet, before Facebook and other social networking platforms emerged. It was difficult at that time to analyse relational data: there was very little software for it, and what existed was not very good. In the team I was involved with, we adopted a trial and error approach to social network analysis, which led to some interesting results that we described in a book we published in 1999 called The Network Society (Verkostoyhteiskunta: käytännön johdatus verkostoanalyysiin).
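As a concrete illustration of what this kind of relational-data analysis involves, here is a minimal sketch using today’s tooling, the Python networkx library, which did not exist in that form in the 1990s; the actors and collaboration ties are invented for illustration.

```python
# A minimal social network analysis sketch with networkx.
# The actors and collaboration ties below are purely illustrative.
import networkx as nx

ties = [
    ("ministry", "university"),
    ("university", "ngo"),
    ("ngo", "donor"),
    ("donor", "ministry"),
    ("university", "donor"),
]
G = nx.Graph(ties)

# Degree centrality: how connected each actor is.
print(nx.degree_centrality(G))

# Betweenness centrality: how often an actor lies on the shortest
# paths between other actors (a measure of brokerage).
print(nx.betweenness_centrality(G))
```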

I worked on traditional policy and programme evaluation for about 10 years after that. I learned and applied the OECD DAC evaluation criteria in policy, programme and project evaluations. They guided my work: the questions I asked and the evidence I gathered. They always felt a bit too mechanical for the kind of initiatives I was evaluating, but they were widely used and I could not think of alternatives.

In the autumn of 2014, I remember reading Ben Ramalingam’s Aid at the Edge of Chaos. I was taken aback. It was by far the most brilliant book on international development I had ever read. It felt to me that Ben Ramalingam had put into words many of my thoughts and ideas about development and the evaluation of development initiatives.

Aid at the Edge of Chaos introduced me to the notion that cutting-edge ideas from complexity science can be applied to the social, economic and political issues of development, and that those ideas can also contribute to transforming the way international development works.

Soon after reading Ramalingam’s book, I applied some of its ideas to two evaluations: one for the Finnish Red Cross and one for the Innovation Partnership Program between Finland and Vietnam. The terms of reference for these evaluations required me and the team to follow the OECD DAC criteria, which we did. But we also managed to persuade the clients (the Finnish Red Cross and the Ministry of Foreign Affairs) that we could complement the OECD criteria by testing new systems evaluation methods. In the end it all worked out well and helped me to continue exploring the ideas of complexity science and how to apply them to development and evaluation work.

AP – What do you think evaluations capture and what do they miss in the way they are normally carried out?

PU – There has been enormous growth in the institutionalization of evaluation in recent decades. There is a new comparative study of the institutionalization of evaluation in Europe, and I was honoured to be one of the authors of its Finnish chapter. Finland compares well in most areas: according to the study, it is above average in the institutionalization of evaluation in its political and social systems, but lacks procedures to utilise evaluation findings in political decision-making.

When it comes to evaluation methodology, current evaluation approaches are rather static, applying mechanistic and linear causal logic and often relying on rigid, a priori defined evaluation criteria and methods. Thus, they are not suitable for understanding reasonably complicated or complex policy phenomena. My doctoral thesis last year was on a systems evaluation approach that relies heavily on systems thinking and complexity theories and utilizes systemic evaluation designs derived from these theories. Developmental evaluation, systemic models, social network analysis and the theory of change are examples of this approach. I also suggested that evaluators should adopt a more active role as knowledge brokers and policy interpreters between governments and citizens. This would make it easier for citizens to understand the complex regulatory and policymaking environment, and thus support the development of an open and democratic society.

AP – How do you see the public sector, of which international development is an element, working with complexity and systems thinking?

PU – In the last 25 years, governance systems have changed. In my opinion, in western democracies hierarchical top-down governance structures are being slowly replaced by more bottom-up, collaborative, networked governance systems and capabilities.

Having said this, I am unsure whether governance systems have transformed sufficiently to tackle the wicked problems that are part of our interconnected societies, and whether the evaluation methods and approaches used to assess the impact of public policies have evolved to meet the needs of these new systems of governance.

Governance and leadership systems have to be agile to address and adapt to complex changes. Old management models based on best practices will not be sufficient for success in public sector management. As my colleagues Petri Virtanen and Jari Kaivo-oja have stated: “These changes are likely to be so challenging and pervasive that in some countries they go beyond the existing governance capabilities and will result in state failure.”

In the near future, there will be a greater need to strengthen and increase linkages and collaboration between state and non-state actors to address complex challenges such as inequality, climate change, and the pandemic we are experiencing today.

In a book I co-edited with Dr. Hanna Lehtimäki and Dr. Smedlund, entitled Society as an Interaction Space, we present some new perspectives on these linkages, which we call the relational dynamics between governments, companies, and citizens. In doing so, we merge insights from public service science, political science, institutional logics and value co-creation.

I am currently working with MDI Public, the University of Vaasa and Demos Helsinki on a project commissioned by the Prime Minister’s Office and some line ministries to develop a new systems governance model for Finland. As part of this study, we are benchmarking developments in systems thinking in the United Kingdom, Denmark, Canada and Singapore. The study is due to be completed in spring 2021.

AP – Tell me about your work with the University of Vaasa and your plans to have a research centre on complexity

PU – I started as a research director at the Complexity Research Group at the University of Vaasa this April. The university has built a strong research profile in the field of governance and complexity thinking over the past 10 years. The newly established Complexity Research Group (Kompleksisuustutkimuksen ryhmä) will start using complexity science in various domains to identify, map and better understand wicked problems in areas such as climate change, energy, public administration, anticipatory governance and so on.

With the help of complexity lenses and through cross-fertilization of findings from various disciplines, the research group is committed to exploring new niches within systems leadership and beyond. I am honoured to join the great team of complexity researchers. My aim as a research director is to bring a strong international and global context to the complexity research carried out at the University of Vaasa in the future.

Petri, thank you very much.

If you republish please add this text: This article is republished from Knowledge Counts, a blog by Arnaldo Pellini under a Creative Commons license. You can read the original article here

Photo credit: Mika Korhonen on Unsplash


The 4IR Is Here. Do We Need to Design Development Initiatives Differently?

[I am sharing here a blogpost I wrote with Vanesa Weyrauch which was published in March 2018 on Helvetas Mosaic]

It is easy to get carried away with the promises of technology when we read about the Fourth Industrial Revolution, or 4IR.

In an earlier Mosaic article, Knowledge Systems and Policy Innovation in the 4IR, we wrote that, according to the World Economic Forum there is a good chance that by 2025 we will have: 1 trillion sensors connected to the internet; the first 3D printed car in production; the first government replacing the census with big-data sources and analytics; and artificial intelligence performing 30 percent of the corporate audits in the world. Jamie Susskind describes the mind-boggling possibilities offered by nanotechnology, with nanorobots able to swim through our bodies delivering targeted drugs, or the staggering increase in the number of people connected to the Internet, from 400 million in 2000 to an expected 4.6 billion by 2021. He writes that we are entering “a digital lifeworld characterized by machines that are equal or superior to humans in a range of tasks and activities; technology that is embedded in the physical environment in which we live; and digital technology that more and more records human activities as data and processes it through digital systems”.

There are risks of course, for example David Wallace-Wells writes: “Five years ago, hardly anyone outside the darkest corners of the internet had even heard of bitcoin; today, mining it consumes more electricity than is generated by all the world’s solar panels combined, which means that in just a few years we’ve assembled a program to wipe out the gains of several long, hard generations of green energy innovation.” Another example is OpenAI, the Elon Musk-backed non-profit set up to responsibly push the boundaries of what is possible with artificial intelligence. It has developed an AI system which generates coherent paragraphs of literary or news text (go to 19’45” of the podcast) which is so sophisticated that OpenAI has decided not to release it fully to the public because of the real risk that in the wrong hands it could generate very plausible fake news, spam or reviews. 

Reading about technologies of the future gives the impression that the technological changes we are witnessing have a life of their own. They promise a bright future of efficient production and an economic growth path that is at last within the natural limits of our planet. As Susskind argues, however, these technologies are not exogenous forces over which we have no control. There are people behind these technologies, and governments will need to strengthen leadership and develop human capital so that they are able to govern the techno-digital transformation in a way that leaves no one behind.

The challenge for results-based management approaches

In our discussion paper, State Capability and Policymaking in the Fourth Industrial Revolution, we argue that the challenge for governments as we approach the 4IR is to keep up with the pace of change and understand the likely social and economic impact of technological innovation in order to regulate it. This represents a huge challenge for middle- and low-income countries, since policymakers are expected to resolve interdependent policy challenges that involve a high degree of uncertainty while facing significant capability and institutional barriers, such as limited budgets, decision-making processes organized in silos, weak IT and knowledge management structures, and low investment in evidence generation.

At the same time, the technological changes that are emerging will change the way governments work and policies are decided. Processes that investigate specific problems, design the necessary policies and regulatory frameworks, and deploy them through top-down systems will struggle in this new, increasingly disruptive, technology-driven context. In the near future, government institutions will have to adopt policies that govern techno-digital transformation by testing technologies and experimenting to ensure they benefit their citizens as a whole, while trying to avoid intensifying inequalities. Agile forms of government will be needed to help regulators and legislators continuously adapt to a fast-changing social and economic environment without stifling innovation. This is likely to challenge the role of central government agencies, as local institutions may be quicker in adopting and using new technologies and interacting with citizens. Policymaking and decision-making are likely to become more decentralized and concentrated in new areas called mega-regions, which combine cities and metro areas and increasingly power today’s world economy.

This poses a challenge to the way today’s development initiatives, particularly around governance reforms and public policy, are designed, planned and implemented. Pablo Yanguas, in his Why We Lie About Aid, writes that everyone involved in public policy knows that definitive results are rare, and yet the vast majority of development initiatives are designed to follow a linear results-based logic of input-output-outcome-impact. Most of the evaluations commissioned today by development partners and implementing organizations are asked to verify this results chain.

If we accept that governance systems will fundamentally change as we enter the 4IR and that governments have to start preparing today, then we may need to rethink the way we design and implement development initiatives. This is particularly true for initiatives that aim to support the development of leadership and human capital capabilities of future generations of civil servants, policymakers and researchers to drive these processes.

What are the implications for development programming?

Program design: The emerging literature on adaptive development provides some interesting ideas about how programs can be more open to the uncertainty of outcomes and results, and how to build a more experimental approach into governance initiatives, which will increasingly have to deal with such uncertainty. The suggestions are to invest time and resources in developing relationships with local partners and discovering common interests around problems; digital technologies and platforms can help with that. This helps focus efforts on solving problems that are owned, debated and defined by local stakeholders and partners, rather than predefined. In some cases it can mean identifying solutions (also called positive deviances) that partners have been able to develop despite the bureaucratic constraints they face daily, and documenting and supporting those initiatives.

Investing in acquiring a good knowledge of the political economy of the space and context in which the development initiative operates can help to design policy solutions that are politically feasible and not just technically sound. To do so, it is important to work with local innovation leaders committed to testing new governance solutions. These are individuals whose leadership is not a side effect of their position in a formal hierarchy (e.g. their job title), but rather of the respect, appreciation and trust they receive from their peers, and who are lifelong learners, a fundamental attribute given the pace and depth of change in the 4IR.

Program implementation: Test new forms of knowledge co-production to inform governance innovation and policy experiments through greater access to and use of digital technologies to link a wider range of stakeholders, as we suggest through the knowledge system pentagram in our paper. This means linking, for example, researchers from universities with policy research organizations, or the professional knowledge of technical experts and civil servants with citizen knowledge. With an experimental approach, the need to blend different forms of knowledge and apply a more interdisciplinary approach to knowledge generation becomes more prominent. So does the need to develop physical and digital spaces for collaboration that enable testing of solutions, learning, and building on what works while dropping solutions that do not. Program funders can support this adaptive process by allowing space for experimentation, accepting that some experiments may fail, and investing in learning to help decide which solutions to support.

Program teams: An experimental and politically informed program implementation approach requires a program team with the capacity and skills to do this. This involves either finding individuals with experience in adaptive management, demonstrated capacity to understand and collaborate with local leadership, and the ability to support experimental approaches, or investing the necessary financial resources and time in building skills and knowledge within the team and providing them with the space required to maximize the experience they bring with them.

Impact, replication and scaling up: Every development program is under pressure to replicate and scale up sustainable approaches and solutions. But what do sustainability and scaling up mean for a program adopting adaptive and experimental approaches to testing solutions? Replication and scaling up can refer to the uptake of adaptive and experimental principles by government partners to explore the opportunities and challenges that new technologies bring to governance, social and economic systems. It can be useful to explore the opportunities provided by the principles of innovation diffusion, which hold that innovation emerges through initiatives designed and implemented by a small number of innovators; the tested solutions are then gradually taken up by a group of early adopters, followed by a larger group of adopters. In this context, innovation leaders can be catalysts for change from within a policy community. Scaling up can be accompanied by investment in documenting the successes of the partners more than those of the project, even though the two may be interlinked. In our opinion, it is a subtle but important difference.
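The uptake curve described here, from a few innovators to early adopters to the wider majority, has a well-known quantitative formalisation in the classic Bass diffusion model, which the article does not itself invoke; the sketch below, with purely illustrative parameter values, shows how slow initial adoption can accelerate as earlier adopters influence the rest.

```python
# A discrete-time sketch of the Bass diffusion model (illustrative only):
# adoption starts with a few innovators and accelerates as earlier
# adopters influence the remaining population.
m = 1000    # size of the potential adopter population (assumed)
p = 0.03    # coefficient of innovation: adoption independent of peers
q = 0.38    # coefficient of imitation: adoption driven by earlier adopters

adopters = 0.0
for year in range(1, 11):
    new = (p + q * adopters / m) * (m - adopters)  # new adopters this year
    adopters += new
    print(f"year {year:2d}: {new:6.1f} new, {adopters:7.1f} cumulative")
```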

Conclusion

Governance and policymaking processes will be different in the 4IR. They will rely more than today on digital technologies and on the co-production of new forms of knowledge, in areas that bridge innovation, research, higher education, and local and professional knowledge.

These changes will not emerge overnight; they will evolve incrementally. Governments have to start building today the human and governance capabilities that this imminent future will require, to take advantage of the changes that are emerging and to minimize possible negative outcomes. Similarly, projects and programs that collaborate with national governments to support these change processes will need to evolve their approaches to design, implementation, and the evaluation of results. In this article we have provided some initial ideas on how to do so.


Schools, technology, and the COVID-19 response in Italy: a weak system but a resilient response

At the end of March, Italian novelist Francesca Melandri wrote an opinion piece for The Guardian. She started her article with: “I am writing to you from Italy, which means I am writing from your future. We are now where you will be in a few days.”

The following day, 29 March, Italy overtook China’s cumulative COVID-19 case count (86,498 to China’s 81,946) and became the country with the highest number of infections in the world.

I was born in Cremona, a city of about 72,000 people in Southern Lombardia. Cremona and its province were among the early COVID-19 hotspots in Northern Italy. The first Coronavirus patient was hospitalised on 22 February. On the same day, the mayor ordered the closure of all schools in the municipality. On 8 March, the whole of Lombardia was put into lockdown, followed two days later by the whole country (Prime Minister’s Decree).

The brief timeline shows that in less than three weeks Italy moved from having some confirmed cases of COVID-19, to hospitals in Lombardia rapidly becoming overwhelmed, to a countrywide lockdown and the sudden closure of schools, with approximately 4.4 million students in elementary and lower secondary and 2.6 million in upper secondary shifting to distance learning.

I am writing this in mid-May, and Italy has started its Fase Due (Phase Two), which involves a gradual easing of the harshest lockdown measures that have been in place for two months. Schools remain closed, but there is talk of end-of-year exams taking place in mid-June. I was interested to learn how distance learning has worked out and how technology has supported it. I reviewed mainly Italian-language policy documents and news, and also reached out to some friends who live in Cremona and work as teachers there.

COVID-19 hit a weak education system

The education system was not ready when the virus hit. It was as if the pandemic shot Italian schools into the 21st century and e-learning while they were still struggling with considerable institutional, governance and fiscal challenges. The OECD PISA report of 2018 found that 15-year-old students in Italy scored, on average, slightly below the OECD average in reading and science. Italy spends far less on education than almost every other western country: spending per student (from primary school to university) equates to $8,966 per annum, compared to $11,502 in Sweden. OECD data show that in 2015 Italy’s investment in education was equal to 3.6 percent of GDP, against an OECD average of 5 percent.

Italy has the largest share of teachers over the age of 50 (59 percent) and the lowest share of teachers aged 25 to 34 years across OECD countries. Some 68 percent of teachers report that improving teacher salaries should be a high spending priority, but resources are scarce.

Italy ranks 24 out of 28 countries on the European Union Digital Economy and Society Index. Three out of ten Italians are not regular internet users, and more than half of the population lacks basic digital skills.

Cremona. Photo by Luca Bravo on Unsplash

Teachers and students were the engine that made distance learning work

The Italian education system is weak. Resources are scarce but teachers have been very resilient during this crisis. The teachers I contacted said they had never had a discussion within their school or with local level education agencies about possible crisis scenarios, such as COVID-19. In other words, there was no plan in place to move quickly into distance learning and the use of technology.

The Ministry of Education (Ministero dell’Istruzione) issued guidelines on school closures and managing distance learning. Some of the guidelines were slightly vague, in particular about end-of-year assessments. The government intervened twice with budget allocations, of 70 million euros in March and a further 80 million euros in April, to help schools acquire computers and tablets for students who had no access to these at home.

On 2 March, the Ministry of Education set up a dedicated page for Didattica a distanza (distance learning), with information and links to free-to-use digital platforms and apps, dashboards where teachers could share their experiences and suggestions, and webinars to provide teachers with suggestions and advice about distance learning.

One of the teachers I contacted said he relied on Google’s G Suite platform, which he was already using, although some of the students struggled at the beginning with the Classroom app. One of the schools took part in the Microsoft Showcase Schools programme and used Microsoft Teams for both teaching and teachers’ meetings. The system required about two weeks of fine-tuning for students to get used to it, but things have worked out quite well, with most students active and engaged during remote classes. Students who did not have a computer or tablet at home received one from their school. These platforms also helped support learners with disabilities: docenti di sostegno (support teachers) communicated with them directly or set up dedicated online groups, complementing their regular teaching.

The feeling is that despite the very high toll this crisis has taken on Italy, the many uncertainties about distance learning and the overall weaknesses of the education system, online teaching and learning has worked out better than expected. It is almost as if the experimentation that teachers and students had to go through has unleashed new ideas and creativity that were hidden before. There is anecdotal evidence of teachers feeling that they are better able to personalise teaching through digital platforms, and young students have helped their older teachers use these new technologies.

As one of the teachers I contacted said: “While it is important to have direct interaction in the classroom, this crisis is teaching us that it is not necessary to be in the classroom for all teaching activities.” Behind the crisis, new ideas and ways of teaching are emerging, no matter how weak the system is.

If you republish please add this text: This article is republished from Knowledge Counts, a blog by Arnaldo Pellini under a Creative Commons license. You can read the original article here

Photo credit: Luca Bravo on Unsplash


How to evaluate innovation? In conversation with Jordi del Bas

Innovation is a confusing buzzword. The Oxford Dictionary defines innovating as “making changes in something established, especially by introducing new methods, ideas, or products”.

Other definitions include ‘turning an idea into a solution that adds value from a customer’s perspective.’ (Nick Skillicorn);  ‘ideas that are novel and useful and that can be applied at scale.’ (David Burkus). The list is long.

These definitions show that there is no single agreed meaning and that what counts as innovation can be very subjective. Is the lack of a clear definition of innovation a problem when it comes to evaluating innovation? What do evaluators of innovation look for when they are called in to do their work?

Innovating through evaluations, Jordi del Bas

I spoke with Jordi del Bas to find out what he thinks about this. Jordi is an evaluation specialist and senior researcher at the EU-Asia Global Business Research Center. Over the years, he has been involved in many evaluations focusing on social/policy innovation.

Arnaldo Pellini – Can you briefly describe your background? 

Jordi del Bas – I am an economist by training. I have specialized in private sector development. Over the last 17 years, I have worked as an independent evaluator for international agencies in several countries throughout Asia, Africa and Latin America. At present, I combine work as an evaluator with research and teaching in business schools and universities. 

AP – Do you have a definition of innovation?

JdB – This might seem a simple opening question but it is not. Really. Innovation is an overused and abused term that means different things depending on who you ask. Over the last decade, there has been an upsurge of articles questioning the word.

I do not have my own definition, but I feel very comfortable with the definition used by the TSI Foundation (The Netherlands): new ideas that work to address pressing unmet needs. In the specific area of international development, I would define innovation as solving pervasive and new problems in new ways. By new ways, I mean adopting approaches that differ from those applied in the past.

Whatever definition you go for, to me there are two critical aspects of innovation. The first is that innovation depends on context, because pervasive or new problems are context specific. The second is that innovation is about valuable solutions, that is, ideas that work in practice, that solve issues we care about, things we value.

AP – What are you looking for when you evaluate innovation processes/systems?

JdB – In innovation, we usually evaluate solutions, supposedly innovative solutions. Evaluation in innovation focuses on finding robust evidence of whether innovative solutions work, how and why, so that they can go beyond a pilot and be scaled up to maximize their positive impact. 

AP – Is there a difference when evaluating innovation along with the traditional criteria of efficiency, effectiveness, impact, etc.?

JdB – Yes, there is a difference. The OECD Development Assistance Committee (DAC) criteria are usually applied to traditional interventions where the focus is on achieving planned goals. Efficiency, effectiveness and impact criteria put the lens on expected effects. In traditional interventions, you do not expect significant deviations from the way you thought things would happen. It is more about gathering evidence about the outcomes achieved. Traditional interventions build on a reasonable degree of certainty, predictability and control.

In contrast, innovation is about trial and error and about iteration. In innovation, failing (not achieving intended objectives) is a source of critical insight, a source of learning. In a traditional intervention, failure tends to be a problem, as the focus is on achieving planned goals within planned resources. At times, when innovating, you also have an idea of how things could work. Still, the focus is more on testing your assumptions. There is an intentional interest in identifying unplanned and unexpected effects, as we know that valuable insights may be found in assumptions that do not hold true. You might fail over and over and then all of a sudden, find something that works. Implementing innovative solutions means a higher degree of uncertainty, lack of predictability, and lack of control. 

Some of the criteria used to evaluate innovative solutions are: feasibility, desirability, viability, acceptability, usability and scalability. You could eventually evaluate if an innovation has been effective or efficient, but you would do it in terms of generated or acquired learning rather than in terms of whether it achieved planned objectives and goals. In my opinion, however, the focus when evaluating innovation is not on evaluation criteria.  The focus is on finding reliable evidence of whether innovative solutions are having an impact, how and why. In this regard, the NESTA Standards of Evidence set out an interesting approach.

Moreover, I think evaluation in innovation is also about sensemaking. By sensemaking I mean making collective sense of the evidence obtained to inform what is next in the process of solving the pressing unmet needs of my earlier definition.

AP – From what you are saying, the evaluation principles and guidelines set out by the OECD DAC, and the evaluation methods that align with them, do not seem to fit well with innovation.

JdB – I think that there is a growing need for methods that allow us to capture the effects of interventions in volatile, uncertain, complex and ambiguous contexts (so-called VUCA). This is the case for innovation, but also for policy changes and reform processes. Evaluation designs that incorporate systems thinking and complexity theory are urgently needed. This is the case with Developmental Evaluation, an approach grounded explicitly in innovation. Michael Quinn Patton, one of its main proponents and developers, promotes developmental evaluation as an approach intended to assist social innovators in developing social change initiatives in complex or uncertain environments. Patton refers to innovations in a broad sense. To him, innovation can take the form of new projects, programs, products, organizational changes, policy reforms and system interventions.

A developmental evaluation can easily be embedded into the design of solutions and experiments. When this happens, developmental evaluation provides real-time or quasi real-time evidence, informing the innovation process throughout the life of a project, an experiment, or a pilot. Developmental evaluations also fit organizational development and learning processes very well.

Just a few days ago Patton wrote about the implications of the coronavirus pandemic on evaluation. One of his points was that all evaluators should now become developmental evaluators. His argument is that developmental evaluation is about evaluation under conditions of complexity, and this is now becoming the natural environment in which we must operate. We cannot model and predict the effects of interventions in complex systems, and traditional evaluation methods and criteria become almost obsolete.

AP – Are there specific skills or tools that evaluators interested in specializing in the evaluation of innovation need to familiarize themselves with, or know well?

JdB – In my view, three skills characterize the evaluation of innovation: facilitation skills, a versatile attitude, and good analytical skills.

Let’s start with facilitation skills. In innovation, you test, iterate, analyse and share results to collectively interpret what works (or not), how and why. Such feedback loops involve bringing people together and enabling sensemaking processes, which is why good facilitation skills are very useful. By a versatile attitude, I mean the capacity to adapt and adjust quickly to emerging findings: evaluating innovation builds on adaptive frameworks and iterative feedback loops, which demand being ready and comfortable with changing course when needed. Good analytical skills are needed for quick turnarounds in data collection, analysis and feedback. When evaluating innovation, data collection, analysis and feedback loops are carried out continuously, and often need to run in parallel alongside the testing of solutions.

Different viewpoints to understand complex realities. Photo by Ivan Bandura on Unsplash

AP – Tell me more about the sensemaking process

JdB – Several scholars and practitioners have written about sensemaking from different perspectives. Karl Weick addresses sensemaking in organizations, and Brenda Dervin in information and communication systems. Klein, Moon & Hoffman define sensemaking in a way that resonates with me: “a motivated, continuous effort to understand connections (which can be among people, places, and events) to anticipate their trajectories and act effectively”.

Sensemaking is always linked to emergence, which in turn is linked to complex settings or to processes that cannot be planned or forecasted.  This is usually the case in innovation, where it is very difficult to predict what will happen. We usually understand why things happen in retrospect, looking back, making sense of what has occurred.

My closest experience with sensemaking within a developmental evaluation has been with UNFPA. We analyzed emerging patterns in the data and made sense of them through several rounds of feedback loops with a wide range of business units in the organization. We used visual mapping techniques and discussed the findings in a participatory process that allowed us to answer questions such as: how does ‘what is going on’ affect what you desire to be as an organization? A question that, to me, reflects the essence of organizational development.

AP – The skills you describe are similar to those required of managers and experts involved in the design and implementation of experiments to test solutions to public policy problems. These skills are an integral part of adaptive programming, something that is slowly being accepted in international development. I have one last question, Jordi. You mentioned at the beginning of our conversation that innovation has become a buzzword. What are the risks associated with this?

JdB – There is a crucial aspect of innovation that is often omitted or downplayed: the assessment of undesired adverse effects of disruptive innovation on society. 

There is a tendency to look at positive impacts for a number of stakeholders, usually the target group of an innovation. We tend to miss questions such as, What is the broader negative unexpected impact of disruptive innovations? For whom are disruptive innovations good or bad in terms of society as a whole?

We are in a world where almost every organization is compelled to innovate in one way or another. If organizations do not innovate, they become second class. Answering these questions is essential if we want innovation to be meaningful and transcend the stigma of a dangerous buzzword that soaks up massive budgets and hides worryingly negative impacts. In this context, it is vital to strengthen the use of impact evaluations in evaluation portfolios for innovation. This would help us answer questions along the lines of, For whose benefit do we innovate? It would allow us to delve deeper into the purpose of innovation.

Thank you very much, Jordi.

If you republish please add this text: This article is republished from Knowledge Counts, a blog by Arnaldo Pellini under a Creative Commons license. You can read the original article here

Photo credit: Dario Valenzuela on Unsplash