Matthew Levinger is Research Professor of International Affairs at the George Washington University. He directs the National Security Studies Program, an executive education program for senior officials from the U.S. government and its international partners, as well as the Master of International Policy and Practice Program at GW’s Elliott School of International Affairs. Before joining GW, he was Senior Program Officer at the United States Institute of Peace and Founding Director of the Academy for Genocide Prevention at the U.S. Holocaust Memorial Museum.
Complex adaptive systems require tight feedback loops in order to adjust nimbly to new information and changing conditions in their environment. Yet many large institutions—governmental and intergovernmental agencies, nongovernmental organizations (NGOs), and transnational corporations—are still organized as centralized hierarchies operating with industrial-era command-and-control mechanisms, including rigid protocols for Monitoring and Evaluation (M & E).
In the fields of peacebuilding and international development assistance, procedures for M & E can be overly rigid, coercive, and slow. The very words “monitoring” and “evaluation” can strike fear into even the most seasoned practitioners, conjuring images of NSA eavesdropping and sweaty-palmed high school students suffering through standardized exams. M & E protocols often focus on measuring “outputs”—e.g. the number of workshops delivered or drinking-water wells drilled—rather than on the transformative “outcomes” or “impact” that they seek to promote. The session on “Evaluation Methods” at the April 2015 Build Peace conference in Cyprus explored strategies for assessing and enhancing the effectiveness of technology-based peacebuilding programs.
Marshall Wallace, an evaluation expert who participated in the session, offered a cautionary example of M & E at its worst, focusing on an international development organization’s project to build schools in a conflict-torn region. This project placed one new school on a mountaintop between two communities that were at war with each other. As soon as the school was built, a militia from one of the communities seized and occupied the building as a military outpost, at which point fighting broke out over this strategic location, ultimately destroying the school. As soon as the building was destroyed, the fighting died down—only to flare up again after the development organization rebuilt the school in the same location. This pattern repeated itself several times, exacerbating the chronic conflict and instability in the district. And yet, the development organization hailed this project as a success—because each time that the building was rebuilt, it was counted as another new school!
To work effectively in conflict zones, practitioners need to adopt habits of reflective practice, which has been defined as “the capacity to reflect on action so as to engage in a process of continuous learning.”[i] Figure 1 visualizes a model of reflective practice developed by Gary Rolfe: action (the “what”) is followed by learning (the “so what”), which is followed by adaptation (the “what next”), which in turn shapes future actions in an ongoing cycle.[ii] This iterative and collaborative model of “Learning and Adaptation” is better suited to facilitating effective work in complex environments than traditional linear Monitoring and Evaluation mechanisms.
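Rolfe’s cycle can be read as a simple feedback loop. The sketch below is purely illustrative and not from the article: the function names (`act`, `learn`, `adapt`) and the toy “project” are assumptions, used only to show how each cycle’s adaptation feeds the next action.

```python
# Illustrative sketch of Rolfe's reflective-practice cycle:
# action (the "what"), learning (the "so what"),
# adaptation (the "what next"), repeated so that each
# adaptation shapes the next round of action.

def reflective_practice(initial_plan, act, learn, adapt, cycles=4):
    """Run a simple Learning and Adaptation loop.

    act   -- carry out the current plan, returning observations ("what")
    learn -- draw lessons from the observations ("so what")
    adapt -- revise the plan in light of the lessons ("what next")
    """
    plan = initial_plan
    history = []
    for _ in range(cycles):
        observations = act(plan)        # action: the "what"
        lessons = learn(observations)   # learning: the "so what"
        plan = adapt(plan, lessons)     # adaptation: the "what next"
        history.append((observations, lessons, plan))
    return plan, history

# Toy usage: a "project" whose plan is a single number it tunes
# toward a target, closing half the remaining gap each cycle.
target = 10
final_plan, history = reflective_practice(
    initial_plan=0,
    act=lambda plan: target - plan,            # observe the gap
    learn=lambda gap: gap / 2,                 # lesson: close half the gap
    adapt=lambda plan, lesson: plan + lesson,  # adjust the plan
)
print(final_plan)  # approaches the target over successive cycles
```

The point of the toy is structural, not numerical: a team that skips the `adapt` step keeps executing the same plan regardless of what it observed, which is precisely the failure mode of the school-building project above.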
After the opening presentation, the participants in the workshop divided into three working groups, each one focusing on how to develop a “Learning and Adaptation” protocol for a hypothetical peacebuilding project. One group discussed a project to advance national reconciliation and the peace process in Northern Mali, the second discussed a project to prevent violence surrounding football matches in the English Premier League, and the third focused on a project to curb sexual harassment on the street in Cairo. In each case, the participants were urged NOT to try to develop the perfect operational approach from the start, but rather to develop a protocol for learning quickly from failures and successes, and to adapt their approach based on this new information.
The working group on curbing sexual harassment in Cairo was facilitated by Sawsan Gad of the World Bank. After a brief brainstorming session, the group chose as its course of action the idea of creating an online mapping tool to identify danger spots in the city. The group spent most of its time discussing its learning and adaptation strategy for assessing this tool’s effectiveness and making potential refinements.
As a starting point for its strategy of “learning by doing,” the group articulated the following theory of change: “By increasing awareness of the more and less dangerous areas of Cairo, women will be better prepared to avoid street harassment.” The participants discussed various approaches for testing the validity of this theory, and explored how they might adapt their approach in light of what they learned. For example, if certain areas of the city were not generating much data for the project, they might use paper maps to supplement online reporting tools. If the project did produce useful data, an important priority would be to leverage that information: generating media coverage of the problem of sexual harassment, sharing the data with police, and incentivizing constructive policymaking on this issue by local and national authorities.
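The group’s first adaptation idea lends itself to a concrete check. The sketch below is hypothetical (the area names, report data, and threshold are all invented for illustration): it tallies online reports per area and flags areas generating too little data, which the team might then supplement with paper maps.

```python
# Illustrative sketch (hypothetical data and names): tally
# crowd-sourced reports per area and flag areas whose online
# reporting falls below a threshold, as candidates for
# supplementary paper-based mapping.

from collections import Counter

def flag_underreported(reports, areas, min_reports=5):
    """Return the areas with fewer than min_reports online reports."""
    counts = Counter(area for area, _ in reports)
    return [a for a in areas if counts[a] < min_reports]

# Hypothetical reports: (area, free-text description)
reports = [
    ("Downtown", "incident near metro"),
    ("Downtown", "incident at bus stop"),
    ("Downtown", "incident on main street"),
    ("Downtown", "incident near market"),
    ("Downtown", "incident at crossing"),
    ("Riverside", "incident near bridge"),
]
areas = ["Downtown", "Riverside", "Old Quarter"]

print(flag_underreported(reports, areas))
# Riverside and Old Quarter generate too few online reports, so the
# team might pilot paper maps there in the next cycle.
```

Whatever the specific tool, the point is that the theory of change yields a testable signal (report volume by area) that can drive the next round of adaptation.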
Sawsan’s group embraced the instruction that they should “start with action,” so that they could learn and adapt quickly from their failures and their successes. But this guideline for discussion generated resistance from some participants in the other groups. As Marshall Wallace, who facilitated the Northern Mali group, noted:
I am always struck by how people in small-group working sessions contend that they can’t do a good or “realistic” job because of the limitations inherent in such sessions: limited time, lack of information, lack of the “right” people. This complaint ignores the fact that real program design sessions are almost exactly like the small-group sessions at workshops: we never have all the time we want, or all the information we want, yet we still have to make decisions.
When designing a program, Marshall pointed out, it can be liberating to not expect to get everything right the first time around:
This is why I so much enjoyed the reframing of “Learning and Adaptation.” Yes. Stop worrying about the gaps, do something, and see what happens. Learning is the key measurement. What did we learn that changed the way we act? If a team continues to act the same way after “learning,” then they didn’t learn.
[i] Donald Schön, The Reflective Practitioner: How Professionals Think in Action (New York: Basic Books, 1983), 18.
[ii] Gary Rolfe, Knowledge and Practice (London: South Bank University Distance Learning Centre, 2001).