BY MAYA GAINER


Maya Gainer is a first-year International Development student and an editor of SAIS Perspectives. She previously worked as a researcher at Princeton University's Innovations for Successful Societies program, which took her to six continents to study governance and service delivery.


At the final Development Roundtable of the 2017-18 academic year, IDEV Assistant Professor Dan Honig launched his new book, Navigation by Judgment: Why and When Top-Down Management of Foreign Aid Doesn't Work.

In the book, Professor Honig argues that when the environment is unpredictable and projects may encounter unexpected challenges, the judgment of local actors can deliver better results. When given the flexibility to steer a project, agents working on the ground can incorporate more localized and up-to-date information into their decision-making, allowing them to better manage unanticipated situations. And while their decisions will not always work out, the process of trial and error enables the aid agency to learn how to handle similar challenges in the future. 

But despite the benefits of judgment, not all aid agencies are willing to give their agents on the ground a decision-making role. For several of the agencies Professor Honig studied, notably USAID, the emphasis was on setting clear targets and metrics, and then meeting those targets as laid out in a work plan designed at the top. This can work sometimes, especially if the environment is predictable and the project’s goals are easily measured and verified—for instance, building roads or drilling wells. But in other cases, Professor Honig argues, aid agencies’ desire for tangible results to report to the politicians who control their budgets and their lack of trust in local agents lead to less effective development programs. 

After Professor Honig’s Development Roundtable talk, SAIS Perspectives sat down with him to learn more about his book and its implications for aid policy. 


Perspectives: You’ve worked extensively abroad, from Liberia to East Timor. Were there any experiences that helped inspire you to study this topic? 

Prof. Honig: In the preface, I talk about an incident in East Timor on a motorcycle that got me thinking about whose judgment we trust when. One of my local colleagues was driving us down this muddy road in the pouring rain, and I thought we should stop for the night. He said, “no, I know this road,” and even though the road had essentially turned into a river, I thought, okay, let’s keep going. And that got me thinking about what it means to know something—why his knowledge of the situation was more meaningful than mine, and why, even though I trusted his judgment, I was hesitant to listen. 

I thought about those questions—who we trust and why—later in my career, especially when I met agents of aid agencies who were doing good work in spite of, rather than because of, the agency they worked for. The people who did the best work seemed to spend a lot of time convincing their bosses that what they were doing was a good idea, or they’d tell their boss they were doing what was in the work plan and then actually do something else that made more sense. And then I’d meet their bosses, who were generally good people who wanted the same things the agents did. If they both want the same thing, why the need for subterfuge to do good work? And that’s what pointed me towards the inner workings of aid organizations, and the idea that the internal processes and controls of aid agencies might be misaligned with optimal performance.

Perspectives: When do you think aid agencies should consider incorporating more judgment into the design of their projects?

Prof. Honig: There are a lot of aid agencies that will tell you off the record that they think their projects in certain sectors in certain countries aren’t working. In those situations, there’s room to try something new and politicians are more likely to accept it. 

There’s also an opportunity to leverage comparative advantage. Rather than changing management practices in every aid agency, aid agencies can collectively alter the distribution of projects—which agency does what. The agencies that favor more control could focus on more verifiable types of activities in more stable contexts, and others that are already navigating by judgment in some of their projects could work in the areas where that approach will add the most value. 

Perspectives: What project structures best enable navigation by judgment? How could someone at an aid agency build it into a project’s design?

Prof. Honig: The first step is to understand which agents you trust, and to work to educate their judgment. You don’t just randomly put someone in a decision-making capacity; you train them. And that training shouldn’t just be about conforming to logframes and metrics, but about how they assess a situation, decide what to do, and respond to challenges. 

Another important element is how you assess progress and results. When you navigate by judgment, your assessment of progress comes from the agent themselves, not some metric they report back to you. You can collect as much data as you want, but you’ll always need some subjective interpretation of the data from the people on the ground, especially in the short term. There are plenty of tasks where you can figure out three years later whether you did a good job using numbers, but not within six months. We need to be data-informed rather than data-driven—quantitative data is fuel, and we should learn from it, but that data is an input to the process rather than itself the answer.

The project design also needs to include flexibility and adaptation. You’re going to need to trust the judgment of your advisor close to the ground, but also figure out a way to assess that judgment. To do that, you need a system where your advisor makes a series of small judgment calls, and then the two of you have a conversation to figure out how well the choices they made worked out and adapt if necessary. That kind of iterative process requires more flexible timelines than you typically see in development projects.

It’s also important to note that you don’t have to navigate by judgment in every single aspect of a project. A project that’s very tightly controlled from the top down will have difficulty incorporating judgment into a small piece of it, but an agency that navigates by judgment can decide that in some parts of a project, the best way to plan activities and describe success is by metrics. For instance, in a malaria control project, you might want to both hire health advisors and distribute bed nets. The bed net distribution could be measured easily, but when you’re dealing with more complex issues like uptake and use, you’d need the advisors to assess the situation and take actions based on their judgment. 

Perspectives: What options are there if judgment goes wrong—for instance, if your agent on the ground makes a disastrous decision or engages in corruption? How can an aid agency fix that problem without abandoning navigation by judgment altogether? 

Prof. Honig: When that happens, we need to not throw out the baby with the bathwater. The core of my argument is “soft information”—that there are things that are hard to codify and measure that can help us make decisions. So if something’s going wrong, the first thing to do is send someone more senior, whose judgment has already been validated, to figure it out. And that person can say whether we need to change the model, perhaps to introduce more control, or change the people making decisions. Judgment is still crucial, but it’s happening at a different level. When a project goes wrong, there are so many things that could have caused it that either you need to do a very complex RCT or trust somebody to diagnose the problem, and that’s not unique to navigation by judgment. 

Perspectives: Some staff at aid agencies might agree with your argument and be ready to implement what we’ve talked about, but as you discuss in the book, they’re often worried about reporting back to the politicians who control their budgets—and navigation by judgment may produce less concrete or predictable results. How do you think politicians can be brought on board?

Prof. Honig: An important part of convincing the politicians to give this approach a chance will be education and conversation—talking with them about what’s going on and why in some cases it’s not working. And then there will need to be some innovation, trying things out on a pilot basis and adapting based on how things go in the field. 

If someone really wanted to put the book to use—if there was a politician saying, “we want numbers”—they might say “we can give you numbers or give you results, but not both, and here’s why,” and hand them the book. Of course, it’s not that easy—if it were easy, it would already be done. But we need to engage in conversation and be transparent, not by just generating meaningless numbers that we call results, but by discussing what the numbers really mean and what’s really happening in the projects.


Congratulations to Professor Honig on the publication of his book, and many thanks for diving into the details with us! Thanks as well to the International Development program for a wonderful series of Development Roundtable talks this year. To read about other Development Roundtable events, click here.


PHOTO CREDIT: Riccardo Gangale/VectorWorks from Flickr Creative Commons licensed under U.S. Government Works
