CGD’s research on aid effectiveness focuses on the policies and practices of bilateral and multilateral donors. Combining strong research credentials and high-level government experience, our experts analyze existing programs, monitor donor innovations, and design innovative approaches to deliver more effective aid. CGD research also provides insight into how policies ranging from trade to migration to investment undermine or complement foreign aid policies.
How well do your country's policies make a positive difference for people in developing nations? That’s the question CGD seeks to answer each year in our Commitment to Development Index (CDI). It’s a ranking of 27 of the world’s richest nations based on seven policy areas: aid, finance, technology, environment, trade, security, and migration.
The team behind the CDI, deputy director of CGD Europe Ian Mitchell and policy analyst Anita Käppeli, join me this week on the CGD podcast to discuss why these rankings matter and how countries stack up.
In first place this year is Denmark, followed by Sweden, Finland, France, and Germany. Greece, Japan, and South Korea rank at the bottom—though South Korea actually ranks first on the technology component.
Among the countries in the middle are the UK, tying with the Netherlands for 7th place, and the US, all the way down at 23rd. In the future, how might these scores be impacted by the changing politics of the two nations?
“On Brexit, there’s real potential for this to affect the CDI score,” Mitchell tells me in the podcast. “The UK will take control of its own migration policy more fully and it will have its own trade policy and it will take control of agricultural policy from the EU. All of those things feature in the Commitment to Development Index.”
As for the Trump Administration’s America-first approach, Mitchell says, “It’s surely in the interest of countries to see other countries developing to reduce the security risk, to make sure there’s lower risk of disease emerging . . . and the CDI is a framework for prioritizing action on that.”
Overall, Käppeli tells me, the CDI is a reminder to countries that “policy coherence is an issue; that they should not pursue policies in [only] one field—for instance, give a lot of aid, but then close the borders for products from developing countries.”
“The CDI is holistic,” Mitchell adds, pointing out that the CDI’s focus on policy is “complementary” to the Sustainable Development Goals’ focus on outcomes: “If you think about how we’re going to achieve the SDGs, then looking at the CDI [is] a great way to do that.”
Today, we published this year’s Commitment to Development Index (CDI), which ranks 27 of the world’s richest countries according to how well their policies help spread global prosperity to the developing world.
We will be presenting the Index and our recommendations during the high-level segment of the UN General Assembly (UNGA) later this month. As political leaders prepare to meet for UNGA, here are some key takeaways from our research that should help guide their policies and discussions.
1. Leadership on global development isn’t only for the richest!
The CDI analyzes the policies of 27 of the world’s richest countries in seven key areas: aid, finance, technology, environment, trade, security, and migration. The indicators adjust for size and economic prosperity—and the results demonstrate that country wealth does not determine the results. The wealthiest countries—represented by the G7—rank anywhere between fourth and twenty-sixth. Income per person in the Visegrád countries (Czech Republic, Hungary, Poland, and the Slovak Republic) averages half that of the United States, but all four now rank higher in their commitment to development. Portugal, which ranks sixth, performs well in most components despite being less prosperous than many CDI countries. Smart policy design is not only a matter of prosperity. Hence our first key message to the leaders of all 27 CDI countries:
Domestic economic challenges needn’t prevent leadership on smart policies to increase global prosperity.
2. Development is about much more than aid
The CDI draws attention to the fact that global development is about much more than the amount or quality of foreign development assistance provided. Policymaking in a wide range of fields directly affects the lives of poor people around the globe.
For example, the design of our policies on technology or finance affects the prospects of people living in poorer countries. Both research and development policies and investment policies are pursued mainly for domestic goals, yet they have a lasting effect on developing countries. Smart intellectual property rules can enable knowledge sharing and technology transfer. Likewise, bilateral investment agreements with developing countries that recognise specific public policy goals—such as labour rights, environmental standards, or human rights—can have an important effect on the prospects for development.
The commitment to implementing balanced and sustainable policies domestically also sends a strong signal about their importance globally, irrespective of countries’ borders. Money spent on foreign development assistance does not have the same lasting effect if countries fail to recognise the international impact of their actions in other policy areas. Therefore:
In our integrated world, your policies and decisions as a leader of a rich country have an important bearing on the lives of people in developing nations.
3. Even the bottom-ranked country has smart policies we all can learn from
Like the Sustainable Development Goals (SDGs), the CDI recognizes that development has many angles. But while the SDGs cover all nations and their outcomes, the CDI concentrates on the richest countries and emphasizes how policies can make a huge difference to development globally. The fact that we limit our evaluation to high-income countries means that policy recommendations are more tailored and relevant. Even the best-ranked countries have weaknesses where they can learn from their peers. Overall leader Denmark performs weakest on migration and could learn from the migration policies of countries as varied as Luxembourg, New Zealand, or neighbouring Sweden. Similarly, bottom-ranked South Korea could advise the other 26 CDI countries on how to build long-lasting support for research and development. Accordingly:
Use the CDI as a tool to learn from others and to inspire change through your own best-practice policies.
4. Some overall progress on the Environment component but stronger commitments are needed
Tragically, Hurricane Harvey has reminded the United States how vulnerable we all are when natural disasters hit. Massive flooding in South Asia left millions suffering, while earlier this year an unprecedented drought in Africa left millions more facing malnutrition. These tragic events, sadly far from unique, remind us that we all need to do more to combat climate change.
This year’s CDI points out that progress has been made—CDI countries report progress in curbing new greenhouse gas emissions, and the amount of ozone-depleting substances has been cut significantly. However, many environmental challenges remain. We need to see an even bigger commitment to development from the CDI countries in the future, such as full support for the Paris Agreement and the willingness to tackle issues such as overfishing and deforestation. Thus, our final recommendation:
While progress has been made, many global challenges remain. We ask this generation of world leaders to strengthen and deepen their commitment to development.
These findings show that we and our governments can do much more to spread prosperity to poorer countries. The CDI serves as a useful tool to identify which national policies could still be designed in a more development-friendly way. We hope world leaders use the opportunity of UNGA to discuss ways to make further progress in all policy fields, inspiring each other to achieve more on global development.
With plans for a redesign of the State Department and United States Agency for International Development well under way, this is a critical moment for an informed discussion of the latest reform proposals to make US foreign assistance more effective and efficient. Please join us for a bipartisan debate featuring authors of four recent reports that outline options for reform and reorganization of US global development functions. The event will bring to light key areas of consensus and divergence among experts, and will aim to highlight emerging organizing principles for the future of US foreign assistance, potential structural changes to the US global development architecture, and opportunities for building momentum in a fluid political and legislative environment.
With the US Congress considering cuts to foreign assistance and aid budgets in other donor countries coming under increased pressure, evidence about what works in global development is more important than ever. Evidence should inform decisions on where to allocate scarce resources—but to do so, evaluations must be of good quality. The evaluation community has made tremendous progress on quality over the past decade. Several funders have implemented new evaluation policies and most are conducting more evaluations than ever before. But less is known about how well aid agencies are evaluating programs.
To fill in the gap, we—together with our colleagues Julia Raifman Goldberg, Felix Lam, and Alex Radunsky—set out to assess the quality of global health evaluations (both performance and impact evaluations). We looked specifically at publicly available evaluations of large-scale health programs from five major funders: USAID, the Global Fund, PEPFAR, DFID, and IDA at the World Bank. We describe our findings in a new CGD Working Paper and accompanying brief. Check out the brief recap of our findings below.
What types of evaluations are aid agencies conducting?
We identified a total of 299 evaluations of global health programs published between 2009 and 2014. One feature stood out to us: performance evaluations made up an overwhelming majority (91 percent), with impact evaluations accounting for less than 10 percent. This is comparable to the share found across USAID evaluations in all sectors by an earlier study. And among impact evaluations, those using experimental methods, known as randomized controlled trials or RCTs, constituted a minority (we only found five RCTs). When looking at evaluations commissioned or conducted by major funders, the often-made criticism that RCTs are displacing other forms of evaluation doesn’t hold up.
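The reported shares can be sanity-checked with simple arithmetic. A minimal sketch (the performance-evaluation count below is an assumption chosen to be consistent with the roughly 91 percent figure; the paper reports percentages rather than every raw count):

```python
# Back-of-the-envelope check of the evaluation-type shares reported above.
total = 299        # evaluations published 2009-2014
performance = 272  # assumed count, consistent with the ~91% figure
impact = total - performance
rcts = 5           # randomized controlled trials found in the sample

print(f"performance share: {performance / total:.0%}")   # ≈ 91%
print(f"impact share: {impact / total:.0%}")             # < 10%
print(f"RCT share of all evaluations: {rcts / total:.1%}")
```

Even under this assumption, the point stands: RCTs account for under 2 percent of the sample, far from displacing other methods.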
How well are aid agencies evaluating global health programs?
We randomly sampled 37 evaluations and applied a standardized assessment approach with two reviewers rating each evaluation. To answer questions about evaluation quality, we used three criteria from the evaluation literature: relevance, validity, and reliability. We considered evaluations as relevant if the evaluation addressed questions related to the means or ends of an intervention, and used appropriate data to answer those questions. Evaluations were considered valid if analyses were methodologically sound and conclusions were derived logically and consistently from the findings. Evaluations were considered reliable if the method and analysis would be likely to yield similar conclusions if the evaluation were repeated in the same or similar context.
We constructed four aggregate scores (on a three-point scale) to correspond with these criteria. Overall, we found that most evaluations did not meet social science standards in terms of relevance, validity, and reliability; only a relatively small share of evaluations received a high score.
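As an illustration of this kind of two-reviewer scoring, here is a minimal sketch. The criterion names come from the study; the averaging-and-rounding rule is a simplifying assumption for illustration, not the authors’ exact aggregation method:

```python
# Hypothetical sketch of a two-reviewer, three-point scoring scheme.
# Criteria are from the paper; the aggregation rule is an illustrative
# assumption, not the study's published method.

CRITERIA = ["relevance", "validity", "reliability"]

def aggregate_scores(reviewer_a: dict, reviewer_b: dict) -> dict:
    """Combine two reviewers' 1-3 ratings into one score per criterion."""
    scores = {}
    for c in CRITERIA:
        # Simplifying assumption: average the two ratings and round to
        # the nearest whole point on the 1-3 scale.
        scores[c] = round((reviewer_a[c] + reviewer_b[c]) / 2)
    return scores

example = aggregate_scores(
    {"relevance": 3, "validity": 2, "reliability": 1},
    {"relevance": 3, "validity": 2, "reliability": 1},
)
print(example)  # {'relevance': 3, 'validity': 2, 'reliability': 1}
```

In practice, structured rating schemes like this make disagreements between reviewers visible, which is itself useful information about how reproducible a quality judgement is.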
Looking across different types of evaluations, we found that impact evaluations generally scored better than performance evaluations on measures of validity and reliability.
What can aid agencies do better going forward?
Building on our analysis, we developed 10 recommendations for aid agency staff overseeing and managing evaluations to improve quality.
Classify the evaluation purpose by including this information in the title and abstract, as well as coding/tagging categories on the agency website.
Discuss evaluator independence by acknowledging the evaluators’ institutional affiliation and any financial conflicts of interest.
Disclose costs and duration of programs and evaluations.
Plan and design the evaluation before program implementation begins; we found that early planning was associated with higher evaluation quality.
State the evaluation question(s) clearly to ensure the right kinds of data are collected and an appropriate methodology is used.
Explain the theoretical framework underlying the evaluation.
Explain sampling and data collection methods so subsequent researchers could apply them in another context and readers can judge the likelihood of bias.
Improve data collection methods by using purposeful or random sampling where possible, which provides more confidence in findings.
Triangulate findings using varied sources of qualitative and quantitative data.
Be transparent on data and ethics by publishing data in useable formats, and taking appropriate measures to protect privacy and assure confidentiality.
This set of recommendations draws on the high-quality evaluations we found in our sample. These examples showed that it is possible to conduct good quality evaluations for a range of methodologies and purposes. In many cases, quality improvement is possible within existing budgets by planning early or using better data collection approaches. Taking steps to improve quality can help ensure evaluations promote learning about what works and hold funders and implementers accountable—with an eye on increasing value for money and maximizing development impact.
Evaluations are key to learning and accountability yet their usefulness depends on the quality of their evidence and analysis. This brief summarizes the key findings of a CGD Working Paper that assessed the quality of aid agency evaluations in global health. By looking at a representative sample of evaluations—both impact and performance evaluations—from major health funders, the study authors developed 10 recommendations to improve the quality of such evaluations and, consequently, increase their usefulness.
In late July, the UK’s National Audit Office (NAO) published a progress report on Her Majesty’s Government spending that found that in 2015, a fifth of the £12.1 billion the country spent on aid was committed through government departments and cross-government funds other than the Department for International Development (DFID), the UK’s aid agency. The report also found that “no part of government has responsibility for checking on progress in implementing the UK Aid Strategy or for assessing the overall effectiveness and coherence of ODA expenditure.” [emphasis my own]
About a week later, on August 1, I joined the Center for Global Development as a Senior Fellow and Director for Global Health Policy. I am very excited about my new role, and, since my base will be the London offices of CGD Europe, I am also rather hopeful that I will be able to contribute to the current thinking on improving the effectiveness of aid budgets, including the one spent by the UK government.
With that goal, some colleagues and I recently proposed the creation of an independent public body that would assess the value for money of overseas development assistance (a “NICE” for development). We have proposed a “what works” centre for aid that would place the UK at the forefront of evidence-informed policymaking and reassure skeptics that tax monies are “properly spent.” But as the NAO report points out, ascertaining “properly” is becoming increasingly tough in an ever-fragmented environment, where the Ministry of Defence, the Foreign Office, and the National Institute for Health Research under the Department of Health (to name but a few) all receive aid money to spend on development projects.
Here are some recommendations for how we might assess and improve impact in a fragmented government landscape:
Getting results out of a fragmented, incoherent, and unaccountable aid structure
In its July report, NAO describes the limited capacity of many government departments to absorb and spend (effectively or not) the aid allocated to them. Five of 11 departments (in 2016 another two were added, bringing the total to 13) spent over 50 percent of their aid budget in the last quarter of the calendar year, whilst DFID remains the “spender of last resort” (to avoid missing the 0.7 percent target). Perhaps in an attempt to accelerate spending, roughly 20 percent of UK aid is paid out in the form of promissory notes, a more formal version of IOUs, with the amount of uncashed notes doubling between 2014 and 2016 to some £8.7 billion as of December 2016 (the equivalent of 72 percent of the annual UK aid allocation).
Impact is hard (impossible?) to measure (other than against the 0.7 percent spending target), given that three of the government’s four strategic aid objectives (resilience, global peace, and global prosperity) lack measurable and attributable outcome indicators, perhaps not surprisingly. And though each department is, in theory, responsible for ensuring “value for money,” there is no guidance on how to do so coherently across a fairly wide range of departments, which even includes Work and Pensions; Culture, Media and Sport; and Her Majesty’s Revenue and Customs (HMRC). Interestingly, HMRC saw an almost five-fold increase in its aid allocation between 2015 and 2016, the largest of any government department. Now that the aid transparency index no longer applies to organisations spending less than $1 billion—thereby excluding nine of the 11 UK government departments spending ODA—assessing impact or value for money becomes even harder.
The government’s intention to spend more ODA through the CDC, DFID’s independent investment arm, is also proving challenging as far as assessing effectiveness goes. The plan is to up CDC’s allocation to £6 billion, possibly rising to £12 billion, to invest in the private sector in developing countries in order to accelerate growth and create jobs. Its impact, defined as “a lasting difference to people’s lives in some of the world’s poorest places” according to another NAO report, is not easy to describe, let alone evaluate.
Finally, lines of accountability are blurred. By law, DFID must spend 0.7 percent of GNI on aid. This can be measured and is met, but that is where performance assessment stops as far as non-DFID spending conduits are concerned. The Treasury passes approximately 0.14 percent of the country’s GNI directly to various government departments and three cross-departmental funds with ODA responsibility. These latter have no lines of accountability to DFID or to any other single government department or institution with overall responsibility for assessing the coherence, effectiveness, and value for money of this spend. Even if such a responsible party did exist, given that individual departments and funds are currently not required to “report separately and specifically on how they have spent their ODA budget,” any judgement on how ODA pounds are spent, and to what effect, would simply not be possible.
Assessing the “likelihood of future [aid] effectiveness”
And though assessment of impact for the whole of the ODA envelope is not possible (for now), there have been attempts to look at individual components of it. The Independent Commission for Aid Impact (ICAI) published a report earlier this year on one of the three major cross-government funds for ODA, the Prosperity Fund (the other two are the Conflict and Stability Fund and the Empowerment Fund). This was described as an assessment of “likelihood of future effectiveness,” as the £1.3 billion fund is not yet operational, so it focused on the procedural aspects of the fund’s design and operation to date. Amongst other things, the Commission called for more transparency, including results-based indicators and performance metrics to be developed so that the fund’s work can be subject to assessment of impact and value for money. Defining “prosperity” in order to measure it may prove tricky, as the fund is charged with the dual task of reducing poverty (primary objective) and creating business opportunities for (mostly) UK and overseas firms (secondary objective). So, in addition to metrics, some guidance on how to trade off these two objectives when necessary is likely to be required.
One would hope for another one of the government’s ODA pots of money, the Global Challenges Research Fund, to undergo some form of evaluation. It is now spending £1.5 billion through various Research Councils, including Arts and Humanities, Engineering and Physical Sciences, and Economics and Social Sciences.
What makes for effective aid: start with process
Despite this gloomy appraisal, the new environment in the UK with multiple agents entering the development space offers, perhaps, an opportunity to test the effectiveness of engaging non-experts in development and even to prove the critics wrong. Here are a few thoughts on what is not helping as things stand and what could be achieved, whilst acknowledging that attribution and causality are very hard to show in complex real-world interventions. So, what I propose below is more about process: it is about laying out the infrastructure, including evidence generation mechanisms, a strong institutional framework for assessing value for money, and sustained investment in building the UK’s own capacity for delivering aid, all within transparent accountability structures and with a coherent vision of what success looks like.
Fund pragmatic research and empower learning healthcare systems
We know from high-income settings that pragmatic research can improve the quality and efficiency of care (see here for an example from paediatric care in the United States and the Academic Health Science Networks model in the English NHS). This is especially the case when research addresses questions that matter to the people who pay for and use healthcare services (see here for an example from the UK of an inclusive approach to setting research priorities). Investing in research can improve health outcomes by creating learning health systems in aid-recipient countries, too. For this to work, research funds must go to building capacity where it is most lacking, and less to major UK universities’ estates and facilities overheads. The first NIHR £60 million ODA call, made just before Christmas 2016, offered no guidance (importantly, no ceiling) as to the amount of overhead UK universities could claim, but made it clear that low- and middle-income country (LMIC) partners were allowed none.
Launch a NICE for aid
Both NAO and the ICAI reports deplored the lack of measurable indicators of impact, whether in the specification of the four objectives of the government’s strategy or at the level of individual spending departments or funding agencies. Measurable indicators to assess performance (including ones of value for money) and encouraging pragmatic evidence generation especially in low-income country settings are both needed for meaningful impact assessments. A NICE for aid could help with both, though it would have to be an improvement on the UK version, which has had limited leverage when it comes to evidence-making and has been focusing more on individual technologies and less on system capacity, distribution, and affordability—all central to the realities of LMICs. Perhaps the government should require that some of the £1.5 billion allocated to the UK Research Councils be properly spent in low-income countries to build capacity and data collection systems. The intent would be to enable a rigorous assessment of the value for money of specific interventions as well as whole programmes of work funded by aid money. The overall UK aid strategy might itself also be usefully subject to such empirical assessment.
Such a systematic effort could also inform the good work of ICAI, which currently comprises four commissioners supported by three management consulting companies, but which lacks the remit, resources, or methodological standards to commission its own research to address Parliament’s questions (e.g., through a clearly set out reference case for economic evaluation).
Make the private sector accountable
The private sector (product manufacturers, providers, insurers, IT and m-technology companies) is a major partner for healthcare systems striving to expand coverage and improve quality. So, private investment through the likes of CDC or OPIC, the United States' development finance equivalent, could be a force for good. But understanding the impact of the UK’s private investment in overseas healthcare industries—including its impact on the impoverishment of patients and their families who use and pay for the services thereby created, on health outcomes and their distribution, and even on processes and governance arrangements, such as audit and clinical governance—is as important as understanding the impact of public funding on job creation and economic growth.
Build our own capacity to help
Capacity-building within UK institutions not traditionally involved in ODA spending (both those who fund and those who deliver) matters. On the commissioning end, some of the Research Councils and government departments have never before had ODA money of their own. With shrinking operational budgets and headcount, civil servants sometimes turn to management consultants: the Fleming Fund, for example, is managed by Mott MacDonald, and High Street consultants were brought in to scope out the Prosperity Fund for the Foreign Office. Such outsourcing of the strategic design and direction of ODA money further accentuates fragmentation and makes it harder to convey a consistent cross-government message on what this ODA is meant to achieve. It can also be quite expensive.
Institute performance-based contracts between HMG departments
Assuming indicators can be developed and evidence to populate them produced, outcome-based contracts between DFID (or the Treasury) and ODA-spending departments would enhance accountability and spur the generation of evidence of impact (where there is impact) or course correction (where there is none). Such success stories would also boost the case for ODA and, being backed by numbers, are much more likely to be accepted by skeptics, especially those not fundamentally opposed to giving overseas aid but who suspect, probably rightly, that unmonitored and unmeasured impacts mask extremely low productivity.
Stay tuned and engage!
As people come back from summer holidays, I look forward to hearing your thoughts on the above and on what you think we should be focusing on in relation to global health at CGD and CGD Europe. I’ll be working together with our global health team in Washington, DC, and with Amanda Glassman, who is, luckily, going to maintain a very much hands-on and leadership involvement in CGD’s Global Health Policy programme. I am very much open to suggestions and ideas! Ongoing work on family planning, the recently launched Global Health Procurement Working Group, and practical approaches to implementing health benefits packages, drawing on iDSI and the What’s In, What’s Out book to be launched in the fall, are all underway. I am also keen to explore the meaning of implementation advocacy, as a means of moving away from the current dichotomy of advocacy versus analysis towards a more constructive advocacy for analysis paradigm. After all, to be credible and effective in our push for more money for health, we must be able to show more health for the money.
Cutting through the layers of hype surrounding blockchain technology is tough work. Underlying the buildup in excitement, however, is a remarkable tool that could, if designed and used appropriately, help improve processes related to several long-standing development challenges. In our new paper “Blockchain and Economic Development: Hype vs. Reality,” we examine the technology’s potential role in addressing four of those challenges:
making aid disbursement more secure and transparent;
facilitating faster and cheaper international payments;
providing a secure digital infrastructure for verifying identity; and
securing property rights.
We argue that, while blockchain-based solutions have the potential to increase efficiency and improve outcomes dramatically in some use cases and more marginally (if at all) in others, key constraints must be resolved before blockchain technology can meet its full potential in this space. Overcoming these constraints will require increased dialogue between the development and technology communities and a stronger commitment to collecting and sharing data about what’s working and what isn’t in pilot projects that use the technology.
The case of aid distribution
Making aid disbursement more efficient and transparent is one of the areas in which blockchain technology shows promise. Consider the Start Network, which brings together 42 national and international aid agencies—including the International Rescue Committee, Oxfam, and World Vision—with the goal of improving their ability to collectively respond to humanitarian crises, including by enabling its members to agree upon projects and disburse funds within 72 hours of a crisis. Given the group’s belief that the humanitarian system must radically change to accelerate crisis response, it’s not surprising that it is now exploring how blockchain technology might help to provide aid more efficiently and effectively. In July, the Network announced a partnership with blockchain start-up Disberse that will use the company’s platform to speed up the distribution of funds and better trace how funding is spent.
This is just the latest example of how aid organizations are putting blockchain technology to the test. The UN’s World Food Programme (WFP) also recently conducted a successful pilot in Jordan, using a blockchain to manage cash-based transfers to 10,000 Syrian refugees living in the Azraq camp. The organization hopes to expand the pilot to cover all 500,000 WFP beneficiaries in Jordan by the end of the year.
From the perspective of individual donors, conducting aid payments on a blockchain can provide three advantages: speed, transparency, and the ability to bypass traditional financial intermediaries. Further, if multiple donors share project information on a single distributed ledger, they can improve coordination both between themselves and with recipient governments.
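To make the “shared distributed ledger” idea concrete, here is a minimal sketch of the underlying data structure: an append-only chain of hash-linked records, where altering an earlier entry invalidates every later link. This illustrates the general mechanism only; it is not the Disberse or WFP implementation, and the record fields are hypothetical:

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a disbursement record together with the previous entry's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(ledger: list, record: dict) -> None:
    """Append a record, linking it to the chain via the previous hash."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    ledger.append({"record": record, "hash": record_hash(record, prev)})

def verify(ledger: list) -> bool:
    """Recompute every link; an edited record breaks the chain."""
    prev = "0" * 64
    for entry in ledger:
        if entry["hash"] != record_hash(entry["record"], prev):
            return False
        prev = entry["hash"]
    return True

ledger = []
append(ledger, {"donor": "AgencyA", "to": "ProjectX", "amount": 100_000})
append(ledger, {"donor": "AgencyB", "to": "ProjectX", "amount": 50_000})
print(verify(ledger))              # True: chain is intact
ledger[0]["record"]["amount"] = 1  # tamper with an earlier entry
print(verify(ledger))              # False: later hashes no longer match
```

The transparency benefit described above comes from this tamper-evidence: if every donor holds a copy of the same chain, no single party can quietly rewrite the disbursement history.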
In the case of aid distribution, the biggest challenge is the inherent nature of the development agencies and large non-profits who may ultimately use the technology. These organizations tend to be risk-averse and slow to innovate—a sensible stance since they act as stewards of donor resources and provide services that can mean the difference between life and death for beneficiaries. For that reason, the development and tech communities will need to work together to address concerns about data security, governance, and operational resiliency that relying on a blockchain raises before wider adoption is likely.
These challenges all appear to be solvable, but the ability of technologists to prove that the solutions they offer provide a significant advantage over existing approaches may be hampered by an absence of quality data. In part, this lack of data simply reflects the newness of the technology. But there is also a concerning trend in which start-ups announce pilot project “successes” without backing up their claims with metrics. This reticence is understandable given the stiff competition for funding and market share, but it undermines the broader effort to design effective solutions. The government agencies and international institutions that partner with start-ups on pilot projects can solve this problem very simply by requiring their partners to collect and publish relevant metrics, and working with them to make that a reality.
These organizations should also work with technologists to develop a set of principles and (eventually) standards for using blockchain-based solutions in the context of development. The Principles for Digital Development, which have been endorsed by over 100 organizations working in international development, provide a useful model for this effort. While it may be counterproductive to set standards now, given the rapid pace of innovation, it is important to have conversations with an eye towards what these standards might look like in the future, to prevent different organizations from developing systems that are ultimately incompatible. Recently formed projects like New America’s Blockchain Trust Accelerator and ConsenSys’ Blockchain for Social Good are already bringing actors from both communities together for this discussion, and these efforts should get a further boost from the World Bank’s recently announced Blockchain Lab.
Despite the over-hyping of blockchain technology’s potential in certain use cases, we believe that several applications show real promise. A little coordination and a lot of quality data would go a long way towards realizing that promise and improving development outcomes.