If development evaluation is at an inflection point, what are the drivers that would enable this turning point? Ten trends emerging in development evaluation in 2017 promise, if widely adopted, to revolutionize how we determine what works and how we learn from development aid projects. Here are the top ten trends to watch for 2017:
- Convergence of evaluation and social impact approaches: At the 2016 American Evaluation Association conference, evaluators joined with Social Value International for a convening known as IMPCON to discuss “impact convergence” – how business-led social impact investing (including corporate social responsibility) and program evaluation can learn from each other. This convergence promises to link organizational goals and missions with project outcomes. This is a trend to watch because the growing discussion is focused on methodological innovation and new resources for development programming using the double bottom line.
- Smart design and the rating of programs by strength of evidence: While many associate SMART (Specific, Measurable, Assignable (or Achievable), Realistic and Timebound) with indicator selection, smart design is about how programs themselves are designed, as in the Center for Global Development’s smart-design-based roadmap for women’s economic empowerment. It goes beyond deciding whether a program works to rating programs by the strength of the evidence or research design behind the evaluation, using standards of evidence such as those NESTA has developed. This is a trend to watch because building knowledge requires multiple studies and replications over time, which in turn requires breaking down organizational stovepipes of evaluation results.
- Use of mixed methods rather than the “gold standard”: While some still believe that randomized experiments are the “gold standard,” mixed methods have emerged not just as a group of “procedures” combining qualitative and quantitative data, but as a set of design methodologies in its own right, with its own professional association: the Mixed Methods International Research Association. This is a trend to watch because there are now multiple ways to demonstrate impact using counterfactuals and multiple data types for context and triangulation.
- Use of evaluation and shared measurement platforms rather than “standard indicators”: While many development projects still work with spreadsheets and pencil-and-paper records, new, low-cost options make data available for learning and immediate feedback rather than leaving it stored in an external evaluator’s database. Standards like IRIS, developed by the Global Impact Investing Network, offer a unified way to measure outcomes across organizations and can be shared via a single database. But it is the advent of digital records and cloud-based entry and storage that promises big changes. This includes shared measurement platforms that allow multiple grantees and donors to report results through a single platform even though each pursues its own goals and outcomes. It also includes evaluation platforms like DHIS2, used as part of PEPFAR. DHIS2 represents a new generation of open source platforms that let complex projects enter and analyze real-time data, including data management, data analysis and visualization (charts, pivot tables and dashboards), and reporting frameworks with different levels of access. This is a trend to watch because the capacity to integrate evaluation with monitoring offers unparalleled gains in data quality and, in the future, will allow evaluation stovepipes to be broken down.
- Public-private partnerships for programs and financing: Increasingly, it is recognized that neither governments nor the private sector can do it alone. Used in infrastructure projects such as those the World Bank has pioneered, public-private partnerships have proved key to the success of programs like PEPFAR. A new example is the Global Alliance for Trade Facilitation, for which the Center for International Private Enterprise is one of the host organizations. This is a trend to watch because public-private partnerships, and the collaboration they ensure, provide innovative solutions to seemingly intractable problems.
- Responsible data rather than open data sharing: Rather than open-data-by-default, where all data is shared publicly, the collection, use and storage of data “assets” must increasingly account for privacy and confidentiality concerns, an approach known as responsible data. This is a trend to watch because the ethics of data require a different model in today’s interconnected world, especially in closing spaces and conflict areas where third-party use of data may be irresponsible.
- Systems and markets rather than single-factor approaches: Interventions that deliver results through a single-factor “silver bullet” are rare. The worldwide eradication of smallpox by vaccination is often viewed as one of the greatest development successes, but it has rarely been duplicated. Instead, new approaches focus on systems and markets. CIPE has used market-based approaches since its founding in 1983 and is now expanding its work to include entrepreneurial ecosystems. This is a trend to watch because this is a model that can accommodate complexity and systems change.
- Human-centered and participatory designs rather than inflexible, top-down log frames: In contrast to the externally determined pre-project log frame developed by USAID nearly 50 years ago and organized around pre-set project objectives, the new approach is to design and evaluate programs with people at the center from the beginning of project design. Human-centered design includes stakeholders in data collection, evaluation design and reporting – a process that increases ownership. This is a trend to watch because this approach improves impact measurement with marginalized populations, for complex programs where the intervention and the outcomes are not well understood, and where the context is volatile or repressive – situations which are increasingly the norm in development work.
- Gender-responsive and inclusive programming: Including women and girls as “half the sky” is often viewed as a human rights issue that is a social good in itself. But increasingly, it is also recognized globally that women as small business owners and traders are more likely to invest in their communities, and women’s wealth is more likely to be used for the benefit of their families, thereby increasing social capital. This is a trend to watch because investment in women leads to greater impact.
- Including the evaluator as a partner rather than an end-of-project, ex post facto external evaluator: Traditional program evaluation in the tradition of Donald Campbell casts the evaluator as an unbiased researcher who determines whether impact occurred once the project is done. But it is increasingly seen as a waste of resources to learn only after the fact whether something worked, as project managers strive to avoid being “over budget, over time, over and over again.” While it is indeed a “tricky triangle,” placing evaluators alongside managers encourages double-loop learning as well as accountability to evaluation commissioners. It is now recognized as a best practice to engage the evaluator at the outset, where he or she can advise on how interventions are designed to improve evaluability, risk management and the ability to achieve outcomes and overall impact. This is a trend to watch as internal independent evaluation models are proving their value alongside the accountability check of external evaluation.
I intend to examine each of these trends in greater depth in future articles and discuss what these innovations mean for development evaluation practice.
Denise Baer is Senior Evaluation Officer at CIPE.