In this blog, we look at why measuring the impact of partnerships is so tricky – but critically important. Key points are summarised at the end of this blog.
Reading international development partnerships’ progress reports, the impact these partnerships have achieved appears to be astounding. Partnerships (and their Secretariats) have catalysed climate action on a global scale, millions of lives have been saved, and millions more children are in school.
Causation, correlation, contribution?
However, as every Statistics 101 teacher will say, correlation is not causation. How can you measure the contribution of partnerships – and, more specifically, their contribution beyond what individual partners would have achieved on their own?
Most international development partnerships use “before and after” metrics tied to the founding of the partnership: “Since X partnership was founded, millions of lives/children/carbon emissions…”
These claims are difficult to disprove, unless implementation has taken place only in regions or countries selected at random, or randomised controlled trials (RCTs) have been used. (This is rarely, if ever, the case: partnerships’ boards perpetuate targeted selection, resulting in high investments and progress in so-called donor darlings, and a lack thereof in so-called donor orphans.)
Even if a partnership has contributed to progress, “high-leverage interventions that would move the needle are largely outside the control of individual (organisations)”, as this article on sustainability reporting in Harvard Business Review (HBR) shows.
Factors such as political priorities, economic growth, or national civic engagement are much more likely to have contributed to changes – but are rarely mentioned.
What are partnerships measuring?
Impact metrics used by partnerships are also often problematic in themselves. As stated in the HBR article above, “reporting is not a proxy for progress. Measurement is often nonstandard, incomplete, imprecise, and misleading.”
Data is often self-selected, self-reported, and not verified by independent institutions, although some partnerships are required by their boards to conduct regular independent evaluations.
Incomplete and misleading data is not only a matter of what is included in (or dropped from) progress and impact reports; a fudging of input, output, outcome and impact measures is also common when data is lacking (or does not show progress).
“A fudging of input, output, outcome and impact measures is common when data is lacking (or does not show progress).”
Because impact takes time to show up in data, and a direct contribution is difficult to prove, most partnerships resort to showcasing inputs or outputs (meetings held, funds raised, contracts signed) and hope that readers (and donors) make a leap of faith that these tie directly to impact figures.
More often, donors themselves push partnerships to report on these impact figures. “How many lives can we say we have saved?” is one of the most common reporting questions a (health-related) development partnership will hear from a donor.
“‘How many lives can we say we have saved?’ is one of the most common reporting questions a…partnership will hear from a donor.”
Such impact-driven communication creates a mutually beneficial relationship between a partnership and donors, whereby partnerships can use impact claims to increase fundraising asks, and donors can justify granting these funds based on these same impact claims. Whether impact was really driven by the partnership appears to be less important.
What are partnerships asked to measure?
The SDG targets on partnerships (SDG17) do not help clarify the situation.
There is no indicator or guidance on measuring impact (SDG Indicator 17.19.1 focuses on “the dollar value of all resources made available to strengthen statistical capacity in developing countries”), and the partnership-related targets listed are highly process-focussed, pushing for the creation of more partnerships rather than for partnerships to deliver more impact (SDG Indicator 17.17.1 is “the amount of United States dollars committed to (a) public-private partnerships”).
“Partnership-related targets listed (in SDG targets) are highly process focussed, pushing for the creation of more partnerships rather than for partnerships to deliver more impact.”
Quality, not just quantity
What is measured; who measures or verifies this; how links are justified from inputs through to impact; and how correlation, contribution, and attribution are differentiated are key questions that partnerships should be transparent about – and provide clear answers to.
Partnerships should also regularly consult stakeholders that are not invested in funding (e.g. national civil society) and publish and meaningfully engage with their evaluations. The current practice of asking a government official who has received millions in aid to publicly speak about the partnership is unlikely to result in more than talking points of praise, and an expectation to receive more funds.
If not for more impact, what are partnerships for?
Partners for Impact (PFI) was founded “to show not only that we are stronger when we work together in partnership, but that sustainable impact in international development can only be delivered by working in partnership. Impact requires meaningful, substantive partnerships.”
If partnerships are not able (or willing) to transparently and credibly measure and show their impact – in a way that is independently verified – we should look more closely at what they are there for. We should also ask whether they can effectively target resources and capacity, and course-correct when needed, if they are “flying blind”.
A mantra that has long guided our work is Helen Keller’s “Alone we can do so little; together we can do so much.” As this blog shows, credibly measuring and transparently communicating what exactly that “much” is that we have contributed to by working together is tricky – but critically important.
Key points summarised:
- International development partnerships often make ambitious impact claims. However, they often (intentionally, or pushed by donors) conflate correlation with causation and contribution with attribution, and fudge metrics across inputs, outputs, outcomes and impact.
- Transparency, the use of independent institutions to verify and inform what data is selected and published, and the publication of and meaningful engagement with evaluations, including from stakeholders such as national civil society, are key to the credibility of partnerships.
- If partnerships are not able (or willing) to work with impact data, we should take a closer look at their claims to deliver while “flying blind”, and at other motivations for partnering.
For questions, feedback, or input, we would love to hear from you. You can contact us here.