August 11, 2009

Problem statement

Organizations working in social change are under real pressure to use scarce funds as effectively as possible. They need to ask and answer the question, "What difference do we make?" To do this they need to measure, assess and report their performance, and the results that flow from their efforts.

But the conventional tools for planning, monitoring and evaluation often fall short. Managers and funders alike are left with inadequate data to make key decisions and fulfil their weighty responsibilities.

A growing body of literature analyses why. Please see the related research pages, which support the case for the Constituent Voice methodology. Our analysis draws heavily on this research from respected academics and experienced practitioners, and it forms the backbone of our work.

Our summary of the prevailing dilemma is that implementing agencies tend to be more accountable to their funders than to their intended beneficiaries (whom we refer to as primary constituents). They do this despite their commitments to those beneficiaries because it is funding that enables work, rather than work (defined as demonstrated results) enabling funding.


As it stands today, agencies have major incentives to tell funders that they will deliver social change on their own and with certainty. Commonly used planning tools (like logical frameworks) tend to reinforce this.

This conflicts with reality. Most change processes are complex, unpredictable and involve many other constituents who are outside agencies’ control. In order to be developmental, external agencies have to respect other constituents’ autonomy – i.e. their right to make their own decisions. Agencies have to build effective alliances; they cannot rely on telling people what to do.

Results-based plans often fall into the trap of being about just one organization and assuming that social change can be predicted in advance. This often makes them unreliable guides to implementation, monitoring and learning.

Assessing and reporting

There are well documented problems in assessing social change. Some of the key components are hard to measure (like changes in attitudes). Different people experience change differently – there are always winners and losers. Changes usually become apparent over a longer time frame than the funding cycle, often much longer. Finally, it can be very hard to attribute changes to one specific intervention. There are too many other factors involved; causal chains tend to be long and interdependent.

Agencies also have incentives to report to funders that they have successfully achieved what they said they would achieve. This means that funders and senior managers lack data about their real impact, making it hard for them to learn how to improve.


Partnership and participation

Almost all agencies cite ‘partnership’ or ‘participation’ or ‘accountability to beneficiaries’ as cornerstones of effective practice.

These terms all imply that agencies support other organizations and people to have a greater say in decisions about their own priorities. Agencies can only do this by building effective relationships with these other constituents.

But few agencies define what these concepts mean in practical terms, and even fewer measure how well they actually achieve them. Instead, they rely on anecdotal reports and the commitment of individual managers.

Rigid approaches to planning and reporting can actively prevent participation. Success can become defined as "did we do what we said we would do?" — which means sticking to the original plan, rather than building flexible and trusting relationships with other constituents.