Friday, December 16, 2005

The "attribution problem" problem

I have lost count of the number of times I have seen people refer to "the attribution problem" as though doing so were a magic spell that dispelled all responsibility to do anything, or to know anything, about the wider and longer-term impacts of a project. Ritualistic references to the "attribution problem" are becoming a bit of a problem.

In the worst case I have seen an internationally recognised consultancy company say that "our responsibilities stop at the Output level". And while other agencies might be less explicit, this is not an uncommon position.

This notion of responsibility is very narrow, and misconceived. It sees responsibilities in very concrete terms: delivering results in the form of goods or services provided.

A wider conception of responsibility would pay attention to something that can have wider and longer-term impact: the generation of knowledge about what works and does not work in a given context. Not only knowledge about how to better deliver specific goods or services, but about their impact on their users, and beyond. Automatically, that means identifying and analysing the significance of other sources of influence in addition to the project intervention.

Contrary to some people's impressions, this does not mean having to "prove" that the project had an impact, or working out what percentage of the outcome was attributable to the project (a concern one project manager recently expressed). Something much more modest in scale would still be of real value. Some small and positive steps forward would include: (a) identifying differences in outcomes within the project locations [NB: not doing a with-without trial]; (b) identifying different influences on outcomes across those locations; (c) prioritising those influences according to the best available evidence at the time; (d) doing all the above in consultation with actors who have identifiable responsibilities for outcomes in these areas; (e) making these judgements open to wider scrutiny.
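
Steps (a)-(c) can be sketched in a few lines of code. This is a purely hypothetical illustration: the locations, influences and evidence scores below are invented, and in practice the scores would come from the consultative judgements described in steps (d) and (e).

```python
# Outcomes observed in each project location (step a). All data invented.
outcomes = {
    "district_A": {"school_enrolment": 0.72},
    "district_B": {"school_enrolment": 0.55},
}

# Candidate influences on those outcomes, per location (step b), each with a
# rough evidence score that would be agreed in consultation with local actors.
influences = {
    "district_A": [("project_training", 0.8), ("new_road", 0.6), ("drought", 0.3)],
    "district_B": [("project_training", 0.4), ("drought", 0.7)],
}

def prioritise(influences_for_location):
    """Rank influences by best-available evidence score (step c)."""
    return sorted(influences_for_location, key=lambda pair: pair[1], reverse=True)

for location, candidates in influences.items():
    ranked = prioritise(candidates)
    print(location, "->", [name for name, _ in ranked])
```

The point of the sketch is only that the project intervention is one influence among several, and that its relative importance can differ between locations.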

This may not seem to be very rigorous, but we need to remember our Marx (G.), who when told by a friend that "Life is difficult", replied "Compared to what?" Even if project managers choose to ignore the whole question of how their interventions are affecting longer-term outcomes, other people in the locations and institutions they are working with will continue to make their own assessments (formally and informally, tacitly and explicitly). And those judgements will go on to have their own influences, for good and bad, on the sustainability and replicability of any changes. But in the process their influences may not be very transparent or contestable. A more deliberate, systematic and open approach to the analysis of influence might therefore be an improvement.

PS: On the analysis of internal variations in outcomes within development projects, you may be interested in the Positive Deviance initiative at


  1., December 17, 2005 12:29:00 PM

    Rick, I really appreciate your views on this issue. In my own experience, however, there is a relatively simple way forward. For social change organisations operating in environments that are characterised by complex, dynamic and open systems, the solution is to identify contribution instead of attribution of outcomes. Easier said than done, of course.

    The problem begins with agreeing on what we mean by “outcomes”. I find the general definition (see for example the OECD Glossary of Key Terms in Evaluation and Results Based Management, 2002) to be insufficient. OECD proposes: “The products, capital goods and services which result from a development intervention….” The devil is in the “resulting from” that leads to the despair over the difficulty, or impossibility, of attribution. (The OECD translation of outcome into Spanish as “efectos directos” emphasises the conceptual issue.) In contrast, I find the emphasis of the Evaluation Unit at IDRC on “development is accomplished through changes in the behaviour of people” to be more helpful.

    Today, my generic, working definition of outcomes is “changes in the behaviour, relationships, activities, or actions of individuals, groups or organisations that are a direct or indirect, partial or total result of the activities or outputs of the development actor.” I rush to clarify that although generally these changes are the result of multiple actors and factors, there are some—a presidential decree for example—that may be attributed to one forceful, effective actor.

    I have used this definition in three quite different evaluations this year—for a development agency’s multi-annual environmental programme in Latin America, a 36-member European network and a smaller Asian federation. First, I customise the outcome definition for each organisation. Then, I ask the organisations being evaluated to a) formulate the outcomes they have achieved and b) provide the evidence of their contribution. As evaluator, quickly my role becomes that of facilitator and even capacity-builder for these organisations as they identify and communicate concretely and rigorously the changes to which they have contributed. The results are inspiring—for the development actors, for those who commissioned the evaluations and for me as the evaluator.

  2. Hi Ricardo

    Three comments on your comments:
    - I am not sure what the difference is between identifying the contribution versus attribution
    - I am in favour of your more actor-oriented description of what an outcome is. This makes outcome descriptions more communicable, more comprehensible and more verifiable than descriptions of abstract events or processes (which are endemic)
    - I don't feel comfortable with focusing too exclusively on the contribution the development organisation/project has made to the outcome. A wider focus that also looked at the role of other actors, and the context, might help generate more useful generalisable knowledge that would help make the outcomes more replicable elsewhere.

    regards, rick

  3., December 18, 2005 11:36:00 AM

    And mine on yours:

    - Attribution or contribution – What is the difference?:

    Thanks for calling me on this. The difference is probably only one of connotation. As you know, development agencies seek to hold their grantees accountable for results. In this process "attribution" has come to mean there is evidence that predefined results were achieved AND would not have been achieved without the activities that we funded. "Contribution", however, is more modest and realistic: there is evidence of a reasonable relationship between outcomes and the activities of a grantee. Also, it is more viable to hold grantees accountable for their contribution to development, and for them to demonstrate it.

    - Looking at the role of other actors:

    The point is well taken. In my next evaluation assignment I will expand the question on evidence of contribution to that of other actors. It will be most interesting because my experience so far is that it is a major challenge, for funding agencies and NGO grantees alike, to agree on and communicate in a verifiable manner the outcomes to which they have contributed. But, part of the difficulty is an ethical dilemma--how to claim to be even partially responsible for results to which others have contributed as much or more? So, it occurs to me that tackling that challenge head-on would be productive.

  4. Hi Ricardo,

    (responding to your second paragraph...)

    The ethical dilemma of how to claim responsibility for results that others are likely to have contributed to arises, in my view, from problems at the planning stage. "M&E is the back-end of strategy" – that's where a lot of M&E problems originate. If development agencies start by developing plans that acknowledge the role of other actors, then expectations will be established that: (a) they will not be the only agent of influence, and (b) they will have to generate information about the influence of those other actors, as well as themselves. Linear logic models (e.g. the Logical Framework) are not conducive to this type of contextualised planning (and learning). Network models are more conducive, especially those which describe actor networks.
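
    The contrast can be sketched very simply. In this hypothetical illustration (all actor names invented), a logframe is a single chain, while an actor network is a directed graph in which other actors' influence on an outcome is explicit and traceable.

    ```python
    # Linear logic model (logframe): a single chain from activities to impact.
    logframe = ["activities", "outputs", "outcomes", "impact"]

    # Actor network: influence is a directed graph. Edges point from an
    # actor to whatever it influences, so multiple influences on the same
    # target can be represented and questioned.
    influence = {
        "project": ["farmers_assoc", "local_govt"],
        "local_govt": ["farmers_assoc"],
        "donor": ["project"],
        "farmers_assoc": ["household_incomes"],
    }

    def influencers_of(target, graph):
        """All actors with a direct influence edge to `target`."""
        return sorted(a for a, heads in graph.items() if target in heads)

    print(influencers_of("farmers_assoc", influence))  # ['local_govt', 'project']
    ```

    In the chain there is only ever one upstream cause; in the graph, asking "who influenced this outcome?" naturally returns several actors, which is exactly the information a contribution analysis needs.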

    I go back to an earlier point here. The end result that aid organisations should be looking for is new knowledge about how change is taking place, not simply the reporting of change that has happened (and then making some claim on that). The former is usable by many others; the latter is much less so.

  5. Hola Rick,

    With the best for 2006 and in response to yours of 23 December.

    I agree...especially when the emphasis in monitoring and evaluation is on process rather than product, a big challenge in these times of results-based management. The context of development is so complex, open and dynamic that I am more and more convinced the planning challenge is one of addressing uncertainty. And if there is uncertainty about what you will be able to do, all the more uncertain is what you will achieve, even in terms of activities and outputs. And when we are talking, as we are, about end results, SMART predefinition of outcomes will tend to lead you away from engaging with other actors who aim for the desirable social change, instead of simply the possible.

    For planning, I have been working with a strategic risk management approach that focuses on enabling an organisation to consciously and continually seize opportunities and confront dangers and threats. The roles of other social actors (and external and internal factors) figure prominently in this methodology, but you remind me that I must re-visit your network model. Thanks.