Tuesday, August 16, 2011

Evaluation methods looking for projects or projects seeking appropriate evaluation methods?


A few months ago I carried out a brief desk review of 3ie's approach to funding impact evaluations, for AusAID's Office of Development Effectiveness. One question that I did not address was "Broadly, are there other organisations providing complementary approaches to 3ie for promoting quality evaluation to fill the evidence gap in international development?"

While my report on 3ie examined the pros and cons of 3ie's use of experimental methods as a preferred evaluation design, it did not look at the question of appropriate institutional structures for supporting better evaluations. Yet you could argue that choices made about institutional structures have more consequences than those involving the specifics of particular evaluation methods. The question quoted above seems to contain a tacit assumption about institutional arrangements, i.e. that improvements in evaluation can best be promoted by funding externally located specialist centres of expertise, like 3ie. This assumption seems questionable, for two sets of reasons that I explain below. One is to do with the results such centres generate; the other concerns the neglected potential of an alternative.

In the "3ie" (Open Window) model, anyone can submit a proposal for an evaluation of a project implemented by any organisation. This approach is conducive to the 'cherry picking' of projects that are evaluable by experimental methods, and to the collection of evaluations representing a miscellany of project types - about which it will be hard to generate useful generalisations. It also risks leaving an unknown number of other projects unevaluated, because they don't fit the prevailing methodological preferences.

In the alternative scenario, the funding of evaluations would not be outsourced to any specialist centre(s). Instead, an agency like DFID would identify a portfolio of projects needing evaluation - for example, initiatives focusing on climate change adaptation. DFID would call for proposals for their evaluation and then screen those proposals, largely as it does now, but perhaps seeking a wider range of bidders.

Unlike the present process, DFID would then offer funding to the bidders who had provided, say, the best 50% of the proposals, so that they could develop those proposals in more detail. At present there is no financial incentive to do so, and any time and money already spent on developing proposals is unlikely to be recompensed, because only one bidder will get the contract.

The expected result of this "proposal development" funding would be revised and expanded proposals that outlined each bidder's proposed methodology in considerable detail, in something like an inception report. All the bidders involved at this stage would need access to the same set of project documents and at least one collective meeting with the project holders.

The revised proposals would then be assessed by DFID, but with a much greater weighting towards the technical content of the proposal than exists at present. This second-level assessment would benefit from the involvement of external specialists, as in the 3ie model; DFID's Evaluation Department already does this for some evaluations through the use of a quality assurance panel. The best proposal would then be funded as normal, and the evaluation carried out.

Both the winning and losing technical proposals would then be put in the public domain via the DFID website, in order to encourage cross-fertilisation of ideas, external critique and public accountability. This is not the case at present: all bidders operate in isolation, with no opportunities to learn from each other. The same appears to be the case with 3ie, where the full text of technical proposals is not publicly available (even for those that were successful). Making the proposals public would mean that the proposal development funding had not been wasted, even where a proposal was not successful.

In summary, with the "external centre of expertise" model there is a risk that methodological preferences become the driving force behind what gets evaluated. The alternative is a portfolio-of-projects-led approach, in which interim funding support is used to generate a diversity of improved evaluation proposals, which are later made accessible to all and can then inform future proposals.

A meta-evaluation might be useful for testing the efficacy of this project-led approach. Matched kinds of projects also needing evaluation could continue to be funded through the pre-existing mechanisms (e.g. in-country DFID offices). Paired comparisons could later be made of the quality of the evaluations subsequently produced by the two different mechanisms. Although there would probably be multiple points of difference, it should be possible for DFID, and any other stakeholders, to prioritise their relative importance and come to an overall judgement about which mechanism has been most useful.

PS: 3ie seems to be heading in this direction, to some extent. 3ie now has a Policy Window, through which it has recently sought applications for the evaluation of projects belonging to a specific portfolio ("Poverty Reduction Interventions in Fiji", implemented by the Government of Fiji). Funding is available to cover the costs of the successful bidder (only) to visit Fiji "to develop a scope of work to be included in a subsequent Request for Proposal (RFP) to conduct the impact evaluation". Subject to 3ie's approval of the developed proposal, 3ie will then fund the implementation of the evaluation by that bidder. The success of this approach will be worth watching, especially its ability to ensure the evaluation of the whole portfolio of projects (which is likely to depend on 3ie having some flexibility about the methodologies used). However, I am perhaps making a risky assumption here: that the projects within the portfolio to be evaluated have not already been pre-selected on the grounds of their suitability to 3ie's preferred approach.

PPS: I have been reading the [Malawi] CIVIL SOCIETY GOVERNANCE FUND - TECHNICAL SPECIFICATION REFERENCE DOCUMENT FOR POTENTIAL SERVICE PROVIDERS. In the section on the role of the Independent Evaluation Agent, it is stated that the agent will be responsible for "The commissioning and coordination of randomised control trials for two large projects funded during the first or second year of granting." This specification appears to have been made prior to the funding of any projects. So, will the fund managers feel obliged to find and fund two large projects that will be evaluable by RCTs? Fascinating, in a bizarre kind of way.
