Saturday, April 02, 2005

Constructing "an auditable trail of intentions...."

A useful report has recently been produced by ODI (Lucas, Evans, Pasteur and Lloyd, 2004) on the current state of PRS monitoring systems. PRSs are national-level Poverty Reduction Strategies, promoted by multilateral and bilateral aid agencies. In that report they argue for more attention to the severe capacity constraints facing governments who are trying to monitor their PRSs. Donors need to take “a less ambitious attitude as to what can be achieved and a willingness to make hard choices when prioritising activities”. Later on, in discussing the range of indicator data that might be relevant, they note that “Given scarce resources, a focus on budget allocations and expenditures may well be an appropriate response, particularly if it involves effective tracking exercises with mechanisms to ensure transparency and accountability…Linking these data to a small set of basic service provision indicators that can reasonably reflect annual changes could provide a reasonable starting point in assessing if the PRS is on track.”

Meanwhile I have been working on the fringes of a PRS update process that is taking place in a West African country. While I agree with the line taken above, I am wondering now if even this approach is too ambitious! This will be the second PRS for the country I am working in. This time around the government has made it clear to ministries that their budgets will be linked to the contents of the PRS. This seems to have had some positive effect on their levels of participation in the planning groups that are drafting sections of the PRS. By now some draft versions of the updated PRS policies have been produced, and they have been circulated for comment within a privileged circle (including donors). Some attempts have been made at explicitly prioritising policy objectives, but only in one of five policy areas. Meanwhile a deadline is approaching at high speed for identifying and costing the programmes that will achieve these policy objectives. This is all due by the end of this month, April 2005. It is then expected that the results will feed into a public consultation, and then into the national budget process starting in June. However, as yet there is no agreed methodology for the costing process. As the deadline looms, the prospects increase for a costing process that is neither systematic nor transparent (aka business as usual).

If the process of constructing the costings is not visible, then it becomes very difficult to identify the specific linkages between specific PRS policy objectives and specific items in the national budget. So while we can, on ODI's good advice, monitor budget allocations and expenditures, what they mean in terms of the PRS policy objectives will remain an act of literary interpretation, something that could easily be questioned.

The IMF and UNDP have, I think, both had some involvement in the costing of broad policy objectives, including the achievement of the MDGs. However, from what I can see these costings have been undertaken by consultant economists, primarily as technical exercises. I am not sure this is the right approach. The budgets of ministries are political resources. The alternative approach is to ask ministries to say how they will use their budgets to achieve the various PRS policy objectives, and while doing so make it clear that their performance in achieving those selected objectives with their budgets will be assessed. To do this we (or an independent agent) will need what can be described as “an auditable trail of intentions”: from identifiable policy objectives to identifiable programmes, with identifiable budgets, to identifiable outputs, and maybe even identifiable outcomes.

There is an apparent complication. This auditable trail will not be a simple linear trail, because a single policy objective can be addressed by multiple programmes, and a single programme can address more than one policy objective. The same applies to the relationship between a ministry's programmes and outcomes in poor people's lives. However, an audit trail can be mapped using a series of linked matrices (each of which can capture a network of relationships). These could include the following: a PRS Policy Objectives X Ministry's Programmes matrix, a Ministry's Budget Lines X Ministry's Programmes matrix, a Ministry's Programmes X Outputs matrix, and an Outputs X Outcomes matrix. This seems complex, but so is the underlying reality. As Groucho Marx said when his friend complained that life is difficult, “Compared to what?”
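To make the linked-matrices idea a little more concrete, here is a minimal sketch in Python. The objective, programme and output names are invented for illustration only; the point is simply that composing two adjacency matrices traces the trail from policy objectives, through programmes, to outputs.

```python
# A minimal sketch of the "linked matrices" idea, with invented names.
# Each matrix records which row item is linked to which column item (1 = linked).

import numpy as np

objectives = ["Raise primary enrolment", "Reduce child mortality"]
programmes = ["School building", "Teacher training", "Rural clinics"]
outputs = ["Classrooms built", "Teachers trained", "Clinics staffed"]

# PRS Policy Objectives X Ministry's Programmes
obj_x_prog = np.array([[1, 1, 0],   # one objective draws on two programmes
                       [0, 0, 1]])  # the other draws on one

# Ministry's Programmes X Outputs
prog_x_out = np.array([[1, 0, 0],
                       [0, 1, 0],
                       [0, 0, 1]])

# Composing the two matrices yields an Objectives X Outputs matrix:
# which outputs should be auditable against which policy objectives.
obj_x_out = (obj_x_prog @ prog_x_out) > 0

for i, obj in enumerate(objectives):
    linked = [out for j, out in enumerate(outputs) if obj_x_out[i, j]]
    print(f"{obj}: {linked}")
```

The same composition could be extended backwards to budget lines and forwards to outcomes, one matrix per link in the trail.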

Postscript: Parallel universes do exist. Proof: a five-year national plan with lists of policy objectives at the front and lists of programmes (with their budgets) at the back, but no visible connections between the policy objectives and the programmes and budgets.


Identifying the impact of evaluations: Follow the money?

Some years ago I was involved in helping the staff of a large South Asian NGO to plan a three-yearly impact assessment study. It was almost wholly survey-based. This time around a colleague and I managed to persuade the unit responsible for the impact assessment study to take a hypothesis-led approach, rather than simply trawl for evidence of impact by asking as many questions as possible about everything that might be relevant. The latter is often the default approach to impact assessment, and it usually results in very large reports being produced well after their deadlines.

With some encouragement the unit managed to generate a number of hypotheses in the form of “if X Input is provided by our NGO and Y Conditions prevail, then Z Outcomes will occur” (aka Independent variable + Mediating variable = Dependent variable). Ostensibly they were constructed after consultations with line management staff, to build their interest in, and ownership of, what was being researched. The quality of the hypotheses that were generated was not that great, but things went ahead. Questions were designed that would gather data about X, Y and Z, and cross-tabulation tables were constructed that would enable analysis of the results, showing with/without comparisons. The survey went ahead, the data was collected and analysed, and the report was written up. The analytic content of the report was pretty slim, and not very well embedded in past research done by the NGO. But it was completed and submitted to management, and to donors. My inputs had ended during the drafting stage. The study then seemed to sink without trace, as so often happens.
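As an illustration of the kind of with/without comparison involved, here is a small sketch in Python. The variable names and survey responses are invented, not the NGO's actual questions or data.

```python
# A sketch of one with/without cross-tabulation: does the outcome (Z)
# differ between groups that did and did not receive the input (X)?
# The data below is invented purely for illustration.

import pandas as pd

survey = pd.DataFrame({
    "received_training": ["yes", "yes", "no", "no", "yes", "no", "yes", "no"],
    "group_performing":  ["yes", "no", "yes", "no", "yes", "yes", "no", "no"],
})

# Cross-tabulate outcome against input, with row and column totals.
table = pd.crosstab(survey["received_training"],
                    survey["group_performing"], margins=True)
print(table)
```

In the real study each hypothesis would need its own such table, ideally with the Y Conditions used to split the sample further.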

A year or so later a report was produced on the M&E capacity-building consultancy that I had been part of when all this happened. That report included, amongst other things, a reference to the impact assessment study. It said: “The study also produced some controversial findings in relation to training, as it suggested that training was a less important variable in determining the performance of groups than had previously been thought. This finding was disputed at the time, but when [the NGO] had to make severe budget cuts in 2002-3 following the blocking of donor funds by the [government], training was severely cut. There is though still an urgent need for [the NGO] to undertake a specific study to review the relative effectiveness of different types of training.”