Friday, October 30, 2009

On the poverty of baselines and targets...

I have been surprised to see how demanding DFID has become on the subject of baseline data. On page 13 of the new DFID Guidance (on using the new formatted Logical Framework) it is stated that "All projects should have baseline data at all levels before they are approved. In exceptional circumstances, projects may be approved without baseline data at Output level..." Closer to the ground, I have witnessed a UK NGO being pressed by the DFID-appointed managers of a funding mechanism to deliver the required baseline data, despite the fact that the NGO's project will be implemented in a number of countries over a period of years, not all at once.

Meanwhile, in Uganda and Indonesia, I am watching two projects coming to an end. Both had baseline data collected shortly after they started. Neither is showing any sign of intending to do a re-survey at the end of the project period. Is anyone bothered? Not that I can see, including DFID, which is a donor supporting one of the projects. And in both cases the baseline surveys were expensive investments. To make matters worse, in one country the project performance targets were set before the baseline study, and in the other they have never really been agreed on.

I have just completed the final review of one project. We have diligently compared progress made on a set of indicators against all the original targets. There are of course the usual problems of weak and missing data, and questionable causal links with project interventions. But what bothers me more is how outdated and ill-fitting some of these initial performance measures are, and how little justice this mode of assessment seems to do to what the project has been able to achieve since it started, especially the flexibility of its response in the face of the changing needs of the main partner organisation. Of even greater concern is the fact that this project is being implemented in a large number of districts, in a country that has been going through a significant process of decentralisation. Each district's capacities and needs are different, and not surprisingly the project's activities and results have varied from district to district. There is in fact no single project. Yet our review process, like many others, has in effect treated these district variations as "noise" obscuring what were expected to be region-wide trends over time.

I am now working on some ideas about how to do things differently in my next project review, in the same country. This time the focus will be more on internal comparisons: (a) between locations, and (b) between time periods within the life of the project.
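To make the idea of internal comparisons more concrete, here is a minimal sketch in Python. The district names, periods, and indicator scores are entirely hypothetical, invented only to illustrate the two comparisons; they are not data from the projects discussed above:

```python
# Hypothetical sketch: rather than judging each district against one
# project-wide target, compare districts with each other and with
# their own earlier performance. All figures below are invented.

# Indicator scores by district, for an early and a late period
scores = {
    "District A": {"early": 40, "late": 55},
    "District B": {"early": 60, "late": 62},
    "District C": {"early": 30, "late": 50},
}

# (a) Comparison between locations: rank districts by latest score
by_location = sorted(scores, key=lambda d: scores[d]["late"], reverse=True)

# (b) Comparison between time periods: change within each district
change = {d: p["late"] - p["early"] for d, p in scores.items()}

print(by_location)  # districts ordered by latest performance
print(change)       # within-district change over the project period
```

Note how the two views can tell different stories: District B ranks highest at the end, yet shows the least improvement, while District C starts lowest but changes most. Neither pattern would be visible if district results were averaged into a single region-wide trend.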