Wednesday, December 22, 2004

Learning circles and loops: Time for some more sophisticated representations

I cannot count the number of times I have seen papers and books about M&E include a circle showing an idealised learning cycle: one where planning leads to implementation, which then leads to a review, which then feeds back into planning. You can see dozens of examples if you do a Google Image search for "learning cycle". Look here for the Google search results.

It is about time we developed some less simplistic and more realistic representations of learning cycles. We can do this by situating our thinking about learning in more actor-oriented models of what is happening, rather than models that focus on abstract and disembodied processes. And by thinking about multiple actors rather than single individuals going through their own action-reflection cycles.

The diagram below shows four technical units proposed for a new project management office in a national poverty program that I have been working with recently. Similar units could probably be found in many organisations. The lines show some expected information flows, especially in relation to the M&E unit. Reading the diagram, it is clear that the M&E unit is involved in a number of learning cycles, each of which involves a different actor (or unit in this case). Those learning relationships with other units will have different priorities, which may well vary over time. And the other units will be engaged in learning cycles with other units. The diagram would be much more complex if those other relationships were also included.


The state of these learning loops can be investigated empirically, and documented in an actor x actor matrix [the actors can be individuals or groups of individuals, as found in each of the units]. Cells in the matrix can detail what information has been received from the row actor by the column actor. Most Significant Change monitoring can provide the qualitative details of the information being exchanged between the two actors connected by a given cell. Then the relative importance of the information described in each row can be ranked by the provider (and the column contents ranked by the recipient). We can then examine how consistent the providers' and recipients' rankings are with each other. We can also compare the participants' views with any formal model the organisation might have about what information should be exchanged between the actors involved, how, and when.
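As a rough sketch of how one row of such a matrix might be analysed (the unit names, cell contents and rankings below are all invented for illustration), each provider ranks the importance of what it sends to each recipient, each recipient ranks what it receives, and a rank correlation shows how consistent the two views are:

```python
# A sketch of one row of an actor x actor information-flow matrix, using
# invented data. Here the M&E unit is the provider (row actor) and three
# other (hypothetical) units are the recipients (column actors).
from scipy.stats import spearmanr

# What the M&E unit reports sending to each unit (cell contents, summarised)
flows = {
    "Planning": "quarterly progress data and MSC stories",
    "Finance":  "expenditure-linked output data",
    "Training": "feedback from field monitoring visits",
}

# Rank of importance given by the provider (M&E unit), 1 = most important
provider_rank  = {"Planning": 1, "Finance": 3, "Training": 2}
# Rank of importance given by each recipient to what it receives from M&E
recipient_rank = {"Planning": 1, "Finance": 2, "Training": 3}

units = sorted(flows)
rho, _ = spearmanr([provider_rank[u] for u in units],
                   [recipient_rank[u] for u in units])
print(f"Provider vs recipient agreement (Spearman rho): {rho:.2f}")
# Low agreement flags learning loops worth investigating; the same comparison
# can be made against any formal model of who should be exchanging what.
```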

If you want to explore this approach further in your own organisation, and want to exchange ideas on how to do so, and on the results, let me know: Email the Editor.

Wednesday, August 04, 2004

Open Government and Open Aid

I have spent the last two weeks working with a government department in an African country that is charged with the responsibility of monitoring the progress with the country's Poverty Reduction Strategy (PRS). The PRS is meant to be a key government policy statement, which then becomes the focus of international donor support to poverty reduction efforts in that country.

Over the last few years donors have been supporting this department by funding the costs of producing annual and other progress reports on the government's implementation of the PRS. In the process of doing so they have obscured a useful signal of government commitment to the PRS: are they willing to invest their own scarce resources into monitoring the PRS? (This is a functioning government with an estimated GDP growth rate of 4.8% in 2003.)

Another interesting signal is the government's willingness to publicise the PRS, and more importantly the progress reports on its implementation. There is a communications strategy, but it is not making much progress because of a lack of resources (though resources have been offered). Some reports have been printed and distributed but knowledge of their availability remains limited. They are not referred to on the department's website. In fact the department's existing website is notable for its invisibility. It is not linked to the government's main website, nor can it be found via search engine inquiries (though it is now being restructured). Other reports have been produced and distributed within government, but because they have remained in draft status they have not been made publicly available. The focus has been on the production of reports, as mentioned in government agreements with donors, and much less on the "dissemination" of those same reports.

Meanwhile donor support to the department is still coming through multiple individual donors, for multiple separate and overlapping activities. This does not encourage the development and implementation of a coherent and comprehensive plan by the department. Quite the reverse. There are clearly some perverse incentives at work. The department itself has been openly reluctant to share information with individual donors about what other donors are doing, possibly because this gives the department more room to manoeuvre amongst the various donor agendas and some freedom to pursue its own priorities (which remain out of sight). And individual donors are still being tempted to "cherry pick" specific activities for funding that could give them some influence on the PRS processes.

My current view is that this process needs to be radically changed. Instead of talking about "dissemination" of information about PRS plans, progress and revisions, the focus should be on developing and implementing a "disclosure" policy. Dissemination is a weasel word, a word that fails to say exactly what is meant. Dribbling out information, even within the government, could legitimately count as dissemination. But is that what is needed? No. A disclosure policy is different. It is a statement about what types of documents will automatically be made publicly available to anyone, without constraint. Printing 5,000 copies of a report may count as dissemination, but it does not count as disclosure. If those copies are available on a website, or information about their availability is made available via a website or public notice board or newspaper, then this does count as disclosure. What is crucial here is the extent to which control over access to information is handed over. Then anyone can theoretically participate in its use. This is quite different to various engineered "peoples participation" exercises being promoted by some donors as part of the PRS dissemination and revision process. These are by necessity limited in their scale and frequency, and much more expensive.

Disclosure policies need to be adopted by the supporting donors as well as the assisted department. They should commit donors to public disclosure of at least the following types of information: what government plans they have funded, what sort of support they have provided (including budgets), and what progress and financial reports they have received back. Ideally this information would be publicly accessible on the website of the government department being supported, where it can be seen in context, not just on the donor's website. This is clearly ambitious. Right now the most immediate step that needs to be taken is to get the relevant donors to share this type of information amongst themselves, never mind with the public at large.

Some good examples of best practice need to be identified and promoted. One is the Government of Uganda website at http://www.finance.go.ug/peap_revision/ Here the government of Uganda has provided access not only to PRS documents, but also to various draft forms of those documents, along with plans and progress with the PRS revision process, and contact information about the key people leading the process. Unfortunately there is no link to or from the government's main website at http://www.government.go.ug/

Postscript: I have just noticed that PANOS are offering awards for writing on the subject of "Transparency, good governance and democracy: Do ICTs increase accountability?" Four awards of $1,000 each will be made for the best journalism on this topic produced by journalists in developing and transition countries.


Tuesday, August 03, 2004

No more paradigm changes please!

"It is often assumed that participatory methods are suitable for
gathering qualitative information but that when hard, reliable,
numerical data are required we must turn instead to surveys and
questionnaires with their pre-determined categories and neat tick
boxes. In fact this is a myth, albeit one sustained by some with
vested interests in maintaining their "expert" status and privileges
."

This is the first paragraph of a paper titled "Party Numbers: quantification through participation", which was published in the May 2004 issue of the Enterprise Impact News newsletter [Issue 30]. This two-page paper was a summary of a longer paper by the same authors (Linda Mayoux and Robert Chambers) titled "Reversing the Paradigm: Quantification and Participatory Methods".

I have provided a brief critique of the two page summary, which is now available on the MandE NEWS website, here at www.mande.co.uk/docs/CommentsChambers&Mayoux.doc.

Amongst other things, my comments cover the following:
- the need for fewer loose references to paradigm changes
- less use of straw man arguments about different methods of impact assessment
- the need to think about which methods are appropriate in which contexts, rather than making broad generalisations about suitability of methods
- making more use of ranking methods, which are very simple forms of measurement that can be used in both inductive and deductive approaches to impact assessment
- limiting our ambitions about empowering people when doing impact assessments

After I emailed these comments to Linda and Robert, Linda replied with her comments on what I had said. I have since replied to Linda with some further comments on the issues she raised.

Please add your own comments to this ongoing dialogue, by clicking on the orange "comments" link below.

Monday, July 12, 2004

Where have all the evaluations gone?

Try finding the DFID Evaluation department on the DFID website, at http://www.dfid.gov.uk/ Not an easy task. Compare it to the SIDA website at http://www.sida.se/Sida/jsp/polopoly.jsp?d=107 , where it is very visibly placed on the first page.

Try finding copies of recent DFID evaluations. The most recently numbered report is number 637, but there are only 125 actually listed on the website. There are about 10 per year listed from 1989 to 1998, but only 1 to 2 listed each year from 1999 to 2001, and none for 2003 onwards. Notably absent is the Development Effectiveness Report, produced in 2001 (though accessible via a keyword search, but only if you already knew its title). And these are all listed in a section that is accessed through the Freedom of Information link on the DFID website!

Over on the USAID EvalWeb website at http://www.dec.org/partners/evalweb/ there is evidence of a similar decline in the availability of evaluations. The website notes that "An impetus for this site is the decline in evaluations and assessments that are submitted to USAID's document repository, the Development Experience Clearinghouse. From 529 in 1994 to a projected 135 in 2003."

Why the decline in evaluation activity, or at least in the public availability of those reports, or at least in information about their public availability? The increased channelling of DFID funds through sector-wide and direct budget support funding mechanisms seems an unlikely explanation. Given the risks, especially with the latter, this development might be expected to generate more, rather than less, evaluation activity. And why all the gaps in the series of evaluation reports that are listed on the website? Why not list all that were undertaken and, if any are out of print, say so?

Wednesday, June 30, 2004

Projects versus Project Funding Mechanisms

Over the last few years I have had some involvement with the M&E of three project-funding mechanisms. One in the UK, one in Australia and one in south Asia. In all three cases almost all the thinking about the assessment of performance was focused on the analysis of the individual funded projects, along with some synthesis studies designed to make a more aggregate assessment of the results of particular categories of projects. The amount of attention given to the assessment of the project funding mechanism varied from a modest amount to none at all. I think this is almost the reverse of what should be the case.

All funding mechanisms that involve calls for proposals and then use a screening process to assess those proposals have, in effect, a theory of what makes a viable project. Inasmuch as the people reviewing proposals feel they can rate some proposals as better than others, they probably also have a theory of what makes a good project, and a not-so-good project. These theories take the form of a view about which bundle of attributes, discussed during the review process, makes the most difference to how successful a project is, in the short and long term. In an ideal world, feedback from project-level monitoring and evaluation activities would lead to refinement of these theories about good projects, and this would be evident in changed selection criteria for accepting and funding project proposals. The funding mechanism would get better and better at spotting and funding good projects. In reality I have never seen this sort of feedback link in operation, at least in explicit form.

There are some broad types of theories that would be well worth testing, because they have some identifiable and significant consequences. One is supported by some prior experience: that it is not the details of the proposed project activities, but the nature of the implementing partner, that makes the difference between good, mediocre and bad project outcomes. If this is true it could prompt a substantial re-weighting of emphasis in many project selection procedures, away from a focus on project activities and towards assessment of the project-holding organisation. Another possibility is that there is in fact no significant correlation between how well proposals fit selection criteria and their subsequent performance. One possible response to such findings would be to slim down the project selection procedure and to intensify project monitoring and the ongoing capacity building of funded projects.
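As a minimal sketch of how the second possibility could be checked, assuming a funding mechanism keeps both its proposal appraisal scores and some later performance ratings (all the figures below are invented for illustration):

```python
# Sketch: do proposal appraisal scores predict later project performance?
# Invented data for illustration only.
from scipy.stats import spearmanr

appraisal_scores   = [78, 65, 90, 55, 82, 70, 60, 85]   # score given at selection
performance_scores = [ 3,  4,  3,  2,  5,  4,  3,  2]   # e.g. a 1-5 rating at review

rho, p = spearmanr(appraisal_scores, performance_scores)
print(f"Spearman rho = {rho:.2f}, p = {p:.2f}")
# A rho near zero would support the suspicion that selection criteria do not
# predict performance, pointing towards lighter selection procedures and
# heavier ongoing monitoring and capacity building of funded projects.
```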

Sunday, June 20, 2004

Treating organisations as though they were machines

The following comments are an excerpt from a response I made to a paper by Alison Scott (DFID), "Assessing and Monitoring Multilateral Effectiveness", available online here.

===============

The multilateral organisation as a machine

19. I was disconcerted to read section 9, on the use of the multilaterals' own assessments of their effectiveness: not only because of the multilaterals' own lack of capacity to assess their effectiveness, but also because of the conclusion in para 9.3 that these efforts could not be used, and that instead DFID would make its own judgements.

20. When we assess the performance of a machine we ask what it is doing and how that matches against what we expect it to be doing. When we assess the performance of an individual or an organisation, we also ask "what do they think they are doing?" A person is expected to have agency; to be aware of choices and to make responsible choices. It is that awareness and responsibility which is the foundation of legal judgements that can make the difference between a death sentence, imprisonment, or freedom. On a more mundane level, it is an individual's (or organisation's) knowledge about what has happened which makes the difference between whether what has been done can be changed, avoided in future, or replicated. The implication for MEFF is that DFID should be assessing the multilateral's knowledge about what it has been doing, and the effects of what it has been doing. That is what matters.

21. Fortunately, most organisations know more than the sum total of what has been captured by their M&E systems. Knowledge is also captured in other documents, produced by other sections of the organisation. But more importantly, it exists, often in tacit and informal forms, in the heads of people who make decisions about where resources should be allocated.

22. If DFID wants to engage with its multilateral partners, then one means of doing so is by trying to explicate their judgements of their own performance: the criteria they are using, the reasons behind those criteria, and the evidence of achievement on those criteria. This can then be complemented by independent verification by DFID, in the areas of performance that are of the greatest concern. A similar approach was taken with the assessment of a SIDA-funded poverty alleviation project in Vietnam (see "A Study Of Perceptions And Responses To Poverty Within The Vietnam-Sweden Mountain Rural Development Programme").
==============
For the full text of my comments on the DFID paper go to http://www.mande.co.uk/wp-content/uploads/2004/MEFFcomments.pdf

Saturday, May 08, 2004

Is moving the goal posts a good thing?

I have just been reviewing the changes made in the Logical Framework used in the DFID-funded PETRRA project (rice research with and for poor farmers). Since the project started in 1999 there have been quite a few changes, at least once a year, if not more often. I decided to go back and compare the contents of the first Logical Framework developed in 1999, with those of the most recent version, last changed in mid-2003.

Somewhat to my surprise I found that many of the changes meant that the current Logical Framework is now more demanding, in terms of the expectations it sets, than it was in 1999. There are now three Purpose statements rather than one (heresy in some quarters, but viewed approvingly by the last OPR). There are now six Outputs instead of five (communications work was given much more importance as the project developed). Of the original five Outputs, three had clearly developed a more demanding set of indicators. The other two were neutral, if not a bit more demanding. And the total number of indicators for all the Outputs and Purposes had grown from 18 to 30, roughly counted: with three Purposes and six Outputs, that is now an average of about 3.3 indicators per Purpose or Output.

And the project, which is due to finish by August 2004, looks like it will score above average on the achievement of most of the Outputs and Purposes, when assessed by the OPR team in July.

All this leads me to speculate about the extent to which we could read changes in the contents of Logical Frameworks as indicators of achievement (or lack thereof), even before we look at the evidence on the ground, or elsewhere. This might be half true at least. Where the Logical Framework has been scaled down, to be less demanding, that might reflect a movement from unreality to realism, or from realism to failure to achieve. But even that difference should be possible to identify by reading the original Logical Framework. Another concern is sampling: how many changes were made to the Logical Framework, and how evenly were they spread over the period of the project? Many, spread well over a long period, would suggest the project managers had some ability/right to make changes. Few might suggest the opposite. But that could be investigated. And the more changes there are, the less likely they are to be "random perturbations" rather than real trends.

Anyway,...food for thought.

Monday, April 26, 2004

Where are the partners?

About a month ago I took part in the annual staff conference of a small UK NGO. The focus of the first two days was on identifying the main development issues the NGO should be addressing for the next three years or so. This was part of a wider strategic planning process that was just beginning. During the meeting the CEO made a point of distinguishing the NGO from others by the degree to which its approach was led by the views of its southern partners. If that really was the case, then I think the NGO would have had a justifiable claim to radicalism, something it was well known for in the past. But how could we verify such a claim? As I listened to the ongoing discussion about a range of important global issues, including HIV/AIDS, globalisation, fundamentalism, etc, I noticed how little, if at all, I could hear of the partners' views on these issues. That the partners were not physically present in the meeting was not my main worry. I felt the CEO, and other staff, were well aware of the need for appropriate engagement with their partners. What concerned me more was that the discussion about global development issues made no reference to which partner thought what about which issue, and why. Assuming the partners did have views, and had been consulted about them in the past, why was there no evidence of that process having an impact on how the staff were presenting the development issues in this meeting? I would have thought that citing their partners' views would have given extra weight to the views being expressed. Associated with this concern was a related feeling that there was far too much ungrounded analysis, which would then be very difficult to convert into a strategy that could be operationalised by the NGO.

The radical alternative would be to focus on their partners' views, and to talk explicitly about the areas of agreement and disagreement, both between the partners themselves and with the UK NGO. This is where the NGO has some strategic choices to make (and choices it could fudge). Whose views should it support in future, how, and why? And how should it respond to differences between its partners? Where should it seek new partnerships, and why? Answers to these questions would help support the claims they would like to make about working with, and even being led by, some of their partners. On the other hand, a continuation of talk about global issues without a focus on their partners' views on those issues would suggest they are trying to work through their partners, simply using them as a means to an end.

Friday, April 23, 2004

Monitoring empowerment: A contradiction in terms?


A colleague of mine has been doing some work for a major multilateral. They want him to help them identify some indicators of empowerment, which can be included in a national survey instrument. This has always struck me as a particularly paradoxical type of objective. The survey is trying to measure when someone else is empowered. But it will be the survey designer who will define what empowerment is. What if the respondents disagree that a particular development in their lives constitutes empowerment? Is this to be interpreted as "false consciousness" or is this actually an expression of empowerment itself (but probably unlikely to be recorded and analysed as a response)? 

My advice to him was to treat diversity as an indication of empowerment. The rationale for this is spelled out in a conference paper I wrote in 2000, called "Does empowerment start at home? And if so, how will we recognise it?". So for any given question about the attitudes or behaviour of the respondents, the survey analyst should examine the range of responses that were given (the standard deviation, to be more specific), not the average response. Here is a quote from that paper:
  At the population level, diversity of behaviour can be seen as a gross indicator of agency (of the ability to make choices), relative to homogenous behaviour by the same set of people. Diversity of behaviour suggests there is a range of possibilities that individuals can pursue. At the other extreme is standardisation of behaviour, which we often associate with limited choice. The most notable example being perhaps that of an army. An army is a highly organised structure where individuality is not encouraged, and where standardised and predictable behaviour is very important. 

 There was an associated footnote, which read: 
   As noted by some workshop participants, diversity in the behaviour of a set of individuals does not necessarily mean that all have equal choice. Inequalities of power (defined as choice) may still exist. Where we do find diversity in the set as a whole we could then do a more micro-level analysis and examine the amount of diversity in the behaviour of one individual compared to another.
So, going back to the survey instrument being designed by the multilateral: as well as examining the range of responses to a given question, the researcher should also compare questions in terms of the range of responses they each received. Where is there the most and least diversity of responses? Attention might then focus on the questions with the least range of responses. That is where further investigation would be potentially useful: to identify the nature of any common constraints limiting the choices people are making, and whether anything can be done to address those common constraints.
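A minimal sketch of that kind of analysis, using invented Likert-style survey responses (the question names and values are purely illustrative):

```python
# Sketch: ranking survey questions by the diversity of their responses.
# Invented 1-5 responses for three hypothetical questions.
import statistics

responses = {
    "q1_decides_household_spending": [1, 1, 2, 1, 1, 2, 1],
    "q2_member_of_local_group":      [1, 5, 3, 2, 4, 5, 1],
    "q3_children_attend_school":     [5, 5, 5, 5, 5, 5, 5],
}

# Standard deviation per question, sorted from least to most diverse
diversity = {q: statistics.pstdev(vals) for q, vals in responses.items()}
for q, sd in sorted(diversity.items(), key=lambda kv: kv[1]):
    print(f"{q}: SD = {sd:.2f}")
# Questions at the top of this list (lowest SD) are the candidates for further
# investigation of common constraints limiting the choices people are making.
```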

What if all the respondents were sending their children to school: does that mean they are not empowered, within this diversity definition of empowerment? That could actually be the case, if there are legal sanctions against not sending children to school. It might also be true that most parents have little real choice about whether to send their children to school. In many developed economies parents are well aware that there are few livelihood options for adults without formal education. This apparently contrary example has some value: not all forms of lack of empowerment will be of concern to those researching empowerment.

Tuesday, April 20, 2004

Why did the chicken cross the road?

On the Euforic website Rob van den Berg considers the challenges facing partners in evaluation....
"Collaborative evaluation is a potential minefield of misunderstandings about definitions, methodologies, concepts, logic and rationalities, reminiscent of the question ‘why did the chicken cross the road?’ The simple answer is that it wanted to get to the other side. Evaluation, however, wants to know whether the chicken took the shortest path across the road; whether it crossed the road in the quickest possible way; whether it did in actual fact reach the other side and whether it expects to remain there; and whether the needs of the chicken have been met by crossing the road..."

I think the case of the chicken who crossed the road has a lot of potential mileage as a metaphor for communicating what people think M&E is all about.

For example, my answer would be:

1. We need to ask the chicken what it had hoped to achieve by crossing the road, not just pile on the questions regardless of its intentions. What were its objectives or its expectations? Did it in fact have a hypothesis it was going to test? A theory of change, no less?

2. But we also need to be aware of the possibility that the chicken may have come across some unexpected benefits of crossing the road, after it did so. So just asking about its expectations will not be enough. We also need to ask the chicken about unexpected changes that took place. For example, using the Most Significant Change method, we could ask:

"What was the most significant change that took place in your life after you crossed the road?

We need to combine a deductive and theory-based approach with an inductive and experience-based approach.

If you think the chicken would disagree with this, or you think there were other stakeholders in the chicken's neighbourhood who would have a different view, let me know, via the MandE NEWS Open Forum.

Monday, April 12, 2004

Thinking about networks of policies

I have just returned from XXXX in west Africa, where I have been working on PRSP M&E. One of my continuing concerns while there was to get a handle on the complex context in which PRSP M&E activities are taking place. As in most countries, the PRSP exists in a complex policy context; it does not stand on its own. It links into, or is expected to link into, a number of other policies and associated implementation processes.

I think the relationship between policies is an area that deserves some serious thinking, in M&E terms. There are at least two types of relationships that need to be considered:
1. The overlap in objectives of different policy documents.
2. The connections between policies created by information flows between them, once they are implemented and monitored.

Policies can overlap in their objectives; this is fairly clear. New policies are often expected to overlap with existing policies. For example, new and specific policies in particular ministries might be expected to help articulate relevant sections of a PRSP. We can measure this overlap in at least two ways: (a) by examining the overlaps in the sets of indicators used for M&E of both policies. The PRSP M&E Plan in XXXX has a useful table showing how a number of policies overlap in this respect. (b) by getting the owners of two policies to rank the relative importance of their own and the other's policy objectives, and seeing how their rankings compare. I have done this with a UK NGO, to assess the alignment of a country-level strategy with project-specific strategies within the same country.
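A small sketch of both measures, using invented indicator lists and rankings: (a) the overlap between the two policies' indicator sets, and (b) the rank correlation between the two policy owners' orderings of a shared list of objectives.

```python
# Sketch: two ways of measuring the overlap between two policies. All data invented.
from scipy.stats import spearmanr

# (a) Overlap in the indicator sets used to monitor each policy
prsp_indicators   = {"net_enrolment", "under5_mortality", "poverty_headcount", "road_km"}
sector_indicators = {"net_enrolment", "under5_mortality", "teacher_pupil_ratio"}
shared  = prsp_indicators & sector_indicators
jaccard = len(shared) / len(prsp_indicators | sector_indicators)
print(f"Shared indicators: {shared}, overlap (Jaccard) = {jaccard:.2f}")

# (b) How similarly do the two policy owners rank a common list of objectives?
objectives     = ["primary education", "child health", "rural roads", "macro stability"]
prsp_ranking   = [1, 2, 3, 4]   # ranks assigned by the PRSP owners
sector_ranking = [2, 1, 4, 3]   # ranks assigned by the sector ministry
rho, _ = spearmanr(prsp_ranking, sector_ranking)
print(f"Agreement on priorities (Spearman rho): {rho:.2f}")
```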

Policies can also be linked by information flows between them, once their implementation begins. In the best case, policy documents are integral parts of high level management cycles. They are plans, which are to be followed by implementation and then hopefully some sort of review processes. And then even more hopefully, some sort of revised policy. In other words, a higher-level version of the project management cycle (i.e. a policy management cycle). Where there are multiple policies, the M&E outputs of one policy management cycle can feed into the planning stages of another policy management cycle. For example, the Annual Progress Report (APR) of the PRSP (Poverty Reduction Strategy Paper) in country XXX is meant to inform the contents of the government's annual budget, as in many other countries. There is now some discussion about when is the best time for the APR to feed into the budget planning discussions, which take place in stages over a period of months.

In some cases there is a relatively clear direction to these linkages. The APR is expected to influence the budget more than the budget influences the APR. In other cases the net direction of influence is less clear to me, at this stage at least. The World Bank's PRSC (Poverty Reduction Support Credit) includes indicators about the progress made with M&E of the PRSP. The APR should provide evidence of progress made with M&E of the PRSP, and help trigger the flow of funds from the WB to the government. But the presence within the PRSC of specific indicators about PRSP M&E capacity may also shape how the PRSP is monitored. Whether it does or not, I have yet to find out.

There are of course many other policies that the PRSP might be expected to influence, and some of those may in turn be expected to influence the PRSP. Somehow these policies need to be identified, along with the desired linkages between them. Then we need to know enough about the stages of their respective policy management cycles to identify whether and how the linkages actually work. Without this, all the government's efforts put into "communicating the results" of the PRSP begin to look like a shotgun blast into the sky.

Right now I feel we have a very partial and incomplete view of how government and donor policies are, and should be, interlinking through exchanges of information between their M&E stages and planning stages. It is the scale that is daunting, including the long cycle times that make them difficult to see as whole processes. The longer the cycle time of any policy management process, the less likely it is to function in the same way as it did before. Right now we don't know what the upcoming PRSP review and revision process will look like.

Tuesday, April 06, 2004

Question: How do you assess a country’s ownership of a PRSP?

Answer: Bit by bit. There have been plenty of questions raised about the extent to which PRSPs are really owned by the government of the country they refer to (see Google search on ownership of PRSPs). But how do you assess whether a PRSP has country ownership? Well, maybe the way the question is asked could make a difference. One way is to ask who owns what parts of a PRSP, rather than asking whether the whole document is owned by the whole government. In XXXX there are some PRSP objectives and associated indicators that could easily be adopted and owned by specific sections of government, for example those relating to education or health, or macro-economic management. Okay, then how would you recognise when sections of government had taken ownership of specific objectives like these? Beyond simply saying so, which may not mean too much, these sections of government might actually collect and make available information about the associated progress indicators. Even stronger ownership might be associated with a detailed analysis of that data, as well as its collection and dissemination. In other words, the section of government would be investing its resources in M&E of its objective, and actually paying a cost in order to enable achievement of that objective. Back in country XXXX, the recently produced Annual Progress Report does not show any signs of any sections of government visibly owning specific sections of the PRSP. Nor is it clear who has been able to provide what information relating to PRSP indicators. In fact there has been an apparent unwillingness to explicitly state what information has not been made available, and by whom. The scale of the lack of ownership has effectively been withheld from view.

Sunday, April 04, 2004

Hypothesis-led Surveys of Influence - on KAP

Today, on my day off, I have read through proposals received from two companies in Bangladesh to do an opinion survey of about 100 people in 20 organisations. The TORs (which I developed) ask a survey company to undertake a two stage survey process:

1. Interview PETRRA (a project) staff about which organisations they think their project has influenced most and least, and in what ways

2. Interview those organisations to test whether the PETRRA staff's hypotheses about expected influence are supported or not, using open and closed questions and any other appropriate methods

A hypothesis-led impact survey should produce a much more focused piece of evaluation research. The impact of the survey itself should also be visible. If it finds the project staff's hypotheses are not supported by its findings, then this should lead to changes, either in how PETRRA understands its influence, or in how the company does such surveys in future.
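As a rough sketch of how the two stages could be linked (the organisation names and influence levels below are invented), each staff hypothesis about expected influence is recorded in stage one and then scored against what the interviewed organisations report in stage two:

```python
# Sketch: tallying whether staff hypotheses about influence are supported
# by the second-round interviews. All data invented for illustration.

# Stage 1: staff hypotheses - expected level of influence on each organisation
hypotheses = {"org_A": "high", "org_B": "high", "org_C": "low"}

# Stage 2: influence reported by the organisations themselves
reported = {"org_A": "high", "org_B": "low", "org_C": "low"}

supported = [org for org in hypotheses if hypotheses[org] == reported[org]]
refuted   = [org for org in hypotheses if hypotheses[org] != reported[org]]
print(f"Hypotheses supported: {supported}")
print(f"Hypotheses not supported: {refuted}")
# The unsupported hypotheses are the interesting ones: they should prompt changes
# in how the project understands its influence, or in how such surveys are done.
```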

So far it's not looking good. Neither proposal makes any mention of the first round of interviews. They are just going to develop some questions of their own, then go out and interview people, and then presumably try to make sense of the results, without the bother of any guiding hypotheses! Oh well... it looks like we will have to provide them with a second round of briefing instructions and hope they get it right this time.

Rick

Saturday, April 03, 2004

PRSP Monitoring: Target fixation and mission creep

Hi to new and returning visitors to MandE NEWS - from Rick Davies, Editor, MandE NEWS

This is a new step forward by MandE NEWS. I hope that by starting up this blog I might be able to generate some more content for MandE NEWS, on a more continuous basis. This will probably be more ad hoc and more from the hip, so some of it will probably end up being deleted later on, in the cold light of day. Anyhow, here goes.

Right now I am in XXXX, YYYY, working on monitoring and evaluation of the country's PRS (Poverty Reduction Strategy). Look here for Google findings on PRSPs: http://www.google.com/search?hl=en&lr=&ie=UTF-8&oe=UTF-8&q=prsp+poverty+reduction+strategy+paper&btnG=Search

When milestones become millstones: The Annual Progress Report (APR) on the implementation of the PRS is due shortly. The relevant government department is working hard to get it out on time. In the process, the end purposes of such an APR are being lost sight of. Getting content is the main concern. Readability will be a secondary concern, if there is time. Identifying the impact of the APR? Well, there has not yet been time to look at what happened with the last APR.

Mission creep at multiple levels in all directions: Donor and other comments are now coming in on the earliest draft of the APR. Could you explain x a bit more...? Why do you have no information on y...?

And this is in response to an APR that is already trying to track progress against indicators not just from the original PRS but from at least four other policy documents that have come into the picture since the PRS was written. These include:
- a summary revision created by the government when it came into power
- the Poverty Reduction Support Credit, a WB device
- Multi-Donor Budget Support policy document
- HIPC triggers
- Millennium Development Goals (okay, they were there before the PRS)

Needed?: Some continual and public mapping of how the various poverty-related (government and donor) policies relate to each other (or not), in terms of overlapping indicators and objectives. Both existing and planned policies.

Postscript: Six hours later, my laptop hard drive leaves this world. The second in 18 months. I will not be buying another HP laptop! Fortunately I have been backing up reasonably often, and I am carrying two memory sticks (much recommended).