Wednesday, April 26, 2006

Evidence that the (development) world is getting better

...a new approach to monitoring and evaluation ;-)

I have recently been reviewing the language we use in the world of development aid, and have come to the conclusion that there is an accumulating body of evidence that the world is getting better.

Here are some examples of changes I have noticed. If you have noticed other similar changes, please post them as Comments below.

  • In the past we only had projects, now we have initiatives (updated 20 May 2010) 
    • In the past we only had projects, but now we have interventions (June 2011, see DFID Business Plan How to Do guidance)
  • In the past we had plans, but now we have strategies 
  • In the past we did research, but now we do analytic work 
  • In the past we interpreted data, but now we are involved in sensemaking  (updated 29 May 2010) 
  • In the past we just had stories, but now we have narratives  (updated 29 May 2010) 
  • In the past we just did monitoring and evaluation, but now we do management for development results (MDR)
  • In the past we were concerned about coordination, but now we are concerned about harmonisation 
  • In the past we only wanted things to work but now we expect them to be fit-for-purpose 
  • In the past we only had interests, but now we have passions 
  • In the past we had problems, but now we only have issues 
  • In the past we only had news, but now we have breaking news 
  • In the past we were just donors, but now we are development partners 
  • In the past we were just NGOs, but now we are Civil Society Organisations 
  • In the past we had some concepts that were not very practically useful but now we have "sensitising concepts" 
  • In the past we took a particular perspective... now we have an analytical lens (13 April 2011) 
  • In the past we used to stimulate discussion, but now we open a space for a dialogue (or versions thereof) (14 April 2011) 
  • In the past we used to ask a question or make a point in a conference, but now we make an intervention (or is this now also passé?) (14 April 2011)
  • In the past we used to search the internet, but now we do horizon scanning work (July 17, 2012)
  • In the past we just used things, but now we leverage
  • In the past we just had trial and error, but now we do problem driven iterative adaptation.  
  • In the past our activities only had an effect but now they impact things (2018), and now we even have "impactful development"
  • In the past we just tried to change things, but now we are aiming for transformational change (2018)
  • In the past we only had evaluators, but now we have impact researchers! (2021, thanks @EvaluationMaven)
  • In the past we just wanted more detail, but now we are asking for more granularity (2021)
  • In the past we just tried to do participatory development, but now we are into co-production (2021)

Integrating funding applications and baseline surveys

This idea falls into the category of "things I should have learned years ago!"

Over the last year or so I have been working on a number of research funding mechanisms in Ghana and Vietnam. Both involve something like the traditional two-stage process of inviting simple / short Concept Notes for research, then, from amongst the best of these, inviting fully developed Proposals for research. Quite a lot of information is provided through this process by the grantees-to-be, as well as by those who don't end up qualifying as grantees.

But up to now it has never occurred to me that we should design this process to simultaneously gather information about the baseline status of these organisations and their activities, for subsequent monitoring and evaluation purposes. In particular, information about their relationships with other actors at this early stage is of increasing interest to me. Instead, in one instance, we organised a separate baseline survey some months later, involving the approved grantees only. Needless to say, this did not impress the new grantees, who had thought they had finished with form filling for the time being!

Another advantage of this approach is that by including the non-successful applicants, we gather some wider contextual data that will put the characteristics of the approved grantees in a broader perspective. Some of this information may reflect on the capability of the non-successful applicant, but other data may be more independent.

I have also been pushing a number of grant-making bodies to use the application process to generate predictions of subsequent success, on a numerical scale. These predictions can later be compared to actual / perceived success, some years down the road. Not only is the correlation between prediction and outcome of interest; so are the positive and negative outliers (the unexpected successes and the unexpected failures). This is where case study investigations could help us learn a lot about what makes the difference between success and failure.