Thursday, March 28, 2019

Where there is no (decent / usable) Theory of Change...



I have been reviewing a draft evaluation report in which two key points are made about the relevant Theory of Change:

  • A comprehensive assessment of the extent to which expected outcomes were achieved (effectiveness) was not carried out, as the xxx TOC defines these only in broad terms.
  • ...this assessment was also hindered by the lack of a consistent outcome monitoring system.
I am sure this situation is not unique to this program. 

Later in the same report, I read about the evaluation's sampling strategy. As with many other evaluations I have seen, the aim was to sample a diverse range of locations, in a way that was maximally representative of the diversity of how and where the program was working. This is quite a common approach and a reasonable one at that.

But it did strike me later on that this intentionally diverse sample was an underexploited resource. If 15 different locations were chosen, one could imagine a 15 x 15 matrix. Each cell in the matrix could be used to describe how a row location compared to a column location. In practice, only half the matrix would be needed, because each relationship would otherwise be described twice: e.g. row location A and its relation to column location J would also be covered by row location J and its relation to column location A.
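To make the pairing concrete, here is a minimal sketch (in Python, with purely illustrative location labels) of how the upper half of such a matrix lists each pair of locations exactly once:

    locations = [chr(ord("A") + i) for i in range(15)]   # 15 hypothetical locations, A to O
    pairs = []
    for i, row in enumerate(locations):
        for col in locations[i + 1:]:     # upper triangle only: (A, J) already covers (J, A)
            pairs.append((row, col))
    # 'pairs' now holds one cell per unordered pair of locations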

What sort of information would go in such cells? Obviously, there could be a lot to choose from. But one option would be to ask key stakeholders, especially those funding and/or managing any two of the locations being compared. I would suggest they be asked something like this:
  • "What do you think is the most significant difference between these two locations/projects, in the ways they are working?"
And then ask a follow-up question...
  • "What difference do you think this difference will make?"
The answers are potential (if...then...) hypotheses, worth testing by an evaluation team. In a matrix generated by a sample of 15 locations, this exercise could generate ((15*15)-15)/2 = 105 potentially useful hypotheses. These could then be subjected to a prioritisation / filtering exercise, which should include consideration of their evaluability (Davies, 2013): more specifically, how they relate to any Theory of Change, whether relevant data is available, and whether any stakeholders are interested in the answers.
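As a hedged illustration only (the field names below are my own assumptions, not a prescribed format), each cell's answers could be captured as a simple record holding the if...then hypothesis and the evaluability filters just mentioned:

    from itertools import combinations

    locations = [f"Location {i + 1}" for i in range(15)]
    hypotheses = []
    for a, b in combinations(locations, 2):        # ((15*15)-15)/2 = 105 pairs
        hypotheses.append({
            "pair": (a, b),
            "significant_difference": None,        # answer to the first question (the "if")
            "expected_consequence": None,          # answer to the follow-up question (the "then")
            "relates_to_toc": None,                # evaluability filters, per Davies (2013)
            "data_available": None,
            "stakeholders_interested": None,
        })
    print(len(hypotheses))                         # 105 candidate hypotheses before filtering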

Doing so might also help address a more general problem, which I have noted elsewhere (Davies, 2018) and which was also a characteristic of the evaluation mentioned above: the prevalence in evaluation ToRs of open-ended evaluation questions, rather than hypothesis-testing questions: 
" While they may refer to the occurrence of specific outcomes or interventions, their phrasings do not include expectations about the particular causal pathways that are involved. In effect these open-ended questions imply either that those posting the questions either know nothing, or they are not willing to put what they think they know on the table as testable propositions. Either way this is bad news, especially if the stakeholders have any form of programme funding or programme management responsibilities. While programme managers are typically accountable for programme implementation it seems they and their donors are not being held accountable for accumulating testable knowledge about how these programmes actually work. Given the decades-old arguments for more adaptive programme management, it’s about time this changed (Rondinelli, 1993; DFID, 2018).  (Davies, 2018)