Thursday, April 05, 2012

Criteria for assessing the evaluability of Theories of Change


2019 05 21 Update: Please also see

Evaluability Assessments: Reflections on a review of the literature. Davies, R., Payne, L., 2015. Evaluation 21, 216–231. PDF copy


2012: Our team has recently begun work on an evaluability assessment of an agency's work in a particular policy area, covering many programs in many countries. Part of our brief is to examine the evaluability of the programs' Theory of Change (ToC). 

In order to do this we clearly need to identify some criteria for assessing the evaluability of ToC. I initially identified five which I thought might be appropriate, and then put these out to the members of the MandE NEWS email list for comment. Many comments were quickly forthcoming. In all, 20 people responded in the space of two days (thanks to Bali, Dwiagus, Denis, Bob, Helene, Mustapha, Justine, Claude, Alex, Alatunji, Isabel, Sven, Irene, Francis, Erik, Dinesh, Rebecca, John, Rajan and Nick).

Caveats and clarifications

What I have presented below is my current perspective on the issue of evaluability criteria, as informed by these responses. It is not intended to be an objective and representative description of the responses. (Look here for a copy of all the comments received. You can also download this posting as a PDF.)

The word "evaluable" needs some clarification. In the literature on evaluability assessments it has two meanings. The main one is that it is possible to evaluate something: for example, if the theory is clear and the data is available. The second meaning is more practically oriented. The theory may be clear and the data available, but the theory may be so implausible that it is simply not worth expending resources on its evaluation. Or there may be a perfectly good ToC, but if no one owns it apart from a consultant who visited the project six months ago, it might be questionable whether expensive resources should be invested in its evaluation.

We also need to distinguish between an evaluable ToC and a “good” ToC. A ToC may be evaluable because the theory is clear and plausible, and relevant data is available. But as the program is implemented, or following its evaluation, it might be discovered that the ToC was wrong, that people or institutions don’t work the way the theory expected them to. It was a “bad” ToC. Alternatively, it is also possible that a ToC may turn out to be good, but the poor way it was initially expressed made it un-evaluable, until remedial changes were made.

This brings us to a third clarification. My minimalist definition of a ToC is quite simple: “the description of a sequence of events that is expected to lead to a particular desired outcome”. Such a description could be in text, tables, diagrams or a combination of these. Falling within the scope of this definition we could of course find ToC that are evaluable and others that are not so evaluable.

A possible list of criteria for assessing the evaluability of a Theory of Change (Version 2)
·         Understandable
o   Do the individual readers of the ToC find it easy to understand?  Is the text understandable? If used, is the diagram clear?
o   Do different people interpret the ToC in the same way?
o   Do different documents give consistent representations of the same ToC?
·         Verifiable
o   Are the events described in a way that could be verified? This is the same territory as that of Objectively Verifiable Indicators (OVIs) and Means of Verification (MoVs) found in LogFrames
·         Testable
o   Are there identifiable causal links between the events? Often there are not
o   Are the linked events parts of an identifiable causal pathway?
·         Explained
o   Are there explanations of how the connections are expected to work? Connections are common, explanations of the causal process involved are much less so.
o   Have the underlying assumptions been made explicit? (also duplicated below)
·         Complete
o   Does what might be a long chain of events make a connection between the intervening agent and the intended beneficiaries (/target of their actions)? In a recent example I have seen, the ToC is quite detailed at the beneficiary end, but surprisingly vague and unspecific towards the agent’s end, even though that is where accountability might be more immediately expected.
·         Inclusive (a better term is needed here)
o   Does the ToC encompass the diversity of contexts it is meant to cover? In ToC covering whole portfolios of projects there could be a substantial diversity of contexts and interventions. Does the ToC provide room for these without sacrificing too much in terms of verifiability and testability? See Modular Theories of Change: A means of coping with diversity and change? for some views on how to respond to this challenge.
·         Justifiable (new)
o   Is there evidence supporting the sequence of events in the ToC? Either from past studies, previous projects, and/or from a situation analysis/baseline study or the like which is part of the design/inception stage of the current project
·         Plausible (new)
o   Where there is no prior evidence is the sequence of events plausible, given what is known about the intervention and the context?
o   Have the underlying assumptions been made explicit?
o   Have contextual factors been recognised as important mediating variables?
·         Owned
o   Can those responsible for the contents of the ToC be identified?
o   How widely owned is the ToC?
o   Do their views have any consequences?
·         Embedded
o   Are the contents of the ToC also referred to in other documents that will help ensure that it is operationalised?

Weighting

It was sensibly suggested that some criteria were more important than others. One argued that if you can establish that the causal links in a ToC are evidence based then “ownership will and shall follow”.

In individual evaluability assessments a simple sense of their relative priority may be sufficient. When comparisons need to be made of the evaluability of multiple programs, it may be necessary to think about weighted scoring mechanisms/checklists. 
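To make the idea of a weighted scoring checklist concrete, here is a minimal sketch in Python. The criteria follow the list above, but the weights and the example scores are invented purely for illustration; they are not recommended values and were not part of the original consultation.

```python
# A minimal sketch of a weighted evaluability checklist. The criteria follow
# the list above; the weights and example scores are invented for illustration.

CRITERIA_WEIGHTS = {
    "understandable": 3, "verifiable": 3, "testable": 3, "explained": 2,
    "complete": 2, "inclusive": 1, "justifiable": 2, "plausible": 2,
    "owned": 1, "embedded": 1,
}

def evaluability_score(ratings):
    """Turn per-criterion ratings (each 0-5) into a weighted 0-100 score."""
    max_total = sum(5 * w for w in CRITERIA_WEIGHTS.values())
    total = sum(ratings.get(c, 0) * w for c, w in CRITERIA_WEIGHTS.items())
    return 100 * total / max_total

# Example: a (fictitious) program ToC that is widely owned but weakly testable.
program_a = {"understandable": 4, "verifiable": 2, "testable": 1, "explained": 2,
             "complete": 3, "inclusive": 3, "justifiable": 2, "plausible": 4,
             "owned": 5, "embedded": 1}
print(round(evaluability_score(program_a), 1))
```

The weights themselves would of course need to be agreed, and could differ with the purpose of the ToC, as discussed in the next section.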

Purpose

It was suggested that the criteria used would depend on the purpose for which the ToC was created. An understanding of the purpose could therefore inform the weighting given to the different criteria.

Prior to consulting the email list members I had drafted a list of three possible purposes that could generate different kinds of evaluation questions, which an evaluability assessment would need to consider. They were:
·         If the purpose of the ToC was to set direction
o   Then we need to ask whether programs were designed accordingly.
·         If the purpose of the ToC was to make a prediction
o   Then we need to ask whether the programs subsequently turned out this way.
·         If the purpose of the ToC was to provide a summation
o   Then we need to ask whether this is an accurate picture of what actually happened.

One criticism of the inclusion of prediction was that most ToC are nothing like scientific models and because of this they are typically insufficient in their contents to generate any attributable predictions.  This may be true in the sense that scientific predictions aim to be generalisable, albeit subject to specific conditions e.g. that gravity behaves the same way in different parts of the universe. But most program ToC have much more location-specific predictions in mind, e.g. about the effects of a particular intervention in a particular place. There are interesting exceptions however, such as a ToC about a whole portfolio of programs, or a ToC about a whole policy area that might be operationalised through investment portfolios managed in a range of countries. There the criticism of incapacity may be more relevant.

The same critic proposed an alternate purpose to prediction, one where simplicity might be more of a virtue than a liability. A ToC may aim to communicate or generate insight, by focusing on the core of an idea that is driving or inspiring a program. If so, then evaluation questions could focus on how the ToC has changed the users’ understanding of the issues involved. This question about effects could be extended to include the effects of participation in the process whereby the ToC was developed.
PS: A similar point was made by another contributor, in a parallel related discussion on the KBF email list, who distinguished between two purposes:
  • to model a situation to better understand it and programme around it
  • to simplify a complex situation to help explain it to others and persuade them of the logic of your proposed intervention (e.g. for funding).
...noting that “in practice there is often a tradeoff between the explanatory and persuasive aspects of the underlying logic”.

Issues arising about criteria

The following issues were raised.
·         Process and Product: The list above is largely about the ToC product, not the process whereby it was created. Some argued there needed to be a participatory process of development to ensure the ToC was “aligned with the needs of beneficiaries and the national objectives”. However, others argued that “ToC are not ‘development projects’ that must be aligned with the Paris Declaration, but rather tools that must be rigorous, applied without ‘complaisance’”. The reality probably lies in between: ToC are typically associated with specific project interventions, and the extent of their ownership is relevant to the practical aspects of evaluability. On the other hand, the rigour of their use as tools will affect their usefulness and whether they can be evaluated. The product-oriented criteria given above do include two criteria that may reflect the effects of a good development process, i.e. ownership and embeddedness.
·         Ownership: It was argued that ownership is not a criterion of a good ToC; after all, the consensus in science has often been proved wrong. But in the above list the criterion of ownership is relevant to whether the ToC is worth evaluating; it is not a criterion of the value of the belief or understanding represented by the ToC. It could be argued that widely owned views of how a project is working are eminently worth evaluating, precisely because of the risk that they are wrong.
·         This approach might, on the other hand, lead to the view that ToC with few owners should not be evaluated. That view was in effect questioned by a cited example of an evaluator coming up with their own alternative ToC, based on prior evaluation studies and research, in contrast to the politically motivated views of the official in charge of a program. This brings us back to the criteria listed above, and the idea of weighting them according to context (ownership versus justifiability).
·         Relevance: This proposed criterion raises the question: relevant to whom? Ownership of the ToC (voluntary or mandated) would seem to signify a degree of relevance.
·         Falsifiability: It was argued that this is the pre-eminent criterion of a good scientific theory, and one which needed more attention from development agencies when thinking about the ToC behind their interventions. The criteria in the list above address this to some extent by inquiring about the existence of clear causal links, along with good explanations for how they are expected to work. Perhaps “good” needs to be replaced by “falsifiable”, though I worry about setting the bar too high when most ToC I see barely manage to crawl. Many decent ToC do include multiple causal links. The more there are, the more vulnerable the ToC is to disproof, because only one link needs to fail for the ToC not to work. This could be seen as a crude measure of falsifiability.
·         Flexibility: Although it was suggested that ToC be flexible and adaptable, this view is contentious, in that it seems to contradict the need for specificity (by being verifiable, testable, and explained) and thus falsifiability. However, there is no in-principle reason why a ToC can’t be changed. If it is, it becomes a different ToC, subject to a separate evaluation; it is not the same one as before. The only point to note here is that the findings of the adapted version would not validate the content of the earlier version.
·         Lack of adaptability may also be a problem. It was suggested that evaluators should ask “When has the ToC been reviewed and how has it been adapted in the light of implementation experience, M&E data, dialogue and consultation with stakeholders?” If the answer is not for a long time, then there may be doubts about its current relevance, which could be reflected in limited ownership.
·         Clarity of logic as well as evidence: One commentator suggested that it should be made clear whether a given cause is both “necessary and sufficient”, presumably as distinct from alternative combinations of these terms. Necessity and sufficiency is a demanding criterion, and it is arguable whether many programs would satisfy it, or perhaps even should.
·         Simplicity: This suggested requirement (captured by Occam’s razor) is not as simple a requirement as it might sound. It will always be in tension with its opposite (captured by Ashby’s Law of Requisite Variety), which is that a theory must also have sufficient internal complexity to describe the complexity of the events it is seeking to describe. Along the same lines, some commentators asked whether there was enough detail provided, the lack of which can affect verifiability and testability. Simplicity may win out as the more important criterion where a ToC is primarily intended as a communication tool.
·         Justifiability was highlighted as important. Plausibility was questioned: “What does that really mean? If based on common sense then it is incompatible with being evidence based! If humanity had to rely on common sense, the earth would still be flat!!” Plausibility is clearly not a good evaluation finding. But it is a useful finding for an evaluability assessment: if a ToC is not plausible then it makes no sense to go any further with the design of an evaluation. Justifiability is evidence of a good ToC, and is a judgement that might follow an evaluation. However, it might also be obvious before an evaluation, through an evaluability assessment, and lead to a decision that a further evaluation would not be useful.
Informed sources mentioned by contributors

Connell, J.P. & Kubisch, A.C. (1998) Applying a theory of change approach to the evaluation of comprehensive community initiatives: progress, prospects and problems, in: K. Fulbright-Anderson, A.C. Kubisch & J.P. Connell (Eds) New Approaches to Evaluating Community Initiatives. Volume 2: Theory, measurement and analysis (Queenstown, The Aspen Institute).  [courtesy of  John Mayne]

Connell and Kubisch suggest a number of attributes of a good theory of change.
·         It should be plausible.  Does common sense or prior evidence suggest that the activities, if implemented, will lead to desired results?
·         It should be agreed.  Is there reasonable agreement with the theory of change as postulated?
·         It should be embedded.  Is the theory of change embedded in a broader social and economic context, where other factors and risks likely to influence the desired results are identified?
·         It should be testable.  Is the theory of change specific enough to measure its assumptions in credible and useful ways?

Other sources that may be of interest
PS 30 April 2012: See also HIVOS posting on "How can I recognise a good quality Theory of Change?"

Friday, March 16, 2012

Can we evolve explanations of observed outcomes?


In mathematics and computer science, an optimization problem is the problem of finding the best solution from all feasible solutions. There are various techniques for doing so.

Science as a whole can be seen as an optimisation process, involving a search for explanations that have the best fit with observed reality.

In evaluation we often have a similar task, of identifying what aspects of one or more project interventions best explain the observed outcomes of interest. For example, the effects of various kinds of improvements in health systems on rates of infant mortality. This can be done in two ways. One is by looking internally at the design of a project, at its expected workings, and then trying to find evidence of whether it did work that way in practice. This is the territory of theory-led evaluation. The other way is to look externally, at alternative explanations involving other influences, and to seek to test those. This is ostensibly good practice but not very common in reality, because it can be time consuming and to some extent inconclusive, in that there may always be other explanations not yet identified and thus untested. This is where randomised control trials (RCTs) come in. Randomised allocation of subjects between control and intervention groups nullifies the possible influence of other external causes. Qualitative Comparative Analysis (QCA) takes a slightly different approach, searching for multiple possible configurations of conditions which are both necessary and sufficient to explain all observed outcomes (both positive and negative instances).
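As a small illustration of the set logic that sits underneath the QCA idea of necessary and sufficient conditions, the following Python sketch checks one configuration of conditions against a handful of invented cases. It is not QCA software, just the underlying test.

```python
# Minimal sketch of the set logic behind tests of sufficiency and necessity.
# Each (invented) case records which conditions were present and whether the
# outcome of interest occurred.

cases = [
    {"conditions": {"A", "B"},      "outcome": True},
    {"conditions": {"A", "B", "C"}, "outcome": True},
    {"conditions": {"B"},           "outcome": False},
    {"conditions": {"C"},           "outcome": False},
]

def is_sufficient(config, cases):
    """Every case exhibiting the configuration also exhibits the outcome."""
    return all(c["outcome"] for c in cases if config <= c["conditions"])

def is_necessary(config, cases):
    """Every case exhibiting the outcome also exhibits the configuration."""
    return all(config <= c["conditions"] for c in cases if c["outcome"])

config = {"A", "B"}
print(is_sufficient(config, cases), is_necessary(config, cases))  # True True
```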

The value of theory-led approaches, including QCA, is that the evaluator’s theories help the search for relevant data, amongst the myriad of possibly relevant design characteristics and combinations thereof. The absence of a clear theory of change is often one reason why baseline surveys are so expansive in content, yet so rarely used. Without a halfway decent theory we can easily get lost. It is true that “there is nothing as practical as a good theory” (Kurt Lewin).

The alternative to theory led approaches

There is however an alternative search process which does not require a prior theory, known as the evolutionary algorithm, the kernel of the process of evolution. The evolutionary processes of variation, selection and retention, iterated many times over, have been able to solve many complex optimisation problems, such as the design of a bird that can both fly long distances and dive deep in the sea for fish to eat. Genetic algorithms (GA) are embodiments of the same kinds of process in software programs, used to solve problems of interest to scientists and businesses. These are useful in two respects. One is the ability to search very large combinatorial spaces very quickly. The other is that they can come up with solutions involving particular combinations of attributes that might not have been so obvious to a human observer.
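For readers unfamiliar with how such algorithms work, here is a deliberately tiny sketch of the variation-selection-retention loop applied to a toy problem. Everything in it (the fitness function, population size, mutation rate) is illustrative rather than drawn from any real GA package, and a full GA would normally add crossover between parents as a second source of variation.

```python
# A deliberately tiny evolutionary algorithm: variation (mutation), selection
# (keep the fitter half of the pool), retention (carry survivors forward).
# The toy fitness function and all parameters are illustrative only.
import random

def fitness(candidate):
    return sum(candidate)  # toy problem: prefer bit-strings with many 1s

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(50):
    # Variation: each child is a mutated copy of a randomly chosen parent.
    children = []
    for _ in range(len(population)):
        parent = random.choice(population)
        children.append([bit ^ (random.random() < 0.05) for bit in parent])
    # Selection and retention: keep the fittest half of parents plus children.
    pool = population + children
    pool.sort(key=fitness, reverse=True)
    population = pool[:len(population)]

print("best fitness after 50 generations:", fitness(population[0]))
```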

Development projects have attributes that vary. These include both the context in which they operate and the mechanisms by which they seek to work. There are many possible combinations of these attributes, but only some of them are likely to be associated with achieving a positive impact on people’s lives. If they were relatively common then implementing development aid projects would not be so difficult. The challenge is how to find the right combination of attributes. Trial and error, by varying project designs and their implementation on the ground, is a good idea in principle, but in practice it is slow. There is also a huge amount of systemic memory loss, for various reasons including poor or non-existent communications between the various iterations of a project design taking place in different locations.

Can we instead develop models of projects, which combine real data about the distribution of project attributes with variable views of their relative importance in order to generate an aggregate predicted result? This expected result can then be compared to an observed result (ideally from independent sources).  By varying the influence of the different attributes a range of predicted results can be generated, some of which may be more accurate than others. The best way to search this large space of possibilities is by using a GA. Fortunately Excel now includes a simple GA add-in, known as Solver.

The following spreadsheet shows a very basic example of what such a model could look like, using a totally fictitious data set. The projects and their observed scores on four attributes (A-D) are shown on the left. Below them is a set of weights, reflecting the possible importance of each attribute for the aggregate performance of the projects. The Expected Outcome score for each project is the sum of its score on each attribute multiplied by the weight for that attribute. In other words, the more a project has an important attribute (or combination of these), the higher its Expected Outcome score will be. That score is important only as a relative measure, relative to that of the other projects in the model.

The Expected Outcome score for each project is then compared to an Observed Outcome measure (ideally converted to a comparable scale), and the difference is shown as the Prediction Error. On the bottom left is an aggregate measure of prediction error, the Standard Deviation. The original data can be found in this Excel file.

 

The initial weights were set at 25 for each attribute, in effect reflecting the absence of any view about which might be more important. With those weights, the SD of the Prediction Errors was 1.25. After 60,000+ iterations in the space of 1 minute the SD had been reduced to 0.97. This was achieved with this new combination of weights: Attribute A: 19, Attribute B: 0, Attribute C: 19, Attribute D: 61. The substantial error that remains can be considered as due to causal factors outside the model (i.e. outside what is described by the list of attributes)[1].
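For those without Excel, roughly the same workflow can be sketched in Python. The attribute scores and observed outcomes below are invented stand-ins (not the figures from the spreadsheet above), and scipy's differential evolution optimiser is used in place of Solver's evolutionary engine; the point is the structure of the model and the search, not the particular numbers.

```python
# A sketch of the weighted-attribute model described above, with the weight
# search done by an evolutionary optimiser rather than Excel's Solver.
# All figures below are invented placeholders, not the spreadsheet's data.
import numpy as np
from scipy.optimize import differential_evolution

# Rows = projects, columns = attribute scores A-D (fictitious).
scores = np.array([
    [3, 1, 4, 2],
    [2, 5, 1, 3],
    [4, 2, 2, 5],
    [1, 3, 3, 1],
    [5, 4, 2, 4],
])
observed = np.array([2.9, 2.7, 4.1, 1.6, 4.0])  # observed outcomes, comparable scale

def prediction_error_sd(weights):
    """SD of (Expected Outcome - Observed Outcome), where the Expected Outcome
    is the weighted sum of each project's attribute scores."""
    expected = scores @ (np.asarray(weights) / 100)  # weights given as percentages
    return np.std(expected - observed)

# Search for the weights (each 0-100) that minimise the prediction error SD.
# A constraint that the weights sum to 100, as in the spreadsheet, could be added.
result = differential_evolution(prediction_error_sd, bounds=[(0, 100)] * 4, seed=1)
print("best weights:", np.round(result.x, 1), "SD of errors:", round(result.fun, 3))

# To find the *least* appropriate weights (the maximum-error test described next),
# minimise the negative of the same function instead.
```

Running the same search on real attribute scores and independently observed outcomes would be the obvious next test of the idea.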

It seems that it is also possible to find the least appropriate solutions, i.e. those which make the least accurate Outcome Predictions. Using the GA set to find the maximum error, it was found that in the above example a 100% weighting given to Attribute A generated a SD of 1.87. This is the nearest that such an evolutionary approach comes to disproving a theory.

GA deliver functional rather than logical proofs that certain explanations are better than others. Unlike logical proofs, they are not immortal. With more projects included in the model it is possible that there may be a fitter solution, which applies to this wider set. However, the original solution to the smaller set would still stand.

Models of complex processes can sometimes be sensitive to starting conditions. Different results can be generated from initial settings that are very similar. This was not the case in this exercise, with widely different initial weightings evolving and converging on almost identical sets of final weightings (e.g. 19, 0, 19, 62 versus 19, 0, 19, 61), producing the same final error rate. This robustness is probably due to the absence of feedback loops in the model, which could be created where the weighted score of one attribute affected those of another. That would be a much more complex model, possibly worth exploring at another time.

Small changes in Attribute scores made a more noticeable difference to the Prediction Error. In the above model, varying Project 8’s score on attribute A from 3 to 4 increased the average error by 0.02. Changes in other cells varied in the direction of their effects. In more realistic models, with more kinds of attributes and more project cases, the results are likely to be less sensitive to such small differences in attribute scores.

The heading of this post asks “Can we evolve explanations of observed outcomes?” My argument above suggests that in principle it should be possible. However there is a caveat. A set of weighted attributes that are associated with success might better be described as the ingredients of an explanation. Further investigative work would be needed to find out how those attributes actually interact together in real life.  Before then, it would be interesting to do some testing of this use of GAs on real project datasets.

Your comments please...

PS 6 April 2012: I have just come across the Kaggle website. This site hosts competitions to solve various kinds of prediction problems (re both past and future events) using a data set available to all entrants, and gives prizes to the winner - who must provide not only their prediction but the algorithm that generated the prediction. Have a look. Perhaps we should outsource the prediction and testing of results of development projects via this website? :-) Though..., even to do this the project managers would still have a major task on hand: to gather and provide reliable data about implementation characteristics, as well as measures of observed outcomes... Though...this might be easier with some projects that generate lots of data, say micro-finance or education system projects.

View this Australian TV video, explaining how the site works and some of its achievements so far. See also the Fast Company interview with the CEO.

PS 9 April 2012: I have just discovered that there is a whole literature on the use of genetic algorithms for rule discovery: “In a nutshell, the motivation for applying evolutionary algorithms to data mining is that evolutionary algorithms are robust search methods which perform a global search in the space of candidate solutions (rules or another form of knowledge representation)” (Freitas, 2002). The rules referred to are typically “IF...THEN...” type statements.






[1] Bear in mind that this example set of attribute scores and observed outcome measures is totally fictitious, so the inability to find a really good set of fitting attributes should not be surprising. In reality some sets of attributes will not be found co-existing because of their incompatibility e.g. corrupt project management plus highly committed staff



Tuesday, March 13, 2012

Modular Theories of Change: A means of coping with diversity and change?


Two weeks ago I attended a DFID workshop at which Price Waterhouse Coopers (PwC) consultants presented the results of their work, commissioned by DFID, on “Monitoring Results from Low Carbon Development”. LCD is one of three areas of investment by the International Climate Fund (ICF). The ICF is “a £2.9bn financial contribution … provided by the UK Government to support action on climate change and development. Having started to disperse funds, a comprehensive results framework is now required to measure the impact of this investment, to enable learning to inform future programming, and to show value for money on every pound”.

The PwC consultants’ tasks included (a) consultation with HMG staff on the required functions of the LCD results framework; (b) a detailed analysis of potentially useful indicators through extensive consultations and research into the available data; and (c) exploration of opportunities to harmonise results and/or share methodologies and data collection with others. Their report documents the large amount of work that has been done, but also acknowledges that more work is still needed.

Following the workshop I sent in some comments on the PwC report, some of which I will focus on here because I think they might be of wider interest. There were three aspects of the PwC proposals that particularly interested me. One was the fact that they had managed to focus down on 28 indicators, and were proposing that the set be limited even further, down to 20. Secondly, they had organised the indicators into a LogFrame-type structure, but one covering two levels of performance in parallel (within countries and across countries), rather than in a sequence. Thirdly, they had advocated the use of Multi-Criteria Analysis (MCA) for the measurement of some of the more complex forms of change referred to in the Logframe. MCA is similar in structure to the design of weighted checklists, which I have previously discussed here and elsewhere.

Monitorable versus evaluable frameworks

As it stands the current LCD LogFrame is a potential means of monitoring important changes relating to low carbon development. But it is not yet sufficiently developed to enable an evaluation of the impact of efforts aimed at promoting low carbon development. This is because there is not yet sufficient clarity about the expected causal linkages between the various events described in the Logframe. It is the case that, as is required by DFID Logframes, weightings have been given to each of the four Outputs describing their expected impact on Outcome level changes. But the differences in weightings are modest (+/- 10%) and each of the Outputs describes a bundle of up to 5 more indicator-specific changes.

Clarity about the expected causal linkages is an essential “evaluability” requirement. Impact evaluations in their current form seek to establish not only what changes occurred, but also their causes. Accounts of causation in turn need to include not only attribution (whether A can be said to have caused B) but also explanation (how A caused B). In order for the LCD results framework to be evaluable, someone needs to “connect the dots” in some detail. That is, identify plausible explanations for how particular indicator-specific changes are expected to influence each other. Once that is done, the LCD program could be said to have not just a set of indicators of change, but a Theory of Change about how the changes interact and function as a whole.

Indicator level changes as shared building blocks

There are two further challenges here to developing an evaluable Theory of Change for LCD. One is the multiplicity of possible causal linkages. The second is the diversity of perspectives on which of these possible causal linkages really matter. With 28 different indicator-specific changes there are, at least hypothetically, many thousands of different possible combinations that could make up a given Theory of Change (i^i, where i = the number of indicator-specific changes). But it can well be argued that “this is a feature, not a bug”. As the title of this blog suggests, the 28 indicators can be considered as equivalent to Lego building blocks. The same set (or parts thereof) can be combined in a multiplicity of ways, to construct very different ToC. The positive side to this picture is the flexibility and low cost: different ToC can be constructed for different countries, but each one does not involve a whole new set of data collection requirements. In fact it is reasonable to expect that in each country the causal linkages between different changes may be quite different, because of the differences in the physical, demographic, cultural and economic context.

Documenting expected causal linkages (how the blocks are put together)

There are other more practical challenges, relating to how to exploit this flexibility. How do you seek stakeholder views of the expected causal connections, without getting lost in a sea of possibilities? One approach I have used in Indonesia and in Vietnam involves the use of simple network matrices, in workshops involving donor agencies and/or the national partners associated with a given project. Two examples are shown below. These don’t need to be understood in detail (one is still in Vietnamese); it is their overall structure that matters.

A network matrix simply shows the entities that could be connected in the left column and top row. The convention is that each cell in the matrix provides data on whether that row entity is connected to that column entity (and it may also describe the nature of the connection)

The Indonesian example below shows expected relationships between 16 Output indicators (left column) and 11 Purpose level indicators (top row) in a maternal health project. Workshop participants were asked to consider one Purpose level indicator at a time, and allocate 100 percentage points across the 16 Output indicators, with more percentage points meaning that an Output was expected to have more impact on that Purpose indicator, relative to the other Output indicators. Debate was encouraged between participants as figures were proposed for each cell down a column. Looking within the matrix we can see that for Purpose 3 it was agreed that Output indicator 1.1 would have the most impact. For some other Purpose level changes, impact was expected from a wider range of Outputs. The column on the right side sums up the relative expected impact of each Output, providing useful guidance on where monitoring attention might be most usefully focused.


This exercise was completed in little over an hour. The matrix of results shows one set of expected relationships amongst many thousands of other possible sets that could exist within the same list of indicators. The same kind of data can be collected on a larger scale via online surveys, where the options down each column are represented within a single multiple choice question. Matrices like these, obtained either from different individuals or different stakeholder groups, can be compared with each other to identify relationships (i.e. specific cells) where there is the most/least agreement, as well as which relationships are seen as most important when all stakeholder views are added up. This information should then inform the focus of evaluations, allowing scarce attention and resources to be directed to the most critical relationships.
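As a sketch of the arithmetic involved once such a matrix has been collected, the following Python fragment builds a small made-up Output x Purpose matrix (each column summing to 100 percentage points, as in the exercise above), derives the row totals used to guide monitoring attention, and compares two stakeholder groups' matrices cell by cell. The indicator names and all numbers are invented.

```python
# Sketch of the network-matrix arithmetic described above, using invented numbers.
# Rows = Output indicators, columns = Purpose indicators; each column of
# percentage points sums to 100, as in the workshop exercise.
import numpy as np

outputs = ["Output 1.1", "Output 1.2", "Output 2.1"]

group_a = np.array([      # one stakeholder group's judgements
    [60, 20, 80],
    [30, 50, 10],
    [10, 30, 10],
])
group_b = np.array([      # a second group's judgements, for comparison
    [50, 30, 70],
    [30, 40, 20],
    [20, 30, 10],
])

# Row totals: the relative expected impact of each Output across all Purposes,
# i.e. where monitoring attention might be most usefully focused.
for name, total in zip(outputs, group_a.sum(axis=1)):
    print(name, total)

# Cell-by-cell disagreement between the two groups: large absolute differences
# flag the expected causal links most worth probing in an evaluation.
print(np.abs(group_a - group_b))
```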

The second example of a network matrix used to explicate a tacit ToC comes from Vietnam, and is shown below. In this example, a Ministry’s programmes are shown (unconventionally) across the top row and the country’s 5 year plan objectives are shown down the left column. Cell entries, discussed and proposed by workshop participants, show the relative expected causal contribution of each programme to each 5 year plan objective. The summary row on the bottom shows the aggregate expected contribution of each programme, and the summary column on the right shows the aggregate extent to which each 5 year plan objective was expected to be affected.


 Modularity

The modules referred to in the title of this blog can be seen as referring to two types of entities that can be used to construct many different kinds of ToC. One is the indicator-specific changes in the LCD Logframe, for example. By treating them as a standard set available for use by different stakeholders in different settings, we may gain flexibility at a low cost. The other is the grouping of indicator-specific changes into categories (e.g. Outputs 1-2-3-4) and larger sets of categories (Outputs, Outcomes, Purpose). The existence of one or more nested types of entities is sometimes described as modularity. In evolutionary theory it has been argued that modularity in design improves evolvability. This can happen: (a) by allowing specific features to undergo changes without substantially altering the functionality of the entire system, and (b) by allowing larger, more structural changes to occur by recombining existing functional units.

In the conceptual world of Logframes, and the like, this suggests that we may need to think of ToC being constructed at multiple levels of detail, by different-sized modules. In the LCD Logframe impact weightings had already been assigned to each Output, indicating its relative expected contribution to the Outcomes as a whole. But the flexibility of ToC design at this level was seriously constrained by the structure of the representational device being used. In a Logframe, Outputs are expected to influence Outcomes, but not the other way around. Nor are they expected to influence each other, contra other more graphic-based logic models. Similarly, both of the above network matrix exercises made use of existing modules and accepted the kinds of relationships that were expected between them (Outputs should influence Purpose level changes; Ministry Programmes should influence 5 Year Plan objective achievements).

The value of multiple causal pathways within a ToC

More recently I have seen the ToC for a major area of DFID policy that will be coming under review. This is represented in diagrammatic form, showing various kinds of events (including some nested categories of events), and also shows the expected causal relationships between these events. It was quite a complex diagram, perhaps too much so for those who like traffic-light level simplicities. However, what interested me the most is that subsequent versions have been used to show how two specific in-country programs fit within this generic ToC. This has been done by highlighting the particular events that make up one of a number of causal chains that can be found within the generic ToC. In doing so it appears to be successfully addressing a common problem with generic ToC: the inability to reflect the diversity of the programs that make up the policy area described by a generic ToC.

Shared causal pathways justify more evaluation attention

This innovation points to an alternate and additional use of the matrices above. The cell numbers could refer to the number of constituent programs in a policy area (and/or which are funded by a single funding mechanism) that involve a particular causal link (i.e. between the row event and the column event). The higher this number, the more important it would be for evaluations to focus on that causal link, because the findings would have relevance across a number of programs in the policy area.


Thursday, February 16, 2012

Evaluation questions: Managing agency, bias and scale



It is common to see in the Terms of Reference (ToRs) of an evaluation a list of evaluation questions. Or, at least a requirement that the evaluator develops such a list of questions as part of the evaluation plan. Such questions are typically fairly open-ended “how” and “whether” type questions. On the surface this approach makes sense. It gives some focus but leaves room for the unexpected and unknown.

But perhaps there is an argument for a much more focused and pre-specified approach. 

Agency

There are two grounds on which such an argument could be made. One is that aid organisations implementing development programs have “agency”, i.e. they are expected to be able to assess the situation they are in and act on the basis of informed judgements. They are not just mechanical instruments for implementing a program, like a computer. Given this fact, one could argue that evaluations should not simply focus on the behaviour of an organisation and its consequences, but on the organisation’s knowledge of its behaviour and its consequences. If that knowledge is misinformed then the sustainability of any achievements may be seriously in doubt. Likewise, it may be less likely that unintended negative consequences of a program will be identified and responded to appropriately.

One way to assess an organisation’s knowledge is to solicit their judgements about program outcomes in a form that can be tested by independent observation. For example, an organisation’s view on the percentage of households who have been lifted above the poverty line as a result of a livelihood intervention. An external evaluation could then gather independent data to test this judgement, or more realistically, audit the quality of the data and analysis that the organisation used to come to their judgement. In this latter case the role of the external evaluator is to undertake a meta-evaluation, evaluating an organisation’s capacity by examining their judgements relating to key areas of expected program performance. This would require focused evaluation questions rather than open-ended evaluation questions.

Bias

The second argument arises from a body of research and argument about the prevalence of what appears to be endemic bias in many fields of research: the under-reporting of negative findings (i.e. non-relationships) and the related tendency of positive findings to disappear over time. The evidence here makes salutary reading, especially the evidence from the field of medical research, where research protocols are perhaps the most demanding of all (for good reason, given the stakes involved). Lehrer’s 2010 article in The New Yorker, “The Truth Wears Off: Is there something wrong with the scientific method?”, is a good introduction, and Ioannidis’ work (cited by Lehrer) provides the more in-depth analysis and evidence.

One solution that has been proposed to the problem of under-reporting of negative findings is the establishment of trial registries, whereby plans for experiments would be lodged in advance, before their results are known. This is now established practice in some fields of research and has recently been proposed for the use of randomised control trials by development agencies.[1] Trial registries can provide two lines of defence against bias. The first is to make visible all trials, regardless of whether they are deemed “successful” and get published, or not. The other defence is against inappropriate “data mining”[2] within individual trials. The risk is that researchers can examine so many possible correlations between independent and dependent variables that some positive correlations will appear by chance alone. This risk is greater where a study looks at more than one outcome measure and at several different sub-groups. Multiple outcome measures are likely to be used when examining the impact on complex phenomena such as poverty levels or governance, for example. When there are many relationships being examined there is also the well known risk of publication bias, of the evaluator only reporting the significant results.

These risks can be managed partly by the researchers themselves. Rasmussen et al suggest that if the outcomes are assumed to be fully independent, statistical significance values should be divided by the number of tests. Other approaches involve constructing mean standardised outcomes across a family of outcome measures. However, these do not deal with the problem of selective reporting of results. Rasmussen et al argue that this risk would be best dealt with through the use of trial registries, where relationships to be examined are recorded in advance. In other words, researchers would spell out the hypothesis or claim to be tested, rather than simply state an open-ended question. Open-ended questions invite cherry picking of results according to the researcher’s interests, especially when there are a lot of them.
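To illustrate the kind of adjustment Rasmussen et al are describing (in effect a Bonferroni correction), here is a minimal sketch; the p-values are invented and the 0.05 threshold is just the conventional default.

```python
# Minimal sketch of the adjustment described above: when several (assumed
# independent) outcomes are tested, divide the significance threshold by the
# number of tests (a Bonferroni correction). The p-values are invented.

p_values = [0.04, 0.008, 0.20, 0.03]      # one test per registered outcome
alpha = 0.05                              # conventional unadjusted threshold
adjusted_alpha = alpha / len(p_values)    # 0.05 / 4 = 0.0125

for i, p in enumerate(p_values, start=1):
    verdict = "significant" if p < adjusted_alpha else "not significant"
    print(f"outcome {i}: p = {p} -> {verdict} at adjusted alpha = {adjusted_alpha}")
```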

As I have noted elsewhere, there are risks with this approach. One concern is that it might prevent evaluators from looking at the data and identifying new hypotheses that genuinely emerge as being of interest and worth testing. However, registering hypotheses to be tested would not preclude this possibility. It should, however, make it evident when this is happening, and therefore encourage the evaluator to provide an explicit rationale for why additional hypotheses are being tested.

Same again, on a larger scale

The problems of biased reporting re-appear when individual studies are aggregated. Ben Goldacre explains:  

“But individual experiments are not the end of the story. There is a second, crucial process in science, which is synthesising that evidence together to create a coherent picture.
In the very recent past, this was done badly. In the 1980s, researchers such as Cynthia Mulrow produced damning research showing that review articles in academic journals and textbooks, which everyone had trusted, actually presented a distorted and unrepresentative view, when compared with a systematic search of the academic literature. After struggling to exclude bias from every individual study, doctors and academics would then synthesise that evidence together with frightening arbitrariness.

The science of "systematic reviews" that grew from this research is exactly that: a science. It's a series of reproducible methods for searching information, to ensure that your evidence synthesis is as free from bias as your individual experiments. You describe not just what you found, but how you looked, which research databases you used, what search terms you typed, and so on. This apparently obvious manoeuvre has revolutionised the science of medicine.”

Reviews face the same risks as individual experiments and evaluations. They may be selectively published, and their individual methodologies may not adequately deal with the problem of selective reporting of the more interesting results – sometimes described as cherry picking.  The development of review protocols and the registering of those prior to a review are an important means of reducing biased reporting, as they are with individual experiments. Systematic reviews are already a well established practice in the health sphere under the Cochrane Collaboration and in social policy under the Campbell Collaboration. Recently a new health sector journal, Systematic Reviews, has been established with the aim of ensuring that the results of all well-conducted systematic reviews are published, regardless of their outcome. The journal also aims to promote discussion of review methodologies, with the current issue including a paper on “Evidence summaries”, a rapid review approach.

It is commonplace for large aid organisations to request synthesis studies of achievements across a range of programs, defined by geography (e.g. a country program) or subject matter (e.g. livelihood interventions). A synthesis study requires some meta-evaluation, of what evidence is of sufficient quality and what is not. These judgements inform both the sampling of sources and the weighing of evidence found within the selected sources. Despite the prevalence of synthesis studies, I am not aware of much literature existing on appropriate methodologies for such reviews, at least within the sphere of development evaluation. [I would welcome corrections to this view]

However, there are signs that experiences elsewhere with systematic reviews are being attended to. In the development field, the International Development Coordinating Group has been established, under the auspices of the Campbell Collaboration, with the aim of encouraging registration of review plans and protocols and then disseminating “systematic reviews of high policy-relevance with a dedicated focus on social and economic development interventions in low and middle income countries”. DFID and AusAID have funded 3ie to commission a body of systematic reviews of what it identifies as rigorous impact evaluations, in a range of development fields. More recently an ODI Discussion Paper has reviewed some experiences with the implementation of systematic reviews. Associated with the publication of this paper was a useful online discussion.

Three problems that were identified are of interest here. One is the difficulty of accessing source materials, especially evaluation reports, many of which are not in the public domain but should be. This problem is faced by all review methods, systematic and otherwise. It is now being addressed on multiple fronts, by individual organisation initiatives (e.g. the 3ie and IDS evaluation databases) and by collective efforts such as the International Aid Transparency Initiative. The authors of the ODI paper note that “there are no guarantees that systematic reviews, or rather the individuals conducting them, will successfully identify every relevant study, meaning that subsequent conclusions may only partially reflect the true evidence base.” While this is true of any type of review process, it is the transparency of the sample selection (via protocols) and the visibility of the review itself (via registries) which help make this problem manageable.

The second problem, as seen by the authors, is that “Systematic reviews tend to privilege one kind of method over another, with full-blown randomised controlled trials (RCTs) often representing the ‘gold standard’ of methodology and in-depth qualitative evidence not really given the credit it deserves.” This does not have to be the case. A systematic review has been usefully defined as “an overview of primary studies which contains an explicit statement of objectives, materials, and methods and has been conducted according to explicit and reproducible methodology”. Replicability is key, and this requires a systematic and transparent process relating to sampling and analysis. This should be evident in protocols.

A third problem was identified by 3ie, in their commentary on the Discussion Paper. This relates directly to the initial focus of this blog, the argument for more focused evaluation questions. They comment that:

“Even with plenty of data available, making systematic reviews work for international development requires applying the methodology to clearly defined research questions on issues where a review seems sensible. This is one of the key lessons to emerge from recent applications of the methodology. A review in medicine will often ask a narrow question such as the Cochrane Collaboration’s recent review on the inefficacy of oseltamivir (tamiflu) for preventing and treating influenza. Many of the review questions development researchers have attempted to answer in recent systematic reviews seem too broad, which inevitably leads to challenges. There is a trade-off between depth and breadth, but if our goal is to build a sustainable community of practice around credible, high quality reviews we should be favouring depth of analysis where a trade-off needs to be made.”





[1] By the head of DFID EvD in 2011 and by Rasmussen et al, see below.
[2] See Ole Dahl Rasmussen, Nikolaj Malchow-Møller, Thomas Barnebeck Andersen, Walking the talk: the need for a trial registry for development interventions,  available via http://mande.co.uk/2011/uncategorized/walking-the-talk-the-need-for-a-trial-registry-for-development-interventions/