Thursday, May 24, 2012

A perspective on "Value for Money" relationships


The constituents of "value for money"


Matrices can be a useful means of showing the results of different combinations of things. In this matrix I show how three different performance attributes can be seen as the results of different combinations of changes in unit costs and effectiveness.
Source: Department of Crude Measures

PS: DFID and ICAI documents talk about Value for Money (VfM) as being made up of three elements: Economy, Efficiency and Effectiveness. But if we take VfM literally, as being about a relationship between value and money, then two of these three elements don't belong. Economy is just about money and effectiveness is just about value. For more, perhaps too much, on ideas about VfM, see this list of documents at www.mande.co.uk

Another take on definitions

My client is faced with the task of comparing multiple diverse projects within a policy portfolio. I have to think about what sort of comparisons are possible in this context. I come up with the following matrix:
Applying this simple set of distinctions may not be so easy. At what point would you be able to say two or more interventions were the same kind and scale? Or that the outcomes of two or more interventions were the same kind and scale?

Thursday, April 19, 2012

Data mining algorithms as evaluation tools


For years now I have been in favour of theory-led evaluation approaches. Many of the previous postings on this website are evidence of this. But this post is about something quite different: a particular form of data mining, how to do it, and how it might be useful. Some have argued that data mining is radically different from hypothesis-led research (or evaluation, for that matter). Others have argued that there are some important continuities and complementarities (Yu, 2007).

Recently I have started reading about different data mining algorithms, especially the use of what are called classification trees and genetic algorithms (GAs). The latter was the subject of my recent post, about whether we could evolve models of development projects as well as design them. Genetic algorithms are software embodiments of the evolutionary algorithm (i.e. iterated variation, selection, retention) at work in the biological world. They are good for exploring large possibility spaces and for coming up with new solutions that may not be close to current practice.

I had wondered if this idea could be connected to the use of Qualitative Comparative Analysis (QCA), a method of identifying configurations of attributes (e.g. of development projects) associated with a particular type of outcome (e.g. reduced household poverty). QCA is a theory-led approach, which uses very basic forms of data about attributes (i.e. categorical), describes configurations of these attributes using Boolean logic expressions, and analyses these with the help of software that can compare and manipulate such statements. The aim is to come up with a minimal number of simple “IF…THEN” type statements describing what sorts of conditions are associated with particular outcomes. This is potentially very useful for development aid managers, who are often asking “what works where in what circumstances”. (But before that there is the challenge of getting on top of the technical language required to do QCA.)
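To make the idea of a configuration concrete, here is a minimal sketch in Python. The attribute names and cases are invented, purely for illustration; the point is only to show how a configuration can be written as a Boolean expression and checked for how consistently it is associated with the outcome.

    # A minimal sketch of the kind of Boolean "IF...THEN" statement QCA works with.
    # Attribute names and cases are hypothetical, for illustration only.
    cases = [
        {"local_partner": 1, "trained_staff": 1, "remote_area": 0, "poverty_reduced": 1},
        {"local_partner": 1, "trained_staff": 1, "remote_area": 1, "poverty_reduced": 0},
        {"local_partner": 0, "trained_staff": 1, "remote_area": 0, "poverty_reduced": 0},
        {"local_partner": 1, "trained_staff": 0, "remote_area": 0, "poverty_reduced": 1},
    ]

    # One candidate configuration: IF local_partner AND NOT remote_area THEN poverty_reduced
    def configuration(case):
        return case["local_partner"] == 1 and case["remote_area"] == 0

    # How consistently is this configuration associated with the outcome?
    covered = [c for c in cases if configuration(c)]
    consistency = sum(c["poverty_reduced"] for c in covered) / len(covered) if covered else 0.0
    print(len(covered), "cases covered; consistency =", consistency)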

My initial thought was whether genetic algorithms could be used to evolve and test statements describing different configurations, as distinct from constructing them one by one on the basis of a current theory. This might lead to quicker resolution, and perhaps to discoveries that had not been suggested by current theory.

As described in my previous post, there is already a simple GA built into Excel, known as Solver. What I could not work out was how to represent logic elements like AND, NOT, OR in such a way that Solver could vary them to create different statements representing different configurations of existing attributes.  In the process of trying to sort out this problem I discovered that there is a  whole literature on GAs and rule discovery (rules as in IF-THEN statements). Around the same time, a technical adviser from FrontlineSolver suggested I try a different approach to the automated search for association rules. This involved the use of Classification Trees, a tool which has the merit of producing results which are readable by ordinary mortals, unlike the results of some other data mining methods. 
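For what it is worth, one way the rule-discovery literature gets around the representation problem is to give each attribute its own gene with three possible values: "must be present", "must be absent", or "ignore", which stand in for the AND / NOT / "don't care" elements of an IF...THEN rule. The sketch below is a toy illustration of that encoding in Python, with invented data (attributes A-D); it is not the Excel/Solver approach itself.

    # Toy GA for rule discovery. Each gene is 1 (attribute must be present),
    # 0 (must be absent) or None (ignored).
    import random

    ATTRIBUTES = ["A", "B", "C", "D"]

    # Invented cases: attribute values plus the outcome of interest (1 = present)
    CASES = [
        ({"A": 1, "B": 0, "C": 1, "D": 0}, 1),
        ({"A": 1, "B": 0, "C": 0, "D": 1}, 1),
        ({"A": 1, "B": 1, "C": 1, "D": 0}, 0),
        ({"A": 0, "B": 0, "C": 1, "D": 1}, 0),
        ({"A": 1, "B": 0, "C": 0, "D": 0}, 1),
        ({"A": 0, "B": 1, "C": 0, "D": 1}, 0),
    ]

    def rule_matches(rule, case):
        return all(case[a] == g for a, g in zip(ATTRIBUTES, rule) if g is not None)

    def fitness(rule):
        # Accuracy of the statement "IF rule THEN outcome = 1" across all cases
        return sum((1 if rule_matches(rule, c) else 0) == y for c, y in CASES) / len(CASES)

    def mutate(rule):
        new = list(rule)
        new[random.randrange(len(new))] = random.choice([0, 1, None])
        return tuple(new)

    # Very small GA: random population, keep the fitter half, refill by mutation
    population = [tuple(random.choice([0, 1, None]) for _ in ATTRIBUTES) for _ in range(30)]
    for generation in range(50):
        population.sort(key=fitness, reverse=True)
        population = population[:15] + [mutate(random.choice(population[:15])) for _ in range(15)]

    best = max(population, key=fitness)
    print("Best rule:", dict(zip(ATTRIBUTES, best)), "accuracy:", fitness(best))

With the invented data above this converges on something like "IF A is present AND B is absent THEN the outcome is present". Crossover, and fitness measures that also reward rule simplicity, are obvious refinements.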

An example!

This Excel file contains a small data set, which has previously been used for QCA analysis. It contains 36 cases, each with 4 attributes and 1 outcome of interest. The cases relate to different ethnic minorities in countries across Europe and the extent to which there has been ethnic political mobilisation in their countries (being the outcome of interest). Both the attributes and outcomes are coded as either 0 or 1 meaning absent or present. 

With each case having up to four different attributes there could be 16 different combinations of attributes. A classification algorithm in XLMiner software (and others like it) is able to automatically sort through these possibilities to find the simplest classification tree that can correctly point to where the different outcomes take place. XLMiner produced the following classification tree, which I have annotated and will walk through below.
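As an aside, for readers without XLMiner, the same kind of tree can be grown with open-source tools. Below is a minimal sketch using Python's scikit-learn; the file name and column names are my assumptions (based on the attribute descriptions that follow), so they would need to be adjusted to match the actual spreadsheet.

    # Sketch: fit and print a classification tree for the 36-case data set.
    # File and column names below are assumed, not the spreadsheet's actual labels.
    import pandas as pd
    from sklearn.tree import DecisionTreeClassifier, export_text

    df = pd.read_excel("ethnic_mobilisation.xlsx")  # hypothetical file name
    attributes = ["large", "growing", "wealthy", "writes_own_language"]  # assumed columns
    outcome = "mobilisation"  # assumed column

    tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
    tree.fit(df[attributes], df[outcome])

    # Print the tree as readable, indented IF...THEN style splits
    print(export_text(tree, feature_names=attributes))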



We start at the top with the attribute “large”, referring to the size of the linguistic subnation within its country. Those that are large have then been divided according to whether their subnational region is “growing” or not. Those that are not growing have then been divided into those that are a relatively “wealthy” group within their nation and those that are not. The smaller linguistic subnations have also been divided into those that are a relatively wealthy group within their nation and those that are not, and those that are relatively wealthy are then divided according to whether they speak and write in their own language or not. The square nodes at the end of each “branch” indicate the outcome associated with that combination of conditions: ethnic political mobilisation present (1) or absent (0). Under each square node are listed the ethnic groups placed in that category. These results fit with the original data in Excel (right column).

This is my summary of the rules described by the classification tree:
  • IF a linguistic subnation’s economy is large AND growing THEN ethnic political mobilisation will be present [14 of 19 positive cases]
  • IF a linguistic subnation’s economy is large, NOT growing AND is relatively wealthy THEN ethnic political mobilisation will be present [2 of 19 positive cases]
  • IF a linguistic subnation’s economy is NOT large AND is relatively wealthy AND speaks and writes in its own language THEN ethnic political mobilisation will be present [3 of 19 positive cases]
Both QCA and classification trees have procedures for simplifying the association rules that are found. With classification trees there is an automated “pruning” option that removes redundant parts. My impression is that there are no redundant parts in the above tree, but I may be wrong.
These rules are, in realist evaluation terminology, describing three different configurations of possible causal processes. I say "possible" because what we have above are associations. Like correlation coefficients, they don't necessarily mean causation. However, they are at least candidate configurations of causal processes at work.

The origins of this data set and its coding are described in pages 137-149 of The Comparative Method: Moving Beyond Qualitative and Quantitative Strategies by Charles C. Ragin, viewable on Google Books. Also discussed there is the QCA analysis of this data set and its implications for different theories of ethnic political mobilisation. My thanks to Charles Ragin for making the data set available.

I think this type of analysis, by both QCA and classification tree algorithms, has considerable potential use in the evaluation of development programs. Because it uses nominal data, the range of data sources that can be used is much wider than for statistical methods that need ratio or interval scale data. Nominal data can either be derived from pre-existing, more sophisticated data (by using cut-off points to create categories) or be collected as primary data, including by participatory processes such as card/pile sorting and ranking exercises. The results, in the form of IF…THEN rules, should be of practical use, if only in the first instance as a source of hypotheses needing further testing by more detailed inquiries.
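The first route, deriving categories from pre-existing data via cut-off points, is trivial to do in software. A sketch with invented figures:

    # Sketch: deriving nominal data from interval data by applying cut-off points.
    # The variable name, values and thresholds are purely illustrative.
    import pandas as pd

    incomes = pd.Series([120, 340, 85, 410, 230], name="household_income")

    # Dichotomise around a chosen cut-off: 1 = above, 0 = at or below
    above_cutoff = (incomes > 200).astype(int)

    # Or bin into several named categories
    bands = pd.cut(incomes, bins=[0, 150, 300, float("inf")], labels=["low", "medium", "high"])
    print(pd.DataFrame({"income": incomes, "above_cutoff": above_cutoff, "band": bands}))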

There are some fields of development work where large amounts of potentially useful, but rarely used, data are generated on a continuing basis, such as microfinance services and, to a lesser extent, health and education services. Much of the demand for data mining capacity has come from industries that are finding themselves flooded with data, but lack the means to exploit it. This may well be the case with more development agencies in the future, as they make more use of interactive websites, mobile phone data collection methods and the like.

For those who are interested, there is a range of software worth exploring in addition to the package I have mentioned above. See these lists: A and B. I have a particular interest in GATree, which uses a genetic algorithm to evolve the best-fitting classification tree and so avoid the problem of being stuck in a “local optimum”. There is also another type of algorithm with the delightful name of Random Forests, which uses the “wisdom of crowds” principle to find the best-fitting classification tree. But note the caveat: “Unlike decision trees, the classifications made by Random Forests are difficult for humans to interpret”. These and other algorithms are in use by participants in the Kaggle competitions online, which could themselves be considered a kind of semi-automated meta-algorithm (i.e. an algorithm for finding useful algorithms). Lots to explore!

PS: I have just found and tested another package, called XLSTAT, that also generates classification trees. Here is a graphic showing the same result as found above, but in more detail. (Click on the image to enlarge it)

PS 29 April 2012: In QCA distinctions are made between a condition being "necessary" and/or "sufficient" for the outcome to occur. In the simplest setting a single condition can be a necessary and sufficient cause. In more complex settings a single condition may be a necessary part of a configuration of conditions which is itself sufficient but not necessary: for example, a "growing" economy in the right branch of the first tree above. In classification trees the presence/absence of necessary/sufficient conditions can easily be observed. If a condition appears in all "yes" branches of the tree (= different configurations) then it is "necessary". If a condition appears along with another in a given "yes" branch of the tree then it is not "sufficient". "Wealthy" is a condition that appears necessary but not sufficient. See more on this distinction in a more recent post: Representing different combinations of causal conditions
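This check can be made mechanical. The sketch below applies the two tests to a hypothetical set of "yes" branch configurations; the conditions listed are invented for illustration, not taken from the tree above.

    # Sketch: reading necessity and sufficiency off a set of "yes" branches.
    # Each set lists the conditions present on one branch ending in outcome = 1.
    configurations = [
        {"wealthy", "growing"},
        {"wealthy", "large"},
        {"wealthy", "writes_own_language"},
    ]

    all_conditions = set().union(*configurations)
    for condition in sorted(all_conditions):
        necessary = all(condition in config for config in configurations)     # in every branch?
        sufficient = any(config == {condition} for config in configurations)  # ever appears alone?
        print(condition, "necessary:", necessary, "sufficient:", sufficient)

In this invented example "wealthy" comes out as necessary (it appears in every "yes" branch) but not sufficient (it never appears on its own).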

PS 4 May 2012: I have just discovered what looks like a very good open source data mining package called RapidMiner, which comes with a whole stack of training videos and a big support and development community.


PS 29 May 2012: Pertinent comment from Dilbert 

PS 3 June 2012: Prediction versus explanation: I have recently found a number of web pages on the issue of prediction versus explanation. Data mining methods can deliver good predictions. However, information relevant to good predictions does not always provide good explanations, e.g. smoking may be predictive of teenage pregnancy but it is not a cause of it (see an interesting exercise here). So is data mining a waste of time for evaluators? On reflection it occurred to me that it depends on the circumstances and how the results of any analysis are to be used. In some circumstances the next step may be to choose between existing alternatives, for example which organisation or project to fund. Here good predictive knowledge, based on data about past performance, would be valuable. In other circumstances a new intervention may need to be designed from the ground up. Here access to some explanatory knowledge about possible causal relationships would be especially relevant. On further reflection, even where a new intervention has to be designed it is likely to involve choices between various modules (e.g. kinds of staff, kinds of activities) where knowledge of their past performance record is very relevant. But so would be a theory about their likely interactions.

At the risk of being too abstract, it would seem that a two-way relationship is needed: proposed explanations need to be followed by tested predictions, and successful predictions need to be followed by verified explanations.











Thursday, April 05, 2012

Criteria for assessing the evaluability of Theories of Change


2019 05 21 Update: Please also see

Evaluability Assessments: Reflections on a review of the literature. Davies, R., Payne, L., 2015. Evaluation 21, 216–231. PDF copy


2012: Our team has recently begun work on an evaluability assessment of an agency's work in a particular policy area, covering many programs in many countries. Part of our brief is to examine the evaluability of the programs' Theory of Change (ToC). 

In order to do this we clearly need to identify some criteria for assessing the evaluability of ToC. I initially identified five which I thought might be appropriate, and then put these out to the members of the MandE NEWS email list for comment. Many comments were quickly forthcoming. In all, a total of 20 people responded in the space of two days (Thanks to Bali, Dwiagus, Denis, Bob, Helene, Mustapha, Justine, Claude, Alex, Alatunji, Isabel, Sven, Irene, Francis, Erik, Dinesh, Rebecca, John, Rajan and Nick).

Caveats and clarifications

What I have presented below is my current perspective on the issue of evaluability criteria, as informed by these responses. It is not intended to be an objective and representative description of the responses (look here for a copy of all the comments received). (You can also download this posting as a pdf.)

The word "evaluable" needs some clarification. In the literature on evaluability assessments it has two meanings. The main one is that it is possible to evaluate something. For example, if the theory is clear and the data is available. The second meaning is more practically oriented. The theory may be clear and the data available, but the theory may be so implausible that it is simply not worth expending resources on its evaluation. Or there may be a perfectly good ToC, but if no one owns it apart from a consultant who visited the project six months ago, so it might be questionable whether expensive resources should be invested in its evaluation.

We also need to distinguish between an evaluable ToC and a “good” ToC. A ToC may be evaluable because the theory is clear and plausible, and relevant data is available. But as the program is implemented, or following its evaluation, it might be discovered that the ToC was wrong, that people or institutions don’t work the way the theory expected them to. It was a “bad” ToC. Alternatively, it is also possible that a ToC may turn out to be good, but the poor way it was initially expressed made it un-evaluable until remedial changes were made.

This brings us to a third clarification. My minimalist definition of a ToC is quite simple: “the description of a sequence of events that is expected to lead to a particular desired outcome”. Such a description could be in text, tables, diagrams or a combination of these. Falling within the scope of this definition we could of course find ToC that are evaluable and those that are not so evaluable.

A possible list of criteria for assessing the evaluability of a Theory of Change (Version 2)
·         Understandable
o   Do the individual readers of the ToC find it easy to understand?  Is the text understandable? If used, is the diagram clear?
o   Do different people interpret the ToC in the same way?
o   Do different documents give consistent representations of the same ToC?
·         Verifiable
o   Are the events described in a way that could be verified? This is the same territory as that of Objectively Verifiable Indicators (OVIs) and Means of Verification (MoVs) found in LogFrames
·         Testable
o   Are there identifiable causal links between the events? Often there are not
o   Are the linked events parts of an identifiable causal pathway?
·         Explained
o   Are there explanations of how the connections are expected to work? Connections are common, explanations of the causal process involved are much less so.
o   Have the underlying assumptions been made explicit? (also duplicated below)
·         Complete
o   Does what might be a long chain of events make a connection between the intervening agent and the intended beneficiaries (/targets of their actions)? In a recent example I have seen, the ToC is quite detailed at the beneficiary end, but surprisingly vague and unspecific towards the agent’s end, even though that is where accountability might be more immediately expected.
·         Inclusive (a better term is needed here)
o   Does the ToC encompass the diversity of contexts it is meant to cover? In ToC covering whole portfolios of projects there could be a substantial diversity of contexts and interventions. Does the ToC provide room for these without sacrificing too much in terms of verifiability and testability? See Modular Theories of Change: A means of coping with diversity and change? for some views on how to respond to this challenge.
·         Justifiable (new)
o   Is there evidence supporting the sequence of events in the ToC? Either from past studies, previous projects, and/or from a situation analysis/baseline study or the like which is part of the design/inception stage of the current project
·         Plausible (new)
o   Where there is no prior evidence is the sequence of events plausible, given what is known about the intervention and the context?
o   Have the underlying assumptions been made explicit?
o   Have contextual factors been recognised as important mediating variables?
·         Owned
o   Can those responsible for contents of the ToC be identified?
o   How widely owned is the ToC?
o   Do their views have any consequences?
·         Embedded
o   Are the contents of the ToC also referred to in other documents that will help ensure that it is operationalised?

Weighting

It was sensibly suggested that some criteria were more important than others. One respondent argued that if you can establish that the causal links in a ToC are evidence based then ‘ownership will and shall follow’.

In individual evaluability assessments a simple sense of their relative priority may be sufficient. When comparisons need to be made of the evaluability of multiple programs, it may be necessary to think about weighted scoring mechanisms/checklists. 

Purpose

It was suggested that the criteria used would depend on the purpose for which the ToC was created. An understanding of the Purpose could therefore inform the weighting given to the different criteria.
Prior to consulting the email list members I had drafted a list of three possible purposes that could generate different kinds of evaluation questions, which an evaluability assessment would need to consider. They were:
·         If the purpose of the ToC was to set direction
o   Then we need to ask: were programs designed accordingly?
·         If the purpose of the ToC was to make a prediction
o   Then we need to ask: did the programs subsequently turn out this way?
·         If the purpose of the ToC was to provide a summation
o   Then we need to ask: is this an accurate picture of what actually happened?

One criticism of the inclusion of prediction was that most ToC are nothing like scientific models and because of this they are typically insufficient in their contents to generate any attributable predictions.  This may be true in the sense that scientific predictions aim to be generalisable, albeit subject to specific conditions e.g. that gravity behaves the same way in different parts of the universe. But most program ToC have much more location-specific predictions in mind, e.g. about the effects of a particular intervention in a particular place. There are interesting exceptions however, such as a ToC about a whole portfolio of programs, or a ToC about a whole policy area that might be operationalised through investment portfolios managed in a range of countries. There the criticism of incapacity may be more relevant.

The same critic proposed an alternate purpose to prediction, one where simplicity might be more of a virtue than a liability. A ToC may aim to communicate or generate insight, by focusing on the core of an idea that is driving or inspiring a program. If so, then evaluation questions could focus on how the ToC has changed the users’ understanding of the issues involved. This question about effects could be extended to include the effects of participation in the process whereby the ToC was developed.
PS: A similar point was made by another contributor, in a parallel related discussion on the KBF email list, who distinguished between two purposes:
  • to model a situation to better understand it and programme around it
  • to simplify a complex situation to help explain it to others and persuade them of the logic of your proposed intervention (e.g. for funding).
...noting that “in practice there is often a tradeoff between the explanatory and persuasive aspects of the underlying logic”.

Issues arising about criteria

The following issues were raised.
·         Process and Product: The list above is largely about the ToC product, not the process whereby it was created. Some argued there needed to be a participatory process of development to ensure the ToC was “aligned with the needs of beneficiaries and the national objectives”. However, others argued that “ToC are not ‘development projects’ that must be aligned with the Paris Declaration, but rather tools that must be rigorous, applied without ‘complaisance’”. The hoped-for reality might lie in between: ToC are typically associated with specific project interventions, and the extent of their ownership is relevant to answering the practical aspects of evaluability. On the other hand, the rigour of their use as tools will affect their usefulness and whether they can be evaluated. The product-oriented criteria given above do include two criteria that may reflect the effects of a good development process, i.e. ownership and embeddedness.
·         Ownership: It was argued that ownership was not a criterion of a good ToC; often the consensus in science has been proved wrong. But in the above list the criterion of ownership is relevant to whether the ToC is worth evaluating; it is not a criterion of the value of the belief or understanding represented by the ToC. It could be argued that widely owned views of how a project is working are eminently worth evaluating, because of the risk that they are wrong.
·         This approach might lead, on the other hand, to the view that ToC with few owners should not be evaluated. This view was in effect questioned by an example cited of an evaluator coming up with their own alternative ToC, based on prior evaluation studies and research, in contrast to the politically motivated views of the official in charge of a program. This brings us back to the criteria listed above, and the idea of weighting them according to context (ownership versus justifiability).
·         Relevance: This proposed criterion begs the question: relevant to whom? Ownership of the ToC (voluntary or mandated) would seem to signify a degree of relevance.
·         Falsifiability: It was argued that this is the pre-eminent criterion of a good scientific theory, and one which needs more attention from development agencies when thinking about the ToC behind their interventions. The criteria in the list above address this to some extent by inquiring about the existence of clear causal links, along with good explanations for how they are expected to work. Perhaps “good” needs to be replaced by falsifiable, though I worry about setting the bar too high when most ToC I see barely manage to crawl. Many decent ToC do include multiple causal links. The more there are, the more vulnerable they are to disproof, because only one link needs to fail for the ToC not to work. This could be seen as a crude measure of falsifiability.
·         Flexibility: Although it was suggested that ToC be flexible and adaptable, this view is contentious, in that it seems to contradict the need for specificity (by being verifiable, testable, and explained) and thus falsifiability. However, there is no in-principle reason why a ToC can’t be changed. If it is, it becomes a different ToC, subject to a separate evaluation; it is not the same one as before. The only point to note here is that the findings of the adapted version would not validate the content of the earlier version.
·         Lack of adaptability may also be a problem. It was suggested that evaluators should ask: “When has the ToC been reviewed and how has it been adapted in the light of implementation experience, M&E data, dialogue and consultation with stakeholders?” If the answer is not for a long time, then there may be doubts about its current relevance, which could be reflected in limited ownership.
·         Clarity of logic as well as evidence: One commentator suggested that it should be made clear whether a given cause is both “necessary and sufficient”, presumably as distinct from alternative combinations of these terms. Necessity and sufficiency is a demanding criterion, and it is arguable whether many programs would satisfy it, or perhaps even should.
·         Simplicity: This suggested requirement (captured by Occam’s razor) is not as simple a requirement as it might sound. It will always be in tension with its opposite (captured by Ashby’s Law of Requisite Variety), which is that a theory must also have sufficient internal complexity to describe the complexity of the events it is seeking to describe. Along the same lines some commentators asked whether there was enough detail provided, the lack of which can affect verifiability and testability. Simplicity may win out as the more important criterion where a ToC is primarily intended as a communication tool.
·         Justifiability was highlighted as important. Plausibility was questioned: “What does that really mean? If based on common sense then it is incompatible with being evidence based! If humanity had to rely on common sense, the earth would still be flat!!” Plausibility is clearly not a good evaluation finding. But it is a useful finding for an evaluability assessment. If a ToC is not plausible then it makes no sense to go any further with the design of an evaluation. Justifiability is evidence of a good ToC, and is a judgement that might follow an evaluation. However, it might also be obvious before an evaluation, through an evaluability assessment, and lead to a decision that a further evaluation would not be useful.
Informed sources mentioned by contributors

Connell, J.P. & Kubisch, A.C. (1998) Applying a theory of change approach to the evaluation of comprehensive community initiatives: progress, prospects and problems, in: K. Fulbright-Anderson, A.C. Kubisch & J.P. Connell (Eds) New Approaches to Evaluating Community Initiatives. Volume 2: Theory, measurement and analysis (Queenstown, The Aspen Institute).  [courtesy of  John Mayne]

Connell and Kubisch suggest a number of attributes of a good theory of change.
·         It should be plausible.  Does common sense or prior evidence suggest that the activities, if implemented, will lead to desired results?
·         It should be agreed.  Is there reasonable agreement with the theory of change as postulated?
·         It should be embedded.  Is the theory of change embedded in a broader social and economic context, where other factors and risks likely to influence the desired results are identified?
·         It should be testable.  Is the theory of change specific enough to measure its assumptions in credible and useful ways?

Other sources that may be of interest
PS 30 April 2012: See also HIVOS posting on "How can I recognise a good quality Theory of Change?"

Friday, March 16, 2012

Can we evolve explanations of observed outcomes?


In mathematics and computer science, an optimization problem is the problem of finding the best solution from all feasible solutions. There are various techniques for doing so.

Science as a whole can be seen as an optimisation process, involving a search for explanations that have the best fit with observed reality.

In evaluation we often have a similar task, of identifying what aspects of one or more project interventions best explain the observed outcomes of interest, for example, the effects of various kinds of improvements in health systems on rates of infant mortality. This can be done in two ways. One is by looking internally at the design of a project, at its expected workings, and then trying to find evidence of whether it did so in practice. This is the territory of theory-led evaluation. The other way is to look externally, at alternative explanations involving other influences, and to seek to test those. This is ostensibly good practice but not very common in reality, because it can be time consuming and to some extent inconclusive, in that there may always be other explanations not yet identified and thus untested. This is where randomised control trials (RCTs) come in. Randomised allocation of subjects between control and intervention groups nullifies the possible influence of other external causes. Qualitative Comparative Analysis (QCA) takes a slightly different approach, searching for multiple possible configurations of conditions which are both necessary and sufficient to explain all observed outcomes (both positive and negative instances).

The value of theory-led approaches, including QCA, is that the evaluator’s theories help the search for relevant data, amongst the myriad of possibly relevant design characteristics and combinations thereof. The absence of a clear theory of change is often one reason why baseline surveys are so expansive in content, yet so rarely used. Without a halfway decent theory we can easily get lost. It is true that "There is nothing as practical as a good theory" (Kurt Lewin).

The alternative to theory led approaches

There is however an alternative search process which does not require a prior theory, known as the evolutionary algorithm, the kernel of the process of evolution. The evolutionary processes of variation, selection and retention, iterated many times over, have been able to solve many complex optimisation problems, such as the design of a bird that can both fly long distances and dive deep in the sea for fish to eat. Genetic algorithms (GAs) are embodiments of the same kind of process in software programs, used to solve problems of interest to scientists and businesses. They are useful in two respects. One is the ability to search very large combinatorial spaces very quickly. The other is that they can come up with solutions involving particular combinations of attributes that might not have been so obvious to a human observer.

Development projects have attributes that vary. These include both the context in which they operate and the mechanisms by which they seek to work. There are many possible combinations of these attributes, but only some are likely to be associated with achieving a positive impact on people’s lives. If they were relatively common then implementing development aid projects would not be so difficult. The challenge is how to find the right combination of attributes. Trial and error, by varying project designs and their implementation on the ground, is a good idea in principle, but in practice it is slow. There is also a huge amount of systemic memory loss, for various reasons including poor or non-existent communications between various iterations of a project design taking place in different locations.

Can we instead develop models of projects, which combine real data about the distribution of project attributes with variable views of their relative importance, in order to generate an aggregate predicted result? This expected result can then be compared to an observed result (ideally from independent sources). By varying the influence of the different attributes a range of predicted results can be generated, some of which may be more accurate than others. The best way to search this large space of possibilities is by using a GA. Fortunately Excel now includes an add-in, Solver, which offers a simple GA-based (evolutionary) solving method.

The following spreadsheet shows a very basic example of what such a model could look like, using a totally fictitious data set. The projects and their observed scores on four attributes (A-D) are shown on the left. Below them is a set of weights, reflecting the possible importance of each attribute for the aggregate performance of the projects. The Expected Outcome score for each project is the sum of the score on each attribute × the weight for that attribute. In other words, the more a project has an important attribute (or combination of these), the higher will be its Expected Outcome score. That score is important only as a relative measure, relative to that of the other projects in the model.

The Expected Outcome score for each project is then compared to an Observed Outcome measure (ideally converted to a comparable scale), and the difference is shown as the Prediction Error. On the bottom left is an aggregate measure of prediction error, the Standard Deviation (SD). The original data can be found in this Excel file.
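The same logic can be sketched outside Excel. The code below is a minimal stand-in for the spreadsheet model: expected outcomes are weighted sums of attribute scores, prediction errors are the differences from observed outcomes, and an evolutionary search (scipy's differential evolution, here taking the place of Solver's GA) looks for the weights that minimise the spread of those errors. The scores and observed outcomes are invented, not the figures in the Excel file above.

    # Sketch of the weighted-attribute model with an evolutionary search for weights.
    # All data below are invented; adjust to the real project scores and outcomes.
    import numpy as np
    from scipy.optimize import differential_evolution

    # Rows = projects, columns = scores on attributes A-D
    scores = np.array([
        [3, 1, 4, 2],
        [2, 3, 1, 4],
        [4, 2, 2, 1],
        [1, 4, 3, 3],
        [3, 3, 2, 2],
    ])
    observed = np.array([2.9, 3.1, 1.8, 3.4, 2.6])  # observed outcomes, comparable scale

    def prediction_error_sd(weights):
        w = weights / (weights.sum() + 1e-9)   # normalise weights so they sum to 1
        expected = scores @ w                  # expected outcome = weighted sum of scores
        return np.std(expected - observed)     # spread of the prediction errors

    result = differential_evolution(prediction_error_sd, bounds=[(0, 100)] * 4, seed=0)
    print("Best weights (as %):", np.round(result.x / result.x.sum() * 100, 1))
    print("SD of prediction errors:", round(result.fun, 3))

Finding the least appropriate weighting, as described below, only requires maximising rather than minimising the same quantity (i.e. negating the objective).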

 

The initial weights were set at 25 for each attribute, in effect reflecting the absence of any view about which might be more important. With those weights, the SD of the Prediction Errors was 1.25. After 60,000+ iterations in the space of 1 minute the SD had been reduced to 0.97. This was achieved with a new combination of weights: Attribute A: 19, Attribute B: 0, Attribute C: 19, Attribute D: 61. The substantial error that remains can be considered as due to causal factors outside the model (i.e. not captured by the list of attributes)[1].

It seems that it is also possible to find the least appropriate solutions, i.e. those which make the least accurate Outcome Predictions. Using the GA set to find the maximum error, it was found that in the above example a 100% weighting given to Attribute A generated an SD of 1.87. This is the nearest that such an evolutionary approach comes to disproving a theory.

GAs deliver functional rather than logical proofs that certain explanations are better than others. Unlike logical proofs, they are not immortal. With more projects included in the model it is possible that there may be a fitter solution which applies to this wider set. However, the original solution to the smaller set would still stand.

Models of complex processes can sometimes be sensitive to starting conditions: different results can be generated from initial settings that are very similar. This was not the case in this exercise, with widely different initial weightings evolving and converging on almost identical sets of final weightings (e.g. 19, 0, 19, 62 versus 19, 0, 19, 61), producing the same final error rate. This robustness is probably due to the absence of feedback loops in the model, which could be created where the weighted score of one attribute affected those of another. That would be a much more complex model, possibly worth exploring at another time.

Small changes in Attribute scores made a more noticeable difference to the Prediction Error. In the above model varying Project 8’s score on attribute A from 3 to 4 increases the average error by 0.02. Changes in other cells varied in the direction of their effects. In more realistic models, with more kinds of attributes and more project cases, the results are likely to be less sensitive to such small differences in attribute scores.

The heading of this post asks “Can we evolve explanations of observed outcomes?” My argument above suggests that in principle it should be possible. However there is a caveat. A set of weighted attributes that are associated with success might better be described as the ingredients of an explanation. Further investigative work would be needed to find out how those attributes actually interact together in real life.  Before then, it would be interesting to do some testing of this use of GAs on real project datasets.

Your comments please...

PS 6 April 2012: I have just come across the Kaggle website. This site hosts competitions to solve various kinds of prediction problems (re both past and future events) using a data set available to all entrants, and gives prizes to the winner - who must provide not only their prediction but the algorithm that generated the prediction. Have a look. Perhaps we should outsource the prediction and testing of results of development projects via this website? :-) Though..., even to do this the project managers would still have a major task on hand: to gather and provide reliable data about implementation characteristics, as well as measures of observed outcomes... Though...this might be easier with some projects that generate lots of data, say micro-finance or education system projects.

 View this Australian TV video, explaining how the site works and some of its achievements so far. And the Fast Company interview of the CEO

PS 9 April 2012: I have just discovered that there is a whole literature on the use of genetic algorithms for rule discovery. "In a nutshell, the motivation for applying evolutionary algorithms to data mining is that evolutionary algorithms are robust search methods which perform a global search in the space of candidate solutions (rules or another form of knowledge representation)" (Freitas, 2002). The rules referred to are typically "IF...THEN..." type statements.






[1] Bear in mind that this example set of attribute scores and observed outcome measures is totally fictitious, so the inability to find a really good set of fitting attributes should not be surprising. In reality some sets of attributes will not be found co-existing because of their incompatibility e.g. corrupt project management plus highly committed staff



Tuesday, March 13, 2012

Modular Theories of Change: A means of coping with diversity and change?


Two weeks ago I attended a DFID workshop at which Price Waterhouse Coopers (PwC) consultants presented the results of their work, commissioned by DFID, on “Monitoring Results from Low Carbon Development”. LCD is one of three areas of investment by the International Climate Fund (ICF). The ICF is “a £2.9bn financial contribution … provided by the UK Government to support action on climate change and development. Having started to disperse funds, a comprehensive results framework is now required to measure the impact of this investment, to enable learning to inform future programming, and to show value for money on every pound”.

The PwC consultants’ tasks included (a) consultation with HMG staff on the required functions of the LCD results framework; (b) a detailed analysis of potentially useful indicators through extensive consultations and research into the available data; and (c) exploration of opportunities to harmonise results and/or share methodologies and data collection with others. Their report documents the large amount of work that has been done, but also acknowledges that more work is still needed.

Following the workshop I sent in some comments on the PwC report, some of which I will focus on here because I think they might be of wider interest. There were three aspects of the PwC proposals that particularly interested me. One was the fact that they had managed to focus down on 28 indicators, and were proposing that the set be limited even further, down to 20. Secondly, they had organised the indicators into a LogFrame-type structure, but one covering two levels of performance in parallel (within countries and across countries), rather than in a sequence. Thirdly, they had advocated the use of Multi-Criteria Analysis (MCA) for the measurement of some of the more complex forms of change referred to in the Logframe. MCA is similar in structure to the design of weighted checklists, which I have previously discussed here and elsewhere.

Monitorable versus evaluable frameworks

As it stands the current LCD LogFrame is a potential means of monitoring important changes relating to low carbon development. But it is not yet sufficiently developed to enable an evaluation of the impact of efforts aimed at promoting low carbon development. This is because there is not yet sufficient clarity about the expected causal linkages between the various events described in the Logframe. It is the case that, as required by DFID Logframes, weightings have been given to each of the four Outputs describing their expected impact on Outcome-level changes. But the differences in weightings are modest (+/- 10%) and each of the Outputs describes a bundle of up to 5 more specific indicator-level changes.

Clarity about the expected causal linkages is an essential “evaluability” requirement. Impact evaluations in their current form seek to establish not only what changes occurred, but also their causes. Accounts of causation in turn need to include not only attribution (whether A can be said to have caused B) but also explanation (how A caused B). In order for the LCD results framework to be evaluable, someone needs to “connect the dots” in some detail. That is, identify plausible explanations for how particular indicator-specific changes are expected to influence each other. Once that is done, the LCD program could be said to have not just a set of indicators of change, but a Theory of Change about how the changes interact and function as a whole.

Indicator level changes as shared building blocks

There are two subsequent challenges here to developing an evaluable Theory of Change for LCD. One is the multiplicity of possible causal linkages. The second is the diversity of perspectives on which of these possible causal linkages really matter. With 28 different indicator-specific changes there are, at least hypothetically, many thousands of different possible combinations that could make up a given Theory of Change (2^i, where i = the number of indicator-specific changes). But, it can be well argued that “this is a feature, not a bug”. As the title of this blog suggests, the 28 indicators can be considered as equivalent to Lego building blocks. The same set (or parts thereof) can be combined in a multiplicity of ways, to construct very different ToC. The positive side to this picture is the flexibility and low cost. Different ToC can be constructed for different countries, but each one does not involve a whole new set of data collection requirements. In fact it is reasonable to expect that in each country the causal linkages between different changes may be quite different, because of the differences in the physical, demographic, cultural and economic context.

Documenting expected causal linkages (how the blocks are put together)

There are other more practical challenges, relating to how to exploit this flexibility. How do you seek stakeholder views of the expected causal connections, without getting lost in a sea of possibilities? One approach I have used in Indonesia and in Vietnam involves the use of simple network matrices, in workshops involving donor agencies and/or the national partners associated with a given project. Two examples are shown below. These don’t need to be understood in detail (one is still in Vietnamese); it is their overall structure that matters.

A network matrix simply shows the entities that could be connected in the left column and top row. The convention is that each cell in the matrix provides data on whether that row entity is connected to that column entity (and it may also describe the nature of the connection)

The Indonesian example below maps the expected relationships between 16 Output indicators (left column) and 11 Purpose-level indicators (top row) in a maternal health project. Workshop participants were asked to consider one Purpose-level indicator at a time, and allocate 100 percentage points across the 16 Output indicators, with more percentage points meaning an Output was expected to have more impact on that Purpose indicator, relative to the other Output indicators. Debate was encouraged between participants as figures were proposed for each cell down a column. Looking within the matrix we can see that for Purpose 3 it was agreed that Output indicator 1.1 would have the most impact. For some other Purpose-level changes, impact was expected from a wider range of Outputs. The column on the right side sums up the relative expected impact of each Output, providing useful guidance on where monitoring attention might be most usefully focused.


This exercise was completed in little over an hour. The matrix of results shows one set of expected relationships amongst the many thousands of other possible sets that could exist within the same list of indicators. The same kind of data can be collected on a larger scale via online surveys, where the options down each column are represented within a single multiple-choice question. Matrices like these, obtained either from different individuals or different stakeholder groups, can be compared with each other to identify relationships (i.e. specific cells) where there is the most/least agreement, as well as which relationships are seen as most important when all stakeholder views are added up. This information should then inform the focus of evaluations, allowing scarce attention and resources to be directed to the most critical relationships.
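The bookkeeping behind such matrices is simple enough to sketch. In the example below (all numbers invented) each group's matrix allocates 100 percentage points down each column; row sums give each Output's aggregate expected impact, and cell-by-cell differences show where two groups disagree most.

    # Sketch: aggregating and comparing expected-impact matrices from two groups.
    # Indicator labels and all figures are invented, for illustration only.
    import pandas as pd

    outputs = ["Output 1.1", "Output 1.2", "Output 2.1"]
    purposes = ["Purpose 1", "Purpose 2", "Purpose 3"]

    group_a = pd.DataFrame([[60, 20, 70], [30, 50, 20], [10, 30, 10]],
                           index=outputs, columns=purposes)   # each column sums to 100
    group_b = pd.DataFrame([[40, 30, 80], [40, 40, 10], [20, 30, 10]],
                           index=outputs, columns=purposes)

    print("Relative expected impact of each Output (group A):")
    print(group_a.sum(axis=1).sort_values(ascending=False))

    print("Cells where the two groups disagree most:")
    print((group_a - group_b).abs().stack().sort_values(ascending=False).head(3))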

The second example of a network matrix used to explicate a tacit ToC comes from Vietnam, and is shown below. In this example, a Ministry’s programmes are shown (unconventionally) across the top row and the country’s 5-year plan objectives are shown down the left column. Cell entries, discussed and proposed by workshop participants, show the relative expected causal contribution of each programme to each 5-year plan objective. The summary row on the bottom shows the aggregate expected contribution of each programme, and the summary column on the right shows the aggregate extent to which each 5-year plan objective was expected to be affected.


 Modularity

The modules referred to in the title of this blog can be seen as referring to two types of entities that can be used to construct many different kinds of ToC. One is the indicator-specific changes in the LCD Logframe, for example. By treating them as a standard set available for use by different stakeholders in different settings, we may gain flexibility at a low cost. The other is the grouping of indicator-specific changes into categories (e.g. Outputs 1-2-3-4) and larger sets of categories (Outputs, Outcomes, Purpose). The existence of one or more nested types of entities is sometimes described as modularity. In evolutionary theory it has been argued that modularity in design improves evolvability. This can happen: (a) by allowing specific features to undergo changes without substantially altering the functionality of the entire system, and (b) by allowing larger, more structural changes to occur by recombining existing functional units.

In the conceptual world of Logframes, and the like, this suggests that we may need to think of ToC being constructed at multiple levels of detail, by different-sized modules. In the LCD Logframe impact weightings had already been assigned to each Output, indicating its relative expected contribution to the Outcomes as a whole. But the flexibility of ToC design at this level was seriously constrained by the structure of the representational device being used. In a Logframe, Outputs are expected to influence Outcomes, but not the other way around. Nor are they expected to influence each other, contra other more graphic-based logic models. Similarly, both of the above network matrix exercises made use of existing modules and accepted the kinds of relationships that were expected between them (Outputs should influence Purpose-level changes; Ministry programmes should influence the achievement of 5-year plan objectives).

The value of multiple causal pathways within a ToC

More recently I have seen the ToC for a major area of DFID policy that will be coming under review. This is represented in diagrammatic form, showing various kinds of events (including some nested categories of events), and also the expected causal relationships between these events. It was quite a complex diagram, perhaps too much so for those who like traffic-light-level simplicities. However, what interested me the most is that subsequent versions have been used to show how two specific in-country programs fit within this generic ToC. This has been done by highlighting the particular events that make up one of the number of causal chains that can be found within the generic ToC. In doing so it appears to be successfully addressing a common problem with generic ToC: the inability to reflect the diversity of the programs that make up the policy area described by a generic ToC.

Shared causal pathways justify more evaluation attention

This innovation points to an alternate and additional use of the matrices above. The cell numbers could refer to the numbers of constituent programs in a policy area (and/or which are funded by a single funding mechanism) that involve this particular causal link (i.e. between the row event and the column event). The higher this number, the more important it would be for evaluations to focus on that casual link - because the findings would have relevance across a number of programs in the policy area.