Friday, December 22, 2023

Using the Confusion Matrix as a general-purpose analytic framework


Background

This posting has been prompted by work I have done this year for the World Food Programme (WFP) as a member of their Evaluation Methods Advisory Panel (EMAP). One task was to carry out a review, along with my colleague Mike Reynolds, of the methods used in the 2023 Country Strategic Plan evaluations. You will be able to read about these, and related work, in a forthcoming report on the panel's work, which I will link to here when it becomes available.

One of the many findings of potential interest was: "there were relatively very few references to how data would be analysed, especially compared to the detailed description of data collection methods". In my own experience, this problem is widespread, found well beyond WFP. In the same report I proposed the use of what is known as the Confusion Matrix as a general-purpose analytic framework. Not as the only framework, but as one that could be used alongside more specific frameworks associated with particular intervention theories, such as those derived from the social sciences.

What is a Confusion Matrix?

A Confusion Matrix is a type of truth table, i.e., a table representing all the logically possible combinations of two variables or characteristics. In an evaluation context these two characteristics could be the presence and absence of an intervention, and the presence and absence of an outcome. An intervention represents a specific theory (aka model), which includes a prediction that a specific type of outcome will occur if the intervention is implemented. In the 2 x 2 version you can see above, there are four types of possibilities:

  1. The intervention is present and the outcome is present. Cases like this are known as True Positives.
  2. The intervention is present but the outcome is absent. Cases like this are known as False Positives.
  3. The intervention is absent and the outcome is absent. Cases like this are known as True Negatives.
  4. The intervention is absent but the outcome is present. Cases like this are known as False Negatives.
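For readers who like to see ideas in code, here is a minimal Python sketch (using purely hypothetical case data) of how a set of cases can be tallied into these four cells:

```python
# A minimal sketch: tallying cases into the four cells of a 2 x 2 Confusion Matrix.
# The case data here are hypothetical, purely for illustration.

from collections import Counter

# Each case records whether the intervention and the outcome were present
cases = [
    {"intervention": True,  "outcome": True},   # True Positive
    {"intervention": True,  "outcome": False},  # False Positive
    {"intervention": False, "outcome": False},  # True Negative
    {"intervention": False, "outcome": True},   # False Negative
    {"intervention": True,  "outcome": True},
]

def classify(case):
    if case["intervention"] and case["outcome"]:
        return "TP"  # intervention present, outcome present
    if case["intervention"] and not case["outcome"]:
        return "FP"  # intervention present, outcome absent
    if not case["intervention"] and not case["outcome"]:
        return "TN"  # intervention absent, outcome absent
    return "FN"      # intervention absent, outcome present

counts = Counter(classify(c) for c in cases)
print(counts)  # e.g. Counter({'TP': 2, 'FP': 1, 'TN': 1, 'FN': 1})
```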
Common uses of the Confusion Matrix

The use of Confusion Matrices is most commonly associated with the field of machine learning and predictive analytics, but it has much wider application. This includes the fields of medical diagnostic testing, predictive maintenance, fraud detection, customer churn prediction, remote sensing and geospatial analysis, cyber security, computer vision, and natural language processing. In these applications the Confusion Matrix is populated by the number of cases falling into each of the four categories. These numbers are in turn the basis of a wide range of performance measures, which are described in detail in the Wikipedia article on the Confusion Matrix. A selection of these is described here, in this blog on the use of the EvalC3 Excel app.
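To make this concrete, here is a minimal Python sketch of a small selection of those performance measures, calculated from the four cell counts. The counts used in the example call are illustrative only:

```python
# A small selection of the performance measures that can be derived from the four
# cell counts of a Confusion Matrix (many more are listed in the Wikipedia article).

def performance_measures(tp, fp, tn, fn):
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,                        # share of all cases correctly classified
        "precision": tp / (tp + fp) if tp + fp else None,     # aka positive predictive value
        "recall": tp / (tp + fn) if tp + fn else None,        # aka sensitivity, true positive rate
        "specificity": tn / (tn + fp) if tn + fp else None,   # true negative rate
        "f1": 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else None,  # balance of precision and recall
    }

print(performance_measures(tp=40, fp=10, tn=35, fn=15))  # illustrative counts only
```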

The claim

Although the use of a Confusion Matrix is commonly associated with quantitative analyses of performance, such as the accuracy of predictive models, it can also be a useful framework for thinking in more qualitative terms. This is a less well known and publicised use, which I elaborate on below. It is the inclusion of this wider potential use that is the basis of my claim that the Confusion Matrix can be seen as a general-purpose analytic framework.

The supporting arguments

The claim rests on at least four main arguments:

  1. The structure of the Confusion Matrix serves as a useful reminder and checklist that at least four different kinds of cases should be sought when constructing and/or evaluating a claim that X (e.g. an intervention) led to Y (e.g. an outcome). 
    1. True Positive cases, which we will usually start looking for first of all. At worst, this is all we look for.
    2. False Positive cases, which we are often advised to look for, but often don't invest much time in actually doing so. Here we can learn what does not work, and why.
    3. False Negative cases, which we probably look for even less often. Here we can learn what else works, and perhaps why.
    4. True Negative cases, because sometimes there are asymmetric causes at play, i.e. not just the absence of the expected causes.
  2. The contents of the Confusion Matrix help us to identify interventions that are necessary, sufficient or both. This can be practically useful knowledge.
    1. If there are no FP cases, this suggests the intervention is sufficient for the outcome to occur. The more cases we investigate without finding an FP, the stronger this suggestion is. But if even one FP is found, that tells us the intervention is not sufficient. Single cases can be informative; large numbers of cases are not always needed.
    2. If there are no FN cases, this suggests the intervention is necessary for the outcome to occur. The more cases we investigate without finding an FN, the stronger this suggestion is. But if even one FN is found, that tells us the intervention is not necessary. 
    3. If there are no FP or FN cases, this suggests the intervention is both sufficient and necessary for the outcome to occur. The more cases we investigate without finding an FP or FN, the stronger this suggestion is. But if even one FP or FN is found, that tells us that the intervention is not sufficient, or not necessary, respectively. (A code sketch of these checks appears after this list.)
  3. The contents of the Confusion Matrix help us identify the type and scale of errors and their acceptability. FP and FN cases are two different types of error that have different consequences in different contexts. A brain surgeon will be looking for an intervention that has a very low FP rate, because errors in brain surgery can be fatal and so cannot be recovered from. On the other hand, a stock market investor is likely to be looking for a more general-purpose model, with few FNs. However, it only has to be right 55% of the time to still make them money, so a high rate of FPs may not be a big concern: they can recover their losses through further trading. In the field of humanitarian assistance the corresponding concerns are with coverage (reaching all those in need, i.e. minimising False Negatives) and leakage (minimising inclusion of those not in need, i.e. False Positives). There are Confusion Matrix based performance measures for both kinds of error, and for the degree to which the two kinds of error are balanced (see the Wikipedia entry).
  4. The contents of the Confusion Matrix can help us identify useful case studies for comparison purposes. These can include:
    1. Cases which exemplify the True Positive results, where the model (e.g an intervention) correctly predicted the presence of the outcome. Look within these cases to find any likely causal mechanisms connecting the intervention and outcome. Two sub-types can be useful to compare:
      1. Modal cases, which represent the most common characteristics seen in this group, taking all comparable attributes into account, not just those within the prediction model. 
      2. Outlier cases, which represent those that were most dissimilar to all other cases in this group, apart from having the same prediction model characteristics.
    2. Cases which exemplify the False Positives, where the model incorrectly predicted the presence of the outcome. There are at least two possible explanations that can be explored:
      1. In the False Positive cases, there are one or more other factors that all the cases have in common, which are blocking the model configuration from working, i.e. delivering the outcome.
      2. In the True Positive cases, there are one or more other factors that all the cases have in common, which are enabling the model configuration to work, i.e. deliver the outcome, but which are absent in the False Positive cases.
        1. Note: For comparisons with TP cases, TP and FP cases should be maximally similar in their case attributes. I think this is called MSDO (most similar, different outcome) based case selection.
    3. Cases which exemplify the False Negatives, where the outcome occurred despite the absence of the attributes of the model. There are three possibilities of interest here:
      1. There may be some False Negative cases that have all but one of the attributes found in the prediction model. These cases would be worth examining, in order to understand why the absence of a particular attribute that is part of the predictive model does not prevent the outcome from occurring. There may be some counter-balancing factor at work, enabling the outcome.
      2. It is possible that some cases have been classed as FNs because they lacked specific data on crucial attributes that would otherwise have classed them as TPs.
      3. Other cases may represent genuine alternatives, which need within-case investigation to identify the attributes that appear to make them successful.
    4. Cases which exemplify the True Negatives, where the absence of the attributes of the model is associated with the absence of the outcome.
      1. Normally these are seen as not being of much interest. But there may be cases here with all but one of the intervention attributes. If found, then the missing attribute may be viewed as: 
        1. A necessary attribute, without which the outcome cannot occur.
        2. An INUS attribute i.e. an attribute that is Insufficient but Necessary in a configuration that is Unnecessary but Sufficient for the outcome (See Befani, 2016). It would then be worth investigating how these critical attributes have their effects by doing a detailed within-case analysis of the cases with the critical missing attribute.
      2. Cases may become TNs for two reasons. The first, and most expected, is that the causes of positive outcomes are absent. The second, which is worth investigating, is that there are additional and different causes at work which are causing the outcome to be absent. The first of these is described as causal symmetry, the second as causal asymmetry. Because of the second possibility it is worthwhile paying close attention to TN cases, to identify the extent to which symmetrical or asymmetrical causes are at work. The findings could have significant implications for any intervention that is being designed. Here a useful comparison would be between maximally similar TP and TN cases.
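Here is the sketch referred to in point 2 above: a minimal Python illustration of how the presence or absence of FP and FN cases bears on claims about sufficiency and necessity. The counts are hypothetical, and the point is simply that a single disconfirming case is enough to rule out either claim:

```python
# A minimal sketch of the necessity / sufficiency reasoning in point 2 above.
# Counts are illustrative only; a single FP or FN is enough to rule out
# sufficiency or necessity respectively.

def necessity_sufficiency(tp, fp, tn, fn):
    total = tp + fp + tn + fn
    findings = []
    if fp == 0:
        findings.append(f"No FP cases among {total} examined: "
                        "consistent with the intervention being SUFFICIENT for the outcome.")
    else:
        findings.append(f"{fp} FP case(s) found: the intervention is NOT sufficient.")
    if fn == 0:
        findings.append(f"No FN cases among {total} examined: "
                        "consistent with the intervention being NECESSARY for the outcome.")
    else:
        findings.append(f"{fn} FN case(s) found: the intervention is NOT necessary.")
    return findings

for line in necessity_sufficiency(tp=12, fp=0, tn=6, fn=2):
    print(line)
```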
Resources

Some of you may know that I have built the Confusion Matrix into the design of EvalC3, an Excel app for cross-case analysis that combines measurement concepts from the disparate fields of machine learning and QCA (Qualitative Comparative Analysis). With fair winds, this should become available as a free-to-use web app in early 2024, courtesy of a team at Sheffield Hallam University. There you will be able to explore and exploit the uses of the Confusion Matrix for both quantitative and qualitative analyses.



Saturday, October 28, 2023

Beyond summarisation by AI and/or editors: readers can now interrogate full transcripts of meeting discussions

Over the last two months, a small group of us have been managing an MSC Monthly Online Gathering. In each meeting we have recorded the discussions and then generated a transcript, both using Otter.AI. I have then used Claude AI to generate a one-page summary of each discussion. That itself seems likely to be useful to both attendees and non-attendees (though I have yet to obtain feedback on this meeting output). You can view two AI summaries of discussions in the October meeting here: 

https://mande.co.uk/wp-content/uploads/2023/10/18th-October-MSC-AM-Rick.pdf

https://mande.co.uk/wp-content/uploads/2023/10/18th-October-PM-Konny.pdf

But why not jump ahead and give people more than a simple feedback opportunity? Let's enable them to question the full text of the transcript in their own individual way, albeit after being informed about the overall topics covered during the discussion via the AI summaries above. This is now possible using a third-party app known as Pickaxe. Here you can design an AI prompt that can then be made publicly usable, preloaded with a given discussion transcript.

Here are links to the two very simple Pickaxe public prompts I have developed, which you can now use to interrogate the two discussions. 

AM session 
PM session

To ask follow-up questions, click on "Go to Chat".

If you try these out, I will get feedback in the form of a visible record of how you used it. You could also provide feedback on this experience, using the Comment function below.

Give it a go, now...!


Postscript 31 October

I think the performance of Pickaxe on this task is poor, compared to that of Claude AI on the same task. I will be disabling this implementation in the next day or so.


Thursday, August 31, 2023

Evaluating thematic coding and text summarisation work done by artificial intelligence (LLM)


Evaluation is a core part of the workings of artificial intelligence algorithms. It is something that can be built in, in the shape of specific segments of code. But it is also an additional human element, one which needs to complement and inform the subsequent use of any outputs of artificial intelligence systems.

If we take supervised machine learning algorithms as one of the simpler forms of artificial intelligence, all of these have a very simple basic structure. Their operations involve the reiteration of search followed by evaluation. For example, we have a dataset which describes a number of cases, which could be different locations where a particular development intervention is taking place. Each of these cases has a number of attributes which we think may be useful predictors of an outcome we are interested in. And in addition, some of those predictors (or combinations thereof) might reflect some underlying causal mechanisms which would be useful for us to know about. The simplest form of machine learning will involve what is called an exhaustive or brute force search of each possible combination of those attributes (defined in terms of their presence or absence, in this simple example). Taking one combination at a time, the algorithm will evaluate whether it predicted the outcome or not, and then store that judgement. Reiterating that process, it will then compare the next judgement to the earlier judgement and replace the earlier judgement if the new one is better. And so on, until all possible combinations have been evaluated and compared to the previous best judgement. In more complex machine learning algorithms involving artificial neural networks the evaluation and feedback processes can be much more complex, but the abstract description still fits.
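For those who like to see this in code, here is a minimal Python sketch of that kind of exhaustive search: every combination of attributes is treated as a candidate predictive model, evaluated against the cases, and the best performer is kept. The dataset and the accuracy measure are purely illustrative, not a description of any particular app:

```python
# A minimal sketch of an exhaustive ("brute force") search: try every combination of
# attributes as a predictive model, evaluate each one against the cases, keep the best.
# The dataset is hypothetical.

from itertools import combinations

cases = [
    # each case: which attributes are present, and whether the outcome occurred
    {"attrs": {"A", "B"},      "outcome": True},
    {"attrs": {"A", "B", "C"}, "outcome": True},
    {"attrs": {"B"},           "outcome": False},
    {"attrs": {"C"},           "outcome": False},
    {"attrs": {"A", "C"},      "outcome": True},
]
attributes = {"A", "B", "C"}

def accuracy(model, cases):
    """A model 'predicts' the outcome when all its attributes are present in a case."""
    correct = sum((model <= case["attrs"]) == case["outcome"] for case in cases)
    return correct / len(cases)

best_model, best_score = None, -1.0
for size in range(1, len(attributes) + 1):
    for combo in combinations(sorted(attributes), size):
        score = accuracy(set(combo), cases)
        if score > best_score:          # keep the judgement if it beats the previous best
            best_model, best_score = set(combo), score

print(best_model, best_score)  # e.g. {'A'} with accuracy 1.0 on this toy data
```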

What I'm interested in talking about here is what happens outside the block of code that does this type of processing. Specifically, the products that are produced and how we humans can evaluate their value. This is territory where a lot of effort has already been expended, most notably on the subject of algorithmic fairness and what is known as the alignment problem. These could be crudely described as representing short and long-term concerns respectively. I won't be exploring that literature here, interesting and important as it is.

What I will be talking about here is two examples of my own current experiments with the use of one AI application known as Claude AI, used to do some forms of qualitative data analysis. In the field that I work in, which is largely to do with international development aid programs, a huge amount of qualitative data, i.e. text, is generated, and I think it is fair to say that its analysis is a lot more problematic than when we are dealing with many forms of quantitative data. So the arrival of large language model (LLM) versions of artificial intelligence appears to offer some interesting opportunities for making some usable progress in this difficult area.

The text data that I have been working with has been generated by participants in a participatory scenario planning process, carried out using ParEvo.org, and implemented by the International Civil Society Centre in Germany this year. The full details of that exercise will be available soon in an ICSC publication. The exercise generated a branching tree structure of storylines about the future, built with 109 paragraphs of text, contributed by 15 participants, over eight iterations. What I will be describing here concerns two types of analysis of that text data.

Text summarisation

[this section has been redrafted] The first was a text summarisation task, where I asked Claude AI to produce one-sentence headline summaries of each of these 109 texts. Text summarisation is a very common application of LLMs. This it did quickly, as usual, and the results looked plausible. But by now I had also learned to be appropriately sceptical and was asking myself how 'accurate' these headlines were. I could examine each headline and its associated text, but this would take time. So I tried another approach.

I opened up a new prompt window in Claude AI and uploaded two files: one containing the headlines, and the other containing each of the 109 texts preceded by an identification number. I then asked Claude AI to match each headline with the text that it best described, and to display the results using the ID number of the text (rather than its full contents) and the predicted associated headline. This process has some similarities with back translation. What I was interested in here was how well it could reassign the headlines to their original texts. If it did well this would give me some confidence in the accuracy of its analytic processes, and might obviate the need for a manual check of the headlines' fit with content.  

My first attempt was a clear failure, with a classification accuracy of only 21%. On examination this was caused by the way I had formatted the uploaded data. The second attempt, using two separate data files, was more successful. This time the classification accuracy was 63%. Given that the 37% error could occur at two stages (headline creation and headline matching) it could be argued that the classification error attributable to the matching stage was more like half this value, i.e. around 18.5%, and so the classification accuracy was more like 81.5%. At this point it seemed worthwhile to also examine the misclassifications (a back-translation stage called reconciliation), i.e. what headline was mismatched with what text. An examination of the false classifications suggested that around 40% of the mismatches may have been because of words the headlines had in common, despite the full headlines being different.
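For anyone wanting to replicate this kind of check, here is a minimal Python sketch of how the back-translation results can be scored. The headline labels and ID numbers are hypothetical; the point is simply the comparison of original and assigned IDs, plus a list of mismatches to feed into the reconciliation step:

```python
# A minimal sketch of scoring the back-translation check: compare each headline's
# original text ID with the ID that the model assigned to it, then list the
# mismatches for the 'reconciliation' step. The IDs here are hypothetical.

original = {"H1": 1, "H2": 2, "H3": 3, "H4": 4}   # headline -> ID of the text it summarised
assigned = {"H1": 1, "H2": 3, "H3": 3, "H4": 4}   # headline -> ID the model matched it to

matches = [h for h in original if assigned.get(h) == original[h]]
mismatches = {h: (original[h], assigned.get(h)) for h in original if assigned.get(h) != original[h]}

accuracy = len(matches) / len(original)
print(f"Classification accuracy: {accuracy:.0%}")            # 75% in this toy example
print("Mismatches (original ID, assigned ID):", mismatches)  # items to examine during reconciliation
```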

Where does that leave me? With some confidence in the headline generation process. But could we do better? Could we find a better way to generate reproducible headlines? See further below, where I talk about ensemble methods.

Content analysis

The second task was a type of content analysis. Because of a specific interest, I had separated a subset of the 109 paragraphs into two groups, the first of which had been subject to further narrative development by the participants (aka surviving storylines), and the second being others which were not developed any further (aka extinct storylines). I asked Claude AI to analyse this subset of the texts in terms of three attributes: the vocabulary, the style of writing, and the genre. Then, for each attribute, to sort the texts into two groups, and describe what each group had in common and how they differed from the other group. It then did so. Here is an image of its output.

But how can I evaluate this output? If I looked at one of the texts in a particular group, would I find the attributes that Claude AI was telling me that the group it belonged to possessed? In order to make this form of verification easier, and smaller in scale, I gave Claude AI a follow-up task: for each of the two groups under each of the three attributes of the text, Claude AI should provide the ID number of an exemplar body of text which best represented the presence of the characteristics that were described. This it was able to do, and in my first use of the specific case examples I found that 9/10 did fit the summary description provided for the group. This strategy is similar to another one which I've used with GPT4, when trying to extract specific information about evaluation methods used in a set of evaluation reports. There I asked it to provide page or paragraph references for any claim about what methods were being used in the evaluation. Broadly speaking, in a large majority of cases, these page references pointed to relevant sections of text. 

My second strategy was another version of back translation, connecting concrete instances with pre-existing abstract descriptions. This time I opened a new prompt session, still within Claude AI, and uploaded a file containing the same subset of paragraphs, and then in the prompt window I copied and pasted the descriptions of the attributes of the three sets of two groups identified earlier (without information on which text belonged to which group). I then asked Claude AI to identify which paragraphs of text fitted which of the 3 x 2 groups, which it did. I then collated the results of the two tasks in an Excel file, which you can see here below (click on image to magnify it). The green cells are where the predicted group matches the original group, and the yellow cells are where there were mismatches. The overall classification accuracy was 67%, which is better than chance but not great either. I should also add that this was done with prompt information that included the IDs of the exemplars mentioned above (a format called "one-shot learning").


What was I evaluating when I was doing these "reverse translations"? It could probably be described as a test of, or search for, some form of construct validity. Was there any stable concept involved? 

Ensemble methods

Given the two results reported above, which were better than chance, but not much better, what else could be done? There is one possible way forward, which might give us more confidence in the products generated by LLM analyses.  Both Claude AI and ChatGPT4, and probably others, allow users to hit a Retry button, to generate another response to the same prompt. These will usually vary, and the degree of variation can be controlled by a parameter known as "temperature". 

An ensemble approach in this context would be to generate multiple responses using the same prompt and then use some type of aggregation process to find the best result, similar to 'wisdom of crowds' processes. In its simplest form this would, for example, involve counting the number of times each different headline was proposed for the same item of text, and selecting the one with the highest count. This approach will work where you have predefined categories as "targets". Those categories could have been developed inductively (as above) or deductively, from prior theory. It may even be possible to design a prompt script that includes multiple generation steps, and even the aggregation and evaluation stages. 
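Here is a minimal Python sketch of that simplest form of aggregation, for a single (hypothetical) text and ten hypothetical re-runs of the same prompt:

```python
# A minimal sketch of the simplest ensemble approach described above: run the same
# prompt several times, count how often each candidate headline is proposed for a
# given text, and select the one with the highest count. Data are hypothetical.

from collections import Counter

# Headlines proposed for text ID 42 across ten re-runs of the same prompt
runs_for_text_42 = [
    "Floods displace coastal communities",
    "Floods displace coastal communities",
    "Coastal flooding forces migration",
    "Floods displace coastal communities",
    "Coastal flooding forces migration",
    "Floods displace coastal communities",
    "Floods displace coastal communities",
    "New seawall project announced",
    "Floods displace coastal communities",
    "Coastal flooding forces migration",
]

votes = Counter(runs_for_text_42)
best_headline, count = votes.most_common(1)[0]
print(f"Selected: '{best_headline}' ({count}/{len(runs_for_text_42)} runs)")
```

In practice the counting would be done across all 109 texts, not just one, but the aggregation logic is the same.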

But to begin with I will probably focus on testing a manual version of the process. I will report on some experiments with this approach in the next few days....

Update 02/09/23: A yet to be tested draft prompt that could automate the process


A lesson learned on the way: I initially wrote out a rough draft of a Claude AI prompt that might help automate the process I've described above. I then asked Claude AI to convert this into a prompt which would be understood and generate reliable and interpretable results. When it did this it was clear that part of my intentions had not been understood correctly (however you interpret the word understood). This could be just an epiphenomenon, in the sense of it only being generated by this particular enquiry. Or, it could point to a deeper or more structurally embedded analytic risk that would have consequences if I actually asked Claude AI to implement the rough draft in its original form (as distinct from simply refining that text as a prompt). The latter possibility concerned me, so I edited the prompt text that had been revised by Claude AI to remove the misunderstood part of the process. The version you see above is Claude AI's interpretation of my revised version, which I think will now work. Let's see...!

Update 03/09/23: It looks like the ensemble method may work as expected. Using 10 iterations only, which is a small number compared to how ensembles are normally used, the classification accuracy increased to 84%. In the data displayed about the number of times each predicted headline was matched to a given text, there were 4 instances where there were ties. There were also 8 instances where the best match was still only found in fewer than 5 of the 10 iterations. More iterations might generate more definitive best matches and increase the accuracy rate. The correct match was already visible in the second and third ranking best matches of 4 of the 18 incorrectly matched headlines.

Another lesson learned, perhaps: careful wording of prompts is important; the more explicit the instructions are, the better. I learned to preface the word "match" with the more specific "analyze the content of all the numbered texts in File 1 and identify which one the headline best describes". And careful formatting of the text data files was also potentially important: making it clear where each text began and ended, and removing any formatting artifacts that could cause confusion.

And because of experiences with such sensitivities, I think I should re-do the whole analysis, to see if I generate the same or similar results!

Ensembles of brittle prompts?

I just came across this glimpse of a paper "Prompt Ensembles Make LLMs More Reliable" which is a different version of the idea I explored above. Here the prompt that is in use is also varied, from iteration to iteration. 

 





Friday, May 26, 2023

Finding useful distinctions between different futures

 

This blog posting is a response to Joseph Voros's informative blog posting about the Futures Cone. It is a useful contribution in as much as it helps us think about the future in terms of different sets of possibilities. Here is a copy of his edited version.

Figure 1: Voros, 2017


My alternative, shown below, was developed in the context of supporting ParEvo.org explorations of alternative futures. It has some similarities and differences. For a start, here is the diagram.

Figure 2: Sets and sub-sets of alternative futures Davies, 2023

I will now list Joseph's explanation of each of the terms he used, and how they might relate to mine (in red).


  • Possible – these are those futures that we think ‘might’ happen, based on some future knowledge we do not yet possess, but which we might possess someday (e.g., warp drive). I think these fall in the grey area above (which also contains the dark and light green).
  • Plausible – those we think ‘could’ happen based on our current understanding of how the world works (physical laws, social processes, etc.). I think these fall somewhere within the green matrix.
  • Probable – those we think are ‘likely to’ happen, usually based on (in many cases, quantitative) current trends. These probably fall within the Likely row of the green matrix.
  • Preferable – those we think ‘should’ or ‘ought to’ happen: normative value judgements as opposed to the mostly cognitive, above. There is also of course the associated converse class—the un-preferred futures—a ‘shadow’ form of anti-normative futures that we think should not happen nor ever be allowed to happen (e.g., global climate change scenarios come to mind). These probably fall within the Desirable column of the green matrix.
  • Projected – the (singular) default, business as usual, ‘baseline’, extrapolated ‘continuation of the past through the present’ future. This single future could also be considered as being ‘the most probable’ of the Probable futures. As suggested above, probably at the most likely end of the Likely row in the above green matrix.
  • (Predicted) – the future that someone claims ‘will’ happen. I briefly toyed with using this category for a few years quite some time ago now, but I ended up not using it anymore because it tends to cloud the openness to possibilities (or, more usefully, the ‘preposter-abilities’!) that using the full Futures Cone is intended to engender. Probably also at the most likely end of the Likely row in the above green matrix.
Preposterous events are not really covered. Perhaps they are at the extreme end of the Unlikely events with known probabilities, i.e. zero likelihood.

 

Though lacking in alliteration, my schema does have some more practically useful features.

The primary additional feature is that for each different kind of future there are some conjectured consequences, in terms of likely appropriate responses. Some of these are shown in red:

  • Organisational "slack" i.e. uncommitted resources or reserves that could enable responses to the unforeseen (though, of course,  not every kind of unforseen event)
  • Fringe investments, such as blue sky research, can be appropriate where a possibility is in sight but its likelihood of happening is far from clear
  • Robust responses are those that might work, though not necessarily be the most effective or most efficient, across a span of possibilities having varying probabilities and desirabilities
  • Customised responses are those more tailored to specific combinations of un/likely and un/desirable events. The following more detailed version of the green martix describes some major possible variations of this kind
Figure 3
Where to next?

I would like to hear from readers their views on the possible utility of these distinctions. And whether any other distinctions could be added to or replace those I have used. 

Monday, March 06, 2023

How can evaluators practically think about multiple Theories of Change in a particular context?


This blog posting has been prompted by participation in two recent events. One was some work I was doing with the ICRC, reviewing Terms of Reference for an evaluation. The other was listening in as a participant to this week's European Investment Bank conference titled "Picking up the pace: Evaluation in a rapidly changing world". 

When I was reviewing some Terms of Reference for an evaluation I noticed a gap which I have seen many times before. While there was a reasonable discussion of the types of information that would need to be gathered there was a conspicuous absence of any discussion of how that data would be analysed. My feedback included the suggestion that the Terms of Reference needed to ask the evaluation team for a description of the analytical framework they would use to analyse the data they were collecting.

The first two sessions of this week's EIB conference were on the subject of foresight and evaluation. In other words, how evaluators can think more creatively and usefully about possible futures – a subject of considerable interest to me. You might notice that I've referred to futures rather than the future, intentionally emphasising the fact that there may be many different kinds of futures, and that with some exceptions (e.g. climate change) it is not easy to identify which of these will actually eventuate.

To be honest, I wasn't too impressed with the ideas that came up in this morning's discussion about how evaluators could pay more attention to the plurality of possible futures. On the other hand, I did feel some sympathy for the panel members who were put on the spot to answer some quite difficult questions on this topic.

Benefiting from the luxury of more time to think about this topic, I would like to make a suggestion that might be practically usable by evaluators, and worth considering by commissioners of evaluations. The suggestion is about how an evaluation team could realistically give attention not just to a single "official" Theory of Change about an intervention, but to multiple relevant Theories of Change about an intervention and its expected outcomes. In doing so I hope to address both issues I have raised above: (a) the need for an evaluation team to have a conceptual framework structuring how it will analyse the data it collects, and (b) the need to think about more than one possible future and how that might be realised, i.e. more than one Theory of Change.

The core idea is to make use of something which I have discussed many times previously in this blog: known to those involved in machine learning as the Confusion Matrix, and more generally described simply as a truth table, one that describes four types of possibilities. It takes the following form:

In the field of machine learning the main interest in the Confusion Matrix is the associated performance measures that can be generated, and used to analyse and assess the performance of different predictive models.  While these are of interest, what I want to talk about here is how we can use the same framework to think about different types of theories, as distinct from different types of observed results.

There are four different types of Theories of Change that can be seen in the Confusion Matrix. The first (1) describes what is happening when the intervention is present and the expected outcome of that intervention is present. This is the familiar territory of the kind of Theories of Change that an evaluator will be asked to examine.

The second (2) describes what is happening when the intervention is present but the expected outcome of that intervention is absent. This theory would describe what additional conditions are present, or what expected conditions are absent, which will make a difference – leading to the expected outcome being absent. When it comes to analysing data on what actually happened, identifying these conditions can lead to modification of the first (1) Theory of Change such that it becomes a better predictor of the outcome and there are fewer False Positives (found in cell 2). Ideally, the fewer False Positives the better. But from a theory development point of view there should always be some situations described in cell 2, because there will never be an all-encompassing theory that works everywhere. There will always be boundary conditions beyond which the theory is not expected to work. So an important part of an evaluation is not just to refine the theory about what works (1) but also to refine the theory of the circumstances in which it will not be expected to work (2), sometimes known as conditions or boundary conditions.

The third theory (3) describes what is happening when the intervention is absent but nevertheless the outcome is present. Consideration of this possibility involves recognition of what is known as "equifinality", i.e. that the same outcome can arise from multiple alternative causal conditions (or combinations of causal conditions). It's not uncommon to find advice to evaluators that they should consider alternative theories to those they are currently focused on, for example in the literature on contribution analysis. But it strikes me that this is often close to a ritualistic requirement, or at least treated that way in practice. In this perspective alternative theories are a potential threat to the theory being focused on (1). But a much more useful perspective would be to treat these alternative theories as potentially useful other courses of action that an agent could take, which warrant serious attention in their own right. And if they are shown to have some validity this does not by definition mean that the main theory of change (1) is wrong. It simply means that there are alternative ways of achieving the outcome, which can only be a bonus finding. 

The fourth theory describes what is happening when the intervention is absent and the outcome is also absent (4). In its simplest interpretation, it may be that the absence of the attributes of the intervention is the reason why the outcome is not present. But this can't be assumed. There may be other factors which have been more important causes, for example the occurrence of an earthquake, or the holding of a very contested election. This possibility is captured by the term "asymmetric causality", i.e. that the causes of something not happening may not simply be the absence of the causes of something happening. Knowing about these other possible causes of the desired outcome not happening is surely important, in addition to and alongside knowing about how an intervention does cause the outcome. Knowing more about these causes might help other parties, with other interventions in mind, move cases with this experience from being True Negatives (4) to being False Negatives (3).

In summary, I think there is an argument for evaluators not being too myopic when they are thinking about the Theories of Change they need to pay attention to. It should not be all about testing the first (1) type of Theory of Change, and considering all the other possibilities simply as challengers, which may or may not then be dismissed. Each of the other types of theories (2, 3, 4) is important and useful in its own right and deserves attention.



Tuesday, October 18, 2022

Four types of futures that should be covered by a Theory of Change


ParEvo.org is a web app that enables the collaborative exploration of alternative futures, online. In the evaluation stage, participants are asked to identify which of the surviving storylines fall into each of these categories:

  • Most desirable
  • Least desirable
  • Most likely
  • Least likely
In one part of the analysis of storylines generated during a ParEvo exercise, the storylines are plotted on a scatter plot, where the two dimensions are likelihood and desirability, as seen in this example.
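For those who want to produce this kind of plot from their own exercise data, here is a minimal Python sketch using matplotlib. The storyline names and ratings are entirely hypothetical:

```python
# A minimal sketch of a likelihood x desirability scatter plot of storylines,
# using matplotlib and purely hypothetical participant ratings.

import matplotlib.pyplot as plt

storylines = {
    "Storyline A": (0.8, 0.7),   # (mean likelihood rating, mean desirability rating)
    "Storyline B": (0.3, 0.9),
    "Storyline C": (0.7, 0.2),
    "Storyline D": (0.2, 0.3),
}

fig, ax = plt.subplots()
for name, (likelihood, desirability) in storylines.items():
    ax.scatter(likelihood, desirability)
    ax.annotate(name, (likelihood, desirability), xytext=(5, 5), textcoords="offset points")

ax.set_xlabel("Likelihood")
ax.set_ylabel("Desirability")
ax.set_title("Surviving storylines rated by participants")
plt.show()
```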


Most Theories of Change that I have come across, when working as an evaluator, focus on a future that is seen as desirable and likely (as in expected). At best, the undesirable futures will be mentioned in an accompanying section on risks and their management.

A less myopic approach might be useful, one which would orient the users of the Theory of Change to a more adaptive stance towards the future.

One way forward would be to think of a four-part Theory of Change, each part of which has different implications, as follows:


The top right cell may already be covered by a Theory of Change. In the two cells covering futures that are desirable but unlikely, and undesirable but likely, it would be useful to have ordered lists that describe events, what needs to be done before they happen, and what needs to be done after they happen. In the unlikely and undesirable cell, plans for monitoring the status of these events need to be spelled out, and updated on an ongoing basis.



Thursday, October 13, 2022

We need more doubt and uncertainty!


This week the Swedish Evaluation Society (SVUK) is holding its annual conference. I took part in a session today on Theories of Change. The first part of my presentation summarised the points I made in a 2018 CEDIL Inception Report titled 'Theories of Change: Technical Challenges with Evaluation Consequences'. Following the presentation I was asked by Gustav Petersson, the discussant, whether we should pay more attention to the process of generating diagrammatic Theories of Change. I could only agree, reflecting that, for example, it is not uncommon for a representative of a conference working group to summarise a very comprehensive and in-depth discussion in all too brief and succinct terms when reporting back to a plenary, leaving out, or understating, the uncertainties, ambiguities and disagreements. Similarly, the completed version of a diagrammatic Theory of Change is likely to suffer from the same limitations, being an overly simplified version of the much more complex and nuanced discussions between those involved in its construction that went on beforehand.

Later in the day I was reminded of the section in the Hitchhiker's Guide to the Galaxy where Vroomfondel, representing a group of striking philosophers, said "That's right!" and shouted, "we demand rigidly defined areas of doubt and uncertainty!"

I'm inclined to make a similar kind of request of those developing Theories of Change. And of those subsequently charged with assessing the evaluability of the associated intervention, including its Theory of Change. What I mean is that the description of the Theory of Change should make it clear which parts of the theory the owner(s) of that theory are more confident in versus less confident in. Along with descriptions of the nature of the doubt or uncertainty and its causes, e.g. first-hand experience, or supporting evidence (or lack of it) from other sources.

Those undertaking an evaluability assessment could go a step further and convert various specific forms of doubt and uncertainty into evaluation questions that could form an important part of the Terms of Reference for an evaluation. This might go some way to remedying another problem discussed during the session, which is the all too common (in my experience) phenomenon of Terms of Reference only making generic references to an intervention's Theory of Change. For example, by asking in broad terms about "what works and in what circumstances", rather than the testing of various specific parts of that theory, which would arguably be more useful, and a better use of limited time and resources.

The bottom line: the articulation of a Theory of Change should conclude with a list of important evaluation questions. Unless there are good reasons to the contrary, those questions should then appear in the Terms of Reference for a subsequent evaluation.



PS: Vroomfondel is a philosopher. He appears in chapter 25 of The Hitchhiker's Guide to the Galaxy, along with his colleague Majikthise, as a representative of the Amalgamated Union of Philosophers, Sages, Luminaries and Other Thinking Persons (AUPSLOTP; the BBC TV version inserts 'Professional' before 'Thinking'). The Union is protesting about Deep Thought, the computer which is being asked to determine the Answer to the Ultimate Question of Life, the Universe and Everything. See https://hitchhikers.fandom.com/wiki/Vroomfondel



Thursday, June 30, 2022

Using ParEvo to conduct thought experiments


I have just had an interesting conversation with an NGO network which has been developing some criteria to (a) help speed up the approval and release of funding in humanitarian emergencies, and (b) at the same time minimise the risk of poor use of those funds.

They think these criteria are useful but are not entirely sure whether those seeking funding will agree.  So they are exploring ways of testing out their applicability through a wider consultation process.

One way of doing this, which we have been discussing, involves the use of ParEvo.org. The plan is that a group of participants representing potential grantees will develop a set of storylines which start off with a particular organisation seeking funding for a particular humanitarian emergency. Then a branching structure of possible subsequent storyline developments will be articulated through the usual ParEvo process.

After those storylines have been developed there will be an evaluation phase, as is common practice now with most ParEvo exercises. At this point the participants will be asked two generic types of questions (and variations on these), as described below:

1.  Which of the criteria in the current framework would be most likely to help avoid or mitigate the problems seen in storyline X? (Answer=Description & Explanation) 

  • and if the answer is none, are there any other criteria that could be included in the framework that might have helped?

2.  Which of the storylines in the current exercise would have most benefited from criterion X in the current framework, in the sense that the problems described there would have been avoided or mitigated? (Answer=Description & Explanation) 

  • and if the answer is none, does this suggest that the criterion is irrelevant and could be removed?
Postscript: One interesting thing about this type of thought experiment is that the theory (the proposed funding criteria) and the possible realities that it may be applied to (where the theory may or may not work as expected) are constructed by different parties who are independent of each other. This is not usually the case with thought experiments, and could be seen as a positive variation.

Stay tuned for if and when this idea flies, then soars or crashes.


Courtesy https://xkcd.com/

For more on thought experiments, see Armchair science



Friday, June 17, 2022

Alternative futures as "search strategies"




When you read the phrase "search strategy" this may bring to mind what you need when you are doing a literature search on the Internet. Or you may be thinking about different forms of supervised machine learning, which involve different types of search strategies. For example, in my Excel-based EvalC3 prediction modelling app there are four different search strategies that users can choose from, to help find the most accurate predictive model describing what combination of attributes is the best predictor of a particular outcome. Or you may have heard of James March, an organisational theorist who in 1981 wrote a paper called 'A model of adaptive organizational search', where he talks about how organisations find the right new technologies to develop and explore. This is probably the closest thing to the type of search process that I'm describing below.

Right now I am in the process of helping some other consultants design a ParEvo exercise, in which recipients of research grants from the same foundation will collaboratively develop a number of alternative storylines describing how their efforts to ensure the uptake and use of the research findings take place (and sometimes fail to take place) over the coming three years. Because these are descriptions of possible futures they are inherently a form of fiction. But please note they are not an attempt at 'predictive' fiction. Rather, they are more like a form of 'preparedness enabling' fiction.

As part of the planning process for this exercise we have had to articulate our expectations of what will come out of it, in terms of possible desirable benefits for both the participants and the foundation. In other words, the beginnings of a Theory of Change, which needs to be supplemented by details of how the exercise will best be run in this particular instance, and thus hopefully deliver these results.

When thinking about reasonable expectations for this exercise I came up with the following possibilities, which are now under discussion:

1. Participants will hear different interpretations and views of:
  1. What other participants mean when they use the term "research uptake"
  2. What successful, and unsuccessful, research uptake looks like in its various forms, to various participants
  3. How the process of research uptake can be facilitated, and inhibited, by a range of factors – some within researchers' control and some beyond their control.
2. This experience may then inform how each of the participants proceeds with their own work on facilitating research uptake.

3. The storylines that are generated by the end of the exercise will provide the participants and the XXXX trust with a flexible set of expectations against which actual progress with research uptake can be compared at a later date.

So, my current thinking is that what we have here is a description of a particular kind of search strategy, where the objectives worth pursuing and the means of achieving them are both being explored at the same time, at least within the ParEvo exercise. Though other things will also be happening after the exercise, hopefully involving some use of the ideas generated during the exercise (see possibility 2).

There is also another facet of the idea of search strategies which needs to be mentioned here. When search is used in a machine learning context it is always accompanied by an evaluation function, which determines whether the search continues or comes to a stop because the best possibility has now been identified (a stopping rule, I think, is the term involved). So, of the three possibilities listed above, the last one describes the possibility of an evaluation function. Exactly how it will work needs more thinking, but I think it will be along the lines of asking participants in the prior exercise to identify the extent to which their experience in the interim period has fitted any of the storylines that were developed earlier, and in what ways it has and has not, and why so in both cases. Stay tuned...




Thursday, April 28, 2022

Budgets as theories


A government has a new climate policy. It outlines how climate investments will be spread through a number of different ministries, and implemented by those ministries using a range of modalities. Some funding will be channelled to various multilateral organisations. Some will be spent directly by the ministries. Some will be channelled on to the private sector. At some stage in the future this government wants to evaluate the impact of this climate policy. But before then it has been suggested that an evaluability assessment might be useful, to ask if, how and when such an evaluation might be feasible.

This could be a challenge to those with the task of undertaking the evaluability assessment. And even for those planning the Terms of Reference for that evaluability assessment. The climate policy is not yet finalised. And if the history of most government policy statements (that I have seen) has any lessons it is that you can't expect to see a very clearly articulated Theory of Change of the kind that you might expect to find in the design of a particular aid programme.

My provisional suggestion at this stage is that the evaluability assessment should treat the government's budget, particularly those parts involving funding of climate investments, as a theory of what is intended. And to treat the actual flows of funding that subsequently occur as the implementation of that theory. My naïve understanding of the budget is that it consists of categories of funding, along with subcategories and sub-subcategories, et cetera. In other words, a type of tree structure involving a nested series of choices about where more versus less funds should go. So, the first task of an evaluability assessment would be to map out the theory, i.e. the intentions as captured by budget statements at different levels of detail, moving from national to ministerial and then to smaller units thereafter. And to comment on the adequacy of these descriptions and any gaps that need to be addressed.
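A minimal Python sketch may help make this 'budget as a tree of choices' idea concrete. The category names and figures below are entirely hypothetical; the point is that walking the tree surfaces each allocation choice that key informants could later be asked about:

```python
# A minimal sketch of treating a budget as a nested tree of allocation choices.
# Walking the tree surfaces each point where funds were split between categories,
# which are the points where key informants could be asked 'why here and not there?'.
# The figures and category names are entirely hypothetical.

budget = {
    "National climate budget": {
        "Ministry of Energy": {"Renewables subsidies": 120, "Grid upgrades": 80},
        "Ministry of Agriculture": {"Drought resilience": 60, "Irrigation": 40},
        "Multilateral channels": 90,
    }
}

def sum_tree(node):
    """Total funding under a node, whether it is a leaf amount or a sub-tree."""
    if isinstance(node, (int, float)):
        return node
    return sum(sum_tree(v) for v in node.values())

def list_choices(node, path=""):
    """Yield each branching point and how the funds were split between its categories."""
    if isinstance(node, dict):
        split = {k: sum_tree(v) for k, v in node.items()}
        yield path or "TOP", split
        for key, child in node.items():
            yield from list_choices(child, f"{path}/{key}" if path else key)

for where, split in list_choices(budget):
    print(where, "->", split)
```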

This exercise on its own will not be sufficient as an explication of the climate policy theory because it will not tell us how these different flows of funding are expected to do their work. One option would be to follow each flow down to its 'final recipient', if such a thing can actually be identified. But that would be a lot of work and probably leave us with a huge diversity of detailed mechanisms. Alternatively, one might do this on sampling basis, but how would appropriate samples be selected?

There is an alternative which could be seen as a necessity, one that could then be complemented by a sampling process. This would involve examining each binary choice, starting from the very top of the budget structure, and asking 'key informants' questions about why climate funding was present in one category but not the other, or more in one category than the other. This question on its own might have limited value because budgeting decisions are likely to have a complex and often muddy history, and the responses received might have a substantial element of 'constructed rationality'. Nevertheless the answers could provide some useful context. 

A more useful follow-up question would be to then ask the same informants about their expectations of differences in performance of the amount of climate financing via category X versus category Y.  Followed by a question about how they expect to hear about the achievement of that performance, if at all.  Followed by a question about what they would most like to know about performance in this area. Here performance could be seen in terms of the continuum of behaviours, ranging from simple delivery of the amount of funds as originally planned, to their complete expenditure, followed by some form of reporting on outputs and outcomes, and maybe even some form of evaluation, reporting some form of changes.  

These three follow-up questions would address three facets of an evaluability assessment (EA): a) the ToC - about expected changes, b) data availability, c) stakeholder interests. Questions would involve two types of comparisons: funding versus no funding, and more versus less funding. The fourth EA question, about the surrounding institutional context, typically asks about the factors that may enable and/or limit an evaluation of what actually happened (more on evaluability assessments here).

There will of course be complications in this sort of approach. Budget documents will not simply be a nested series of binary choices; at each level there may be multiple categories available rather than just two. However, informants could be asked to identify 'the most significant difference' between all these categories, in effect introducing an intermediary binary category. There could also be a great number of different levels to the budget documents, with each new level in effect doubling the number of choices and associated questions that need to be asked. Prioritisation of enquiries would be needed, possibly based on a 'follow the (biggest amount of) money' principle. It is also possible that quite a few informants will have limited ideas or information about the binary comparisons they are asked about. A wider selection of informants might help fill that gap. Finally, there is the question of how to 'validate' the views expressed about expected differences in performance, availability of performance information, and relevant questions about performance. Validation might take the form of a survey, of a wider constituency of stakeholders within the organisation of interest, about the views expressed by the informants.

PS: Re this comment in the third para above: "And to treat the actual flows of funding that subsequently occur as the implementation of that theory". One challenge the EA team might find is that while it may have access to detailed budget documents, in many places it may not yet be clear where funds have been tagged as climate finance spending. That itself would be an important EA finding.

To be continued...

Sunday, April 24, 2022

Making small samples of large populations useful


I was recently contacted by someone who is working for a consulting firm that has a contract to evaluate the implementation of a large-scale health program covering a huge number of countries.  Their client had questioned their choice of 6 countries as case studies.  They were encouraging the consultancy firm to expand the number of country case studies, apparently because they thought this would make this sample of country cases more representative of the population of countries as a whole.  However, the consulting firm wasn't planning to aggregate results of the six country case studies and then make a claim about generalisability of findings across the whole population of countries.  Quite the opposite, the intention was that each country case study would provide a detailed perspective on one or more particular issues that was well exemplified by that case.

In our discussions, I ended up suggesting a strategy that might satisfy both parties, in that it addressed to some extent the question of generalisable knowledge while at the same time being designed to exploit the particularities of individual country cases. My suggestion was relatively simple, although implementing it might take a bit of work, making use of whatever data is available on the full population of countries. The suggestion was that for each individual case study the first step in the process would be to identify and explain the interesting particularities of that case, within the context of the evaluation's objectives. Then the evaluation team would look through whatever data is available on the whole population of countries, with the aim of identifying a sub-set of other countries that had characteristics (perhaps both generic, e.g. political and socio-economic indicators, and issue-specific) similar to the case study country. These would then be assumed to be the countries where the case study findings and recommendations could be most relevant. 
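Here is a minimal Python sketch of that matching step, using a handful of hypothetical countries and normalised indicator values. The similarity measure (Euclidean distance) and the threshold are illustrative choices only:

```python
# A minimal sketch of the suggestion above: for a given case study country, find the
# sub-set of other countries with the most similar characteristics, to which the case
# study findings might be most relevant. Countries and indicator values are hypothetical.

import math

# A few generic and issue-specific indicators per country (hypothetical, normalised 0-1)
countries = {
    "Case study country": [0.6, 0.4, 0.8],
    "Country B":          [0.7, 0.5, 0.7],
    "Country C":          [0.1, 0.9, 0.2],
    "Country D":          [0.5, 0.4, 0.9],
    "Country E":          [0.9, 0.1, 0.1],
}

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

target = countries["Case study country"]
ranked = sorted(
    ((name, distance(vals, target)) for name, vals in countries.items() if name != "Case study country"),
    key=lambda pair: pair[1],
)

# Countries within a chosen similarity threshold are treated as the 'relevant' sub-set
threshold = 0.3
relevant = [name for name, d in ranked if d <= threshold]
print(relevant)  # e.g. ['Country D', 'Country B']
```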

As shown in the diagram below, it is possible that the sub-sets of countries relevant to each case study country might overlap to some extent. Even when one case study country is examined it is possible that it might have more than one particularity of interest, each of whose analyses might be usefully generalised to a limited number of other countries. And those different sub-sets of countries may themselves overlap to some extent (not shown below).  


Green nodes = case study countries
Red nodes = remainder of the whole population 
Red nodes connected to green nodes = countries that might find green node country case study findings relevant
Unconnected red nodes = Parts of whole population where case study findings not expected to have any relevance

Another possibility, perhaps seen as inadvisable in normal circumstances, would be to identify the countries relevant to any case study analysis after the fact, not necessarily or only before. After the case study had actually been carried out there would be much more information available on the relevant particularities of the case study country, which might make it easier to identify which other countries these findings were most relevant to. However, the client of the evaluation might need to be given some reassurance in advance, for example by ensuring that at least some of these (red node) countries were identified at the beginning, before the case studies were underway.

PS: It is possible to quantify the nature of this kind of sampling. For example, in the above diagram
Total number of cases =  37  (red and green). 
Case study cases = 5 (14%)  of all cases
Relevant-to-case-study cases = 17 (46%) of all cases
Relevant-to->1-case-study cases = 3 (8%) of all cases
Not-relevant-to-case-study* cases = 15 (40%) of all cases 

*Bear in mind also that in many evaluations case studies will not be the only means of inquiry. For example, there are probably internal and external data sets that can be gathered and analysed re the whole set of 37 countries.

Conclusion: We should not be thinking in terms of binary options. It is not true that either a case is part of a representative sample of a whole population, or it is unrepresentative and of interest only to itself. It can be relevant to a sub-set of the population.