
Friday, December 22, 2023

Using the Confusion Matrix as a general-purpose analytic framework


Background

This posting has been prompted by work I have done this year for the World Food Programme (WFP) as a member of their Evaluation Methods Advisory Panel (EMAP). One task was to carry out a review, along with colleague Mike Reynolds, of the methods used in the 2023 Country Strategic Plans evaluations. You will be able to read about these, and related work, in a forthcoming report on the panel's work, which I will link to here when it becomes available.

One of the many findings of potential interest was: "there were relatively very few references to how data would be analysed, especially compared to the detailed description of data collection methods". In my own experience, this problem is widespread, found well beyond WFP. In the same report I proposed the use of what is known as the Confusion Matrix as a general-purpose analytic framework. Not as the only framework, but as one that could be used alongside more specific frameworks associated with particular intervention theories, such as those derived from the social sciences.

What is a Confusion Matrix?

A Confusion Matrix is a type of truth table, i.e. a table representing all the logically possible combinations of two variables or characteristics. In an evaluation context these two characteristics could be the presence and absence of an intervention, and the presence and absence of an outcome. An intervention represents a specific theory (aka model), which includes a prediction that a specific type of outcome will occur if the intervention is implemented. In the 2 x 2 version shown above, there are four types of possibilities:

  1. The intervention is present and the outcome is present. Cases like this are known as True Positives.
  2. The intervention is present but the outcome is absent. Cases like this are known as False Positives.
  3. The intervention is absent and the outcome is absent. Cases like this are known as True Negatives.
  4. The intervention is absent but the outcome is present. Cases like this are known as False Negatives.
Common uses of the Confusion Matrix

The use of Confusion Matrices is most commonly associated with the field of machine learning and predictive analytics, but it has much wider application, including in medical diagnostic testing, predictive maintenance, fraud detection, customer churn prediction, remote sensing and geospatial analysis, cyber security, computer vision, and natural language processing. In these applications the Confusion Matrix is populated by the number of cases falling into each of the four categories. These numbers are in turn the basis of a wide range of performance measures, which are described in detail in the Wikipedia article on the Confusion Matrix. A selection of these is described here, in this blog on the use of the EvalC3 Excel app.
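To make the link between the four counts and some of the most widely used performance measures concrete, here is a minimal Python sketch. The numbers are invented for illustration, and the measure names follow the standard definitions referred to above.

```python
# Counts of cases in each cell of a 2 x 2 Confusion Matrix (invented numbers, for illustration only)
TP, FP, FN, TN = 20, 5, 8, 30

accuracy    = (TP + TN) / (TP + FP + FN + TN)   # share of all cases correctly classified
precision   = TP / (TP + FP)                    # aka positive predictive value
recall      = TP / (TP + FN)                    # aka sensitivity, or true positive rate
specificity = TN / (TN + FP)                    # true negative rate
f1          = 2 * precision * recall / (precision + recall)  # balance of precision and recall

print(f"Accuracy {accuracy:.2f}, Precision {precision:.2f}, "
      f"Recall {recall:.2f}, Specificity {specificity:.2f}, F1 {f1:.2f}")
```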

The claim

Although the use of a Confusion Matrix is commonly associated with quantitative analyses of performance, such as the accuracy of predictive models, it can also be a useful framework for thinking in more qualitative terms. This is a less well-known and less publicised use, which I elaborate on below. It is the inclusion of this wider potential use that is the basis of my claim that the Confusion Matrix can be seen as a general-purpose analytic framework.

The supporting arguments

There are at least four main arguments supporting the claim:

  1. The structure of the Confusion Matrix serves as a useful reminder and checklist that at least four different kinds of cases should be sought when constructing and/or evaluating a claim that X (e.g. an intervention) led to Y (e.g. an outcome). 
    1. True Positive cases, which we will usually start looking for first of all. At worst, this is all we look for.
    2. False Positive cases, which we are often advised to look for, but often don't invest much time in actually doing so. Here we can learn what does not work, and why.
    3. False Negative cases, which we probably look for even less often. Here we can learn what else works, and perhaps why.
    4. True Negative cases, because sometimes there are asymmetric causes at play, i.e. not just the absence of the expected causes.
  2. The contents of the Confusion Matrix help us to identify interventions that are necessary, sufficient or both. This can be practically useful knowledge (a minimal code sketch of these checks is given after this list).
    1. If there are no FP cases, this suggests an intervention is sufficient for the outcome to occur. The more cases we investigate without finding an FP, the stronger this suggestion is. But if only one FP is found, that tells us the intervention is not sufficient. Single cases can be informative. Large numbers of cases are not always needed.
    2. If there are no FN cases, this suggests an intervention is necessary for the outcome to occur. The more cases we investigate without finding an FN, the stronger this suggestion is. But if only one FN is found, that tells us the intervention is not necessary. 
    3. If there are no FP or FN cases, this suggests an intervention is sufficient and necessary for the outcome to occur. The more cases we investigate without finding an FP or FN, the stronger this suggestion is. But if even one FP, or one FN, is found, that tells us that the intervention is not sufficient or not necessary, respectively. 
  3. The contents of the Confusion Matrix help us identify the type and scale of errors and their acceptability. FP and FN cases are two different types of error that have different consequences in different contexts. A brain surgeon will be looking for an intervention that has a very low FP rate, because errors in brain surgery can be fatal and cannot be recovered from. On the other hand, a stock market investor is likely to be looking for a more general-purpose model, with few FNs. However, it only has to be right 55% of the time to still make them money, so a high rate of FPs may not be a big concern: they can recover their losses through further trading. In the field of humanitarian assistance the corresponding concerns are with coverage (reaching all those in need, i.e. minimising False Negatives) and leakage (minimising inclusion of those not in need, i.e. False Positives). There are Confusion Matrix based performance measures for both kinds of error, and for the degree to which the two kinds of error are balanced (see the Wikipedia entry).
  4. The contents of the Confusion Matrix can help us identify useful case studies for comparison purposes. These can include:
    1. Cases which exemplify the True Positive results, where the model (e.g. an intervention) correctly predicted the presence of the outcome. Look within these cases to find any likely causal mechanisms connecting the intervention and outcome. Two sub-types can be useful to compare:
      1. Modal cases, which represent the most common characteristics seen in this group, taking all comparable attributes into account, not just those within the prediction model. 
      2. Outlier cases, which are the most dissimilar to all other cases in this group, apart from having the same prediction model characteristics.
    2. Cases which exemplify the False Positives, where the model incorrectly predicted the presence of the outcome. There are at least two possible explanations that can be explored:
      1. In the False Positive cases, there are one or more other factors that all the cases have in common, which are blocking the model configuration from working, i.e. delivering the outcome.
      2. In the True Positive cases, there are one or more other factors that all the cases have in common, which are enabling the model configuration to work, i.e. deliver the outcome, but which are absent in the False Positive cases.
        1. Note: For comparisons with TP cases, TP and FP cases should be maximally similar in their other case attributes. This is sometimes called MSDO (most similar, different outcome) based case selection.
    3. Cases which exemplify the False Negatives, where the outcome occurred despite the absence of the attributes of the model. There are three possibilities of interest here:
      1. There may be some False Negative cases that have all but one of the attributes found in the prediction model. These cases would be worth examining, in order to understand why the absence of a particular attribute that is part of the predictive model does not prevent the outcome from occurring. There may be some counter-balancing factor at work, enabling the outcome.
      2. It is possible that some cases have been classed as FNs because data was missing on crucial attributes that would have otherwise classed them as TPs.
      3. Other cases may represent genuine alternatives, which need within-case investigation to identify the attributes that appear to make them successful.
    4. Cases which exemplify the True Negatives, where the absence of the attributes of the model is associated with the absence of the outcome.
      1. Normally these are seen as not being of much interest. But there may be cases here with all but one of the intervention attributes. If found, then the missing attribute may be viewed as: 
        1. A necessary attribute, without which the outcome cannot occur
        2. An INUS attribute, i.e. an attribute that is Insufficient but Necessary in a configuration that is Unnecessary but Sufficient for the outcome (see Befani, 2016). It would then be worth investigating how these critical attributes have their effects by doing a detailed within-case analysis of the cases with the critical missing attribute.
      2. Cases may become TNs for two reasons. The first, and most expected, is that the causes of positive outcomes are absent. The second, which is worth investigating, is that there are additional and different causes at work which are causing the outcome to be absent. The first of these is described as causal symmetry, the second as causal asymmetry. Because of the second possibility, it is worthwhile paying close attention to TN cases to identify the extent to which symmetrical or asymmetrical causes are at work. The findings could have significant implications for any intervention that is being designed. Here a useful comparison would be between maximally similar TP and TN cases.
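The necessity and sufficiency checks described in point 2 of the list above can be made mechanical. Here is a minimal Python sketch of the strict versions of those tests, using invented case codings (1/0 for intervention present and outcome present):

```python
# Each case coded as (intervention_present, outcome_present); invented data for illustration
cases = [(1, 1), (1, 1), (0, 0), (0, 1), (1, 1), (0, 0)]

tp = sum(1 for i, o in cases if i == 1 and o == 1)
fp = sum(1 for i, o in cases if i == 1 and o == 0)
fn = sum(1 for i, o in cases if i == 0 and o == 1)
tn = sum(1 for i, o in cases if i == 0 and o == 0)

# A single FP falsifies sufficiency; a single FN falsifies necessity
sufficient = fp == 0   # intervention never present without the outcome
necessary  = fn == 0   # outcome never present without the intervention

print(f"TP={tp} FP={fp} FN={fn} TN={tn}")
print(f"Sufficient (so far): {sufficient}, Necessary (so far): {necessary}")
```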
Resources

Some of you may know that I have built the Confusion Matrix into the design of EvalC3, an Excel app for cross-case analysis that combines measurement concepts from the disparate fields of machine learning and QCA (Qualitative Comparative Analysis). With fair winds, this should become available as a free-to-use web app in early 2024, courtesy of a team at Sheffield Hallam University. There you will be able to explore and exploit the uses of the Confusion Matrix for both quantitative and qualitative analyses.



Monday, June 14, 2021

Paired case comparisons as an alternative to a configurational analysis (QCA or otherwise)

[Take care, this is still very much a working draft!] Criticisms and comments welcome though

The challenge

The other day I was asked for some advice on how to implement a QCA type of analysis within an evaluation that was already fairly circumscribed in its design, both by the commissioner and by the team proposing to carry out the evaluation. The commissioner had already indicated that they wanted a case study orientated approach and had even identified the maximum number of case studies that they wanted to see (ten). While the evaluation team could see the potential use of a QCA type analysis, they were already committed to undertaking a process type evaluation, and did not want a QCA type analysis to dominate their approach. In addition, it appeared that there already was a quite developed conceptual framework that included many different factors which might be contributory causes of the outcomes of interest.

As is often the case, there seemed to be a shortage of cases and an excess of potentially explanatory variables. In addition, there were doubts within the evaluation team as to whether a thorough QCA analysis would be possible or justifiable given the available resources and priorities.

Paired case comparisons as the alternative

My first suggestion to the evaluation team was to recognise that there is some middle ground between across-case analysis involving medium to large numbers of cases, and within-case analysis. As described by Rihoux and Ragin (2009), a QCA analysis will use both, going back and forth, using one to inform the other, over a number of iterations. The middle ground between these two options is case comparisons – particularly comparisons of pairs of cases. Although in the situation described above there will be a maximum of 10 cases that can be explored, the number of pairs of these cases that can be compared is still quite big (45). With these sorts of numbers some strategy is necessary for making choices about the types of pairs of cases that will be compared. Fortunately there is already a large literature on case selection. My favourite summary is Gerring, J., & Cojocaru, L. (2015). Case-Selection: A Diversity of Methods and Criteria. 

My suggested approach was to use what is known as the Confusion Matrix as the basis for structuring the choice of cases to be compared.  A Confusion Matrix is a simple truth table, showing a combination of two sets of possibilities (rows and columns), and the incidence of those possibilities (cell values). For example as follows:


Inside the Confusion Matrix are four types of cases: 
  1. True Positives, where cases have attributes that fit my theory and the expected outcome is present.
  2. False Positives, where cases have attributes that fit my theory but the expected outcome is absent.
  3. False Negatives, where cases do not have attributes that fit my theory but the outcome is nevertheless present.
  4. True Negatives, where cases do not have attributes that fit my theory and the outcome is absent, as expected.
Both QCA and supervised machine learning approaches are good at identifying individual attributes (or packages of attributes) which are good predictors of when outcomes are present or when they are absent – in other words, where there are large numbers of True Positive and True Negative cases. They are also good at identifying the incidence of exceptions: the False Positives and False Negatives. But this type of cross-case led analysis did not seem to be available as an option to the evaluation team mentioned above.

1. Starting with True Positives

So my suggestion has been to look at the 10 cases that they have at hand, and to start by focusing on those cases where the outcome is present (first column). Focus on the case that is most similar to the others with the outcome present, because findings about this case may be more likely to apply to others (see below on measuring similarity). When examining that case, identify one or more attributes which are the most likely explanation for the outcome being present. Note here that this initial theory is coming from a single within-case analysis, not a cross-case analysis. The evaluation team will now have a single case in the category of True Positive. 

2. Comparing False Positives and True Positives

The next step in the analysis is to identify cases which can be provisionally described as False Positives. Start by finding a case which has the outcome absent. Does it have the same theory-relevant attributes as the True Positive? If so, retain it as a False Positive. Otherwise, move it to the True Negative category. Repeat this move for all remaining cases with the outcome absent. From among all those qualifying as False Positives, find one which is otherwise as similar as possible in all its other attributes to the True Positive case. This type of analysis choice is called MSDO, standing for "most similar design, different outcome" - see the de Meur reference below. Also see below on how to measure this form of similarity. 

The aim here is to find how the causal mechanisms at work differ. One way to explore this question is to look for an additional attribute that is present in the True Positive case but absent in the False Positive case, despite those cases otherwise being most similar. Or, an attribute that is absent in the True Positive but present in the False Positive case. In the former case the missing attribute could be seen as a kind of enabling factor, whereas in the latter case it could be seen as more like a blocking factor. If neither can be found by comparison of coded attributes of the cases, then a more intensive examination of raw data on the cases might still identify them, and lead to an updating/elaboration of the theory behind the True Positive case. Alternatively, that examination might suggest measurement error is the problem and that the False Positive case needs to be reclassified as a True Positive.

3. Comparing False Negatives and True Positives

The third step in the analysis is to identify at least one most relevant case which can be described as a False Negative. This False Negative case should be one that is as different as possible, in all its attributes, from the True Positive case. This type of analysis choice is called MDSO, standing for "most different design, same outcome". 

The aim here should be to try to identify whether the same or a different causal mechanism is at work, when compared to that seen in the True Positive case. One way to explore this question is to look for one or more attributes that both the True Positive and False Negative case have in common, despite otherwise being "most different". If found, and if associated with the causal theory in the True Positive case, then the False Negative case can now be reclassed as a True Positive. The theory describing the now two True Positive cases can be seen as provisionally "necessary" for the outcome, until another False Negative case is found and examined in a similar fashion. If the causal mechanism seems to be different, then the case remains as a False Negative.

Both the second and third step comparisons described above will help: (a) elaborate the details, and (b) establish the limits of the scope of the theory identified in step one. This suggested process makes use of the Confusion Matrix as a kind of very simple chess board, where pieces (aka cases) are introduced on to the board, one at a time, and then sometimes moved to other adjacent positions (depending on their relation to other pieces on the board), or the theory behind their chosen location is updated.

If there are only ten cases available to study, and these have an even distribution of outcomes present and absent, then this three step process of analysis could be reiterated five times, i.e. once for each case where the outcome was present, involving up to 10 case comparisons out of the 45 possible.

Measuring similarity

The above process depends on the ability to make systematic and transparent judgements about similarity. One way of doing this, which I have previously built into an Excel app called EvalC3, is to start by describing each case with a string of binary coded attributes of the same kind as used in QCA, and in some forms of supervised machine learning. An example set of workings can be seen in this Excel sheet, showing an imagined data set of 10 cases with 10 different attributes, and then the calculation and use of Hamming distance as the similarity measure to choose cases for the kinds of comparisons described above. That list of attributes, and the Hamming distance measure, is likely to need to be updated as the investigation of False Positives and False Negatives proceeds.
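The same similarity calculation is easy to sketch outside Excel. Here is a minimal Python illustration, with invented binary codings, that finds a most-similar case with a different outcome (MSDO-style) and a most-different case with the same outcome (MDSO-style) for a chosen focal case:

```python
# Invented binary codings: each case is a dict of attribute values plus an outcome value
cases = {
    "Case A": {"attrs": [1, 1, 0, 1, 0, 1, 0, 0, 1, 1], "outcome": 1},
    "Case B": {"attrs": [1, 1, 0, 1, 0, 1, 1, 0, 1, 1], "outcome": 0},
    "Case C": {"attrs": [0, 0, 1, 0, 1, 0, 1, 1, 0, 0], "outcome": 1},
    "Case D": {"attrs": [1, 0, 0, 1, 0, 1, 0, 1, 1, 0], "outcome": 0},
}

def hamming(a, b):
    """Number of attribute positions on which two cases differ."""
    return sum(x != y for x, y in zip(a, b))

focal = "Case A"  # a provisional True Positive case
others = {name: c for name, c in cases.items() if name != focal}

# MSDO-style: most similar case whose outcome differs from the focal case
msdo = min((n for n, c in others.items() if c["outcome"] != cases[focal]["outcome"]),
           key=lambda n: hamming(cases[focal]["attrs"], cases[n]["attrs"]))
# MDSO-style: most different case with the same outcome
mdso = max((n for n, c in others.items() if c["outcome"] == cases[focal]["outcome"]),
           key=lambda n: hamming(cases[focal]["attrs"], cases[n]["attrs"]))

print(f"Compare {focal} with {msdo} (most similar, different outcome)")
print(f"Compare {focal} with {mdso} (most different, same outcome)")
```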

Incidentally, the more attributes that have been coded per case, the more discriminating this kind of approach can become. This is in contrast to cross-case analysis, where an increase in the number of attributes per case is usually problematic.

Related sources

For some of my earlier thoughts on case comparative analysis see here. These were developed for use within the context of a cross-case analysis process. But the argument above is about how to proceed when the starting point is a within-case analysis.

See also:
  • Nielsen, R. A. (2014). Case Selection via Matching
  • de Meur, G., Bursens, P., & Gottcheiner, A. (2006). MSDO/MDSO Revisited for Public Policy Analysis. In B. Rihoux & H. Grimm (Eds.), Innovative Comparative Methods for Policy Analysis (pp. 67–94). Springer US. 
  • de Meur, G., & Gottcheiner, A. (2012). The Logic and Assumptions of MDSO–MSDO Designs. In The SAGE Handbook of Case-Based Methods (pp. 208–221). 
  • Rihoux, B., & Ragin, C. C. (Eds.). (2009). Configurational Comparative Methods: Qualitative Comparative Analysis (QCA) and Related Techniques. Sage. Pages 28-32 for a description of "MSDO/MDSO: A systematic  procedure for matching cases and conditions". 
  • Goertz, G. (2017). Multimethod research, causal mechanisms, and case studies: An integrated approach. Princeton University Press.

Saturday, September 26, 2020

EvalC3 versus QCA - compared via a re-analysis of one data set


I was recently asked whether EvalC3 could be used to do a synthesis study, analysing the results from multiple evaluations. My immediate response was yes, in principle. But it probably needs more thought.

I then recalled that I had seen somewhere an Oxfam synthesis study of multiple evaluation results that used QCA. This is the reference, in case you want to read it, which I suggest you do.

Shephard, D., Ellersiek, A., Meuer, J., & Rupietta, C. (2018). Influencing Policy and Civic Space: A meta-review of Oxfam’s Policy Influence, Citizen Voice and Good Governance Effectiveness Reviews | Oxfam Policy & Practice. Oxfam. https://policy-practice.oxfam.org.uk/publications/*

Like other good examples of QCA analyses in practice, this paper includes the original data set in an appendix, in the form of a truth table.  This means it is possible for other people like me to reanalyse this data using other methods that might be of interest, including EvalC3.  So, this is what I did.

The Oxfam dataset includes five conditions, a.k.a. attributes, of the programs that were evaluated, along with two outcomes, each pursued by some of the programs. In total there was data on the attributes and outcomes of twenty-two programs concerned with expanding civic space and fifteen programs concerned with policy influence. These were subject to two different QCA analyses.

The analysis of civic space outcomes

In the Oxfam analysis of the fifteen programs concerned with expanding civic space, the QCA analysis found four "solutions", a.k.a. combinations of conditions, which were associated with the outcome of expanded civic space. Each of these combinations of conditions was found to be sufficient for the outcome to occur. Together they accounted for the outcomes found in fourteen of the fifteen cases (93%). But there was overlap in the cases covered by each of these solutions, leaving open the question of which solution best fitted/explained those cases. Six of the fourteen cases had two or more solutions that fitted them.

In contrast, the EvalC3 analysis found two predictive models (=solutions) which are associated with the outcome of expanded civic space.  Each of these combinations of conditions was found to be sufficient for the outcome to occur.  Together they accounted for all fifteen cases where the outcome occurred.  In addition, there was no overlap in the cases covered by each of these models.

The analysis of policy influencing outcomes

In the Oxfam analysis of the twenty-two programs concerned with policy influencing, the QCA analysis found two solutions associated with the policy influencing outcome. Each of these was sufficient for the outcome, and together they accounted for all the outcomes. But there was some overlap in coverage: one of the six cases was covered by both solutions.

In contrast, the EvalC3 analysis found one predictive model which was necessary and sufficient for the outcome, and which accounted for all the outcomes achieved.

Conclusions?

Based on parsimony alone, the EvalC3 solutions/predictive models would be preferable. But parsimony is not the only appropriate criterion for evaluating a model. Arguably a more important criterion is the extent to which a model fits the details of the cases covered, when those cases are closely examined. So, really, what the EvalC3 analysis has done is to generate some extra models that need close attention, in addition to those already generated by the QCA analysis. The number of cases covered by multiple models has been increased.

In the Oxfam study, there was no follow-on attention given to resolving what was happening in the cases that were identified by more than one solution/predictive model.  In my experience of reading other QCA analyses, this lack of follow-up is not uncommon.

However, in the Oxfam study for each of the solutions found at least one detailed description was given of an example case that that solution covered.  In principle, this is good practice. But unfortunately, as far as I can see, it was not clear whether that particular case was exclusively covered by that solution, or part of a shared solution.  Even amongst those cases which were exclusively covered by a solution there are still choices that need to be made (and explained) about how to select particular cases as exemplars and/or for a detailed examination of any causal mechanisms at work.  

QCA software does not provide any help with this task. However, I did find some guidance in a specialist text on QCA: Schneider, C. Q., & Wagemann, C. (2012). Set-Theoretic Methods for the Social Sciences: A Guide to Qualitative Comparative Analysis. Cambridge University Press. https://doi.org/10.1017/CBO9781139004244 (parts of this book are a heavy read, but overall it is very informative). In section 11.4, titled Set-Theoretic Methods and Case Selection, the authors note: 'Much emphasis is put on the importance of intimate case knowledge for a successful QCA. As a matter of fact, the idea of QCA as a research approach of going back-and-forth between ideas and evidence largely consists of combining comparative within-case studies and QCA as a technique. So far, the literature has focused mainly on how to choose cases prior to and during, but not after, a QCA, where by QCA we here refer to the analytic moment of analysing a truth table. It is therefore puzzling that little systematic and specific guidance has so far been provided on which cases to select for within-case studies based on the results of, i.e. after, a QCA…' The authors then go on to provide some guidance (a total of 7 pages out of 320).

In contrast to QCA software, EvalC3 has a number of built-in tools and some associated guidance on the EvalC3 website, on how to think about case selection as a step between cross-case analysis and subsequent within-case analysis.  One of the steps in the seven-stage EvalC3 workflow (Compare Models) is the generation of a table that compares the case coverage of multiple selected alternative models found by one’s analysis to that point.  This enables the identification of cases which are covered by two or more models.  These types of cases would clearly warrant subsequent within-case investigation.

Another step in the EvalC3 workflow called Compare Cases, provides another means of identifying specific cases for follow-up within-case investigations.  In this worksheet individual cases can be identified as modal or extreme examples within various categories that may be of interest e.g. True Positives, False Positives, et cetera.  It is also possible to identify for a chosen case what other case is most similar and most different to that case, when all its attributes available in the dataset are considered.  These measurement capacities are backed up by technical advice on the EvalC3 website on the particular types of questions that can be asked in relation to different types of cases selected on the basis of their similarities and differences. Your comments on these suggested strategies would be very welcome.

I should explain...

...why the models found by EvalC3 were different from those found by the QCA analysis. QCA software finds solutions, i.e. predictive models, by reducing all the configurations found in a truth table down to the smallest possible set, using a minimisation algorithm known as the Quine-McCluskey algorithm.

In contrast, EvalC3 provides users with a choice of four different search algorithms, combined with multiple alternative performance measures that can be used to automatically assess the results generated by those search algorithms. All algorithms have their strengths and weaknesses, in terms of the kinds of results they can and cannot find, including the Quine-McCluskey algorithm used by QCA and the simple machine learning algorithms built into EvalC3. I think the Quine-McCluskey algorithm has particular problems with datasets which have limited diversity, in other words, where the cases present only represent a small proportion of all the possible combinations of the conditions documented in the dataset. The simple search algorithms built into EvalC3 do not seem to face that difficulty. This is my conjecture, not yet rigorously tested.

[In the above example, the cases in the two data sets analysed represented 47% and 68% respectively of all the possible configurations, given the presence of five different conditions.]

While the EvalC3 results described above did differ from the QCA analyses, they were not in outright contradiction. The same has been my experience when I have reanalysed other QCA datasets: EvalC3 will either simply duplicate the QCA findings or produce variations on them, often variations which perform better. 


Wednesday, July 29, 2020

Converting a continuous variable into a binary variable i.e. dichotomising


If you Google "dichotomising data" you will find lots of warnings that this is basically a bad idea!. Why so? Because if you do so you will lose information. All those fine details of differences between observations will be lost.

But what if you are dealing with something like responses to an attitude survey? Typically these have five-point scales ranging from disagree to neutral to agree, or the like. Quite a few of the fine differences in ratings on this scale may well be nothing more than "noise", i.e. variations unconnected with the phenomenon you are trying to measure. They may instead reflect differences in respondents' "response styles", or something more random.

Aggregation or "binning" of observations into two classes (higher and lower) can be done in different ways. You could simply find the median value and split the observations at that point. Or, you could look for a  "natural" gap in the frequency distribution and make the split there. Or, you may have a prior theoretical reason that it makes sense to split the range of observations at some other specific point.

I have been trying out a different approach. This involves looking not just at the continuous variable I want to dichotomise, but also at its relationship with an outcome that will be of interest in subsequent analyses. This could also be a continuous variable, or a binary measure.

There are two ways of doing this. The first is a relatively simple manual approach, in which the cut-off point for the outcome variable has already been decided, by one means or another. We then vary the cut-off point in the range of values for the independent variable, to see what effect this has on the numbers of observations of the outcome above and below its cut-off value. For any specific cut-off value for the independent variable, an Excel spreadsheet can be used to calculate the following:
  1. # of True Positives - where the independent variable value was high and so was the outcome variable value
  2. # of False Positives - where the independent variable value was high but the outcome variable value was low
  3. # of False Negatives - where the independent variable value was low but the outcome variable value was high
  4. # of True Negatives - where the independent variable value was low and the outcome variable value was low
When doing this we are in effect treating cut-off criteria for the independent variable as a predictor of the dependent variable.  Or more precisely, a predictor of the prevalence of observations with values above a specified cut-off point on the dependent variable.

In Excel, I constructed the following:
  • Cells for entering the raw data - the values of each variable for each observation
  • Cells for entering the cut-off points
  • Cells for defining the status of each observation  
  • A Confusion Matrix, to summarise the total number of observations with each of the four possible types described above.
  • A set of 6 widely used performance measures, calculated using the number of observations in each cell of the Confusion Matrix.
    • These performance measures tell me how good the chosen cut-off point is as a predictor of the outcome as specified. At best, all those observations fitting the cut-off criterion will be in the True Positive group and all those not fitting it would be in the True Negative group. In reality, there are also likely to be observations in the False Positive and False Negative groups.
By varying the cut-off points it is possible to find the best possible predictor, i.e. one with very few False Positives and very few False Negatives. This can be done manually when the cut-off point for the outcome variable has already been decided.

Alternatively, if the cut-off point has not been decided for the outcome variable, a search algorithm can be used to find the best combination of two cut-off points (one for the independent and one for the dependent variable). Within Excel, there is an add-in called Solver, which uses an evolutionary algorithm to do such a search and find the optimal combination.
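For those working outside Excel, the same idea can be sketched as a simple grid search in Python. This is only an illustration with invented data and accuracy as the performance measure; it is not a reproduction of the Excel workbook or of Solver's evolutionary algorithm, and any of the other Confusion Matrix based measures could be substituted.

```python
# Invented paired observations: independent variable x, outcome variable y
x = [2.1, 3.4, 1.8, 4.2, 3.9, 2.7, 4.8, 1.2, 3.1, 4.5]
y = [1.0, 2.2, 0.8, 3.1, 2.9, 1.5, 3.4, 0.6, 2.0, 3.3]

def accuracy(x_cut, y_cut):
    """Treat 'x >= x_cut' as a prediction of 'y >= y_cut' and score it."""
    tp = sum(1 for xi, yi in zip(x, y) if xi >= x_cut and yi >= y_cut)
    tn = sum(1 for xi, yi in zip(x, y) if xi < x_cut and yi < y_cut)
    return (tp + tn) / len(x)

# Grid search over candidate cut-off pairs drawn from the observed values,
# keeping the best-performing combination
best = max(((xc, yc) for xc in x for yc in y), key=lambda c: accuracy(*c))
print(f"Best cut-offs: x >= {best[0]}, y >= {best[1]}, accuracy = {accuracy(*best):.2f}")
```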

Postscript 2020 11 05: An Excel file with the dichotomization formula and a built-in example data set is available here  

  
2020 08 13: Also relevant: 

Hofstad, T. (2019). QCA and the Robustness Range of Calibration Thresholds: How Sensitive are Solution Terms to Changing Calibrations? COMPASSS Working Papers, 2019–92. http://www.compasss.org/wpseries/Hofstad2019.pdf  

This paper emphasises the importance of declaring the range of original (pre-dichotomised) values over which the performance of a predictive model remains stable.

Sunday, June 09, 2019

Extracting additional value from the analysis of QuIP data



James Copestake, Marlies Morsink and Fiona Remnant are the authors of "Attributing Development Impact: The Qualitative Impact Protocol Casebook", published this year by Practical Action.
As the title suggests, the book is all about how to attribute development impact, using qualitative data - an important topic that will be of interest to many. The book contents include:
  • Two introductory chapters  
    • 1. Introducing the causal attribution challenge and the QuIP
    • 2. Comparing the QuIP with other approaches to development impact evaluation 
  • Seven chapters describing case studies of its use in Ethiopia, Mexico, India, Uganda, Tanzania, Eastern Africa, and England
  • A final chapter synthesising issues arising in these case studies
  • An appendix detailing guidelines for QuIP use

QuIP is innovative in many respects. Perhaps most notably, at first introduction, in the way data is gathered and coded. Neither the field researchers nor the communities of interest are told which organisation's interventions are of interest, an approach known as "double blindfolding". The aim here is to mitigate the risk of "confirmation bias", as much as is practical in a given context.

The process of coding the qualitative data that is collected is also interesting. The focus is on identifying causal pathways in the qualitative descriptions of change obtained through one-to-one interviews and focus group discussions. In Chapter One the authors explain how the QuIP process uses a triple coding approach, which divides each reported causal pathway into three elements:

  • Drivers of change (causes): What led to change, positive or negative?
  • Outcomes (effects): What change(s) occurred, positive or negative?
  • Attribution: What is the strength of association between the causal claim and the activity or project being evaluated?
"Once all change data is coded then it is possible to use frequency counts to tabulate and visualise the data in many ways, as the chapters that follow illustrate". An important point to note here is that although text is being converted to numbers, because of the software which has been developed, it is always possible to identify the source text for any count that is used. And numbers are not the only basis which conclusions are reached about what has happened, the text of respondents' narratives are also very important sources.

That said, what interests me most at present are the emerging options for the analysis of the coded data. Data collation and analysis was initially based on a custom-designed Excel file, used because 99% of evaluators and program managers are already familiar with the use of Excel. However, more recently, investment has been made in the development of a customised version of MicroStrategy, a free desktop data analysis and visualization dashboard. This enables field researchers and evaluation clients to "slice and dice" the collated data in many different ways, without risk of damaging the underlying data, and its use involves a minimal learning curve. One of the options within MicroStrategy is to visualise the relationships between all the identified drivers and outcomes, as a network structure. This is of particular interest to me, and is something that I have been exploring with the QuIP team and with Steve Powell, who has been working with them.

The network structure of driver and outcome relationships 


One way QuIP coded data has been tabulated is in the form of a matrix, where rows = drivers and columns = outcomes and cell values = incidence of reports of the connection between a given row and a given column (See tables on pages 67 and 131). In these matrices, we can see that some drivers affect multiple outcomes and some outcomes are affected by multiple drivers. By themselves, the contents of these matrices are not easy to interpret, especially as they get bigger. One matrix provided to me had 95 drivers, 80 outcomes and 254 linkages between these. Some form of network visualisation is an absolute necessity.

Figure 1 is a network visualisation of the 254 connections between drivers and outcomes. The red nodes are reported drivers and the blue nodes are the outcomes that they have reportedly led to. Green nodes are outcomes that in turn have been drivers for other outcomes (I have deliberately left off the node labels in this example). While this was generated using Ucinet/Netdraw, I noticed that the same structure can also be generated by MicroStrategy.


Figure 1

It is clear from a quick glance that there is still more complexity here than can be easily made sense of. Most notably in the "hairball" on the right.

One way of partitioning this complexity is to focus in on a specific "ego network" of interest. An ego network is a combination of (a) an outcome, plus (b) all the other drivers and outcomes it is immediately linked to, plus (c) the links between those. MicroStrategy already provides (a) and (b), but could probably be tweaked to also provide (c). In Ucinet/Netdraw it is also possible to define the width of the ego network, i.e. how many links out from the ego to collect connections to and between "alters". Here in Figure 2 is one ego network that can be seen in the dense cluster in Figure 1.

Figure 2


Within this selective view, we can see more clearly the different causal pathways to an outcome. There are also a number of feedback loops here, between pairs (3) and larger groups of outcomes (2).

PS: Ego networks can be defined in relation to a node which represents a driver or an outcome. If one selected a driver as the ego in the ego network then the resulting network view would provide an "effects of a cause" perspective. Whereas if one selected an outcome as the ego, then the resulting network view would provide a "causes of an outcome" perspective.
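For readers who want to reproduce this kind of view outside Ucinet/Netdraw or MicroStrategy, the ego network extraction can be sketched with the networkx Python library. The node names here are invented placeholders, and the undirected option is used so that both incoming and outgoing links are collected:

```python
import networkx as nx

# Invented driver -> outcome links, with citation counts as edge weights
edges = [
    ("Driver A", "Outcome X", 5), ("Driver B", "Outcome X", 3),
    ("Outcome X", "Outcome Y", 4), ("Driver C", "Outcome Y", 2),
]
G = nx.DiGraph()
G.add_weighted_edges_from(edges)

# Ego network of width 1 around a chosen outcome: the node, its immediate
# neighbours, and the links between them
ego = nx.ego_graph(G, "Outcome X", radius=1, undirected=True)
print(ego.edges(data=True))
```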

Understanding specific connections


Each of the links in the above diagrams, and in the source matrices, has a specific "strength" based on a choice of how that relationship was coded. In the above example, these values were "citation counts", meaning one count per domain per respondent. Associated with each of these counts are the text sources, which can shed more light on what those connections meant.

What is missing from the above diagrams is numerical information about the nodes, i.e. the frequency of mentions of drivers and outcomes. The same is the case for the tabulated data examples in the book (pages 67, 131). But that data is within reach.

Here in Figures 7 and 8 is a simplified matrix and associated network diagram, taken from this publication: "QuIP and the Yin/Yang of Quant and Qual: How to navigate QuIP visualisations".

In Figure 7 I have added row and column summary values in red, and copied these in white on to the respective nodes in Figure 8. These provide values for the nodes, as distinct from the connections between them.


Why bother? Link strength by itself is not all that meaningful. Link strengths need to be seen in context, specifically: (a) how often the associated driver was reported at all, and (b) how often the associated outcome was reported at all.  These are the numbers in white that I have added to the nodes in the network diagram above.

Once this extra information is provided we can insert it into a Confusion Matrix and use it to generate two items of missing information: (a) the number of False Positives, in the top right cell, and (b) the number of False Negatives (in the bottom left cell). In Figure 3, I have used Confusion Matrices to describe two of the links in the Figure 8 diagram.


Figure 3
It now becomes clear that there is an argument for saying that the link with a value of 5, between "Alternative income" and "Increased income", is more important than the link with a value of 14, between "Rain/recovering from drought" and "Healthier livestock".

The reason? Despite the second link looking stronger (14 versus 5), there is more chance that the expected outcome will occur when the first driver is present. With "Rain/recovering from drought" the expected outcome only happens 14 of 33 times. But with "Alternative income" the expected outcome happens 5 of the 7 times.
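The arithmetic behind this comparison is simple once the driver node totals are available. Here is a minimal Python sketch using only the figures quoted above; the False Negative cell could be filled in the same way, by subtracting the link value from the outcome node's total.

```python
# Figures quoted in the text: each link has a citation count (the True Positives),
# and each driver node has a total number of mentions.
links = {
    ("Rain/recovering from drought", "Healthier livestock"): {"tp": 14, "driver_total": 33},
    ("Alternative income", "Increased income"):              {"tp": 5,  "driver_total": 7},
}

for (driver, outcome), counts in links.items():
    tp = counts["tp"]
    fp = counts["driver_total"] - tp   # driver reported, but without this outcome
    share = tp / (tp + fp)             # chance the outcome is reported when the driver is
    print(f"{driver} -> {outcome}: TP={tp}, FP={fp}, outcome present {share:.0%} of driver mentions")
```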

When this analysis is repeated for all the links where there is data (6 in Figure 8 above), it turns out that only two links are of this kind, where the outcome is more likely to be present when the driver is present. The second one is the link between "Increased price of livestock" and "Increased income", as shown in the Confusion Matrix in Figure 4 below.

Figure 4
There are some other aspects of this kind of analysis worth noting. When "Increased price of livestock" is compared to the link above (Alternative income), it accounts for a bigger proportion of the cases where the outcome is reported, i.e. 11/73 versus 5/73.

One can also imagine situations where the top right cell (False Positive)  is zero. In this case, the driver appears to be sufficient for the outcome i.e. where it is present the outcome is present. And one can imagine situations where the bottom left cell (False Negative) is zero. In this case, the driver appears to be necessary for the outcome i.e. where it is not present the outcome is also not present.


Filtered visualisations using Confusion Matrix data



When data from a Confusion Matrix is available, this provides analysts with additional options for generating filtered views of the network of reported causes. These are:

  1. Show only those connected drivers which seem to account for most instances of a reported outcome, i.e. where the number of True Positives (in the top left cell) exceeds the number of False Negatives (in the bottom left cell).
  2. Show only those connected drivers which are more often associated with instances of a reported outcome (the True Positives, in the top left cell) than with its absence (the False Positives, in the top right cell). 

Drivers accounting for the majority of instances of the reported outcome


Figure 5 is a filtered version of the blue, green and red network diagram shown in Figure 1 above. A filter has retained links where the True Positive value in the top left cell of the Confusion Matrix (i.e. the link value) is greater than the associated False Negative value in the bottom left cell. This presents a very different picture to the one in Figure 1.

Figure 5


Key: Red nodes = drivers, Blue nodes = outcomes, Green nodes = outcomes that were also in the role of drivers.

Drivers more often associated with the presence of a reported outcome than its absence


A filter has retained links where the True Positive value in the top left cell of the Confusion Matrix (i.e. the link value) is greater than the associated False Positive value in the top right cell. 

Figure 6

Other possibilities


While there are a lot of interesting possibilities as to how to analyse QuIP data, one option does not yet seem available. That is the possibility of identifying instances of "configurational causality". By this, I mean packages of causes that must be jointly present for an outcome to occur. When we look at the rows in Figure 7 it seems we have lists of single causes, each of which can account for some of the instances of the outcome of interest. And when we look at Figures 2 and 8 we can see that there is more than one way of achieving an outcome. But we can't easily identify any "causal packages" that might be at work.

I am wondering to what extent this might be a limitation built into the coding process. Or, if better use of existing coded information might work. Perhaps the way the row and column summary values are generated in Figure 7 needs rethinking.

Looking at the existing network diagrams, they provide no information about which connections were reported by whom. In Figure 2, take the links into "Increased income" from "Part of organisation x project" and "Social Cash Transfer (Gov)". Each of these links could have been reported by a different set of people, or they could have been reported by the same set of people. If the latter, then this could be an instance of "configurational causality". To be more confident we would need to establish that people who reported only one of the two drivers did not also report the outcome.

Because all QuIP coded values can be linked back to specific sources and their texts, it seems that this sort of analysis should be possible. But it will take some programming work to make this kind of analysis quick and easy.

PS 1: Actually maybe not so difficult. All we need is a matrix where:

  • Rows = respondents
  • Columns = drivers & outcomes mentioned by respondents
  • Cell values =  1/0 mean that row respondent did or did not mention that column driver or outcome
Then use QCA, or EvalC3, or other machine learning software, to find predictable associations between one or more drivers and any outcome of interest. Then check these associations against text details of each mention to see if a causal role is referred to and plausible.
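A minimal sketch of what that check could look like, with an invented respondent-level matrix. The configuration-finding step here is just a brute-force search over pairs of drivers, standing in for what QCA, EvalC3 or other machine learning software would do; the driver and outcome names are placeholders.

```python
from itertools import combinations

# Invented respondent-level data: 1/0 = did / did not mention that driver or outcome
data = {
    "R1": {"driver_A": 1, "driver_B": 1, "driver_C": 0, "outcome_Y": 1},
    "R2": {"driver_A": 1, "driver_B": 0, "driver_C": 1, "outcome_Y": 0},
    "R3": {"driver_A": 1, "driver_B": 1, "driver_C": 0, "outcome_Y": 1},
    "R4": {"driver_A": 0, "driver_B": 1, "driver_C": 1, "outcome_Y": 0},
    "R5": {"driver_A": 1, "driver_B": 1, "driver_C": 1, "outcome_Y": 1},
}
drivers = ["driver_A", "driver_B", "driver_C"]

# Look for pairs of drivers whose joint presence is reliably accompanied by the outcome
for pair in combinations(drivers, 2):
    with_both = [r for r in data.values() if all(r[d] for d in pair)]
    if with_both:
        hit_rate = sum(r["outcome_Y"] for r in with_both) / len(with_both)
        print(f"{' + '.join(pair)}: outcome mentioned by {hit_rate:.0%} of {len(with_both)} respondents")
```

Any candidate package found this way would still need to be checked against the source texts, as noted above.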

That said, the text evidence would not necessarily provide the last word (pun unintended). It is possible that a respondent may mention various driver-outcome relationships e.g. A>B, C>D, and A>E but not C>E. Yet, when analysing data from multiple respondents we might find a consistent co-presence of references to C and E (though no report of actual causal relations between them). The explanation may simply be that in the confines of a specific interview there was not time or inclination to mention this additional specific relationship.

In response...

James Copestake has elaborated on this final section as follows "We have discussed this a lot, and I agree it is an area in need of further research. You suggest that there may be causal configurations in the source text which our coding system is not yet geared up to tease out. That may be true and is something we are working on. But there are two other possibilities. First, that interviewers and interviewing guidelines are not primed as much as they could be to identify these. Second, respondents narrative habits (linked to how they think and the language at their disposal) may constrain people from telling configurational stories. This means the research agenda for exploring this issue goes beyond looking at coding"

PS: Also of interest: Attributing development impact: lessons from road testing the QuIP. James Copestake, January 2019

Saturday, July 16, 2016

EvalC3 - an Excel-based package of tools for exploring and evaluating complex causal configurations


Over the last few years I have been exposed to two different approaches to identifying and evaluating complex causal configurations within sets of data describing the attributes of projects and their outcomes. One is Qualitative Comparative Analysis (QCA) and the other is Predictive Analytics (and particularly Decision Tree algorithms). Both can work with binary data, which is easier to access than numerical data, but both require specialist software, which takes time and effort to learn how to use.

In the last year I have spent some time and money, in association with a software company called Aptivate (Mark Skipper in particular) developing an Excel based package which will do many of the things that both of the above software packages can do, as well as provide some additional capacities that neither have.

This is called EvalC3, and is now available [free] to people who are interested to test it out, either using their own data and/or some example data sets that are available. The "manual" on how to use EvalC3 is a supporting website of the same name, found here: https://evalc3.net/  There is also a  short introductory video here.

Its purpose is to enable users: (a) to identify sets of project & context attributes which are  good predictors of the achievement of an outcome of interest,  (b) to compare and evaluate the performance of these predictive models, and (c) to identify relevant cases for follow-up within-case investigations to uncover any causal mechanisms at work.

The overall approach is based on the view that "association is a necessary but insufficient basis for a strong claim about causation", which is a more useful perspective than simply saying "correlation does not equal causation". While the process involves systematic quantitative cross-case comparisons, its use should be informed by within-case knowledge at both the pre-analysis planning and post-analysis interpretation stages.

The EvalC3 tools are organised in a work flow as shown below:



The selling points:




  • EvalC3 is free, and distributed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
  • It uses Excel, which many people already have and know how to use
  • It uses binary data. Numerical data can be converted to binary but not the other way
  • It combines manual hypothesis testing with  algorithm based (i.e. automated) searches for good performing predictive models
  • There are four different algorithms that can be used
  • Prediction models can be saved and compared
  • There are case-selection strategies for follow-up case-comparisons to identify any causal mechanisms at work "underneath" the prediction models

If you would like to try using EvalC3 email rick.davies at gmail.com

Skype video support can be provided in some instances, i.e. if your application is of interest to me :-)

Tuesday, June 23, 2015

Is QCA its own worst enemy?

[As you may have read elsewhere on this blog] QCA stands for Qualitative Comparative Analysis. It is a method that is finding increased use as an evaluation tool, especially for exploring claims about the causal role of interventions of various kinds. What I like about it is its ability to recognize and analyse complex causal configurations, which have some fit with the complexity of the real world as we know it.

What I don't like about it is its complexity: it can sometimes be annoyingly obscure and excessively complicated. This is a serious problem if you want to see the method being used more widely, and if you want the results to be effectively communicated and properly understood. I have seen instances recently where this has been such a problem that it threatened to derail an ongoing evaluation.

In this blog post I want to highlight where the QCA methodology is unnecessarily complex, and suggest some ways to avoid this type of problem. In fact I will start with the simple solution, then explain how QCA manages to make it more complex.

Let me start with a relatively simple perspective. QCA analyses fall into the broad category of "classifiers". These include a variety of algorithmic processes for deciding which category various instances belong to, for example which types of projects were successful or not in achieving their objectives.

I will start with a two by two table, a Truth Table, showing the various possible results that can be found, by QCA and other methods. Configuration X here is a particular combination of conditions that an analysis has found to be associated with the presence of an outcome. The Truth Table helps us identify just how good that association is, by comparing the incidences where the configuration is present or absent with the incidences where the outcome is present or absent.


As I have explained in an earlier blog, one way of assessing the adequacy of the result shown in such a matrix is to use a statistical test such as Chi-Square, to see if the distribution is significantly different from what a chance distribution would look like. There are only two possible results when the outcome is present: the association is statistically significant or it is not.

However, if you import the ideas of Necessary and/or Sufficient causes the range of interesting results increases. The matrix can now show four possible types of results when the outcome is present:

  1. The configuration of conditions is Necessary and Sufficient for the outcome to be present. Here cells C and B would be empty of cases
  2. The configuration of conditions is Necessary but Insufficient for the outcome to be present. Here cell C would be empty of cases
  3. The configuration of conditions is Unnecessary but Sufficient for the outcome to be present. Here cell  B would be empty of cases
  4. The configuration of conditions is Unnecessary and Insufficient for the outcome to be present. Here no cells would be empty of cases
The interesting thing about the first three options is that they are easy to disprove. There only needs to be one case found in the cell(s) meant to be empty, for that claim to be falsified.

And we can provide a lot more nuance to the type 4 results, by looking at the proportion of cases found in cells B and C, relative to cell A. The proportion A/(A+B) tells us about the consistency of the results, in the simple sense of consistency as found via an examination of a QCA Truth Table. The proportion A/(A+C) tells us about the coverage of the results, i.e. the proportion of all cases with the outcome present that were identified by the configuration. 
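In code, with A, B and C standing for the cell counts just described (True Positives, False Positives and False Negatives respectively), and using invented numbers, the two measures and the strict categorical tests look like this:

```python
# Cell counts from the Truth Table: A = True Positives, B = False Positives, C = False Negatives
A, B, C = 18, 4, 6   # invented numbers, for illustration only

consistency = A / (A + B)   # how reliably the configuration is accompanied by the outcome
coverage    = A / (A + C)   # how much of the outcome's presence the configuration accounts for

# The strict (categorical) tests are then just the limiting cases:
sufficient = B == 0   # consistency == 1
necessary  = C == 0   # coverage == 1

print(f"Consistency {consistency:.2f}, Coverage {coverage:.2f}, "
      f"Sufficient: {sufficient}, Necessary: {necessary}")
```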

So how does QCA deal with all this? Well, as far as I can see, it does so in a way that makes it more complex than necessary. Here I am basing my understanding mainly on Schneider and Wagemann's account of QCA.
  1. Firstly, they leave aside the simplest notions of Necessity and Sufficiency as described above, which are based on a categorical notion of Necessity and Sufficiency, i.e. a configuration either is or is not Sufficient, etc. One of the arguments I have seen for doing this is that these types of results are rare, and part of this may be due to measurement error, so we should take a more generous/less demanding view of what constitutes Necessity and Sufficiency.
  2. Instead they focus on Truth Tables with results as shown below (classed as 4. Unnecessary and Insufficient above). They then propose ways of analysing these in terms of having degrees of Necessity and Sufficiency conditions. This involves two counter-intuitive mirror-opposite ways of measuring the consistency and coverage of the results, according to whether the focus is on analysing the extent of Sufficiency or Necessity conditions (see Chapter 5 for details).
  3. Further complicating the analysis is the introduction of minimum thresholds for the consistency of Necessity and Sufficiency conditions (because the more basic categorical idea has been put aside). There is no straightforward basis for defining these levels. It is suggested that they depend on the nature of the problem being identified.

  Configuration X contains conditions which are neither Necessary nor Sufficient 

Using my strict interpretation of Sufficiency and Necessity, there is no need for a consistency measure where a condition (or configuration) is found to be Sufficient but Unnecessary, because there will be no cases in cell B. Likewise, there is no need for a coverage measure where a condition (or configuration) is found to be Necessary but Insufficient, because there will be no cases in cell C.

We do need to know the consistency where a condition (or configuration) is Necessary but Insufficient, and the coverage where a condition (or configuration) is found to be Sufficient but Unnecessary.

Monday, May 25, 2015

Characterising purposive samples



In some situations it is not possible to develop a random sample of cases to examine for evaluation purposes. There may be more immediate challenges, such as finding enough cases with sufficient information and sufficient quality of information.

The problem then is knowing to what extent, if at all, the findings from this purposive sample can be generalised, even in the more informal sense of speculating on the relevance of findings to other cases in the same general population.

One way this process can be facilitated is by "characterising" the sample, a term I have taken from elsewhere. It means to describe the distinctive features of something. This could best be done using attributes or measures that can be, and probably already have been, used to describe the wider population the sample came from. For example, a sample of people could be described as having an average age of 35, versus 25 in the wider population, and as being 35% women, versus 55% in the wider population. This seems a rather basic idea, but it is not always applied.

Another, more holistic, way of doing so is to measure the diversity of the sample. This is relatively easy to do when the data set associated with the sample is in binary form, as for example is used in QCA analysis (i.e. cases are rows, columns are attributes, and cell values of 0 or 1 indicate whether each attribute is absent or present).

As noted in earlier blog postings, Simpson's Reciprocal Index is a useful measure of diversity. This takes into account two aspects of diversity: (a) richness, which in a data set can be seen in the number of unique configurations of attributes found across all the cases (think metaphorically of organisms = cases, chromosomes = configurations, and genes = attributes), and (b) evenness, which can be seen in the relative number of cases having particular configurations. When the number of cases is evenly distributed across all configurations this is seen as more diverse than when the number of cases per configuration varies.
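As a minimal illustration (my own sketch, assuming cases are held as tuples of binary attribute values; the cases themselves are invented), Simpson's Reciprocal Index can be computed directly from such a data set:

```python
from collections import Counter

def simpsons_reciprocal_index(configurations):
    """Simpson's Reciprocal Index: 1 / sum of squared proportions of cases per
    configuration. It rises with richness (more distinct configurations) and with
    evenness (cases spread more equally across them)."""
    counts = Counter(configurations)
    n = sum(counts.values())
    return 1.0 / sum((count / n) ** 2 for count in counts.values())

# Hypothetical sample: six cases, three binary attributes each
cases = [(1, 0, 1), (1, 0, 1), (0, 1, 1), (0, 1, 1), (1, 1, 0), (1, 1, 0)]
print(simpsons_reciprocal_index(cases))  # 3.0: three configurations, evenly spread
```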

The degree of diversity in a data set can have consequences. Where a data set has little diversity in terms of "richness", there is a possibility that configurations identified by QCA or other algorithm-based methods will have limited external validity, because they may easily be contradicted by cases outside the sample data set whose configurations have not yet been encountered. A simple way of measuring this form of diversity is to calculate the number of unique configurations in the sample data set as a percentage of the total number possible, given the number of binary attributes in the data set (which is 2 to the power of the number of attributes). The higher the percentage, the lower the risk that the findings will be contradicted by configurations found in new sets of data (all other things being constant).
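Under the same assumptions (invented cases, held as binary tuples), the richness percentage just described could be calculated like this:

```python
def richness_percentage(configurations):
    """Unique configurations observed, as a percentage of the 2**k configurations
    that are logically possible with k binary attributes."""
    k = len(configurations[0])
    return 100.0 * len(set(configurations)) / (2 ** k)

cases = [(1, 0, 1), (1, 0, 1), (0, 1, 1), (1, 1, 0)]
print(richness_percentage(cases))  # 3 of 8 possible configurations = 37.5
```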

Where a data set has little diversity in terms of "evenness" (i.e. balance), it will be more difficult to assess the consistency of any one configuration's association with an outcome, compared to others, because some configurations will have many more cases than others. Where there are more cases of a given configuration there are more opportunities for its consistency of association with an outcome to be challenged by contrary cases.

My suggestion therefore is that when results are published from the analysis of purposive samples, the sample should be adequately characterised, both in terms of: (a) simple descriptive statistics available for the sample and the wider population, and (b) the internal diversity of the sample, relative to the maximum scores possible on the two aspects of diversity.



Wednesday, May 20, 2015

Evaluating the performance of binary predictions


(Updated 2015 06 06)

Background: This blog posting has its origins in a recent review of a QCA-oriented evaluation, in which a number of hypotheses were proposed and then tested using a QCA-type data set. In these data sets, cases (projects) are listed row by row and the attributes of these projects are listed in columns. Additional columns to the right describe associated outcomes of interest. The attributes of the projects may include features of the context as well as of the interventions involved. The cell values in the data sets were binary (1 = attribute present, 0 = not present), though there are other options.

When such a data set is available, a search can be made for configurations of conditions that are strongly associated with an outcome of interest. This can be done inductively or deductively. Inductive searches involve the use of systematic search processes (aka algorithms), of which there are a number available; QCA uses the Quine–McCluskey algorithm. Deductive searches involve the development of specific hypotheses from a body of theory, for example about the relationship between context, intervention and outcome.
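By way of illustration only, the sketch below runs a brute-force inductive search over small conjunctions of attributes, scoring each by consistency and coverage. It is not the Quine–McCluskey procedure that QCA itself uses, and the data are invented:

```python
from itertools import combinations

def search_configurations(cases, outcomes, max_size=2):
    """Score every conjunction of up to `max_size` attributes by its consistency
    and coverage with the outcome. `cases` is a list of dicts of binary attribute
    values; `outcomes` is a parallel list of 0/1 outcome values."""
    attributes = sorted(cases[0])
    results = []
    for size in range(1, max_size + 1):
        for combo in combinations(attributes, size):
            present = [all(case[attr] == 1 for attr in combo) for case in cases]
            a = sum(p and o for p, o in zip(present, outcomes))        # config & outcome
            b = sum(p and not o for p, o in zip(present, outcomes))    # config, no outcome
            c = sum((not p) and o for p, o in zip(present, outcomes))  # outcome, no config
            consistency = a / (a + b) if (a + b) else 0.0
            coverage = a / (a + c) if (a + c) else 0.0
            results.append((combo, consistency, coverage))
    return sorted(results, key=lambda r: (r[1], r[2]), reverse=True)

# Invented data set: four projects, three binary attributes, one binary outcome
cases = [{"X": 1, "Y": 0, "Z": 1}, {"X": 1, "Y": 1, "Z": 0},
         {"X": 0, "Y": 1, "Z": 1}, {"X": 1, "Y": 0, "Z": 0}]
outcomes = [1, 1, 0, 0]
for combo, cons, cov in search_configurations(cases, outcomes)[:3]:
    print(combo, round(cons, 2), round(cov, 2))
```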

Regardless of which approach is used, the resulting claims of association need to be evaluated. There are a number of different approaches to doing this that I know of, and probably more. All involve, in their simplest form, the analysis of a truth table of this form:


In this truth table the cell values refer to the number of cases that have each combination of configuration and outcome. For further reference below I will label the cells A and B (top row) and C and D (bottom row).

The first approach to testing is a statistical one. I am sure there are a number of ways of doing this, but the one I am most familiar with is the widely used Chi-Square test. Results will be seen as most statistically significant when all cases are in the A and D cells, and least significant when they are equally distributed across all four cells.
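As a minimal sketch (using SciPy, with invented counts), the test can be applied directly to the 2x2 table:

```python
from scipy.stats import chi2_contingency

# Rows: configuration present / absent; columns: outcome present / absent
table = [[12, 3],   # A, B
         [5, 20]]   # C, D
chi2, p, dof, expected = chi2_contingency(table)
print(round(chi2, 2), round(p, 4))  # a small p-value suggests the pattern is unlikely to be chance
```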

The second approach to testing is the one used by QCA. There are two performance measures. One is Consistency: the proportion of cases with the configuration in which the outcome is also present (= A/(A+B)). The other is Coverage: the proportion of all cases with the outcome that are associated with the configuration (= A/(A+C)).

When some of the cells have 0 values, three categorical judgements can also be made. If only cell B is empty then it can be said that the configuration is Sufficient but not Necessary. Because there are still cases in cell C, this means there are other ways of achieving the outcome in addition to this configuration.

If only cell C is empty then it can be said that the configuration is Necessary but not Sufficient. Because there are still cases in cell B, this means there are other additional conditions needed to ensure the outcome.

If cells B and C are both empty then it can be said that the configuration is both Necessary and Sufficient.

In all three situations, only one case needs to be found in a previously empty cell to disprove the standing proposition. This is a logical test, not a statistical test.

The third approach is the one used in the field of machine learning, where the above matrix is known as a Confusion Matrix. Here there is a profusion of performance measures available (at least 13). Some of the more immediately useful measures are listed below (and computed in the short sketch after the list):
  • Accuracy: (A+D)/(A+B+C+D), which is similar to, but different from, the Chi-Square measure above
  • Precision (also called positive predictive value): A/(A+B), which corresponds to QCA Consistency
  • True negative rate (specificity): D/(B+D)
  • False negative rate: C/(A+C)
  • False positive rate: B/(B+D)
  • Recall (also called sensitivity or the true positive rate): A/(A+C), which corresponds to QCA Coverage
  • Negative predictive value: D/(C+D)
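The sketch below (my own illustration, with invented counts) computes the measures listed above from the four cell counts, treating the configuration as the "prediction" and the outcome as the "actual" class:

```python
def confusion_matrix_measures(a, b, c, d):
    """Performance measures for the 2x2 truth table, with A = true positives,
    B = false positives, C = false negatives and D = true negatives."""
    return {
        "accuracy": (a + d) / (a + b + c + d),
        "precision / PPV (QCA consistency)": a / (a + b),
        "recall / TPR (QCA coverage)": a / (a + c),
        "specificity / TNR": d / (b + d),
        "false negative rate": c / (a + c),
        "false positive rate": b / (b + d),
        "negative predictive value": d / (c + d),
    }

for name, value in confusion_matrix_measures(12, 3, 5, 20).items():
    print(f"{name}: {value:.2f}")
```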
In addition to these three types of tests, there are three other criteria worth taking into account: simplicity, diversity and similarity.

Simplicity: Essentially the same idea as that captured in Occam's Razor: simpler configurations are preferable, all other things being equal. For example, "A+F+J leads to D" is a simpler hypothesis than "A+X+Y+W+F leads to D". Complex configurations can have a better fit with the data, but at the cost of being poor at generalising to other contexts. In Decision Tree modelling this is called "over-fitting", and the solution is "pruning", i.e. cutting back the complexity of the configuration. Simplicity also has practical value when it comes to applying tested hypotheses in real-life programmes: simpler hypotheses are easier to communicate and to implement. Simplicity can be measured at two levels: (a) the number of attributes in a configuration that is associated with an outcome, and (b) the number of configurations needed to account for an outcome.

Diversity: The diversity of configurations is simply the number of different specific configurations in a data set. It can be made into a comparable measure by calculating it as a percentage of the total number possible, which is 2 to the power of the number of attributes in the data set. A bigger percentage = more diversity.

If you want to find out how "robust" a hypothesis is, you could calculate the diversity present in the configurations of all the cases covered by the hypothesis (i.e. not just the attributes specified by the hypothesis, which will be the same across those cases). If that percentage is large, this suggests the hypothesis works in a greater diversity of circumstances, a feature that could be of real practical value.

This notion of diversity is to some extent implicit in the Coverage measure: more coverage implies more diversity of circumstances. But two hypotheses with the same coverage (i.e. the proportion of cases they apply to) could be working in circumstances with quite different degrees of diversity (i.e. the cases covered could be much more, or less, diverse in their overall configurations).

Similarity: Each row in a QCA-like data set is a string of binary values. The similarity of these configurations of attributes can be measured in a number of ways:
  • Jaccard index: the number of attributes present (value 1) in both of two configurations, as a proportion of the attributes present in either of them.
  • Hamming distance: the number of positions at which the corresponding values in two configurations differ. This takes both 0 and 1 values into account, whereas the Jaccard index only looks at 1 values.
These measures are relevant in two ways, which are discussed in more detail further down this post (a short sketch using both measures follows this list):
  • If you want to find a "representative" case in a data set, you would look for the case with the lowest average Hamming distance to all the other cases.
  • If you wanted to compare the two most similar cases, you would look for the pair of cases with the lowest Hamming distance.
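Here is a small sketch of both measures and of the two selection tasks just described (the case profiles are invented for illustration):

```python
from itertools import combinations

def hamming(x, y):
    """Number of positions at which two binary configurations differ."""
    return sum(xi != yi for xi, yi in zip(x, y))

def jaccard(x, y):
    """Attributes present in both configurations, as a proportion of those present in either."""
    both = sum(xi == 1 and yi == 1 for xi, yi in zip(x, y))
    either = sum(xi == 1 or yi == 1 for xi, yi in zip(x, y))
    return both / either if either else 1.0

# Invented cases: four projects described by four binary attributes
cases = {"P1": (1, 0, 1, 1), "P2": (1, 0, 0, 1), "P3": (0, 1, 1, 0), "P4": (1, 1, 1, 1)}

# "Representative" case: lowest average Hamming distance to all the other cases
representative = min(cases, key=lambda k: sum(hamming(cases[k], v)
                                              for other, v in cases.items() if other != k))

# Most similar pair: the pair of cases with the lowest Hamming distance
most_similar = min(combinations(cases, 2), key=lambda p: hamming(cases[p[0]], cases[p[1]]))

print(representative, most_similar, round(jaccard(cases["P1"], cases["P4"]), 2))
```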
Similarity can be seen as a third facet of diversity, a measure of the distance between any two types of cases. Stirling (2007) used the term disparity to describe the same thing.

Choosing relevant criteria: It is important to note that the relevance of these different association tests and criteria will depend on the context. A surgeon would want a very high level of consistency, even if it came at the cost of low coverage (i.e. being applicable only in a limited range of situations). However, a stock market investor would be happy with a consistency of 0.55 (i.e. 55%), especially if it had wide coverage, and even more so if that wide coverage contained a high level of diversity. Returning to the medical example, a false positive might have different consequences from a false negative, e.g. unnecessary surgery versus unnecessary deaths. In other, non-medical, circumstances false positives may be more expensive mistakes than false negatives.

Applying the criteria: My immediate interest is in the use of these kinds of tests for two evaluation purposes. The first is the selective screening of hypotheses about causal configurations that are worth more time-intensive investigation, an issue raised in a recent blog. A short sketch of this screening step follows the list below.
  • Configurations that are Sufficient but not Necessary, or Necessary but not Sufficient: 
    • Among these, configurations which are Sufficient but not Necessary and have high coverage should be selected, 
    • and configurations which are Necessary but not Sufficient and have high consistency should also be selected. 
  • Plus all configurations that are both Necessary and Sufficient (which are likely to be less common).
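Here is that sketch (the candidate configurations, cell counts and threshold values are all invented for illustration; the post does not prescribe particular thresholds):

```python
def screen(candidates, coverage_floor=0.6, consistency_floor=0.8):
    """Select candidate configurations worth more time-intensive investigation.
    `candidates` maps a configuration label to its truth-table counts (A, B, C, D)."""
    selected = []
    for name, (a, b, c, d) in candidates.items():
        consistency = a / (a + b) if (a + b) else 0.0
        coverage = a / (a + c) if (a + c) else 0.0
        if b == 0 and c == 0:                               # Necessary and Sufficient
            selected.append((name, "Necessary and Sufficient"))
        elif b == 0 and coverage >= coverage_floor:         # Sufficient but not Necessary, high coverage
            selected.append((name, "Sufficient, high coverage"))
        elif c == 0 and consistency >= consistency_floor:   # Necessary but not Sufficient, high consistency
            selected.append((name, "Necessary, high consistency"))
    return selected

candidates = {"X*Y": (10, 0, 4, 16), "Z": (8, 2, 0, 20), "W*V": (6, 0, 0, 24)}
print(screen(candidates))
```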
The second purpose is to identify implications for more time-consuming within-case investigations. These are essential in order to identify the causal mechanisms at work that connect the conditions associated in a given configuration. As I have argued elsewhere, associations are a necessary but insufficient basis for a strong claim of causation. Evidence of mechanisms is like muscles on the bones of a body, enabling it to move.

Having done the filtering suggested above, the following kinds of within-case investigations would seem useful:
  • Are there any common causal mechanisms underlying all the cases covered by a configuration found to be Necessary and Sufficient, i.e. those within cell A? 
    • A good starting point would be the case within this set that has the lowest average Hamming distance, i.e. the one with the highest level of similarity with all the other cases. 
    • Once one or more plausible mechanisms were discovered in that case, a check could be made to see if they are present in other cases in that set. This could be done in two ways: (a) incrementally, by examining adjacent cases, i.e. cases with the lowest Hamming distance from the representative case, or (b) by partitioning the rest of the cases and examining a case with a median-level Hamming distance, i.e. halfway between the most similar and most different cases.
  • Where the configuration is Necessary but not Sufficient, how do the cases in cell B differ from those in cell A, in ways that might shed more light on how the same configuration leads to different outcomes? This is what has been called a MostSimilarDifferentOutcome (MSDO) comparison.
    • If there are many cases this could be quite a challenge, because the cases could differ on many dimensions (i.e. on many attributes). But using the Hamming distance measure we can make the problem more manageable by selecting the pair of cases, one from cell A and one from cell B, with the lowest possible Hamming distance (see the sketch at the end of this post). A within-case investigation could then look for additional, undocumented differences that account for some or all of the difference in outcomes. 
      • That difference could then be incorporated into the refined hypothesis (and data set), so that cases currently in cell B would no longer count as falling under the configuration, i.e. Consistency would be improved.
  • Where the configuration is Sufficient but not Necessary, in what ways are the cases in cell C similar to those in cell A, in ways that might shed more light on how the same outcome is achieved by different configurations? This is what has been called a MostDifferentSimilarOutcome (MDSO) comparison.
    • As above, if there are many cases this could be quite a challenge. Here I am less clear, but de Meur et al (page 72) say the correct approach is "...one has to look for similarities in the characteristics of initiatives that differ the most from each other; firstly the identification of the most differing pair of cases and secondly the identification of similarities between those two cases". The within-case investigation should then look for undocumented similarities that account for some of the similar outcomes. 
      • That similarity could then be incorporated into the current hypothesis (and data set), enabling more cases from cell C to be found in cell A, i.e. Coverage would be improved.
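As flagged above, here is a small sketch of the pair-selection step for both comparisons, using invented case profiles grouped by truth-table cell. It is one interpretation of the MSDO/MDSO logic described above, not code from de Meur et al:

```python
from itertools import product

def hamming(x, y):
    """Number of positions at which two binary configurations differ."""
    return sum(xi != yi for xi, yi in zip(x, y))

# Invented attribute profiles, grouped by truth-table cell
cell_a = {"A1": (1, 1, 0, 1), "A2": (1, 1, 1, 1)}  # configuration present, outcome present
cell_b = {"B1": (1, 1, 0, 0), "B2": (1, 1, 1, 0)}  # configuration present, outcome absent
cell_c = {"C1": (0, 0, 1, 1), "C2": (0, 1, 1, 0)}  # configuration absent, outcome present

# MSDO: the most similar A-B pair (same configuration, different outcomes)
msdo_pair = min(product(cell_a, cell_b), key=lambda p: hamming(cell_a[p[0]], cell_b[p[1]]))

# MDSO: the most different A-C pair (different configurations, same outcome)
mdso_pair = max(product(cell_a, cell_c), key=lambda p: hamming(cell_a[p[0]], cell_c[p[1]]))

print(msdo_pair, mdso_pair)
```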