[working draft]

**The challenge**

The other day I was asked for advice on how to implement a QCA type of analysis within an evaluation plan that was already fairly circumscribed in its design, both by the commissioner and by the team proposing to carry out the evaluation. The commissioner had already indicated that they wanted a case study oriented approach and had even identified the maximum number of case studies that they wanted to see (ten). While the evaluation team could see the potential use of a QCA type of analysis, they were already committed to undertaking a process type of evaluation, and did not want a QCA type of analysis to dominate their approach. In addition, there already was a quite developed conceptual framework that included many different factors which might be contributory causes of the outcomes of interest.

As is often the case, there seemed to be a shortage of cases and an excess of potentially explanatory variables. In addition, there were doubts within the evaluation team as to whether a thorough QCA analysis would be possible or justifiable, given the available resources and priorities.

**Paired case comparisons as the alternative**

My first suggestion to the evaluation team was to recognise that there is a middle ground between cross-case analysis involving medium to large numbers of cases and within-case analysis. Typically, a QCA analysis will use both, going back and forth, using one to inform the other, over a number of iterations. The middle ground between these two options is case comparisons – particularly *comparisons of pairs of cases*. Although in the situation described above there will be a maximum of 10 cases that can be explored, the number of pairs of these cases that can be compared is still quite large (45). With numbers like these, some sort of strategy is necessary for making choices about the types of pairs of cases that will be compared. Fortunately there is already a large literature on case selection. My favourite summary is Gerring, J., & Cojocaru, L. (2015), Case-Selection: A Diversity of Methods and Criteria.
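The figure of 45 pairs is just the binomial coefficient C(10, 2). A quick sketch to confirm, using hypothetical case labels:

```python
from itertools import combinations
from math import comb

# ten hypothetical cases, labelled case_1 .. case_10
cases = [f"case_{i}" for i in range(1, 11)]

# every unordered pair of distinct cases
pairs = list(combinations(cases, 2))

print(len(pairs))  # 45, i.e. C(10, 2)
```

Each of these 45 pairs is a potential comparison, which is why a selection strategy matters even with only ten cases.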

My suggested approach was to use what is known as the Confusion Matrix as the basis for structuring the choice of cases to be compared. A Confusion Matrix is a simple truth table, showing a combination of two sets of possibilities, for example as follows:

- True Positives, where there are cases with attributes that fit my theory and where the expected outcome is present
- False Positives, where there are cases with attributes that fit my theory but where the expected outcome is absent
- False Negatives, where there are cases which do not have attributes that fit my theory but where nevertheless the outcome is present
- True Negatives, where there are cases which do not have attributes that fit my theory and where the outcome is absent as expected

**1. Starting with True Positives**

**2. Comparing False Positives and True Positives**

Here the aim is to find a False Positive case that is *as similar as possible* in all its attributes to the True Positive case, with the obvious exception of the outcome not being present. This type of analysis choice is called MSDO, standing for most similar design, different outcome - see the de Meur references below. Also see below on how to measure similarity.

Two kinds of explanation are worth looking for. One might be the presence of some *blocking* factor in the False Positive case that prevents the hypothesised causal attribute from working as expected. The other might be the absence of some additional *enabling* factor in the False Positive case that otherwise enables the hypothesised causal attribute to work as expected. If either can be found then the original theory regarding the True Positive case can be updated, and the (previously) False Positive case can now be moved into that category. The theory describing the two True Positive cases can now be seen as provisionally "sufficient" for the outcome, until another False Positive case is found and needs to be examined in a similar fashion. But if no explanation can be found the case can remain as a False Positive.

**3. Comparing False Negatives and True Positives**

Here the aim is to find a False Negative case that is *as different as possible* in all its attributes from the True Positive case. This type of analysis choice is called MDSO, standing for most different design, same outcome.

**Measuring similarity**
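One simple way of measuring similarity, when cases have been coded on a common set of binary attributes, is the share of attributes on which two cases agree (a Hamming-style match rate). This is a generic sketch with invented attribute names, not the full de Meur MSDO/MDSO procedure:

```python
def similarity(a: dict, b: dict) -> float:
    """Share of shared attributes on which two cases take the same value."""
    keys = a.keys() & b.keys()
    return sum(a[k] == b[k] for k in keys) / len(keys)

# hypothetical binary attribute profiles for a True Positive
# and two False Positive cases
tp  = {"funding": 1, "local_staff": 1, "urban": 0}
fp1 = {"funding": 1, "local_staff": 1, "urban": 1}
fp2 = {"funding": 0, "local_staff": 0, "urban": 1}

# For an MSDO comparison, pick the False Positive most similar
# to the True Positive; for MDSO, pick the least similar case
# with the same outcome.
best = max([fp1, fp2], key=lambda c: similarity(tp, c))
print(best is fp1)  # fp1 matches tp on 2 of 3 attributes
```

The same score can be used in both directions: maximise it when hunting for MSDO pairs, minimise it when hunting for MDSO pairs.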

**Related sources**

- Nielsen, R. A. (2014). *Case Selection via Matching*.
- de Meur, G., Bursens, P., & Gottcheiner, A. (2006). MSDO/MDSO Revisited for Public Policy Analysis. In B. Rihoux & H. Grimm (Eds.), Innovative Comparative Methods for Policy Analysis (pp. 67–94). Springer US.
- de Meur, G., & Gottcheiner, A. (2012). The Logic and Assumptions of MDSO–MSDO Designs. In The SAGE Handbook of Case-Based Methods (pp. 208–221).
- Rihoux, B., & Ragin, C. C. (Eds.). (2009). Configurational Comparative Methods: Qualitative Comparative Analysis (QCA) and Related Techniques. Sage. Pages 28–32 for a description of *"MSDO/MDSO: A systematic procedure for matching cases and conditions"*.
- Goertz, G. (2017). *Multimethod Research, Causal Mechanisms, and Case Studies: An Integrated Approach*. Princeton University Press.