[Take care, this is still very much a working draft!] Criticisms and comments are welcome, though.

**The challenge**

The other day I was asked for advice on how to implement a QCA-type analysis within an evaluation that was already fairly circumscribed in its design, both by the commissioner and by the team proposing to carry out the evaluation. The commissioner had already indicated that they wanted a case-study-oriented approach and had even identified the maximum number of case studies that they wanted to see (ten). While the evaluation team could see the potential use of a QCA-type analysis, they were already committed to undertaking a process-type evaluation and did not want a QCA-type analysis to dominate their approach. In addition, it appeared that there was already a quite developed conceptual framework that included many different factors which might be contributory causes of the outcomes of interest.

As is often the case, there seemed to be a shortage of cases and an excess of potentially explanatory variables. In addition, there were doubts within the evaluation team as to whether a thorough QCA analysis would be possible or justifiable given the available resources and priorities.

**Paired case comparisons as the alternative**

My first suggestion to the evaluation team was to recognise that there is some middle ground between cross-case analysis involving medium to large numbers of cases, and within-case analysis. As described by Rihoux and Ragin (2009), a QCA analysis will use both, going back and forth, using one to inform the other, over a number of iterations. The middle ground between these two options is case comparison – particularly *comparisons of pairs of cases.* Although in the situation described above there will be a maximum of ten cases that can be explored, the number of pairs of these cases that can be compared is still quite large (45). With numbers like these, some sort of strategy is needed for choosing the types of pairs of cases that will be compared. Fortunately there is already a large literature on case selection. My favourite summary is Gerring, J., & Cojocaru, L. (2015), Case-Selection: A Diversity of Methods and Criteria.
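As a quick check on the arithmetic, the number of unordered pairs among ten cases can be enumerated directly. A minimal sketch (the case labels here are placeholders, not the actual cases):

```python
from itertools import combinations

# Ten placeholder case labels
cases = [f"Case {i}" for i in range(1, 11)]

# All unordered pairs of distinct cases: n * (n - 1) / 2 = 45 when n = 10
pairs = list(combinations(cases, 2))
print(len(pairs))  # → 45
```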

My suggested approach was to use what is known as a Confusion Matrix as the basis for structuring the choice of cases to be compared. A Confusion Matrix is a simple truth table, showing the combinations of two sets of possibilities (rows and columns) and the incidence of cases with each combination (cell values). For example:

- True Positives, where there are cases with attributes that fit my theory and where the expected outcome is present
- False Positives, where there are cases with attributes that fit my theory but where the expected outcome is absent
- False Negatives, where there are cases which do not have attributes that fit my theory but where nevertheless the outcome is present
- True Negatives, where there are cases which do not have attributes that fit my theory and where the outcome is absent as expected
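To make the four categories concrete, here is a minimal sketch that sorts coded cases into the four cells of the matrix. The case names, attribute codings, and the `classify` helper are all invented for illustration:

```python
# Each case is coded for whether its attributes fit the theory,
# and whether the expected outcome is present (invented example data)
cases = {
    "Case A": {"fits_theory": True,  "outcome_present": True},   # True Positive
    "Case B": {"fits_theory": True,  "outcome_present": False},  # False Positive
    "Case C": {"fits_theory": False, "outcome_present": True},   # False Negative
    "Case D": {"fits_theory": False, "outcome_present": False},  # True Negative
}

def classify(fits_theory, outcome_present):
    """Return the Confusion Matrix cell that a coded case falls into."""
    if fits_theory:
        return "True Positive" if outcome_present else "False Positive"
    return "False Negative" if outcome_present else "True Negative"

# Group the case names by cell
matrix = {}
for name, codes in cases.items():
    cell = classify(codes["fits_theory"], codes["outcome_present"])
    matrix.setdefault(cell, []).append(name)

print(matrix)
```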

**1. Starting with True Positives**

**2. Comparing False Positives and True Positives**

Here the aim is to find a False Positive case that is *as similar as possible* in all its other attributes to the True Positive case. This type of analysis choice is called MSDO, standing for "most similar design, different outcome" – see the de Meur references below. Also see below on how to measure this form of similarity.

The purpose of the comparison is to find an attribute that is present in the True Positive case but absent in the False Positive case, or vice versa. In the former case the attribute could be seen as an *enabling* factor, whereas in the latter case it could be seen as more like a *blocking* factor. If neither can be found by comparing the coded attributes of the cases, then a more intensive examination of the raw data on the cases might still identify one, and lead to an updating/elaboration of the theory behind the True Positive case. Alternatively, that examination might suggest that measurement error is the problem and that the False Positive case needs to be reclassified as a True Positive.

**3. Comparing False Negatives and True Positives**

Here the aim is to find a False Negative case that is *as different as possible* in all its attributes to the True Positive case. This type of analysis choice is called MDSO, standing for "most different design, same outcome". Any attributes that the two cases nevertheless share may point to a causal factor missing from the original theory.

**Measuring similarity**
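One simple way to measure similarity – a minimal sketch, not the full de Meur MSDO/MDSO procedure – is to code each case's attributes as binary values and count the proportion of attributes on which two cases match. The MSDO partner for a True Positive case is then the False Positive with the highest score (and, symmetrically, an MDSO partner would be the one with the lowest). The attribute profiles below are invented for illustration:

```python
def similarity(profile_a, profile_b):
    """Proportion of binary-coded attributes on which two cases match."""
    matches = sum(a == b for a, b in zip(profile_a, profile_b))
    return matches / len(profile_a)

# Invented binary attribute profiles (1 = attribute present, 0 = absent)
true_positive = [1, 1, 0, 1, 0]
false_positives = {
    "Case B": [1, 1, 0, 0, 0],  # differs on one attribute
    "Case E": [0, 0, 1, 0, 1],  # differs on all five
}

# MSDO choice: the False Positive most similar to the True Positive
msdo_partner = max(
    false_positives,
    key=lambda name: similarity(true_positive, false_positives[name]),
)
print(msdo_partner)  # → Case B
```

This matching-count measure is only one option; weighting attributes by theoretical importance, as the MSDO/MDSO literature discusses, would change which pairs get selected.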

**Related sources**

- Nielsen, R. A. (2014). *Case Selection via Matching*.
- de Meur, G., Bursens, P., & Gottcheiner, A. (2006). MSDO/MDSO Revisited for Public Policy Analysis. In B. Rihoux & H. Grimm (Eds.), *Innovative Comparative Methods for Policy Analysis* (pp. 67–94). Springer US.
- de Meur, G., & Gottcheiner, A. (2012). The Logic and Assumptions of MDSO–MSDO Designs. In *The SAGE Handbook of Case-Based Methods* (pp. 208–221). Sage.
- Rihoux, B., & Ragin, C. C. (Eds.). (2009). *Configurational Comparative Methods: Qualitative Comparative Analysis (QCA) and Related Techniques*. Sage. Pages 28–32 describe *"MSDO/MDSO: A systematic procedure for matching cases and conditions"*.
- Goertz, G. (2017). *Multimethod Research, Causal Mechanisms, and Case Studies: An Integrated Approach*. Princeton University Press.