Sunday, June 09, 2019

Extracting additional value from the analysis of QuIP data



James Copestake, Marlies Morsink and Fiona Remnant are the authors of "Attributing Development Impact: The Qualitative Impact Protocol Casebook" published this year by Practical Action
As the title suggests, the book is all about how to attribute development impact, using qualitative data - an important topic that will be of interest to many. The book contents include:
  • Two introductory chapters
    • 1. Introducing the causal attribution challenge and the QuIP
    • 2. Comparing the QuIP with other approaches to development impact evaluation
  • Seven chapters describing case studies of its use in Ethiopia, Mexico, India, Uganda, Tanzania, Eastern Africa, and England
  • A final chapter synthesising issues arising in these case studies
  • An appendix detailing guidelines for QuIP use

QuIP is innovative in many respects, perhaps most notably in the way data is gathered and coded. Neither the field researchers nor the communities of interest are told which organisation's interventions are being evaluated, an approach known as "double blindfolding". The aim here is to mitigate the risk of "confirmation bias", as far as is practical in a given context.

The process of coding the collected qualitative data is also interesting. The focus is on identifying causal pathways in the qualitative descriptions of change obtained through one-to-one interviews and focus group discussions. In Chapter One the authors explain how the QuIP process uses a triple coding approach, which divides each reported causal pathway into three elements:

  • Drivers of change (causes): What led to change, positive or negative?
  • Outcomes (effects): What change/s occurred, positive or negative?
  • Attribution: What is the strength of association between the causal claim and the activity or project being evaluated?
"Once all change data is coded then it is possible to use frequency counts to tabulate and visualise the data in many ways, as the chapters that follow illustrate". An important point to note here is that although text is being converted to numbers, because of the software which has been developed, it is always possible to identify the source text for any count that is used. And numbers are not the only basis which conclusions are reached about what has happened, the text of respondents' narratives are also very important sources.

That said, what interests me most at present are the emerging options for the analysis of the coded data. Data collation and analysis was initially based on a custom-designed Excel file, chosen because most evaluators and programme managers are already familiar with Excel. More recently, however, investment has been made in the development of a customised version of MicroStrategy, a free desktop data analysis and visualisation dashboard. This enables field researchers and evaluation clients to "slice and dice" the collated data in many different ways, without risk of damaging the underlying data, and with a minimal learning curve. One of the options within MicroStrategy is to visualise the relationships between all the identified drivers and outcomes as a network structure. This is of particular interest to me, and is something I have been exploring with the QuIP team and with Steve Powell, who has been working with them.

The network structure of driver and outcome relationships 


One way QuIP coded data has been tabulated is in the form of a matrix, where rows = drivers, columns = outcomes, and cell values = the number of reports of a connection between a given row and a given column (see the tables on pages 67 and 131). In these matrices we can see that some drivers affect multiple outcomes and some outcomes are affected by multiple drivers. By themselves, the contents of these matrices are not easy to interpret, especially as they get bigger. One matrix provided to me had 95 drivers, 80 outcomes and 254 linkages between them. Some form of network visualisation is an absolute necessity.
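As an illustration, here is a sketch of how such a matrix can be turned into a directed graph for visualisation, using Python's pandas and networkx rather than the Ucinet/Netdraw or MicroStrategy tools discussed in this post. The toy matrix reuses two link values quoted later in the post; everything else is made up.

```python
# A sketch of turning a drivers-x-outcomes incidence matrix into a
# directed graph, using pandas and networkx (not the Ucinet/Netdraw
# or MicroStrategy tooling described in the post).
import pandas as pd
import networkx as nx

# Toy matrix: rows = drivers, columns = outcomes, cells = citation counts.
matrix = pd.DataFrame(
    [[14, 0], [0, 5]],
    index=["Rain/recovering from drought", "Alternative income"],
    columns=["Healthier livestock", "Increased income"],
)

G = nx.DiGraph()
for driver in matrix.index:
    for outcome in matrix.columns:
        count = matrix.loc[driver, outcome]
        if count > 0:  # only draw links that were actually reported
            G.add_edge(driver, outcome, weight=int(count))

print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "links")
# nx.draw(G) would give a (crude) equivalent of Figure 1.
```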

Figure 1 is a network visualisation of the 254 connections between drivers and outcomes. The red nodes are reported drivers and the blue nodes are the outcomes they have reportedly led to. Green nodes are outcomes that have in turn been drivers of other outcomes (I have deliberately left the node labels off this example). While this was generated using Ucinet/Netdraw, I noticed that the same structure can also be generated by MicroStrategy.


Figure 1

It is clear from a quick glance that there is still more complexity here than can easily be made sense of, most notably in the "hairball" on the right.

One way of partitioning this complexity is to focus in on a specific "ego network" of interest. An ego network is a combination of (a) an outcome, plus (b) all the other drivers and outcomes it is immediately linked to, plus (c) the links between those. MicroStrategy already provides (a) and (b), but could probably be tweaked to also provide (c). In Ucinet/Netdraw it is also possible to define the width of the ego network, i.e. how many links out from the ego to collect connections to, and between, "alters". Figure 2 shows one ego network drawn from the dense cluster in Figure 1.

Figure 2


Within this selective view, we can see more clearly the different causal pathways to an outcome. There are also a number of feedback loops here: three between pairs of outcomes and two among larger groups of outcomes.

PS: Ego networks can be defined in relation to a node which represents a driver or an outcome. If one selected a driver as the ego in the ego network then the resulting network view would provide an "effects of a cause" perspective. Whereas if one selected an outcome as the ego, then the resulting network view would provide a "causes of an outcome" perspective.
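For those wanting to experiment, networkx can produce both perspectives directly: nx.ego_graph() returns the ego plus its immediate neighbours and, because the result is an induced subgraph, the links between those alters as well. The fragment below is hypothetical, borrowing node names from Figure 2; reversing the graph gives the "causes of an outcome" view.

```python
# A sketch of extracting ego networks with networkx (nx.ego_graph),
# rather than the Ucinet/Netdraw steps used for Figure 2. The graph
# is a hypothetical fragment using node names from Figure 2.
import networkx as nx

G = nx.DiGraph([
    ("Part of organisation x project", "Increased income"),
    ("Social Cash Transfer (Gov)", "Increased income"),
    ("Increased income", "Improved food security"),
    ("Improved food security", "Increased income"),  # a pairwise feedback loop
])

# "Causes of an outcome" view: ego = an outcome, following links upstream.
causes_view = nx.ego_graph(G.reverse(), "Increased income", radius=1)
# "Effects of a cause" view: ego = a driver, following links downstream.
effects_view = nx.ego_graph(G, "Social Cash Transfer (Gov)", radius=1)

print(sorted(causes_view.nodes()))
print(sorted(effects_view.nodes()))
# Feedback loops between outcomes can be listed directly:
print(list(nx.simple_cycles(G)))
```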

Understanding specific connections


Each of the links in the above diagrams, and in the source matrices, has a specific "strength", based on a choice of how that relationship was coded. In the above example, these values were "citation counts", meaning one count per domain per respondent. Associated with each of these counts are the source texts, which can shed more light on what those connections meant.

What is missing from the above diagrams is numerical information about the nodes, i.e. the frequency of mentions of drivers and outcomes. The same is the case for the tabulated data examples in the book (pages 67, 131). But that data is within reach.

Figures 7 and 8 below show a simplified matrix and its associated network diagram, taken from this publication: "QuIP and the Yin/Yang of Quant and Qual: How to navigate QuIP visualisations".

In Figure 7 I have added row and column summary values in red, and copied these in white onto the respective nodes in Figure 8. These provide values for the nodes, as distinct from values for the connections between them.


Why bother? Link strength by itself is not all that meaningful. Link strengths need to be seen in context, specifically: (a) how often the associated driver was reported at all, and (b) how often the associated outcome was reported at all.  These are the numbers in white that I have added to the nodes in the network diagram above.

Once this extra information is provided, we can insert it into a Confusion Matrix and use it to generate two items of missing information: (a) the number of False Positives (in the top right cell), and (b) the number of False Negatives (in the bottom left cell). In Figure 3 I have used Confusion Matrices to describe two of the links in the Figure 8 diagram.
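A small sketch of that arithmetic, using numbers that can be read off Figures 3 and 4. The function name and layout are mine, and the True Negative cell would additionally require the total number of respondents, which the figures do not supply.

```python
# A sketch of deriving the missing Confusion Matrix cells from three
# counts: the link value (True Positives), how often the driver was
# reported at all, and how often the outcome was reported at all.
def confusion_cells(link, driver_total, outcome_total):
    tp = link                 # driver and outcome reported together
    fp = driver_total - link  # driver reported, outcome not
    fn = outcome_total - link # outcome reported, driver not
    return tp, fp, fn         # TN would also need the total N of respondents

# "Rain/recovering from drought" -> "Healthier livestock": 14 of 33 reports.
# "Alternative income" -> "Increased income": 5 of 7 reports, outcome total 73.
print(14 / 33)  # ~0.42: outcome present in 42% of the first driver's reports
print(5 / 7)    # ~0.71: outcome present in 71% of the second driver's reports
tp, fp, fn = confusion_cells(link=5, driver_total=7, outcome_total=73)
print(tp, fp, fn)  # 5 True Positives, 2 False Positives, 68 False Negatives
```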


Figure 3
It now becomes clear that there is an argument for saying that the link with a value of 5, between "Alternative income" and "Increased income", is more important than the link with a value of 14, between "Rain/recovering from drought" and "Healthier livestock".

The reason? Despite the second link looking stronger (14 versus 5), there is more chance that the expected outcome will occur when the first driver is present. With "Rain/recovering from drought" the expected outcome happens only 14 of the 33 times the driver is reported (42%), whereas with "Alternative income" it happens 5 of the 7 times (71%).

When this analysis is repeated for all the links where there is data (6 in Figure 8 above), it turns out that only two links are of this kind, where the outcome is more likely to be present when the driver is present. The second is the link between "Increased price of livestock" and "Increased income", as shown in the Confusion Matrix in Figure 4 below.

Figure 4
There are some other aspects of this kind of analysis worth noting. When "Increased price of livestock" is compared to the driver discussed above (Alternative income), it accounts for a bigger proportion of the cases where the outcome is reported: 11/73 versus 5/73.

One can also imagine situations where the top right cell (False Positive) is zero. In this case the driver appears to be sufficient for the outcome: where it is present, the outcome is present. And one can imagine situations where the bottom left cell (False Negative) is zero. In this case the driver appears to be necessary for the outcome: where it is not present, the outcome is also not present.
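In code, that sufficiency/necessity reading is a simple test on the cells. A sketch, with illustrative cell values; "apparently" because this only holds within the coded data, not as a general causal claim.

```python
# A sketch of reading sufficiency and necessity off the Confusion
# Matrix cells, per the reasoning above (within the coded data only).
def classify(tp, fp, fn):
    labels = []
    if fp == 0:
        labels.append("apparently sufficient")  # driver present => outcome present
    if fn == 0:
        labels.append("apparently necessary")   # driver absent => outcome absent
    return labels or ["neither sufficient nor necessary"]

print(classify(tp=5, fp=0, fn=68))  # a sufficient-looking driver
print(classify(tp=5, fp=2, fn=0))   # a necessary-looking driver
```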


Filtered visualisations using Confusion Matrix data



When data from a Confusion Matrix is available, it provides analysts with additional options for generating filtered views of the network of reported causes. These are:

  1. Show only those connected drivers which seem to account for most instances of a reported outcome, i.e. the number of True Positives (in the top left cell) exceeds the number of False Negatives (in the bottom left cell).
  2. Show only those connected drivers which are more often associated with the presence of a reported outcome (the True Positives, in the top left cell) than with its absence (the False Positives, in the top right cell). Both filters are sketched in code below.
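Here is a sketch of both filters, assuming a networkx graph whose edges carry the three Confusion Matrix cells computed earlier. The attribute names are my own, and some cell values are illustrative where the post does not supply them.

```python
# A sketch of the two Confusion-Matrix-based filters, applied to a
# graph whose edges carry tp/fp/fn cell values (attribute names are
# mine; the fn value on the second edge is illustrative).
import networkx as nx

G = nx.DiGraph()
G.add_edge("Alternative income", "Increased income", tp=5, fp=2, fn=68)
G.add_edge("Rain/recovering from drought", "Healthier livestock",
           tp=14, fp=19, fn=10)

def filtered(G, rule):
    H = nx.DiGraph()
    H.add_edges_from((u, v, d) for u, v, d in G.edges(data=True) if rule(d))
    return H

# Filter 1: drivers accounting for most instances of the outcome (TP > FN).
majority = filtered(G, lambda d: d["tp"] > d["fn"])
# Filter 2: drivers more often seen with the outcome than without it (TP > FP).
presence = filtered(G, lambda d: d["tp"] > d["fp"])

print(list(majority.edges()))  # basis for a Figure 5 style view
print(list(presence.edges()))  # basis for a Figure 6 style view
```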

Drivers accounting for the majority of instances of the reported outcome


Figure 5 is a filtered version of the blue, green and red network diagram shown in Figure 1 above. The filter has retained only those links where the True Positive value in the top left cell of the Confusion Matrix (i.e. the link value) is greater than the associated False Negative value in the bottom left cell. This presents a very different picture from the one in Figure 1.

Figure 5


Key: Red nodes = drivers, Blue nodes = outcomes, Green nodes = outcomes that were also in the role of drivers.

Drivers more often associated with the presence of a reported outcome than its absence


Here the filter has retained only those links where the True Positive value in the top left cell of the Confusion Matrix (i.e. the link value) is greater than the associated False Positive value in the top right cell.

Figure 6

Other possibilities


While there are many interesting possibilities for how to analyse QuIP data, one option does not yet seem available: identifying instances of "configurational causality". By this I mean packages of causes that must be jointly present for an outcome to occur. When we look at the rows in Figure 7, it seems we have lists of single causes, each of which can account for some of the instances of the outcome of interest. And when we look at Figures 2 and 8, we can see that there is more than one way of achieving an outcome. But we cannot easily identify any "causal packages" that might be at work.

I am wondering to what extent this might be a limitation built into the coding process, or whether better use of existing coded information might work. Perhaps the way the row and column summary values are generated in Figure 7 needs rethinking.

The existing network diagrams provide no information about which connections were reported by whom. In Figure 2, take the links into "Increased income" from "Part of organisation x project" and "Social Cash Transfer (Gov)". Each of these links could have been reported by a different set of people, or they could have been reported by the same set of people. If the latter, this could be an instance of "configurational causality". To be more confident we would need to establish that people who reported only one of the two drivers did not also report the outcome.

Because all QuIP coded values can be linked back to specific sources and their texts, it seems that this sort of analysis should be possible. But it will take some programming work to make this kind of analysis quick and easy.

PS 1: Actually maybe not so difficult. All we need is a matrix where:

  • Rows = respondents
  • Columns = drivers & outcomes mentioned by respondents
  • Cell values = 1 or 0, meaning that the row respondent did or did not mention that column's driver or outcome
Then use QCA, EvalC3, or other machine learning software to find predictable associations between one or more drivers and any outcome of interest. Then check these associations against the text details of each mention, to see if a causal role is referred to and plausible.
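A sketch of that matrix, and of the crude joint-presence check that QCA or EvalC3 would do far more systematically. The respondent IDs and pattern of mentions are hypothetical, with column names borrowed from Figure 2.

```python
# A sketch of the respondent-by-mention matrix proposed above, using
# pandas; respondent IDs and the pattern of mentions are hypothetical.
import pandas as pd

mentions = pd.DataFrame(
    [
        [1, 1, 1],  # R1 mentioned both drivers and the outcome
        [1, 0, 0],  # R2 mentioned driver A only, and no outcome
        [0, 1, 0],  # R3 mentioned driver B only, and no outcome
        [1, 1, 1],  # R4 mentioned both drivers and the outcome
    ],
    index=["R1", "R2", "R3", "R4"],
    columns=["Part of organisation x project",
             "Social Cash Transfer (Gov)",
             "Increased income"],
)

# Does the outcome appear only when BOTH drivers are co-present?
both = (mentions.iloc[:, 0] == 1) & (mentions.iloc[:, 1] == 1)
outcome = mentions["Increased income"] == 1
print((outcome == both).all())  # True here: a candidate "causal package"
```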

That said, the text evidence would not necessarily provide the last word (pun unintended). A respondent may mention various driver-outcome relationships, e.g. A>B, C>D, and A>E, but not C>E. Yet when analysing data from multiple respondents we might find a consistent co-presence of references to C and E (though no report of an actual causal relation between them). The explanation may simply be that within the confines of a specific interview there was not time, or inclination, to mention this additional specific relationship.

In response...

James Copestake has elaborated on this final section as follows: "We have discussed this a lot, and I agree it is an area in need of further research. You suggest that there may be causal configurations in the source text which our coding system is not yet geared up to tease out. That may be true, and is something we are working on. But there are two other possibilities. First, that interviewers and interviewing guidelines are not primed as much as they could be to identify these. Second, respondents' narrative habits (linked to how they think and the language at their disposal) may constrain people from telling configurational stories. This means the research agenda for exploring this issue goes beyond looking at coding."

PS: Also of interest: Attributing development impact: lessons from road testing the QuIP. James Copestake, January 2019
