Sunday, March 21, 2021

Mapping the "structure of cooperation": Adding the time dimension and thinking about further analyses

 

In October 2020 I wrote the first blog post of that name, based on some experiences with analysing the results of a ParEvo exercise. (ParEvo is a web-assisted participatory scenario planning process.)

The focus of that blog posting was a scatter plot of the kind shown below. 

Figure 1: Blue nodes = ParEvo exercise participants. Indegree and Outdegree explained below. Green lines = average indegree and average outdegree

The two axes describe two very basic aspects of network structures, including human social networks. Indegree, in the above example, is the number of other participants who built on that participant's contributions. Outdegree is the number of other participants' contributions that the participant built on. Combining these two measures we can generate (in classic consultants' 2 x 2 matrix style!) four broad categories of behavior, as labelled above. Behaviors, not types of people, because in the above instance we have no idea how generalisable the participants' behaviors are across different contexts. 
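To make the categorisation concrete, here is a minimal Python sketch of how the four behavior types could be derived from "who built on whom" data. The participants, links, and cutoff rule (at-or-above average) are all illustrative, not taken from an actual exercise:

```python
# Hypothetical "who built on whom" links from a ParEvo download:
# (A, B) means participant A added a contribution to one of B's contributions.
links = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "A"),
         ("D", "A"), ("D", "B")]
participants = ["A", "B", "C", "D", "E"]   # E contributed but built on no one

# Outdegree = how many other participants' contributions you built on;
# indegree = how many other participants built on yours.
outdeg = {p: len({b for a, b in links if a == p}) for p in participants}
indeg = {p: len({a for a, b in links if b == p}) for p in participants}

mean_in = sum(indeg.values()) / len(participants)
mean_out = sum(outdeg.values()) / len(participants)

def behaviour(p):
    """Classify a participant by comparing their degrees to the averages."""
    hi_in, hi_out = indeg[p] >= mean_in, outdeg[p] >= mean_out
    if hi_in and hi_out:
        return "bridging"    # exploitation: builds on others, built on by others
    if hi_in:
        return "leading"
    if hi_out:
        return "following"
    return "isolating"       # exploration: works largely alone

labels = {p: behaviour(p) for p in participants}
```

The same calculation applies unchanged to a citation network, with papers as nodes and cited references as links.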

There is another way of labelling two of the quarters of the scatter plot, using a distinction widely used in evolutionary theory and the study of organisational behavior (March, 1991; Wilden et al., 2019). Bridging behavior can be seen as a form of "exploitation" behavior, i.e., it involves making use of others' prior contributions, and in turn having one's contributions built on by others. Isolating behavior can be seen as a form of "exploration" behavior, i.e., building storylines with minimal help from other participants. General opinion suggests that there is no single ideal balance of these two approaches; rather, it is thought to be context dependent: in stable environments exploitation is thought to be more relevant, whereas in unstable environments exploration is seen as more relevant.

What does interest me is the possibility of applying this updated analytical framework to other contexts. In particular to: (a) citation networks, (b) systems mapping exercises. I will explore citation networks first. Here is an example of a citation network extracted from a public online bibliographic database covering the field of computer science. Any research funding programme will be able to generate such data, both from funding applications and subsequent research generated publications.

Figure 2: A network of published papers, linked by cited references


Looking at the indegree and outdegree attributes of all the documents within this network, the average indegree and outdegree were both 3.9. When this was used as a cutoff value for identifying the four types of cooperation behavior, their distribution was as follows: 

  • Isolating / exploration = 59% of publications
  • Leading = 17%
  • Following = 15%
  • Bridging / exploitation = 8%
Their location within the Figure 2 network diagram is shown below in this set of filtered views.

Figure 3: Top view = all four types, Yellow view = Bridging/Exploitation, Blue = Following, Red = Leading, Green = Isolating/Exploration

It makes some sense to find the bridging/exploitation type papers in the center of the network, and the isolating/exploration type papers more scattered and especially out in the disconnected peripheries. 

It would be interesting to see whether the apparently high emphasis on exploration found in this data set would be found in other research areas. 

The examination of citation networks suggests a third possible dimension to the cooperation structure scatter plot. This is time, represented in the above example as year of publication. Not surprisingly, the oldest papers have the highest indegree and the newest papers the lowest. Older papers (by definition, within an age-bounded set of papers) have lower outdegree compared to newer papers. But what is interesting here is the potential occurrence of outliers, of two types: "rising stars" and "laggards". That is, new papers with higher than expected indegree ("rising stars") and old papers with lower than expected indegree ("laggards", or a better name??), as seen in the imagined examples (a) and (b) below.
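One simple way of finding such outliers would be to fit a trend line of indegree against publication year, and then flag papers whose citation counts sit far above or below what their age predicts. A sketch, using entirely made-up data and an arbitrary threshold:

```python
# Hypothetical (paper, year, indegree) records; all values are illustrative.
papers = [
    ("p1", 2010, 30), ("p2", 2011, 26), ("p3", 2012, 22),
    ("p4", 2013, 18), ("p5", 2014, 14), ("p6", 2015, 10),
    ("p7", 2016, 6),  ("p8", 2017, 2),
    ("p9", 2016, 25),   # far more cited than its age predicts: a "rising star"
    ("p10", 2011, 1),   # far less cited than its age predicts: a "laggard"
]

years = [y for _, y, _ in papers]
cites = [c for _, _, c in papers]
n = len(papers)

# Ordinary least-squares fit of indegree on year = expected citations for an age.
my, mc = sum(years) / n, sum(cites) / n
slope = (sum((y - my) * (c - mc) for y, c in zip(years, cites))
         / sum((y - my) ** 2 for y in years))
intercept = mc - slope * my

def residual(year, indegree):
    """How far a paper's indegree is above/below the trend for its year."""
    return indegree - (intercept + slope * year)

# Flag papers whose residual is far from the trend line (threshold is arbitrary).
threshold = 10
rising_stars = [p for p, y, c in papers if residual(y, c) > threshold]
laggards = [p for p, y, c in papers if residual(y, c) < -threshold]
```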

Another implication of considering the time dimension is the possibility of tracking the pathways of individual authors over time, across the scatter plot space. Their strategies may change over time. "If we take the scientist ... it is reasonable to assume that his/her optimal strategy as a graduate student should differ considerably from his/her optimal strategy once he/she received tenure" (Berger-Tal et al., 2014). They might start by exploring, then following, then bridging, then leading.

Figure 4: Red line = Imagined career path of one publication author. A and B = "Rising Star" and "Laggard" authors


There seem to be two types of opportunities present here for further analyses:
  1. Macro-level analysis of differences in the structure of cooperation across different fields of research. Are there significant differences in the scatter plot distribution of behaviors? If so, to what extent are these differences associated with different types of outcomes across those fields? And if so, is there a plausible causal relationship that could be explored and even tested?  
  2. Micro-level analysis of differences in the behavior of individual researchers within a given field. Do individuals tend to stick to one type of cooperation behavior (as categorised above)? Or is their behavior more variable over time? If the latter, is there any relatively common trajectory? What are the implications of these micro-level behaviors for the balance of exploration and exploitation taking place in a particular field?






Thursday, January 28, 2021

Connecting Scenario Planning and Theories of Change


This blog posting was prompted by Tom Aston’s recent comment at the end of an article about theories of change and their difficulties. There he said: “I do think that there are opportunities to combine Theories Of Change with scenario planning. In particular, context monitoring and assumption monitoring are intimately connected. So, there’s an area for further exploration.”

Scenario planning, in its various forms, typically generates multiple narratives about what might happen in the future. A Theory Of Change does something similar but in a different way. It is usually in a more diagrammatic rather than narrative form. Often it is simply about one particular view of how change might happen, i.e., a particular causal pathway or package thereof. But in more complex network representations Theories Of Change do implicitly present multiple views of the future, in as much as there are multiple causal pathways that can work through these networks.

ParEvo is a participatory approach to scenario planning which I have developed and which has some relevance to discussion of the relationship between scenario planning and Theories Of Change. ParEvo is different from many scenario planning methods in that it typically generates a larger number of alternative narratives about the future, and these narratives precede rather than follow a more abstract analysis of the causal processes that might be at work generating them. My notion is that this narrative-first approach makes fewer cognitive demands on the participants, and is an easier activity to get participants engaged in from the beginning. Another point worth noting about the narratives is that they are collectively constructed, by different self-identified combinations of (anonymised) participants.

At the end of a ParEvo exercise participants are asked to rate all the surviving storylines in terms of their likelihood of happening in real life and their desirability. These ratings can then be displayed in a scatterplot, of the kind shown in the two examples below. The numbered points in the scatterplot are IDs for specific storylines generated in the same ParEvo exercise. Each of the two scatterplots represents a different ParEvo exercise.

 



The location of particular storylines in a scatterplot has consequences. I would argue that storylines in the likely but undesirable quadrant deserve the most immediate attention. They constitute risks which, if at all possible, need to be forestalled, or at least responded to appropriately when they do take place. The storylines in the unlikely but desirable quadrant justify the next lot of attention. This is the territory of opportunity. The focus here would be on identifying ways of enabling aspects of those developments to take place.  

Then attention could move to the likely and desirable quadrant.  Here attention could be given to the relationship between what is anticipated in the storylines and any pre-existing Theory Of Change.  The narratives in this quadrant may suggest necessary revisions to the Theory Of Change.  Or, the Theory of Change may highlight what is missing or misconceived in the narratives. The early reflections on the risk and opportunity quadrants might also have implications for revisions to the Theory Of Change.

The fourth quadrant contains those storylines which are seen as unlikely and undesirable. Perhaps the appropriate response here is simply to check and update periodically the judgements about their likelihood and undesirability.
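The prioritisation argued across these four quadrants could be sketched as a simple classification. All the ratings, IDs, and the cut point here are hypothetical, not drawn from an actual exercise:

```python
# Hypothetical storyline ratings on 1-5 scales: {storyline id: (likelihood, desirability)}
ratings = {
    1: (4.2, 1.8),   # likely, undesirable
    2: (1.5, 4.6),   # unlikely, desirable
    3: (4.5, 4.1),   # likely, desirable
    4: (1.2, 1.9),   # unlikely, undesirable
}

def quadrant(likelihood, desirability, cut=3.0):
    """Place a storyline in one of the four quadrants of the scatterplot."""
    likely, desirable = likelihood >= cut, desirability >= cut
    if likely and not desirable:
        return "risk"             # attend to these first
    if desirable and not likely:
        return "opportunity"      # enable aspects of these if possible
    if likely and desirable:
        return "check against Theory of Change"
    return "review periodically"  # unlikely and undesirable

# Order the storylines by the priority argued above.
priority = ["risk", "opportunity", "check against Theory of Change",
            "review periodically"]
ordered = sorted(ratings, key=lambda s: priority.index(quadrant(*ratings[s])))
```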

These four views can be likened to the different views seen from within a car. There is the front view, which is concerned with likely and desirable events, our expected and intended direction of change. Then there are two peripheral views, to the right and left, which are concerned with risks and opportunities, present in the desirable but unlikely, and undesirable but likely, quadrants. Then there is the rear view, out the back, looking at undesirable and unlikely events.

In this explanation I have talked about storylines in different quadrants, but in the actual scatterplots developed so far the picture is a bit more complex. Some storylines are way out in the corners of the scatterplot and clearly need attention, but others are more middling and mixed in their positions, so prioritising which of these to give attention to first versus later could be a challenge.

There is also a less visible third dimension to this scatterplot. Some of the participants' judgements about likelihood and desirability were not unanimous. These are the red dots in the scatterplot above. In these instances some resolution of differences of opinion about the storylines would need to be the first priority. However it is likely that some of these differences will not be resolvable, so these particular storylines will fall into the category of "Knightian uncertainties", where probabilities are simply unknown. These types of developments can't be planned for in the same way as the others, where some judgements about likelihood could be made. This is the territory where bet-hedging strategies are appropriate, a strategy seen both in evolutionary biology and in human affairs. Bet hedging is a response which will be functional in most situations but optimal in none. For example, the accumulation of capital reserves in a company provides insurance against unexpected shocks, but at the cost of efficient use of capital.

There are some other opportunities for connecting thinking about Theories Of Change and the multiple alternative futures that can be identified through a ParEvo process.  These relate to systems type modelling that can be done by extracting keywords from the narratives and mapping their cooccurrence in the paragraphs that make up these narratives, using social network analysis visualisation software.  I will describe these in more detail in the near future, hopefully.


Tuesday, December 15, 2020

The implications of complex program designs: Six proposals worth exploring?

Last week I was involved in a seminar discussion of a draft CEDIL paper reviewing methods that can be used to evaluate complex interventions. That discussion prompted me to the following speculations, which could have practical implications for the evaluation of complex interventions.

Caveat: As might be expected, any discussion in this area will hinge upon the definition of complexity. My provisional definition of complexity is based on a network perspective, something I've advocated for almost two decades now (Davies, 2003). That is, the degree of complexity depends on the number of nodes (e.g. people, objects or events), and the density and diversity of types of interactions between them. Some might object that what I have described here is simply something which is complicated rather than complex. But I think I can be fairly confident in saying that as you move along this scale of increasing complexity (as I have defined it here) the behaviour of the network will become more unpredictable. I think unpredictability, or at least difficulty of prediction, is a fairly widely recognised characteristic of complex systems (but see Footnote).

The proposals:

Proposal 1. As the complexity of an intervention increases, the task of model development (e.g. a Theory of Change), especially model specification, becomes increasingly important relative to that of model testing. This is because there are more and more parameters that could make a difference / be "wrongly" specified.

Proposal 2. When the confident specification of model parameters becomes more difficult then perhaps model testing will become more like an exploratory search of a combinatorial space rather than more focused hypothesis testing. This probably has some implications for the types of methods that can be used. For example, more attention to the use of simulations, or predictive analytics.

Proposal 3. In this situation where more exploration is needed, where will all the relevant empirical data come from to test the effects of different specifications? Might it be that as complexity increases there is more and more need for monitoring / time-series data, relative to evaluation / once-off data?

Proposal 4. And if a complex intervention may lead to complex effects – in terms of behaviour over time – then the timing of any collection of relevant data becomes important. A once-off data collection would capture the state of the intervention+context system at one point in an impact trajectory that could actually take many different shapes (e.g. linear, sinusoidal, exponential, etc.). The conclusions drawn could be seriously misleading.
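A toy illustration of this point: two hypothetical interventions whose impact trajectories read identically at a single once-off data collection point, but which diverge completely later on. All the numbers and functional forms are invented:

```python
# Two hypothetical impact trajectories over time t (in, say, years):
# one plateaus, one keeps growing steadily.
def plateauing(t):
    return 50 * (1 - 0.5 ** t)   # change happens early, then levels off near 50

def ongoing(t):
    return 5 * t                 # steady ongoing increase

# A once-off data collection at t=10 cannot tell the two apart...
at_t10 = (round(plateauing(10)), round(ongoing(10)))   # both read ~50

# ...but by t=20 the two interventions have very different impacts.
at_t20 = (round(plateauing(20)), round(ongoing(20)))
```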

Proposal 5. And going back to model specification, what sort of impact trajectory is the intervention aiming for? One where change happens then plateaus, or one where there is an ongoing increase? This needs specification because it will affect the timing and type of data collection needed.

Proposal 6. And there may be implications for the process of model building. As the intervention gets more complex – in terms of nodes in the network – there will be more actors involved, each of whom will have a view on how the parts, and perhaps the whole package, are and should be working, and on the role of their particular part in that process. Participatory, or at least consultative, design approaches would seem to become more necessary.

Are there any other implications that can be identified? Please use the Comment facility below.

Footnote: Yes, I know you can also find complex (as in difficult to predict) behaviour in relatively simple systems, like a logistic equation that describes the interaction between predator and prey populations. And there may be some quite complex systems (by my definition) that are relatively stable. My definition of complexity is more probabilistic than deterministic.

Friday, December 11, 2020

"If you want to think outside of the box, you first need to find the box" - some practical evaluative thinking about Futures Literacy




Over the last two days, I have participated in a Futures Literacy Lab, run by Riel Miller and organised as part of UNESCO's Futures Literacy Summit. Here are some off-the-cuff reflections.

Firstly the definition of futures literacy. I could not find a decent one, but my search was brief so I expect readers of this blog posting will quickly come up with a decent one. Until then this is my provisional interpretation. Futures literacy includes two types of skills, both of which need to be mastered, although some people will be better at one type than the other:


1. The ability to generate many different alternative views of what might happen in the future.


2. The ability to evaluate a diversity of alternative views of the future, using a range of potentially relevant criteria.

There is probably also a third skill, i.e. the ability to extract useful implications for action from the above two activities.

The process that I took part in highlighted to me (perhaps not surprising because I'm an evaluator) the importance of the second type of skill above - evaluation. There are two reasons I can think of for taking this view:


1. The ability to critically evaluate one's ideas (e.g. multiple different views of the possible future) is an essential metacognitive skill. There is no value in being able to generate many imagined futures if one is then incapable of sorting the "wheat from the chaff" - however that may be defined.


2. The ability to evaluate a diversity of alternative views of the future can actually have a useful feedback effect, enabling us to improve the way we search for other imagined futures.


Here is my argument for the second claim. In the first part of the exercise yesterday each participant was asked to imagine a possible future development in the way that evaluation will be done, and the role of evaluators, in the year 2050. We were asked to place these ideas on Post-It Notes on an online whiteboard, on a linear scale that ranged between Optimistic and Pessimistic. 

Then a second and orthogonal scale was introduced, which ranged from "I can make a difference" to "I can't make a difference". When that second axis was introduced we were asked to adjust our Post-It Notes to a new position that represented our view of each idea on both dimensions. These two steps can be seen as a form of self-evaluation of our own imagined futures. Here is the result (don't bother trying to read the note details).


Later on, as the process proceeded, we were encouraged to "think out of the box". But how do you do that... how do you know what is "out of the box"? Unless you deliberately go to extremes, with the associated risk that whatever you come up with may be less useful (however defined).

Looking back at that task now, it strikes me that what the above scatterplot does is show you where the box is, so to speak. And, by contrast, where "outside the box" is located. "Inside the box" is the part of the scatterplot where the biggest concentration of posts is located. The emptiest area, and thus the most "out of the box" area, is the top right quadrant. There is only one Post-it Note there. So, if more out-of-the-box thinking is needed in this particular exercise setting then perhaps we should start brainstorming about optimistic future possibilities of a kind where I think "I can't make a difference" - now there is a challenge!

The above example can be considered as a kind of toy model, a simple version of a larger and more complex range of possible applications. That is, any combination of evaluative dimensions will generate a combinatorial space, which will be densely populated with ideas about possible futures in some areas and empty in others. To explore those kinds of areas we will need to do some imaginative thinking at a higher level of abstraction, i.e. about the different kinds of evaluative dimensions that might be relevant. My impression is that this meta-territory has not yet been explored very much. When you look at the futures/foresight literature the most common evaluative dimensions are those of "possibility" and "desirability" (and ones I have used myself, within the ParEvo app). But there must be others that are also relevant in various circumstances.
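In computational terms, "finding the box" amounts to counting how densely each region of the evaluative space is populated. Here is a sketch with invented Post-It positions; the two axis names follow the exercise described above, and the counts are purely illustrative:

```python
from collections import Counter

# Hypothetical Post-It positions on two evaluative scales, each from -1 to +1:
# x = pessimistic..optimistic, y = "can't make a difference".."can make a difference".
posts = [(-0.8, 0.6), (-0.5, 0.2), (-0.3, 0.7), (-0.9, 0.1),
         (-0.6, -0.4), (-0.2, -0.8),
         (0.4, 0.5), (0.2, 0.3), (0.7, 0.9),
         (0.1, -0.6)]

def cell(x, y):
    """Map a position to one of the four quadrants of the evaluative space."""
    return ("optimistic" if x >= 0 else "pessimistic",
            "can" if y >= 0 else "can't")

counts = Counter(cell(x, y) for x, y in posts)

# "The box" is the most crowded region; "outside the box" is the emptiest one.
quadrants = [(a, b) for a in ("optimistic", "pessimistic")
             for b in ("can", "can't")]
inside_the_box = max(quadrants, key=lambda q: counts[q])
outside_the_box = min(quadrants, key=lambda q: counts[q])
```

The same counting generalises directly to more than two evaluative dimensions: more dimensions simply mean more cells, and more of them likely to be empty.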

Postscript 2020 12 11: This afternoon we had a meeting to review the Futures Literacy Lab experience. In that meeting one of the facilitators produced this definition of Futures Literacy, which I have visibly edited, to improve it :-)



 Lots more to be discussed here, for example:

1. Different search strategies that can be used to find interesting alternate futures. For example, random search, and "the adjacent possible" searches are two that come to mind

2. Ways of getting more value from the alternate futures already identified e.g. by recombination 

3. Ways of mapping the diversity of alternate futures that have already been identified e.g using network maps of kind I discussed earlier on this blog (Evaluating Innovation)

4. The potential worth of getting independent third parties to review/evaluate the (a) contents generated by participants, and (b) participants' self-evaluations of their content


For an earlier discussion of mine that might be of interest, see 

"Evaluating the Future" - podcast and paper prepared with and for the EU Evaluation Support Services Unit, 2020




Monday, December 07, 2020

Has the meaning of impact evaluation been hijacked?

 



This morning I have been reading, with interest, Giel Ton's 2020 paper: 
Development policy and impact evaluation: Contribution analysis for learning and accountability in private sector development 

 I have one immediate reaction, which I must admit I have been storing up for some time. It is to do with what I would call the hijacking of the meaning or definition of 'impact evaluation'. These days impact evaluation seems to be all about causal attribution. But I think this is an overly narrow definition and almost self-serving of the interests of those trying to promote methods specifically dealing with causal attribution, e.g., experimental studies, realist evaluation, contribution analysis and process tracing. (PS: This is not something I am accusing Giel of doing!)

 I would like to see impact evaluations widen their perspective in the following way:

1. Description: Spend time describing the many forms of impact a particular intervention is having. I think the technical term here is multifinality. In a private-sector development programme, multifinality is an extremely likely phenomenon.  I think Giel has in effect said so at the beginning of his paper: " Generally, PSD programmes generate outcomes in a wide range of private sector firms in the recipient country (and often also in the donor country), directly or indirectly."

 2. Valuation: Spend time seeking relevant participants’ valuations of the different forms of impact they are experiencing and/or observing. I'm not talking here about narrow economic definitions of value, but the wider moral perspective on how people value things - the interpretations and associated judgements they make. Participatory approaches to development and evaluation in the 1990s gave a lot of attention to people's valuation of their experiences, but this perspective seems to have disappeared into the background in most discussions of impact evaluation these days. In my view, how people value what is happening should be at the heart of evaluation, not an afterthought. Perhaps we need to routinely highlight the stem of the word Evaluation.

 3. Explanation: Yes, do also seek explanations of how different interventions worked and failed to work (aka causal attribution), paying attention of course to heterogeneity, both in the forms of equifinality and multifinality. Please note: I am not arguing that causal attribution should be ignored - just placed within a wider perspective! It is part of the picture, not the whole picture.

 4. Prediction: And in the process don't be too dismissive of the value of identifying reliable predictions that may be useful in future programmes, even if the causal mechanisms are not known or perhaps are not even there. When it comes to future events there are some that we may be able to change or influence, because we have accumulated useful explanatory knowledge. But there are also many which we acknowledge are beyond our ability to change, but where, with good predictive knowledge, we may still be able to respond appropriately.

Two examples, one contemporary, one very old: If someone could give me a predictive model of sharemarket price movements that had even a modest 55% accuracy I would grab it and run, even though the likelihood of finding any associated causal mechanism would probably be very slim.  Because I’m not a billionaire investor, I have no expectation of being able to use an explanatory model to actually change the way markets behave.  But I do think I could respond in a timely way if I had relevant predictive knowledge.

 Similarly with the movements of the sun: people have had predictive knowledge about the movement of the sun for millennia, and this informed their agricultural practices. But even now that we have much improved explanatory knowledge about the sun’s movement, few would think that this will help us change the way the seasons progress.

 I will now continue reading Giel's paper…


2021 02 19: I have just come across a special issue of the Evaluation journal of Australasia, on the subject of values. Here is the Editorial section.

Sunday, December 06, 2020

Quality of Evidence criteria that can be applied to Most Significant Change (MSC) stories

 


Two recent documents have prompted me to do some thinking on this subject

If we view Most Significant Change (MSC) stories as evidence of change (and what people think about those changes) what should we look for in terms of quality - what are the attributes of quality we should look for?

Some suggestions that others might like to edit or add to, or even delete...

1. There is clear ownership of an MSC story and the reasons for its selection by the storyteller. Without this, there is no possibility of clarification of any elements of the story and its meaning, let alone more detailed investigation/verification

2. There was some protection against random/impulsive choice. The person who told the story was asked to identify a range of changes that had happened, before being asked to identify the one which was most significant 

3. There was some protection against interpreter/observer error. If another person recorded the story, did they read back their version to the storyteller, to enable them to make any necessary corrections?

4. There has been no violation of ethical standards: Confidentiality has been offered and then respected. Care has been taken not only with the interests of the storyteller but also of those mentioned in a story.

5. Have any intended sources of bias been identified and explained? Sometimes it may be appropriate to ask about " most significant changes caused by....xx..." or "most significant changes of ...x ...type"

6. Have any unintended sources of bias been anticipated and responded to? For example, by also asking about "most significant negative changes " or "any other changes that are most significant"?

7. There is transparency of sources. If stories were solicited from a number of people, we know how these people were identified and who was excluded and why so. If respondents were self-selected we know how they compare to those that did not self-select.

8. There is transparency of the selection process: If multiple stories were initially collected, and the most significant of these were then selected, reported and used elsewhere, the details of the selection process should be available, including (a) who was involved, (b) how choices were made, and (c) the reasons given for the final choice(s) made

9. Fidelity: Has the written account of why a selection panel chose a story as most significant done the participants' discussion justice? Was it sufficiently detailed, as well as being truthful?

10. Have potential biases in the selection processes been considered? Do most of the finally selected most significant change stories come from people of one kind versus another e.g. men rather than women, one ethnic or religious group versus others? In other words, is the membership of the selection panel transparent? (thanks to Maleeha below).

11.    your thoughts here on.. (using the Comment facility below).

Please note 

1. That in focusing here on "quality of evidence" I am not suggesting that the only use of MSC stories is to serve as forms of evidence. Often the process of dialogue is immensely important, and it is the clarification of values - who values what and why - that is most important. And there are bound to be other purposes served as well.

2. (Perhaps the same point, expressed in another way) The above list is intentionally focused on minimal rather than optimal criteria. As noted above, a major part of the MSC process is about the discovery of what is of value, to the participants.  

For more on the MSC technique, see the resources here.




Wednesday, October 28, 2020

Mapping the structure of cooperation


Over the last year or so I have been developing a web application known as ParEvo. The purpose of ParEvo is to enable people to take part in a participatory scenario planning process, online. How the process works is described in detail on this website. The main point that I need to make clear here in this post is that the process consists of people writing short paragraphs of text describing what might happen next. Participants choose which previously written paragraphs their paragraphs should be added to. In turn, other participants may choose to add their own paragraph of text to these. The net result is a series of branching storylines describing alternative futures, which can vary in the way that they are constructed, i.e. who was involved in the construction of which storyline.

While the ParEvo app does show all the contributions and how they connect into different storylines in the form of a tree structure, it does this in an anonymous way – it is not possible for participants, or observers, to see who wrote which contributions. However, one of the advantages of using ParEvo is that an exercise facilitator can download the otherwise hidden data on whose contribution was added to whose. This data can be downloaded in the form of an "adjacency matrix", which shows the participants listed by row and the same participants listed by column. The cells in the matrix show which row participant added a contribution to an existing contribution made by the column participant. This kind of matrix data is easy to then visualise as a social network structure. Here is an anonymized example from one ParEvo exercise.
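For anyone wanting to work with the downloaded data directly, here is a minimal sketch of turning such an adjacency matrix into a directed edge list of the kind that network visualisation software accepts. The matrix values here are invented for illustration:

```python
# A hypothetical 4-participant adjacency matrix of the kind a ParEvo
# facilitator can download: matrix[row][col] == 1 means the row participant
# built on the column participant's contribution at least once.
participants = ["P1", "P2", "P3", "P4"]
matrix = [
    [0, 1, 1, 0],
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],   # P4 neither built on others nor was built on by them
]

# Turn the matrix into a directed edge list for network visualisation.
edges = [(participants[i], participants[j])
         for i, row in enumerate(matrix)
         for j, v in enumerate(row) if v]

# Reciprocated pairs (red links in the network diagram): A built on B, and B on A.
reciprocated = sorted({tuple(sorted((a, b))) for a, b in edges
                       if (b, a) in edges})

# Indegree of a participant = number of others who built on their contributions.
indegree = {p: sum(row[j] for row in matrix)
            for j, p in enumerate(participants)}
```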

Blue nodes = participants. Grey links = contributions to the pointed participant. Red links = reciprocated contributions. Big nodes have many links, small nodes have few links

Another way of summarising the structure of participation is to create a scatterplot, as in the example shown below. The X-axis represents the number of other participants who have added contributions to one's own contributions (SNA term = indegree). The Y-axis represents the number of other participants that one has added one's own contributions to (SNA term = outdegree). The data points in the scatterplot identify the individual participants in the exercise and their characteristics as described by the two axes. The four corners of the scatterplot can be seen as four extreme types of participation:

– Isolates: who only build on their own contributions and nobody else builds on these
– Leaders: who only build on their own contributions, but others also build on these
– Followers: who only build on others' contributions, but others do not build on theirs
– Connectors: who build on others' contributions and others build on theirs

The maximum value of the Y-axis is defined by the number of iterations in the exercise. The maximum value of the X-axis is defined by the number of participants in the exercise. The graph below needs updating to show an X-axis maximum value of 10, not 8
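The four types can be read straight off the two measures. Here is a minimal sketch that splits each measure at its average (both the counts and the use of averages as thresholds are illustrative assumptions, not data from this exercise):

```python
# Sketch: classifying participants into the four participation types
# using indegree and outdegree, split at the average of each measure.
# The counts below are invented, not data from this exercise.
degrees = {            # participant: (indegree, outdegree)
    "P1": (5, 1),
    "P2": (4, 4),
    "P3": (0, 0),
    "P4": (1, 5),
}

mean_in = sum(i for i, _ in degrees.values()) / len(degrees)
mean_out = sum(o for _, o in degrees.values()) / len(degrees)

def participation_type(indegree, outdegree):
    # High indegree: others build on this participant's contributions
    # High outdegree: this participant builds on others' contributions
    if indegree >= mean_in:
        return "Connector" if outdegree >= mean_out else "Leader"
    return "Follower" if outdegree >= mean_out else "Isolate"

for p, (i, o) in degrees.items():
    print(p, participation_type(i, o))
# → P1 Leader, P2 Connector, P3 Isolate, P4 Follower
```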




One observer of this ParEvo exercise commented: "It makes sense to me: those three leaders are the three most senior staff in the group, and it makes sense that they might have produced contributions that others would follow, and that they might be the people most sure of their own narrative."

What interested me was the absence of any participants in the Isolates and Connectors corners of the scatter plot. The absence of isolates is probably a good thing within an organisation, though it could mean a reduced diversity of ideas overall. The absence of Connectors seems more problematic - it might suggest a situation where there are multiple conceptual silos/cliques that are not "talking" to each other. It will be interesting to see in other ParEvo exercises what this scatter plot structure looks like, and how the owners of those exercises interpret them.


Saturday, September 26, 2020

EvalC3 versus QCA - compared via a re-analysis of one data set


I was recently asked whether EvalC3 could be used for a synthesis study, analysing the results from multiple evaluations.  My immediate response was yes, in principle.  But it probably needs more thought.

I then recalled that I had seen somewhere an Oxfam synthesis study of multiple evaluation results that used QCA.  This is the reference, in case you want to read it, which I suggest you do.

Shephard, D., Ellersiek, A., Meuer, J., & Rupietta, C. (2018). Influencing Policy and Civic Space: A meta-review of Oxfam’s Policy Influence, Citizen Voice and Good Governance Effectiveness Reviews | Oxfam Policy & Practice. Oxfam. https://policy-practice.oxfam.org.uk/publications/*

Like other good examples of QCA analyses in practice, this paper includes the original data set in an appendix, in the form of a truth table.  This means it is possible for other people like me to reanalyse this data using other methods that might be of interest, including EvalC3.  So, this is what I did.

The Oxfam dataset includes five conditions, a.k.a. attributes, of the programs that were evaluated, along with two outcomes, each pursued by some of the programs.  In total there was data on the attributes and outcomes of fifteen programs concerned with expanding civic space and twenty-two programs concerned with policy influence.  These were subject to two different QCA analyses.

The analysis of civic space outcomes

In the Oxfam analysis of the fifteen programs concerned with expanding civic space, the QCA analysis found four "solutions", a.k.a. combinations of conditions, which were associated with the outcome of expanded civic space.  Each of these combinations of conditions was found to be sufficient for the outcome to occur.  Together they accounted for the outcomes found in 93%, or fourteen, of the fifteen cases.  But there was overlap in the cases covered by each of these solutions, leaving open the question of which solution best fitted/explained those cases.  Six of the fourteen cases had two or more solutions that fitted them.

In contrast, the EvalC3 analysis found two predictive models (= solutions) which were associated with the outcome of expanded civic space.  Each of these combinations of conditions was found to be sufficient for the outcome to occur.  Together they accounted for all fifteen cases where the outcome occurred.  In addition, there was no overlap in the cases covered by each of these models.
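The coverage and overlap questions discussed here are easy to check mechanically once a truth table is in hand. Here is a minimal sketch, using invented cases and solutions rather than the actual Oxfam data:

```python
# Sketch: checking which cases each sufficient "solution" covers, and
# which cases are covered by more than one solution. The cases and
# solutions below are invented, not the actual Oxfam data.
cases = {                      # case: set of conditions present
    "C1": {"A", "B"},
    "C2": {"A", "B", "C"},
    "C3": {"B", "C"},
    "C4": {"A"},
}
solutions = {                  # solution: the conditions it requires
    "S1": {"A", "B"},
    "S2": {"B", "C"},
}

# A solution covers a case when all its required conditions are present
coverage = {name: {c for c, present in cases.items() if required <= present}
            for name, required in solutions.items()}
overlap = coverage["S1"] & coverage["S2"]

print(coverage)   # the cases each solution covers
print(overlap)    # cases covered by both - candidates for within-case follow-up
```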

The analysis of policy influencing outcomes

In the Oxfam analysis of the twenty-two programs concerned with policy influencing, the QCA analysis found two solutions associated with the outcome of successful policy influence.  Each of these was sufficient for the outcome, and together they accounted for all the outcomes.  But there was some overlap in coverage: one of the six cases was covered by both solutions.

In contrast, the EvalC3 analysis found one predictive model which was necessary and sufficient for the outcome, and which accounted for all the outcomes achieved.

Conclusions?

Based on parsimony alone, the EvalC3 solutions/predictive models would be preferable.  But parsimony is not the only appropriate criterion for evaluating a model.  Arguably a more important criterion is the extent to which a model fits the details of the cases covered when those cases are closely examined.  So, what the EvalC3 analysis has really done is to generate some extra models that need close attention, in addition to those already generated by the QCA analysis.  The number of cases covered by multiple models has been increased.

In the Oxfam study, there was no follow-on attention given to resolving what was happening in the cases that were identified by more than one solution/predictive model.  In my experience of reading other QCA analyses, this lack of follow-up is not uncommon.

However, in the Oxfam study, at least one detailed description was given of an example case covered by each of the solutions found.  In principle, this is good practice. But unfortunately, as far as I can see, it was not clear whether that particular case was exclusively covered by that solution, or shared with another solution.  Even amongst those cases which were exclusively covered by a solution, there are still choices that need to be made (and explained) about how to select particular cases as exemplars and/or for a detailed examination of any causal mechanisms at work.

QCA software does not provide any help with this task.  However, I did find some guidance in a specialist text on QCA:  Schneider, C. Q., & Wagemann, C. (2012). Set-Theoretic Methods for the Social Sciences: A Guide to Qualitative Comparative Analysis. Cambridge University Press. https://doi.org/10.1017/CBO9781139004244 (parts of this book are a heavy read but overall it is very informative).  In section 11.4, titled Set-Theoretic Methods and Case Selection, the authors note: 'Much emphasis is put on the importance of intimate case knowledge for a successful QCA.  As a matter of fact, the idea of QCA as a research approach and of going back-and-forth between ideas and evidence largely consists of combining comparative within-case studies and QCA as a technique.  So far, the literature has focused mainly on how to choose cases prior to and during, but not after, a QCA – whereby with QCA we here refer to the analytic moment of analysing a truth table.  It is therefore puzzling that little systematic and specific guidance has so far been provided on which cases to select for within-case studies based on the results of, i.e. after, a QCA…'  The authors then go on to provide some guidance (a total of 7 pages out of 320).

In contrast to QCA software, EvalC3 has a number of built-in tools, and some associated guidance on the EvalC3 website, on how to think about case selection as a step between cross-case analysis and subsequent within-case analysis.  One of the steps in the seven-stage EvalC3 workflow (Compare Models) is the generation of a table that compares the case coverage of multiple selected alternative models found by one's analysis up to that point.  This enables the identification of cases which are covered by two or more models.  These types of cases would clearly warrant subsequent within-case investigation.

Another step in the EvalC3 workflow, called Compare Cases, provides another means of identifying specific cases for follow-up within-case investigations.  In this worksheet individual cases can be identified as modal or extreme examples within various categories of interest, e.g. True Positives, False Positives, et cetera.  It is also possible to identify, for a chosen case, which other case is most similar and which is most different to it, when all the attributes available in the dataset are considered.  These measurement capacities are backed up by technical advice on the EvalC3 website on the particular types of questions that can be asked in relation to different types of cases selected on the basis of their similarities and differences. Your comments on these suggested strategies would be very welcome.
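The "most similar / most different" comparison can be sketched as simple matching on binary attributes. This is an illustrative assumption about the idea, not EvalC3's actual computation, and the cases are invented:

```python
# Sketch: finding the case most similar and most different to a chosen
# target case, by counting matching binary attributes. Invented data.
cases = {                       # case: tuple of binary attribute values
    "C1": (1, 0, 1, 1, 0),
    "C2": (1, 0, 1, 0, 0),
    "C3": (0, 1, 0, 0, 1),
}

def matches(a, b):
    """Number of attributes on which two cases have the same value."""
    return sum(x == y for x, y in zip(a, b))

target = "C1"
similarity = {c: matches(cases[target], v)
              for c, v in cases.items() if c != target}
most_similar = max(similarity, key=similarity.get)
most_different = min(similarity, key=similarity.get)
print(most_similar, most_different)   # → C2 C3
```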

I should explain...

...why the models found by EvalC3 were different from those found by the QCA analysis.  QCA software finds solutions, i.e. predictive models, by reducing all the configurations found in a truth table down to the smallest possible set, using a minimisation algorithm known as the Quine–McCluskey algorithm.

In contrast, EvalC3 provides users with a choice of four different search algorithms, combined with multiple alternative performance measures that can be used to automatically assess the results those algorithms generate. All algorithms have their strengths and weaknesses, in terms of the kinds of results they can and cannot find, including the Quine–McCluskey algorithm and the simple machine learning algorithms built into EvalC3. I think the Quine–McCluskey algorithm has particular problems with datasets which have limited diversity, in other words, where the cases represent only a small proportion of all the possible combinations of the conditions documented in the dataset, whereas the simple search algorithms built into EvalC3 don't experience that difficulty. This is my conjecture, not yet rigorously tested.
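To illustrate the general idea of such a search (not EvalC3's actual algorithms), here is a minimal exhaustive search that scores every combination of conditions as a predictor of the outcome, using invented data:

```python
# Sketch: exhaustive search over combinations of conditions, scoring
# each combination by its accuracy as a predictor of the outcome.
# Illustrative only - invented data, not EvalC3's implementation.
from itertools import combinations

conditions = ["A", "B", "C"]
cases = [                      # (conditions present, outcome)
    ({"A", "B"}, 1),
    ({"A", "B", "C"}, 1),
    ({"A"}, 0),
    ({"C"}, 0),
]

best = None
for r in range(1, len(conditions) + 1):
    for combo in combinations(conditions, r):
        required = set(combo)
        # Predict outcome = 1 when all conditions in the combo are present
        correct = sum((required <= present) == bool(outcome)
                      for present, outcome in cases)
        accuracy = correct / len(cases)
        if best is None or accuracy > best[1]:
            best = (combo, accuracy)

print(best)   # → (('B',), 1.0)
```

In this toy dataset the single condition B predicts the outcome perfectly, so larger combinations are never preferred: the parsimony issue discussed above in miniature.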

[In the above study, the cases in the two data sets analysed represented 47% and 68% of all the possible configurations, given the presence of five different conditions.]

While the EvalC3 results described above did differ from the QCA analyses, they were not in outright contradiction. The same has been my experience when reanalysing other QCA datasets: EvalC3 will either simply duplicate the QCA findings or produce variations on them, often better-performing ones.


Wednesday, July 29, 2020

Converting a continuous variable into a binary variable i.e. dichotomising


If you Google "dichotomising data" you will find lots of warnings that this is basically a bad idea! Why so? Because if you do so you will lose information. All those fine details of differences between observations will be lost.

But what if you are dealing with something like responses to an attitude survey? Typically these have five-point scales ranging from disagree through neutral to agree, or the like. Quite a few of the fine differences in ratings on such a scale may well be nothing more than "noise", i.e. variations unconnected with the phenomenon you are trying to measure. A more likely explanation is that they reflect differences in respondents' "response styles", or something more random.

Aggregation or "binning" of observations into two classes (higher and lower) can be done in different ways. You could simply find the median value and split the observations at that point. Or you could look for a "natural" gap in the frequency distribution and make the split there. Or you may have a prior theoretical reason to split the range of observations at some other specific point.

I have been trying out a different approach. This involves not just looking at the continuous variable I want to dichotomise, but also at its relationship with an outcome variable that will be of interest in subsequent analyses. That outcome could itself be a continuous variable or a binary measure.

There are two ways of doing this. The first is a relatively simple manual approach, used when the cut-off point for the outcome variable has already been decided, by one means or another.  We then vary the cut-off point in the range of values for the independent variable, to see what effect this has on the numbers of observations above and below the outcome variable's cut-off value. For any specific cut-off value for the independent variable, an Excel spreadsheet can be used to calculate the following:
  1. # of True Positives - where the independent variable value was high and so was the outcome variable value
  2. # of False Positives - where the independent variable value was high but the outcome variable value was low
  3. # of False Negatives - where the independent variable value was low but the outcome variable value was high
  4. # of True Negatives - where the independent variable value was low and the outcome variable value was low
When doing this we are in effect treating cut-off criteria for the independent variable as a predictor of the dependent variable.  Or more precisely, a predictor of the prevalence of observations with values above a specified cut-off point on the dependent variable.
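The same logic can be sketched outside Excel. The following (with invented observations and cut-off points) computes the four counts and a few of the usual performance measures:

```python
# Sketch: treating a cut-off on the independent variable as a predictor
# of "high" values on the outcome variable. Data and cut-offs invented.
data = [                       # (independent value, outcome value)
    (1.2, 2.0), (3.4, 4.1), (4.0, 4.5), (2.1, 1.8),
    (3.9, 2.2), (1.5, 4.2), (4.4, 4.8), (2.8, 2.5),
]
x_cut, y_cut = 3.0, 3.0        # cut-off points for each variable

tp = sum(x >= x_cut and y >= y_cut for x, y in data)  # True Positives
fp = sum(x >= x_cut and y < y_cut for x, y in data)   # False Positives
fn = sum(x < x_cut and y >= y_cut for x, y in data)   # False Negatives
tn = sum(x < x_cut and y < y_cut for x, y in data)    # True Negatives

accuracy = (tp + tn) / len(data)
precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
print(tp, fp, fn, tn, accuracy)   # → 3 1 1 3 0.75
```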

In Excel, I constructed the following:
  • Cells for entering the raw data - the values of each variable for each observation
  • Cells for entering the cut-off points
  • Cells for defining the status of each observation  
  • A Confusion Matrix, to summarise the total number of observations with each of the four possible types described above.
  • A set of 6 widely used performance measures, calculated using the number of observations in each cell of the Confusion Matrix.
    • These performance measures tell me how good the chosen cut-off point is as a predictor of the outcome as specified. At best, all those observations fitting the cut-off criterion will be in the True Positive group and all those not fitting it will be in the True Negative group. In reality, there are also likely to be observations in the False Positive and False Negative groups.
By varying the cut-off points it is possible to find the best possible predictor, i.e. one with very few False Positives and very few False Negatives. This can be done manually when the cut-off point for the outcome variable has already been decided.

Alternatively, if the cut-off point for the outcome variable has not been decided, a search algorithm can be used to find the best combination of two cut-off points (one for the independent and one for the dependent variable).  Within Excel, there is an add-in called Solver, which uses an evolutionary algorithm to do such a search and find the optimal combination.
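A brute-force alternative to Solver is simply to try every pair of candidate cut-off points, using the observed values themselves as candidates, and keep the pair with the fewest misclassifications, while skipping degenerate splits where all observations fall on one side of a cut-off. A sketch with invented data:

```python
# Sketch: brute-force search for the pair of cut-off points that
# minimises False Positives + False Negatives. Data invented.
import math

data = [                       # (independent value, outcome value)
    (1.2, 2.0), (3.4, 4.1), (4.0, 4.5), (2.1, 1.8),
    (3.9, 2.2), (1.5, 4.2), (4.4, 4.8), (2.8, 2.5),
]

def errors(x_cut, y_cut):
    """FP + FN for a pair of cut-offs; inf for degenerate splits."""
    high_x = [x >= x_cut for x, _ in data]
    high_y = [y >= y_cut for _, y in data]
    if all(high_x) or not any(high_x) or all(high_y) or not any(high_y):
        return math.inf            # everything on one side - ignore
    return sum(hx != hy for hx, hy in zip(high_x, high_y))

# Candidate cut-offs: the observed values themselves
best = min(((errors(xc, yc), xc, yc)
            for xc, _ in data for _, yc in data),
           key=lambda t: t[0])
print(best)   # (misclassifications, x cut-off, y cut-off)
```

Guarding against degenerate splits matters: without it, putting both cut-offs below all the observed values classifies every case as a True Positive and trivially produces zero errors.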

Postscript 2020 11 05: An Excel file with the dichotomization formula and a built-in example data set is available here  

  
2020 08 13: Also relevant: 

Hofstad, T. (2019). QCA and the Robustness Range of Calibration Thresholds: How Sensitive are Solution Terms to Changing Calibrations? COMPASSS Working Papers, 2019–92. http://www.compasss.org/wpseries/Hofstad2019.pdf  

This paper emphasises the importance of declaring the range of original (pre-dichotomised) values over which the performance of a predictive model remains stable.

Tuesday, April 07, 2020

Rubrics? Yes, but...




This blog posting is a response to Tom Aston's blog posting: Rubrics as a harness for complexity

I have just reviewed an evaluation of the effectiveness of policy influencing activities of programs funded by HMG as part of the International Carbon Finance Initiative.  In the technical report there are a number of uses of rubrics to explain how various judgements were made.  Here, for example, is one summarising the strength of evidence found during process tracing exercises:
  • Strong support – smoking gun (or DD) tests passed and no hoop tests (nor DDs) failed.
  • Some support – multiple straw in the wind tests passed and no hoop tests (nor DDs) failed; also, no smoking guns nor DDs passed.
  • Mixed – mixture of smoking gun or DD tests passed but some hoop tests (or DDs) failed – this required the CMO to be revised.
  • Failed – some hoop (or DD) tests failed, no double decisive or smoking gun tests passed – this required the theory to be rejected and the CMO abandoned or significantly revised. 

Another rubric described in great detail how three different levels of strength of evidence were differentiated (Convincing, Plausible, Tentative).  There was no doubt in my mind that these rubrics contributed significantly to the value of the evaluation report, particularly by giving readers confidence in the judgements made by the evaluation team.

But… I can't help feeling that the enthusiasm for rubrics is out of proportion with their role within an evaluation.  They are a useful measurement device that can make complex judgements more transparent and thus more accountable.  Note the emphasis on the ‘more‘… There are often plenty of not-necessarily-so-transparent judgements present in the explanatory text used to annotate each point on a rubric scale.  Take, for example, the first line of text in Tom Aston’s first example here, which reads “Excellent: Clear example of exemplary performance or very good practice in this domain: no weakness”

As noted in Tom’s blog, it has been argued that rubrics have a wider value, i.e. “rubrics are useful when trying to describe and agree what success looks like for tracking changes in complex phenomena”.  This is where I would definitely argue “buyer beware”, because rubrics have serious limitations in respect of this task.

The first problem is that description and valuation are separate cognitive tasks.  Events that take place can be described; they can also be given a particular value by observers (e.g. good or bad).  This dual process is implied in the above definition of how rubrics are useful.  Both of these types of judgements are often present in a rubric’s explanatory text, e.g. “Clear example of exemplary performance or very good practice in this domain: no weakness”

The second problem is that complex events usually have multiple facets, each of which has a descriptive and value aspect.  This is evident in the use of multiple statements linked by colons in the same example rubric I refer to above.

So, for any point on a rubric’s scale, the explanatory text has quite a big task on its hands.  It has to describe a specific subset of events and give a particular value to each of those.  In addition, each adjacent point on the scale has to do the same in a way that suggests there are only small incremental differences between each of these points’ judgements. And being a linear scale, this suggests, or even requires, that there is only one path from the bottom to the top of the scale. Say goodbye to equifinality!

So, what alternatives are there, for describing and agreeing on what success looks like when trying to track changes in complex phenomena?  One solution which I have argued for, intermittently, over a period of years, is the wider use of weighted checklists.  These are described at length here.  

Their design addresses three problems mentioned above.  Firstly, description and valuation are separated out as two distinct judgements.  Secondly, the events that are described and valued can be quite numerous and yet each can be separately judged on these two criteria.  There is then a mechanism for combining these judgements in an aggregate scale. And there is more than one route from the bottom to the top of this aggregate scale.

“The proof is in the pudding”.  One particular weighted checklist, known as the Basic Necessities Survey, was designed to measure and track changes in household-level poverty.  Changes in poverty levels must surely qualify as ‘complex phenomena ‘.  Since its development in the 1990s, the Basic Necessities Survey has been widely used in Africa and Asia by international environment/conservation organisations.  There is now a bibliography available online describing some of its users and uses. https://www.zotero.org/groups/2440491/basic_necessities_survey/library
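As I understand the Basic Necessities Survey approach, each item is weighted by the proportion of respondents who call it a necessity, and a household's score aggregates the weights of the items it possesses. The sketch below uses invented items and responses; see the bibliography above for the method's actual details:

```python
# Sketch of the weighted-checklist idea behind the Basic Necessities
# Survey. Items, votes, and the household's possessions are invented.
items = ["safe water", "bicycle", "mosquito net"]

# necessity_votes[item][r] = 1 if respondent r called the item a
# basic necessity, 0 otherwise
necessity_votes = {
    "safe water":   [1, 1, 1, 1],
    "bicycle":      [1, 0, 0, 1],
    "mosquito net": [1, 1, 0, 1],
}
# Item weight = proportion of respondents calling it a necessity
weights = {item: sum(v) / len(v) for item, v in necessity_votes.items()}

# One household's possessions: 1 = has the item
household_has = {"safe water": 1, "bicycle": 0, "mosquito net": 1}

# Score = weighted proportion of necessities the household has
score = (sum(weights[i] * household_has[i] for i in items)
         / sum(weights.values()))
print(round(score, 2))   # → 0.78
```

Separating the valuation step (the weights, set by respondents' views of what counts as a necessity) from the description step (what the household actually has) is what lets many routes lead to the same aggregate score.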







