Tuesday, December 15, 2020

The implications of complex program designs: Six proposals worth exploring?

Last week I was involved in a seminar discussion of a draft CEDIL paper reviewing methods that can be used to evaluate complex interventions. That discussion prompted me to the following speculations, which could have practical implications for the evaluation of complex interventions.

Caveat: As might be expected, any discussion in this area will hinge upon the definition of complexity. My provisional definition of complexity is based on a network perspective, something I've advocated for almost two decades now (Davies, 2003). That is, the degree of complexity depends on the number of nodes (e.g. people, objects or events), and the density and diversity of types of interactions between them. Some might object and say what you have described here is simply something which is complicated rather than complex. But I think I can be fairly confident in saying that as you move along this scale of increasing complexity (as I have defined it here) the behaviour of the network will become more unpredictable. I think unpredictability, or at least difficulty of prediction, is a fairly widely recognised characteristic of complex systems (But see Footnote).
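To make this definition concrete, here is a toy sketch (my own illustration, with invented nodes and interaction types) of its three ingredients: the number of nodes, the density of links, and the diversity of types of interaction.

```python
# Invented micro-network: (from, to, type of interaction)
edges = [
    ("A", "B", "funds"),
    ("B", "C", "advises"),
    ("C", "A", "reports to"),
    ("A", "C", "funds"),
]

nodes = {n for a, b, _ in edges for n in (a, b)}
density = len(edges) / (len(nodes) * (len(nodes) - 1))  # actual directed links / possible links
link_types = {t for _, _, t in edges}

print(len(nodes), round(density, 2), len(link_types))  # 3 0.67 3
```

On this view, complexity rises as any of these three numbers rises.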

The proposals:

Proposal 1. As the complexity of an intervention increases, the task of model development (e.g. a Theory of Change), especially model specification, becomes increasingly important relative to that of model testing. This is because there are more and more parameters that could make a difference, or be "wrongly" specified.

Proposal 2. When the confident specification of model parameters becomes more difficult, then perhaps model testing will become more like an exploratory search of a combinatorial space than focused hypothesis testing. This probably has some implications for the types of methods that can be used – for example, more attention to the use of simulations, or predictive analytics.
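As a minimal sketch of what such an exploratory search might look like (the intervention components and the scoring rule here are invented; a real study would replace the toy scorer with a simulation or empirical data):

```python
from itertools import product

params = ["training", "mentoring", "grants"]  # hypothetical intervention components

def simulate(combo):
    # Stand-in for a real simulation: score one on/off combination of components
    weights = {"training": 0.2, "mentoring": 0.5, "grants": 0.3}
    return sum(weights[p] for p, on in zip(params, combo) if on)

space = list(product([0, 1], repeat=len(params)))  # 2^3 = 8 combinations
ranked = sorted(space, key=simulate, reverse=True)
print(ranked[0])  # (1, 1, 1) scores highest under these toy weights
```

Even with only three binary parameters there are eight combinations to examine; the space grows as 2^k, which is why exhaustive testing quickly gives way to simulation or machine search.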

Proposal 3. In this situation where more exploration is needed, where will all the relevant empirical data come from to test the effects of different specifications? Might it be that as complexity increases there is more and more need for monitoring / time-series data, relative to evaluation / once-off type data?

Proposal 4. And if a complex intervention may lead to complex effects – in terms of behaviour over time – then the timing of any collection of relevant data becomes important. A once-off data collection would capture the state of the intervention+context system at one point in an impact trajectory that could actually take many different shapes (e.g. linear, sinusoidal, exponential, etc.). The conclusions drawn could be seriously misleading.
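The timing point can be illustrated with a small sketch (the three trajectory shapes are invented for the purpose): a once-off measurement at one time point can be indistinguishable across very different trajectories, while a slightly different timing would tell a very different story.

```python
import math

trajectories = {
    "linear":     lambda t: 0.1 * t,                  # steady growth
    "plateau":    lambda t: 1 - math.exp(-t / 2),     # rise then level off
    "sinusoidal": lambda t: 0.5 + 0.5 * math.sin(t),  # rise and fall
}

for t in (8, 11):
    snapshot = {name: round(f(t), 2) for name, f in trajectories.items()}
    print(t, snapshot)
```

At t = 8 all three curves report a similarly "high impact" value; three time units later the sinusoidal trajectory reads close to zero. Only repeated, monitoring-style measurement can tell the shapes apart.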

Proposal 5. And going back to model specification, what sort of impact trajectory is the intervention aiming for? One where change happens and then plateaus, or one where there is an ongoing increase? This needs specification because it will affect the timing and type of data collection needed.

Proposal 6. And there may be implications for the process of model building. As the intervention gets more complex – in terms of nodes in the network – there will be more actors involved, each of whom will have a view on how the parts, and perhaps the whole package, is and should be working, and on the role of their particular part in that process. Participatory, or at least consultative, design approaches would seem to become more necessary.

Are there any other implications that can be identified? Please use the Comment facility below.

Footnote: Yes, I know you can also find complex (as in difficult to predict) behaviour in relatively simple systems, like a logistic equation that describes the interaction between predator and prey populations. And there may be some quite complex systems (by my definition) that are relatively stable. My definition of complexity is more probabilistic than deterministic.

Friday, December 11, 2020

"If you want to think outside of the box, you first need to find the box" - some practical evaluative thinking about Futures Literacy




Over the last two days, I have participated in a Futures Literacy Lab, run by Riel Miller and organised as part of UNESCO's Futures Literacy Summit. Here are some off-the-cuff reflections.

Firstly, the definition of futures literacy. I could not find a decent one, but my search was brief, so I expect readers of this blog posting will quickly come up with one. Until then, this is my provisional interpretation. Futures literacy includes two types of skills, both of which need to be mastered, although some people will be better at one type than the other:


1. The ability to generate many different alternative views of what might happen in the future.


2. The ability to evaluate a diversity of alternative views of the future, using a range of potentially relevant criteria.

There is probably also a third skill, i.e. the ability to extract useful implications for action from the above two activities.

The process that I took part in highlighted to me (perhaps not surprisingly, because I'm an evaluator) the importance of the second type of skill above - evaluation. There are two reasons I can think of for taking this view:


1. The ability to critically evaluate one's ideas (e.g. multiple different views of the possible future) is an essential metacognitive skill. There is no value in being able to generate many imagined futures if one is then incapable of sorting the "wheat from the chaff" - however that may be defined.


2. The ability to evaluate a diversity of alternative views of the future can actually have a useful feedback effect, enabling us to improve the way we search for other imagined futures.


Here is my argument for the second claim. In the first part of the exercise yesterday each participant was asked to imagine a possible future development in the way that evaluation will be done, and the role of evaluators, in the year 2050. We were asked to place these ideas on Post-It Notes on an online whiteboard, on a linear scale that ranged between Optimistic and Pessimistic. 

Then a second and orthogonal scale was introduced, which ranged from "I can make a difference" to "I can't make a difference". When that second axis was introduced we were asked to move our Post-It Notes to a new position that represented our view of each idea's possibility and our ability to make a difference to that event. These two steps can be seen as a form of self-evaluation of our own imagined futures. Here is the result (don't bother trying to read the note details).


Later on, as the process proceeded, we were encouraged to "think out of the box". But how do you do that? How do you know what is "out of the box"? Unless you deliberately go to extremes, with the associated risk that whatever you come up with will be less useful (however defined).

Looking back at that task now, it strikes me that what the above scatterplot does is show you where the box is, so to speak. And, by contrast, where "outside the box" is located. "Inside the box" is the part of the scatterplot where the biggest concentration of posts is located. The emptiest, and thus most "out of the box", area is the top right quadrant. There is only one Post-it Note there. So, if more out-of-the-box thinking is needed in this particular exercise setting then perhaps we should start brainstorming about optimistic future possibilities of a kind where I think "I can't make a difference" - now there is a challenge!

The above example can be considered as a kind of toy model, a simple version of a larger and more complex range of possible applications. That is, any combination of evaluative dimensions will generate a combinatorial space, which will be densely populated with ideas about possible futures in some areas and empty in others. To explore those kinds of areas we will need to do some imaginative thinking at a higher level of abstraction, i.e. about the different kinds of evaluative dimensions that might be relevant. My impression is that this meta-territory has not yet been explored very much. When you look at the futures/foresight literature the most common evaluative dimensions are those of "possibility" and "desirability" (ones I have used myself, within the ParEvo app). But there must be others that are also relevant in various circumstances.
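The "find the box" step can itself be automated. Here is a sketch (with invented coordinates) that bins ideas placed on two evaluative axes into quadrants and reports the emptiest, most "out of the box", one:

```python
from collections import Counter

# Invented idea positions: x = pessimistic(-1)..optimistic(+1),
# y = "can't make a difference"(-1).."can make a difference"(+1)
ideas = [(-0.4, -0.2), (-0.6, -0.5), (0.3, -0.4), (-0.2, 0.1), (0.1, -0.7)]

def quadrant(x, y):
    return ("optimistic" if x > 0 else "pessimistic",
            "can influence" if y > 0 else "can't influence")

counts = Counter(quadrant(x, y) for x, y in ideas)
all_quadrants = [(a, b) for a in ("optimistic", "pessimistic")
                 for b in ("can influence", "can't influence")]
emptiest = min(all_quadrants, key=lambda q: counts.get(q, 0))
print(emptiest)  # the least populated quadrant
```

The same logic extends to any pair, or larger combination, of evaluative dimensions.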

Postscript 2020 12 11: This afternoon we had a meeting to review the Futures Literacy Lab experience. In that meeting one of the facilitators produced this definition of Futures Literacy, which I have visibly edited, to improve it :-)



 Lots more to be discussed here, for example:

1. Different search strategies that can be used to find interesting alternate futures. For example, random search, and "the adjacent possible" searches are two that come to mind

2. Ways of getting more value from the alternate futures already identified e.g. by recombination 

3. Ways of mapping the diversity of alternate futures that have already been identified e.g using network maps of kind I discussed earlier on this blog (Evaluating Innovation)

4. The potential worth of getting independent third parties to review/evaluate the (a) contents generated by participants, and (b) participants' self-evaluations of their content


For an earlier discussion of mine that might be of interest, see 

"Evaluating the Future": podcast and paper prepared with and for the EU Evaluation Support Services Unit, 2020




Monday, December 07, 2020

Has the meaning of impact evaluation been hijacked?

 



This morning I have been reading, with interest, Giel Ton's 2020 paper: 
Development policy and impact evaluation: Contribution analysis for learning and accountability in private sector development 

I have one immediate reaction, which I must admit I have been storing up for some time. It is to do with what I would call the hijacking of the meaning or definition of 'impact evaluation'. These days impact evaluation seems to be all about causal attribution. But I think this is an overly narrow definition, and almost self-serving of the interests of those trying to promote methods specifically dealing with causal attribution, e.g. experimental studies, realist evaluation, contribution analysis and process tracing. (PS: This is not something I am accusing Giel of doing!)

 I would like to see impact evaluations widen their perspective in the following way:

1. Description: Spend time describing the many forms of impact a particular intervention is having. I think the technical term here is multifinality. In a private-sector development programme, multifinality is an extremely likely phenomenon.  I think Giel has in effect said so at the beginning of his paper: " Generally, PSD programmes generate outcomes in a wide range of private sector firms in the recipient country (and often also in the donor country), directly or indirectly."

 2. Valuation: Spend time seeking relevant participants’ valuations of the different forms of impact they are experiencing and/or observing. I'm not talking here about narrow economic definitions of value, but the wider moral perspective on how people value things - the interpretations and associated judgements they make. Participatory approaches to development and evaluation in the 1990s gave a lot of attention to people's valuation of their experiences, but this perspective seems to have disappeared into the background in most discussions of impact evaluation these days. In my view, how people value what is happening should be at the heart of evaluation, not an afterthought. Perhaps we need to routinely highlight the stem of the word Evaluation.

3. Explanation: Yes, do also seek explanations of how different interventions worked and failed to work (aka causal attribution), paying attention of course to heterogeneity, in the forms of both equifinality and multifinality. Please note: I am not arguing that causal attribution should be ignored - just placed within a wider perspective! It is part of the picture, not the whole picture.

4. Prediction: And in the process don't be too dismissive of the value of identifying reliable predictions that may be useful in future programmes, even if the causal mechanisms are not known or perhaps are not even there. When it comes to future events, there are some that we may be able to change or influence, because we have accumulated useful explanatory knowledge. But there are also many which we acknowledge are beyond our ability to change, but to which, with good predictive knowledge, we may still be able to respond appropriately.

Two examples, one contemporary, one very old: If someone could give me a predictive model of sharemarket price movements that had even a modest 55% accuracy I would grab it and run, even though the likelihood of finding any associated causal mechanism would probably be very slim.  Because I’m not a billionaire investor, I have no expectation of being able to use an explanatory model to actually change the way markets behave.  But I do think I could respond in a timely way if I had relevant predictive knowledge.

Similarly with the movements of the sun: people have had predictive knowledge about the movement of the sun for millennia, and this informed their agricultural practices. But even now that we have much-improved explanatory knowledge about the sun's movement, few would think that this will help us change the way the seasons progress.

 I will now continue reading Giel's paper…


2021 02 19: I have just come across a special issue of the Evaluation journal of Australasia, on the subject of values. Here is the Editorial section.

Sunday, December 06, 2020

Quality of Evidence criteria that can be applied to Most Significant Change (MSC) stories

 


Two recent documents have prompted me to do some thinking on this subject

If we view Most Significant Change (MSC) stories as evidence of change (and of what people think about those changes), what attributes of quality should we look for?

Some suggestions that others might like to edit or add to, or even delete...

1. There is clear ownership of an MSC story and the reasons for its selection by the storyteller. Without this, there is no possibility of clarification of any elements of the story and its meaning, let alone more detailed investigation/verification

2. There was some protection against random/impulsive choice. The person who told the story was asked to identify a range of changes that had happened, before being asked to identify the one which was most significant 

3. There was some protection against interpreter/observer error. If another person recorded the story, did they read back their version to the storyteller, to enable them to make any necessary corrections?

4. There has been no violation of ethical standards: Confidentiality has been offered and then respected. Care has been taken not only with the interests of the storyteller but also of those mentioned in a story.

5. Have any intended sources of bias been identified and explained? Sometimes it may be appropriate to ask about " most significant changes caused by....xx..." or "most significant changes of ...x ...type"

6. Have any unintended sources of bias been anticipated and responded to? For example, by also asking about "most significant negative changes " or "any other changes that are most significant"?

7. There is transparency of sources. If stories were solicited from a number of people, we know how these people were identified, who was excluded, and why. If respondents were self-selected we know how they compare to those that did not self-select.

8. There is transparency of the selection process: If multiple stories were initially collected, and the most significant of these were then selected, reported and used elsewhere, the details of the selection process should be available, including (a) who was involved, (b) how choices were made, and (c) the reasons given for the final choice(s) made.

9. Fidelity: Has the written account of why a selection panel chose a story as most significant done the participants' discussion justice? Was it sufficiently detailed, as well as being truthful?

10. Have potential biases in the selection processes been considered? Do most of the finally selected most significant change stories come from people of one kind versus another e.g. men rather than women, one ethnic or religious group versus others? In other words, is the membership of the selection panel transparent? (thanks to Maleeha below).

11. Your thoughts here... (using the Comment facility below).

Please note 

1. That in focusing here on "quality of evidence" I am not suggesting that the only use of MSC stories is to serve as forms of evidence. Often the process of dialogue is immensely important, and it is the clarification of values - who values what, and why - that matters most. And there are bound to be other purposes served as well.

2. (Perhaps the same point, expressed in another way) The above list is intentionally focused on minimal rather than optimal criteria. As noted above, a major part of the MSC process is about the discovery of what is of value, to the participants.  

For more on the MSC technique, see the resources here.




Wednesday, October 28, 2020

Mapping the structure of cooperation


Over the last year or so I have been developing a web application known as ParEvo. The purpose of ParEvo is to enable people to take part in a participatory scenario planning process, online. How the process works is described in detail on this website. The main point I need to make clear in this post is that the process consists of people writing short paragraphs of text describing what might happen next. Participants choose which previously written paragraphs their paragraphs should be added to. In turn, other participants may choose to add their own paragraph of text to these. The net result is a series of branching storylines describing alternative futures, which can vary in the way that they are constructed, i.e. in who was involved in the construction of which storyline.

One of the advantages of using ParEvo is that data can be downloaded showing whose text contribution was added to whose. While the ParEvo app does show all the contributions and how they connect into different storylines in the form of a tree structure, it does this in an anonymous way – it is not possible for participants, or observers, to see who wrote which contributions. However, an exercise facilitator can download the otherwise hidden data on whose contribution was added to whose, in the form of an "adjacency matrix". This shows the participants listed by row and the same participants listed by column. The cells in the matrix show how often the row participant added a contribution to an existing contribution made by the column participant. This kind of matrix data is easy to visualise as a social network structure. Here is an anonymised example from one ParEvo exercise.

Blue nodes = participants. Grey links = contributions to the pointed participant. Red links = reciprocated contributions. Big nodes have many links, small nodes have few links
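For readers who want to see the mechanics, here is a sketch (with an invented three-participant matrix) of how the links, and the reciprocated links shown in red above, can be read off such an adjacency matrix:

```python
participants = ["P1", "P2", "P3"]
adj = [  # adj[r][c] = times the row participant built on the column participant
    [0, 2, 1],
    [1, 0, 0],
    [0, 3, 0],
]

links = {(participants[r], participants[c])
         for r in range(3) for c in range(3) if r != c and adj[r][c] > 0}
reciprocated = {(a, b) for (a, b) in links if (b, a) in links}
print(sorted(reciprocated))  # [('P1', 'P2'), ('P2', 'P1')]
```

Any social network analysis package can then lay this out as a directed graph like the one above.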

Another way of summarising the structure of participation is to create a scatterplot, as in the example shown below. The X-axis represents the number of other participants who have added contributions to one's own contributions (SNA term = indegree). The Y-axis represents the number of other participants whose contributions one has added one's own contributions to (SNA term = outdegree). The data points in the scatterplot identify the individual participants in the exercise and their characteristics as described by the two axes. The four corners of the scatterplot can be seen as four extreme types of participation:

– Isolates: who only build on their own contributions and nobody else builds on these
– Leaders: who only build on their own contributions, but others also build on these
– Followers: who only build on others' contributions, but others do not build on theirs
– Connectors: who built on others' contributions and others build on theirs

The maximum value of the Y-axis is defined by the number of iterations in the exercise. The maximum value of the X-axis is defined by the number of participants in the exercise. The graph below needs updating to show an X-axis maximum value of 10, not 8.
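The four types can be derived mechanically from each participant's indegree and outdegree. A sketch, using invented example values:

```python
def participation_type(indegree, outdegree):
    # indegree = how many others built on you; outdegree = how many others you built on
    builds_on_others = outdegree > 0
    others_build_on_them = indegree > 0
    if builds_on_others and others_build_on_them:
        return "Connector"
    if builds_on_others:
        return "Follower"
    if others_build_on_them:
        return "Leader"
    return "Isolate"

print(participation_type(indegree=4, outdegree=0))  # Leader
print(participation_type(indegree=0, outdegree=3))  # Follower
```

In practice one might use thresholds other than zero to separate "high" from "low" on each axis.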




One observer of this ParEvo exercise commented: "It makes sense to me: those three leaders are the three most senior staff in the group, and it makes sense that they might have produced contributions that others would follow, and that they might be the people most sure of their own narrative".

What interested me was the absence of any participants in the Isolates and Connectors corners of the scatter plot. The absence of isolates is probably a good thing within an organisation, though it could mean a reduced diversity of ideas overall. The absence of Connectors seems more problematic - it might suggest a situation where there are multiple conceptual silos/cliques that are not "talking" to each other. It will be interesting to see in other ParEvo exercises what this scatter plot structure looks like, and how the owners of those exercises interpret them.


Saturday, September 26, 2020

EvalC3 versus QCA - compared via a re-analysis of one data set


I was recently asked whether EvalC3 could be used to do a synthesis study, analysing the results from multiple evaluations. My immediate response was yes, in principle. But it probably needs more thought.

I then recalled that I had seen somewhere an Oxfam synthesis study of multiple evaluation results that used QCA. This is the reference, in case you want to read it, which I suggest you do.

Shephard, D., Ellersiek, A., Meuer, J., & Rupietta, C. (2018). Influencing Policy and Civic Space: A meta-review of Oxfam’s Policy Influence, Citizen Voice and Good Governance Effectiveness Reviews | Oxfam Policy & Practice. Oxfam. https://policy-practice.oxfam.org.uk/publications/*

Like other good examples of QCA analyses in practice, this paper includes the original data set in an appendix, in the form of a truth table.  This means it is possible for other people like me to reanalyse this data using other methods that might be of interest, including EvalC3.  So, this is what I did.

The Oxfam dataset includes five conditions, a.k.a. attributes of the programs that were evaluated, along with two outcomes, each pursued by some of the programs. In total there was data on the attributes and outcomes of fifteen programs concerned with expanding civic space and twenty-two programs concerned with policy influence. These were subject to two different QCA analyses.

The analysis of civic space outcomes

In the Oxfam analysis of the fifteen programs concerned with expanding civic space, the QCA analysis found four "solutions", a.k.a. combinations of conditions, which were associated with the outcome of expanded civic space. Each of these combinations of conditions was found to be sufficient for the outcome to occur. Together they accounted for the outcomes found in 93%, or fourteen, of the fifteen cases. But there was overlap in the cases covered by each of these solutions, leaving open the question of which solution best fitted/explained those cases. Six of the fourteen cases had two or more solutions that fitted them.

In contrast, the EvalC3 analysis found two predictive models (=solutions) which are associated with the outcome of expanded civic space.  Each of these combinations of conditions was found to be sufficient for the outcome to occur.  Together they accounted for all fifteen cases where the outcome occurred.  In addition, there was no overlap in the cases covered by each of these models.

The analysis of policy influencing outcomes

In the Oxfam analysis of the twenty-two programs concerned with policy influencing, the QCA analysis found two solutions associated with the policy influence outcome. Each of these was sufficient for the outcome, and together they accounted for all the outcomes. But there was some overlap in coverage: one of the six cases was covered by both solutions.

In contrast, the EvalC3 analysis found one predictive model which was necessary and sufficient for the outcome, and which accounted for all the outcomes achieved.
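The coverage and overlap comparisons in these two analyses amount to set operations over the cases each solution covers. A sketch with invented case numbers and solution memberships (chosen to echo the 93% coverage figure above):

```python
cases_with_outcome = set(range(1, 16))  # 15 cases where the outcome occurred

solutions = {  # hypothetical QCA-style solutions -> the cases they cover
    "S1": {1, 2, 3, 4, 5, 6, 7, 8},
    "S2": {6, 7, 8, 9, 10, 11, 12, 13, 14},
}

covered = set().union(*solutions.values())
coverage = len(covered & cases_with_outcome) / len(cases_with_outcome)
overlap = solutions["S1"] & solutions["S2"]  # cases claimed by both solutions
print(round(coverage, 2), sorted(overlap))  # 0.93 [6, 7, 8]
```

The cases in the overlap set are exactly the ones where follow-up within-case work is needed to decide which solution actually explains them.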

Conclusions?

Based on parsimony alone, the EvalC3 solutions/predictive models would be preferable. But parsimony is not the only appropriate criterion for evaluating a model. Arguably a more important criterion is the extent to which a model fits the details of the cases covered when those cases are closely examined. So, really, what the EvalC3 analysis has done is generate some extra models that need close attention, in addition to those already generated by the QCA analysis. The number of cases covered by multiple models has been increased.

In the Oxfam study, there was no follow-on attention given to resolving what was happening in the cases that were identified by more than one solution/predictive model.  In my experience of reading other QCA analyses, this lack of follow-up is not uncommon.

However, in the Oxfam study for each of the solutions found at least one detailed description was given of an example case that that solution covered.  In principle, this is good practice. But unfortunately, as far as I can see, it was not clear whether that particular case was exclusively covered by that solution, or part of a shared solution.  Even amongst those cases which were exclusively covered by a solution there are still choices that need to be made (and explained) about how to select particular cases as exemplars and/or for a detailed examination of any causal mechanisms at work.  

QCA software does not provide any help with this task. However, I did find some guidance in a specialist text on QCA: Schneider, C. Q., & Wagemann, C. (2012). Set-Theoretic Methods for the Social Sciences: A Guide to Qualitative Comparative Analysis. Cambridge University Press. https://doi.org/10.1017/CBO9781139004244 (it's a heavy read in parts, but overall it is very informative). In section 11.4, titled Set-Theoretic Methods and Case Selection, the authors note: "Much emphasis is put on the importance of intimate case knowledge for a successful QCA. As a matter of fact, the idea of QCA as a research approach of going back-and-forth between ideas and evidence largely consists of combining comparative within-case studies and QCA as a technique. So far, the literature has focused mainly on how to choose cases prior to and during, but not after, a QCA – where by QCA we here refer to the analytic moment of analysing a truth table. It is therefore puzzling that little systematic and specific guidance has so far been provided on which cases to select for within-case studies based on the results of, i.e. after, a QCA…" The authors then go on to provide some guidance (a total of 7 pages out of 320).

In contrast to QCA software, EvalC3 has a number of built-in tools and some associated guidance on the EvalC3 website, on how to think about case selection as a step between cross-case analysis and subsequent within-case analysis.  One of the steps in the seven-stage EvalC3 workflow (Compare Models) is the generation of a table that compares the case coverage of multiple selected alternative models found by one’s analysis to that point.  This enables the identification of cases which are covered by two or more models.  These types of cases would clearly warrant subsequent within-case investigation.

Another step in the EvalC3 workflow called Compare Cases, provides another means of identifying specific cases for follow-up within-case investigations.  In this worksheet individual cases can be identified as modal or extreme examples within various categories that may be of interest e.g. True Positives, False Positives, et cetera.  It is also possible to identify for a chosen case what other case is most similar and most different to that case, when all its attributes available in the dataset are considered.  These measurement capacities are backed up by technical advice on the EvalC3 website on the particular types of questions that can be asked in relation to different types of cases selected on the basis of their similarities and differences. Your comments on these suggested strategies would be very welcome.
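The most-similar / most-different comparison is essentially a distance calculation over case attributes. With binary conditions it can be sketched as a Hamming distance (the data here is invented, and EvalC3's actual measures may differ in detail):

```python
cases = {  # hypothetical cases -> five binary condition values
    "A": (1, 0, 1, 1, 0),
    "B": (1, 0, 1, 0, 0),
    "C": (0, 1, 0, 0, 1),
}

def hamming(x, y):
    # Number of attributes on which two cases differ
    return sum(a != b for a, b in zip(x, y))

target = "A"
others = [c for c in cases if c != target]
most_similar = min(others, key=lambda c: hamming(cases[target], cases[c]))
most_different = max(others, key=lambda c: hamming(cases[target], cases[c]))
print(most_similar, most_different)  # B C
```

Comparing a case with its nearest neighbour that had a different outcome is a classic way of generating hypotheses about which conditions matter.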

I should explain...

...why the models found by EvalC3 were different from those found by the QCA analysis. QCA software finds solutions, i.e. predictive models, by reducing all the configurations found in a truth table down to the smallest possible set, using a minimisation algorithm known as the Quine-McCluskey algorithm.

In contrast, EvalC3 provides users with a choice of four different search algorithms, combined with multiple alternative performance measures that can be used to automatically assess the results generated by those search algorithms. All algorithms have their strengths and weaknesses, in terms of the kinds of results they can and cannot find, including the QCA Quine-McCluskey algorithm and the simple machine learning algorithms built into EvalC3. I think the Quine-McCluskey algorithm has particular problems with datasets which have limited diversity, in other words, where the cases only represent a small proportion of all the possible combinations of the conditions documented in the dataset, whereas the simple search algorithms built into EvalC3 don't experience that difficulty. This is my conjecture, not yet rigorously tested.

[In the above study, the cases in the two data sets analysed represented 47% and 68% of all the possible configurations, given the presence of five different conditions]
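The limited-diversity figure in brackets is just the share of possible configurations actually observed. A sketch, with an invented truth table over five binary conditions:

```python
def diversity_ratio(rows):
    # Share of the 2^k possible configurations of k binary conditions
    # that actually appear in the data
    k = len(rows[0])
    return len(set(rows)) / (2 ** k)

# Invented truth-table rows over 5 conditions (one configuration repeats)
rows = [(1, 0, 1, 0, 0), (1, 0, 1, 0, 0), (0, 1, 0, 0, 1), (1, 1, 1, 0, 0)]
print(diversity_ratio(rows))  # 3 distinct configurations out of 32 possible
```

The lower this ratio, the more a minimisation algorithm has to rely on assumptions about unobserved ("logical remainder") configurations.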

While the EvalC3 results described above did differ from the QCA analyses, they were not in outright contradiction. The same has been my experience when I have reanalysed other QCA datasets. EvalC3 will either simply duplicate the QCA findings or produce variations on them, often ones which perform better.


Wednesday, July 29, 2020

Converting a continuous variable into a binary variable i.e. dichotomising


If you Google "dichotomising data" you will find lots of warnings that this is basically a bad idea! Why so? Because if you do so you will lose information. All those fine details of differences between observations will be lost.

But what if you are dealing with something like responses to an attitude survey? Typically these have five-point scales ranging from disagree through neutral to agree, or the like. Quite a few of the fine differences in ratings on such a scale may well be nothing more than "noise", i.e. variations unconnected with the phenomenon you are trying to measure, reflecting differences in respondents' "response styles", or something more random.

Aggregation or "binning" of observations into two classes (higher and lower) can be done in different ways. You could simply find the median value and split the observations at that point. Or, you could look for a "natural" gap in the frequency distribution and make the split there. Or, you may have a prior theoretical reason why it makes sense to split the range of observations at some other specific point.
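The first two of these rules can be sketched as follows (the values are invented):

```python
from statistics import median

values = [1.2, 1.4, 1.5, 1.6, 3.1, 3.3, 3.4]

# Median split: "high" = above the median
med = median(values)
high_by_median = [v > med for v in values]

# Gap split: cut at the biggest jump between adjacent sorted values
s = sorted(values)
gaps = [(s[i + 1] - s[i], i) for i in range(len(s) - 1)]
_, idx = max(gaps)
cut = (s[idx] + s[idx + 1]) / 2  # midpoint of the largest gap
high_by_gap = [v > cut for v in values]

print(med, round(cut, 2))
```

Here the two rules happen to produce similar splits, because there is a clear gap; with messier data the median split and the gap split can disagree substantially.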

I have been trying out a different approach. This involved not just looking at the continuous variable I wanted to dichotomise, but also its relationship with an outcome that will be of interest in subsequent analyses. This could also be a continuous variable or a binary measure.

There are two ways of doing this. The first is a relatively simple manual approach, in which the cut-off point for the outcome variable has already been decided, by one means or another. We then vary the cut-off point in the range of values for the independent variable, to see what effect this has on the numbers of observations of the outcome above and below its cut-off value. For any specific cut-off value for the independent variable, an Excel spreadsheet can be used to calculate the following:
  1. # of True Positives - where the independent variable value was high and so was the outcome variable value
  2. # of False Positives - where the independent variable value was high but the outcome variable value was low
  3. # of False Negatives - where the independent variable value was low but the outcome variable value was high
  4. # of True Negatives - where the independent variable value was low and the outcome variable value was low
When doing this we are in effect treating the cut-off criterion for the independent variable as a predictor of the dependent variable. Or more precisely, as a predictor of the prevalence of observations with values above a specified cut-off point on the dependent variable.

In Excel, I constructed the following:
  • Cells for entering the raw data - the values of each variable for each observation
  • Cells for entering the cut-off points
  • Cells for defining the status of each observation  
  • A Confusion Matrix, to summarise the total number of observations with each of the four possible types described above.
  • A set of 6 widely used performance measures, calculated using the number of observations in each cell of the Confusion Matrix.
    • These performance measures tell me how good the chosen cut-off point is as a predictor of the outcome as specified. At best, all those observations fitting the cut-off criterion would be in the True Positive group and all those not fitting it would be in the True Negative group. In reality, there are also likely to be observations in the False Positive and False Negative groups.
By varying the cut-off points it is possible to find the best possible predictor, i.e. one with very few False Positives and very few False Negatives. This can be done manually when the cut-off point for the outcome variable has already been decided.
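The spreadsheet logic above can be sketched in a few lines of Python. This is a minimal illustration, not the actual Excel file: the data are made up, and the four performance measures shown are a subset of the six the spreadsheet calculates.

```python
# Sketch of the confusion-matrix logic for a chosen pair of cut-offs.
# Data are illustrative only, not taken from the spreadsheet described above.

def confusion_counts(xs, ys, x_cut, y_cut):
    """Count TP, FP, FN, TN, treating x >= x_cut as a prediction of y >= y_cut."""
    tp = sum(1 for x, y in zip(xs, ys) if x >= x_cut and y >= y_cut)
    fp = sum(1 for x, y in zip(xs, ys) if x >= x_cut and y < y_cut)
    fn = sum(1 for x, y in zip(xs, ys) if x < x_cut and y >= y_cut)
    tn = sum(1 for x, y in zip(xs, ys) if x < x_cut and y < y_cut)
    return tp, fp, fn, tn

def performance(tp, fp, fn, tn):
    """A few widely used performance measures derived from the Confusion Matrix."""
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,  # a.k.a. recall
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
        "precision": tp / (tp + fp) if tp + fp else 0.0,
    }

xs = [1, 2, 2, 3, 3, 4, 4, 5, 5, 5]   # independent variable (e.g. 5-point ratings)
ys = [0, 0, 1, 0, 1, 1, 0, 1, 1, 1]   # outcome, already dichotomised (1 = high)

tp, fp, fn, tn = confusion_counts(xs, ys, x_cut=4, y_cut=1)
print(tp, fp, fn, tn)            # the four cells of the Confusion Matrix
print(performance(tp, fp, fn, tn))
```

Trying different values of `x_cut` and re-running reproduces the manual search described above.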

Alternatively, if the cut-off point has not been decided for the outcome variable, a search algorithm can be used to find the best combination of two cut-off points (one for the independent and one for the dependent variable). Within Excel, there is an add-in called Solver, which uses an evolutionary algorithm to do such a search and find the optimal combination.
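Outside Excel, the same two-cut-off search can be done by brute force: with modest data sizes an exhaustive grid over the observed values is fast and guaranteed to find the best pair for whatever measure you optimise. A sketch, using accuracy as the measure and made-up data (the minimum values are excluded as candidates so that the split is never trivial):

```python
# Hedged alternative to Excel's Solver: exhaustive search over both cut-offs.
# Data are illustrative, not from the post.
from itertools import product

def accuracy(xs, ys, x_cut, y_cut):
    """Share of observations correctly classified when x >= x_cut predicts y >= y_cut."""
    hits = sum(1 for x, y in zip(xs, ys) if (x >= x_cut) == (y >= y_cut))
    return hits / len(xs)

xs = [1.2, 2.5, 2.7, 3.1, 3.8, 4.0, 4.4, 4.9]   # independent variable
ys = [10, 12, 15, 14, 22, 25, 24, 30]           # continuous outcome

# Candidate cut-offs: the observed values themselves, excluding each minimum
# so that both classes are always non-empty.
x_cands = sorted(set(xs))[1:]
y_cands = sorted(set(ys))[1:]

best = max(product(x_cands, y_cands),
           key=lambda cuts: accuracy(xs, ys, *cuts))
print(best, accuracy(xs, ys, *best))
```

An evolutionary algorithm like Solver's becomes worthwhile when the grid gets large; for small evaluation datasets the exhaustive version is simpler and exact.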

Postscript 2020 11 05: An Excel file with the dichotomization formula and a built-in example data set is available here  in Google Docs

  
2020 08 13: Also relevant: 

Hofstad, T. (2019). QCA and the Robustness Range of Calibration Thresholds: How Sensitive are Solution Terms to Changing Calibrations? COMPASSS Working Papers, 2019–92. http://www.compasss.org/wpseries/Hofstad2019.pdf  

This paper emphasises the importance of declaring the range of original (pre-dichotomised) values over which the performance of a predictive model remains stable.

Tuesday, April 07, 2020

Rubrics? Yes, but...




This blog posting is a response to Tom Aston's blog posting: Rubrics as a harness for complexity.

I have just reviewed an evaluation of the effectiveness of policy influencing activities of programs funded by HMG as part of the International Carbon Finance Initiative.  In the technical report there are a number of uses of rubrics to explain how various judgements were made.  Here, for example, is one summarising the strength of evidence found during process tracing exercises:
  • Strong support – smoking gun (or doubly decisive, DD) tests passed and no hoop tests (nor DDs) failed.
  • Some support – multiple straw in the wind tests passed and no hoop tests (nor DDs) failed; also, no smoking guns nor DDs passed.
  • Mixed – mixture of smoking gun or DD tests passed but some hoop tests (or DDs) failed – this required the CMO to be revised.
  • Failed – some hoop (or DD) tests failed, no double decisive or smoking gun tests passed – this required the theory to be rejected and the CMO abandoned or significantly revised. 
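One attraction of a rubric like this is that its decision logic is explicit enough to be written down as rules. The sketch below is my reading of the four bullet points, not code from the evaluation itself, and the function name and inputs are my own invention:

```python
# My reading of the process-tracing rubric above as explicit rules.
# Names and inputs are hypothetical, for illustration only.

def evidence_strength(sg_or_dd_passed, hoop_or_dd_failed, straws_passed):
    """Classify strength of evidence from process-tracing test results.

    sg_or_dd_passed   -- any smoking gun or doubly decisive (DD) test passed
    hoop_or_dd_failed -- any hoop test (or DD) failed
    straws_passed     -- number of straw-in-the-wind tests passed
    """
    if sg_or_dd_passed and not hoop_or_dd_failed:
        return "Strong support"
    if not sg_or_dd_passed and not hoop_or_dd_failed and straws_passed > 1:
        return "Some support"
    if sg_or_dd_passed and hoop_or_dd_failed:
        return "Mixed"       # the CMO needs to be revised
    if hoop_or_dd_failed:
        return "Failed"      # CMO abandoned or significantly revised
    return "Indeterminate"   # combinations the rubric does not cover

print(evidence_strength(True, False, 0))
```

Writing the rubric out this way also exposes the combinations it does not cover (here returned as "Indeterminate"), which is itself a useful transparency check.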

Another rubric described in great detail how three different levels of strength of evidence were differentiated (Convincing, Plausible, Tentative). There was no doubt in my mind that these rubrics contributed significantly to the value of the evaluation report, particularly by giving readers confidence in the judgements that were made by the evaluation team.

But… I can't help feeling that the enthusiasm for rubrics is out of proportion with their role within an evaluation. They are a useful measurement device that can make complex judgements more transparent and thus more accountable. Note the emphasis on the ‘more‘… There are often plenty of not-so-transparent judgements present in the explanatory text which is used to annotate each point in a rubric scale. Take, for example, the first line of text in Tom Aston’s first example here, which reads “Excellent: Clear example of exemplary performance or very good practice in this domain: no weakness”.

As noted in Tom’s blog, it has been argued that rubrics have a wider value, i.e. “rubrics are useful when trying to describe and agree what success looks like for tracking changes in complex phenomena”. This is where I would definitely argue “Buyer beware”, because rubrics have serious limitations in respect of this task.

The first problem is that description and valuation are separate cognitive tasks. Events that take place can be described; they can also be given a particular value by observers (e.g. good or bad). This dual process is implied in the above definition of how rubrics are useful. Both of these types of judgements are often present in a rubric’s explanatory text, e.g. “Clear example of exemplary performance or very good practice in this domain: no weakness”.

The second problem is that complex events usually have multiple facets, each of which has a descriptive and value aspect.  This is evident in the use of multiple statements linked by colons in the same example rubric I refer to above.

So for any point in a rubric’s scale the explanatory text has quite a big task on its hands. It has to describe a specific subset of events and give a particular value to each of those. In addition, each adjacent point on the scale has to do the same in a way that suggests there are only small incremental differences between each of these points’ judgements. And being a linear scale, this suggests, or even requires, that there is only one path from the bottom to the top of the scale. Say goodbye to equifinality!

So, what alternatives are there, for describing and agreeing on what success looks like when trying to track changes in complex phenomena?  One solution which I have argued for, intermittently, over a period of years, is the wider use of weighted checklists.  These are described at length here.  

Their design addresses the three problems mentioned above. Firstly, description and valuation are separated out as two distinct judgements. Secondly, the events that are described and valued can be quite numerous, and yet each can be separately judged on these two criteria; there is then a mechanism for combining these judgements into an aggregate scale. Thirdly, there is more than one route from the bottom to the top of this aggregate scale.

“The proof is in the pudding”. One particular weighted checklist, known as the Basic Necessities Survey, was designed to measure and track changes in household-level poverty. Changes in poverty levels must surely qualify as ‘complex phenomena’. Since its development in the 1990s, the Basic Necessities Survey has been widely used in Africa and Asia by international environment/conservation organisations. There is now a bibliography available online describing some of its users and uses. https://www.zotero.org/groups/2440491/basic_necessities_survey/library
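As I understand the BNS approach, each item's weight comes from the share of respondents who judge it a basic necessity (the valuation step), while each household simply reports which items it has (the description step); the two are then combined into one scale. A rough sketch, with invented items and numbers rather than real survey data:

```python
# Minimal sketch of weighted-checklist scoring in the style of the Basic
# Necessities Survey. Items and numbers are illustrative, not real data.

items = ["bed", "radio", "mosquito net", "bicycle"]

# Valuation: share of respondents who said each item is a basic necessity.
weights = {"bed": 0.9, "radio": 0.4, "mosquito net": 0.8, "bicycle": 0.5}

def poverty_score(has_item):
    """Weighted share of necessities a household possesses (0 = poorest end)."""
    total = sum(weights[i] for i in items)
    owned = sum(weights[i] for i in items if has_item.get(i, False))
    return owned / total

# Description: which items one household actually has.
household = {"bed": True, "mosquito net": True, "radio": False, "bicycle": False}
print(round(poverty_score(household), 3))
```

Note that different households can reach the same score through different combinations of items, which is exactly the "more than one route up the scale" property that a linear rubric lacks.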









Friday, February 28, 2020

Temporal networks: Useful static representations of dynamic events

I have just found out about the existence of a field of study called "temporal networks". Here are two papers I came across:

Linhares, C. D. G., Ponciano, J. R., Paiva, J. G. S., Travençolo, B. A. N., & Rocha, L. E. C. (2019). Visualisation of Structure and Processes on Temporal Networks. In P. Holme & J. Saramäki (Eds.), Temporal Network Theory (pp. 83–105). Springer International Publishing. https://doi.org/10.1007/978-3-030-23495-9_5
Li, A., Cornelius, S. P., Liu, Y.-Y., Wang, L., & Barabási, A.-L. (2017). The fundamental advantages of temporal networks. Science, 358(6366), 1042–1046. https://doi.org/10.1126/science.aai7488

Here is an example of a temporal network:
Figure 1


The x-axis represents intervals of time. The y-axis represents six different actors. The curved lines represent particular connections between particular actors at particular moments in time, for example, email messages or phone calls.

In Figure 2 below, we can see a more familiar type of network structure. This is the same network as that shown in Figure 1. The difference is that it is an aggregation of all the interactions over the 24 time periods shown in Figure 1. The numbers in red refer to the number of times that each communication link was active in this whole period.
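The collapsing of Figure 1 into Figure 2 amounts to counting how often each link was active across all time periods. A small sketch, using a made-up contact list rather than the data behind the figures:

```python
# Sketch of aggregating a temporal network (Figure 1 style) into a static
# weighted network (Figure 2 style). Contacts are illustrative, not the
# data behind the figures.
from collections import Counter

# (time_step, actor_a, actor_b): one contact at one moment in time.
contacts = [
    (1, "A", "B"), (2, "B", "C"), (3, "A", "B"),
    (5, "C", "D"), (7, "A", "B"), (8, "B", "C"),
]

# Collapse the time dimension: count how often each (undirected) link was active.
weights = Counter(frozenset((a, b)) for _, a, b in contacts)

for pair, count in sorted(weights.items(), key=lambda kv: -kv[1]):
    print("-".join(sorted(pair)), count)
```

The counts printed here play the role of the red numbers in Figure 2; the sequencing information, as the next paragraph notes, is what this step throws away.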

This diagram has both strengths and weaknesses. Unlike Figure 1, it shows us the overall structure of interactions. On the other hand, it obscures the possible significance of variations in the sequence in which these interactions take place over time. In a social setting involving people talking to each other, the sequencing of when different people talk to each other could make a big difference to the final state of the relationships between the people in the network.

Figure 2
How might the Figure 1 way of representing temporal networks be useful?

The first would be as a means of translating narrative accounts of events into network models of those events. Imagine that the 24 time periods are the duration of time covered by events described in a novel, and events in periods 1 to 5 are described in one particular chapter. In that chapter, the story is all about the interactions between actors 2, 3 and 4. In subsequent chapters, their interactions with other actors are described.
Figure 3
Now, instead of a novel, imagine a narrative describing the expected implementation and effects of a particular development programme. Different stakeholders will be involved at different stages. Their relationships could be "transcribed" into a temporal network, and also then into a static network diagram (as in Figure 2) which would describe the overall set of relationships for the whole programme period.

The second possible use would be to adapt the structure of a temporal network model to convert it into a temporal causal network model, such as that shown in Figure 4 below. The basic structure would remain the same, with actors listed row by row and time periods listed column by column. The differences would be that:

  1. The nodes in the network could be heterogeneous, reflecting different kinds of activities or events undertaken by, or involving, each actor, not homogeneous as in the Figure 1 example above.
  2. The connections between activities/events would be causal, in one direction or in both directions, the latter signifying a two-way exchange of some kind. In Figure 1, causality may be possible and even implied, but it can't simply be assumed.
  3. There could also be causal links between activities within the same row, meaning that an actor's particular activity at T1 influenced another of their activities at T3, for example. This possibility is not available in a Figure 1 type model.
  4. Some "spacer" rows and columns are useful to give the node descriptions more room and to make the connections between them more visible.

Figure 4 is a stylised example. By this I mean I have not detailed the specifics of each event or characterised the nature of the connections between them. In a real-life example this would be necessary. Space limitations on the chart would necessitate very brief titles plus reference numbers or hypertext links.
Figure 4: Stylised example
While this temporal causal network looks something like a Gantt chart, it is different, and better:

  1. Each row is about a specific actor, whereas in a Gantt chart each row is about a specific activity.
  2. Links between activities signal a form of causal influence, whereas in a Gantt chart they signal precedence, which may or may not have causal implications.
  3. Time periods can be more flexibly and abstractly defined, so long as they follow a temporal sequence, whereas in a Gantt chart these are more likely to be defined in specific units like days, weeks or months, or specific calendar dates.


How does a temporal causal network compare to more conventional representations of Theories of Change? Results-chain versions of a Theory of Change do make use of a y-axis to represent time, but are often much less clear about the actors involved in the various events that happen over time. Too often these describe what might be called a sequence of disembodied events, i.e. abstract descriptions of key events. On the other hand, more network-like Theories of Change can be better at identifying the actors involved and the relationships between them. But it is very difficult to also capture the time dimension in a static network diagram. Associated with this problem is the difficulty of then constructing any form of text narrative about the events described in the network.

One possible problem is whether measurable indicators could be developed for each activity that is shown. Another is how longer-term outcomes, happening over a period of time, might be captured. Perhaps the activities associated with their measurement would be what would be shown in a Figure 4 type model.

Postscript: The temporal dimension of network structures is addressed in dynamic network models, such as those captured in Fuzzy Cognitive Maps. With each iteration of a dynamic network model, the states of the nodes/events/actors in the network are updated according to the nature of the links they have with others in the network. This can lead to quite complex patterns of change in the network over time. But one of the assumptions built into such models is that all relationships are re-enacted in each iteration. This is clearly not the case in our social life: some relationships are updated daily, others much less frequently. The kind of structure shown in Figure 1 above seems a more appropriate view. But can such structures be used for simulation purposes, where all nodes would have values that are influenced by their relationships with each other?
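A speculative sketch of how that might work: a dynamic model where each link only "fires" at the time steps when it is actually enacted, rather than at every iteration. Everything here is invented for illustration (states, weights, schedules); it is one possible answer to the question above, not an established method.

```python
# Speculative sketch: node states updated only when a link is scheduled to be
# active, unlike a standard dynamic model where every link fires every step.
# All values are illustrative.

state = {"A": 1.0, "B": 0.0, "C": 0.0}   # starting node states

# Each link: (source, target, weight, time steps at which it is enacted).
links = [
    ("A", "B", 0.5, {1, 3}),   # A influences B only at t=1 and t=3
    ("B", "C", 0.8, {2}),      # B influences C only at t=2
]

for t in range(1, 5):
    new_state = dict(state)
    for src, dst, weight, active_at in links:
        if t in active_at:                     # link fires only when scheduled
            new_state[dst] += weight * state[src]
    state = new_state
    print(t, {k: round(v, 2) for k, v in state.items()})
```

Because B's influence on C fires before A's second influence on B, the final states depend on the schedule, not just the link weights, which is exactly the sequencing information that Figure 1 preserves and Figure 2 discards.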