Thursday, November 25, 2021

Choosing between simpler and more complex versions of a Theory of Change


Background: Over the last few months I have been involved as a member of the Evaluation Task Force convened by the Association of Professional Futurists (futurists being people who explore alternative futures using various foresight and scenario planning methods). The intention is to help strengthen the evaluation capacity of those doing this kind of work.

One part of this work will involve the development of various forms of introductory materials and guidelines documents. These will inevitably include discussion of the use of Theories of Change, and questions about appropriate levels of detail and complexity that they should involve.

In my dialogues with other Task Force members I have recently made the following comments, which may be of wider interest:

As already noted, a ToC can take various forms, from very simple linear versions to very complex network versions. 

I have a hypothesis that may be useful when we are developing guidance on use of ToC by futurists. In fact I have two hypotheses:

H1: A simple linear ToC is more likely to be appropriate when dealing with outcomes that are closest in time to a given foresight activity of interest. Outcomes that are more distant in time, happening long after the foresight activity has finished, would be better represented in a ToC that took a more complex network (i.e. systems map type) form.

Why so? As time passes after a foresight activity, more and more other forces, of various kinds, are likely to come into play and influence the longer-term outcome of interest. As a proportion of all influences, the foresight activity will grow progressively smaller. A type of ToC that takes this widening set of influences into account would seem essential.

H2: This need for a progressively more complex ToC, as the outcome of interest is located further away in time, can be moderated by a second variable: the social distance between those involved in the foresight activity and those involved in the outcome of interest. [Social distance is measured in social network analysis (SNA) terms in units known as "degrees", i.e., the number of person-to-person linkages needed for information to flow between one person and another.] So, if the outcome is a change in the functioning of the same organisation that the foresight exercise participants themselves belong to, this distance will be short, relative to an outcome relating to another organisation altogether, where there may be few if any direct links between the exercise participants and the staff of that organisation.

The implications of these two perspectives could be graphically represented in a scatter plot or two-by-two matrix, e.g.:


On reflection, this view probably needs some more articulation. Social distance will probably not be present in the form of a single pathway through a network of actors, especially given that any foresight activity will typically involve multiple participants, each with their own access to relevant networks. So there may be a third relevant dimension here to think about, which is the diversity of the participants: greater diversity is plausibly associated with a greater range of social (and causal) pathways to the outcome of interest, and thus with the need for more complex representations of the Theory of Change.
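To make the social distance idea in H2 a little more concrete, here is a minimal sketch of measuring it as shortest path length (degrees of separation) between a foresight participant and an actor involved in the outcome of interest, using the networkx library. The actors and ties are invented purely for illustration.

```python
# A minimal sketch of measuring "social distance" as shortest path length
# (degrees of separation) in a social network, using the networkx library.
# The actors and ties below are invented purely for illustration.
import networkx as nx

ties = [
    ("participant_A", "manager_X"),   # ties within the participants' own organisation
    ("manager_X", "director_Y"),
    ("participant_A", "consultant_B"),
    ("consultant_B", "official_Z"),   # a link reaching into another organisation
    ("official_Z", "minister_W"),
]

g = nx.Graph(ties)

# Social distance = number of person-to-person linkages needed for
# information to flow from a foresight participant to an outcome actor.
within_org = nx.shortest_path_length(g, "participant_A", "director_Y")   # -> 2
other_org = nx.shortest_path_length(g, "participant_A", "minister_W")    # -> 3

print(f"Distance to an outcome in the participants' own organisation: {within_org}")
print(f"Distance to an outcome in another organisation: {other_org}")
```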





Monday, November 01, 2021

Exploring counterfactual histories of an intervention


Background

I and others are providing technical advisory support to the evaluation of a large complex multilateral health intervention, one which is still underway. The intervention has multiple parts implemented by different partners and the surrounding context is changing. The intervention design is being adapted as time moves on. Probably not a unique situation.

The proposal

As one of a number of parts of a multi-year evaluation process I might suggest the following:

1. A timeline is developed describing what the partners in the intervention see as key moments in its history, defined as points where decisions have been taken to change, stop, or continue a particular course(s) of action.

2. Take each of those moments as a kind of case study, which the evaluation team then elaborates in detail: (a) the options that were discussed, and others that were not, (b) the rationales for choosing for and against the discussed options at the time, and (c) the current merit of those assessments, as seen in the light of subsequent events. [See more details below.]

The objective? 

To identify (a) how well the intervention has responded to changing circumstances and (b) any lessons that might be relevant to the future of the intervention, or generalisable to other similar interventions.

This seems like a form of (contemporary and micro-level) historical research, investigating actual versus possible causal pathways. It seems different from a Theory of Change (ToC) based evaluation, where the focus is on what was expected to happen and then what did happen. With this proposed historical research into decisions taken, the primary reference point is what did happen, and then what could have happened.

It also seems different from what I understand a process tracing form of inquiry to be, where, I think, the focus is on a particular hypothesised causal pathway, not the consideration of multiple alternative possible pathways, as would be the case within each of the series of decision-making case studies proposed here. There may be a breadth-rather-than-depth of inquiry difference here. [Though I may be over-emphasising the difference; I am just reading Mahoney, 2013 on the use of process tracing in historical research.]

The multiple possible alternatives that could have been chosen are the counterfactuals I am referring to here.

The challenges?

As Henry Ford allegedly said, "History is just one damn thing after another." There are far too many events in most interventions where alternative histories could have taken off in a different direction. For example, at a normally trivial level, someone might have missed their train. So, to be practical but also systematic and transparent, the process of inquiry would need to focus on specific types of events, involving particular kinds of actors, preferably where decisions were made about courses of action, such as Board Meetings.

And in such individual settings how wide should the evaluation focus be? For example, only on where decisions were made to change something, or also where decisions were made to continue doing something? And what about the absence of decisions even being considered, when they might have been expected to be considered? That is, decisions about decisions.

Reading some of the literature about counterfactual history, written by historians, there is clearly a risk of developing historical counterfactuals that stray too far from what is known to have happened, in terms of imagined consequences of consequences, etc. In response, some historians talk about the need to limit inquiries to "constrained counterfactuals" and the use of a "Minimal Rewrite Rule". [I will find out more about these.]

Perhaps another way forward is to talk about counterfactual reasoning, rather than counterfactual history (Schatzberg, 2014). This seems to be more like what the proposed line of inquiry might be all about, i.e. how the alternatives to what was actually decided and happened were considered (or not even considered) by the intervening agency. But even then, the evaluators' assessments of these reasonings would seem to necessarily involve some exploration of the consequences of these decisions, only some of which will have been observable, while others can only be conjectured.

The merits?

When compared to a ToC testing approach, this historical approach does seem to have some merit. One of the problems of a ToC approach, particularly when applied to a complex intervention, is the multiplicity of possible causal pathways relative to the limited time and resources available to an evaluation team. Choices usually need to be made, because not all avenues can be explored (unless some can be excluded by machine learning explorations or other quantitative processes of analysis).

However, on reflection, the contrast with a historical analysis of the reality of what actually happened is not so black and white. In large complex programmes there are typically many people working away in parallel, generating their own local and sometimes intersecting histories. There is not just one history from within which to sample decision-making events. In this context a well-articulated ToC may be a useful map, a means of identifying where to look for those histories in the making.

Where next

I have since found that the evaluation team has been thinking along similar lines to myself, i.e. about the need to document and analyse the history of key decisions made. If so, the focus now should be on elaborating questions that would be practically useful, some of which are touched on above, including:

1. How to identify and sample important decision making points

At least two options here:

1. Identify a specific type of event where it is known that relevant decisions are made, e.g. Board Meetings. This is a top-down, deductive approach. The risk here is that many decisions will be (and have to be) made outside and prior to these events, and simply receive official authorisation at these meetings. Snowball sampling backwards to the original decisions may be possible...

2. Try using the HCS method to partition the total span of time of interest into smaller (and nested) periods of time. Then identify the decisions that generated the differences observed between these periods (differences relating to the intervention strategy). This is a more bottom-up, inductive approach.

2. How to analyse individual decisions

This includes interesting issues such as how much use should be made of prior/borrowed theories about what constitutes good decision making, versus using a more inductive approach that emphasises understanding how the decisions were made within their own particular context. I am more in favour of the latter at present.

Here is a VERY provisional framework/checklist for what could be examined, when looking at each decision making event:

In this context it may also be useful to think about a wider set of relevant ideas, like the role of "path dependency" and "sunk costs".

3. How to aggregate/synthesise/summarise the analysis of multiple individual decision making cases

This is still being thought about, so caveat emptor:

The objectives were to identify:

(a) how well the intervention has responded to changing circumstances. 

Possible summarising device? Rate each decision-making event on the degree to which it was optimal under the circumstances, backed by a rubric explaining the rating values.

Then cross-tabulate these ratings against ratings of the subsequent impact of the decision that was made: an "increased/decreased potential impact" scale, likewise supported by a rubric (i.e. an annotated scale). A rough sketch of this is given after point (b) below.

(b) any lessons that might be relevant to the future of the intervention, or generalisable to other similar interventions.

Text summary of the implications identified from the analysis of each decision-making event, with priority given to the more impactful/consequential decisions?
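As a rough illustration of the summarising device suggested under (a), here is a minimal sketch of cross-tabulating rubric-based optimality ratings against rubric-based impact ratings, using pandas. All event names and ratings are invented.

```python
# A minimal sketch of the summarising device suggested above: rate each
# decision-making event on how optimal it was under the circumstances, and
# cross-tabulate against a rating of its subsequent impact. All ratings and
# event names here are invented for illustration.
import pandas as pd

events = pd.DataFrame({
    "event": ["Board mtg 2019-03", "Strategy review 2019-11",
              "Board mtg 2020-06", "Partner meeting 2021-02"],
    "optimality": ["optimal", "sub-optimal", "optimal", "sub-optimal"],   # rubric-backed rating
    "impact": ["increased potential impact", "decreased potential impact",
               "increased potential impact", "increased potential impact"],  # rubric-backed rating
})

# Cross-tabulation of the two rubric-based ratings
print(pd.crosstab(events["optimality"], events["impact"]))
```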

Lot more thinking yet to be done here... 

Miscellaneous points of note hereafter...

Postscript 1: There must be a wider literature on this type of analysis, where there may be some useful experiences. "Ninety per cent of problems have already been solved in some other field. You just have to find them." McCaffrey, T. (2015), New Scientist.

Postscript 2: I just came across the idea of an "even if..." type counterfactual, as in "Even if I did catch the train, I would still not have got the job". This is where an imagined action, different from what really happened, still leads to the same outcome as the real action.



Sunday, August 22, 2021

Reconciling the need for both horizontal and vertical dimensions in a Theory of Change diagram

 

In their usual table form, Logical Frameworks are strong on their horizontal dimension but weak on their vertical dimension. The horizontal dimension explains what kind of data will be collected and used to measure the changes that are described. This is good for accountability. The vertical dimension explains how events at one level will connect to and cause events at another level. This is good for learning. Unfortunately, LogFrames often simply provide lists of events at each level, with relatively little description of which event will connect to which, especially where multiple and mixed sets of connections might be expected between events. On the other hand, diagrammatic versions of a Theory of Change tend to be much better at explicating the various causal pathways at work, but weak on the information they provide on the horizontal dimension, i.e. on how the various events will be observed and measured. Both of these problems reflect a lack of space to do both things, and the different relative priorities pursued within those constraints.

The Donor Committee for Enterprise Development (DCED) has produced a web-page-based Theory of Change to explain its way of working, which I think points the way to reconciling these conflicting needs. At first glance, here is what you see when you visit this page of their website.

The different causal pathways are quite visible, more so than within a standard LogFrame table format. But another common weakness of diagrammatic versions of Theories of Change is the lack of explanation of what is going on within each of these pathways. The DCED addressed this problem by allowing visitors to click on a link and be taken to another web page, where they get a detailed text description of the available evidence, plus any assumptions, about the causal process(es) linking the events joined by that arrow.

The one weakness in this DCED ToC diagram is the lack of detail about the horizontal dimension: how the various events described in the diagram will be observed/measured, and by whom, when, and where. But this is clearly resolvable by using the same approach as with the links: enable users to click on any event and be taken to a web page where this information is provided for that specific event. As shown below:






Monday, July 19, 2021

Diversity and complexity? Where should we focus our attention?

 This posting has been prompted by Michael Bamberger's two recent blog postings on "Building complexity into development evaluation" on the 3ie website: Evidence Matters: Towards equitable, inclusive and sustainable development.

I started to make some comments underneath each of the two postings but have now decided to try to organise and extend my thoughts here. 

My starting point is an ongoing concern about how unproductive the discussion about complexity has been (especially in relation to evaluation). Like an elephant giving birth to a mouse, has been my chosen metaphor in the past. There is probably some rhetorical overkill here, but it does express some of my frustration with the limited value of the now quite extended discussion.

Michael's blog postings have not allayed these concerns. My concerns start with the idea of measuring complexity: both how you do it and how measuring it would in fact be useful. Measuring complexity is Michael's proposed first step in a "practical five-step approach to address complexity-responsive evaluation in a systematic way". A lot of ink has already been spilled on the topic of measuring complexity. A useful summary can be found in Melanie Mitchell's widely read Complexity: A Guided Tour (2009: 94-114) and in Lloyd (1998), who counted at least 40 different measures. But I can't see any references back to any of these methods, suggesting that not much is being learned from past efforts, which is a pity.

Along with the challenge of how to do it is the question of why you would want to do it: how might it be useful? The second blog posting explains that "In addition to providing stakeholders with an understanding of what complexity means for their program, the checklist also helps decide whether the program is sufficiently complex to merit the additional investment of time and resources required to conduct a complexity-focused evaluation".

The second of these outcomes might be the more observable consequence, so my first question here is: where is the cut-off point, in a checklist-derived score, that would at least inform such a decision, and how is that cut-off point justified? The checklist has 25 attribute questions spread over four dimensions, but this has not yet been made clear.

My next question is how the results of this measurement exercise inform the next of the five steps: "Breaking the project into evaluable components and identifying the units of analysis for the component evaluation". So far, I have not found any answers to this question either. PS 2021 07 29: Michael did reply to a comment of mine raising the same issue, and suggested that high versus low scores on rated complexity might be one way.

Another concern, which I have already written about in my comment on the blog postings, is that complexity seems to be very much "in the eye of the beholder", i.e. depending on who you are and what you are looking for. My partner sees much more complexity in the design and behaviour of moths and butterflies than I do. A friend of mine sees much more complexity in the performance of classical music than I do. Such observations prompt me to think that perhaps we should not put too much effort into trying to objectively measure complexity. Rather, perhaps we should take a more ethnographic perspective on complexity, i.e. we should pay attention to where people are seeing complexity and where they are not, and to the implications thereof.

If we accept this suggestion, the challenge of identifying complexity is still with us, but in a different form. So I have another suggestion, which is to pay much more attention to diversity, as an important concept related to complexity. As Scott Page has well described, there is a close and complicated relationship between diversity and complexity. Nevertheless, there are some practically useful points to note about the concept of diversity.

Firstly, the presence of diversity is indicative of the absence of a common constraint, and of the presence of many different causal influences. So it can be treated as a proxy, indicating the presence of complex processes.

Secondly, there is an extensive, more internally consistent, and practically useful set of ways in which diversity can be measured. These mainly have their origins in studies of biodiversity but have a much wider applicability. Diversity can also be measured in other spheres, in human relationships (using social network analysis tools) and in how people see the world (using forms of ethnographic enquiry known as pile or card sorting). (A minimal sketch of two such measures is given after the points below.)

Thirdly, diversity has some potentially global relevance as a value and as an objective. Diversity of behaviour can be indicative of choice and empowerment.

Fourthly, diversity can also be seen as an important independent variable, enabling adaptation and creativity.
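As an illustration of the second point above, here is a minimal sketch of two widely used diversity measures with origins in biodiversity studies, the Shannon and Simpson indices, applied to an invented distribution of categories.

```python
# A minimal sketch of two widely used diversity measures originating in
# biodiversity studies (Shannon and Simpson indices), applied here to an
# invented distribution of, say, types of livelihood activity in a population.
from collections import Counter
import math

observations = ["farming"] * 40 + ["trading"] * 25 + ["fishing"] * 20 + ["wage_labour"] * 15
counts = Counter(observations)
n = sum(counts.values())
proportions = [c / n for c in counts.values()]

shannon = -sum(p * math.log(p) for p in proportions)   # higher = more diverse
simpson = 1 - sum(p * p for p in proportions)          # probability two random cases differ

print(f"Shannon index: {shannon:.3f}")
print(f"Simpson index: {simpson:.3f}")
```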

All this is not to say that diversity cannot also be problematic. Some forms of diversity in the present could severely limit the extent of diversity in the future: for example, if within a population there was a wide range of different types of guns held by households and many different views on how and when they could legitimately be used. At a more mundane level, within organisations different kinds of tasks may benefit from different levels of diversity within the groups addressing those tasks. So diversity presents a useful and important problematic in a way that the concept of complexity does not. What forms of diversity do we want to see, and see sustained over time, and how can they be enabled? Where do we want choice, and where should we accept restriction?

Arguing for more attention to diversity, rather than complexity, does not mean there also needs to be a whole new school of evaluation developed around this idea (Get Brand X Evaluation, just hot off the press! Uhh... No). It is consistent with a number of ideas already found useful, including the ideas of equifinality (an outcome can arise from multiple different causes) and multifinality (a cause can have multiple different outcomes), and the idea of multiple conjunctural causation. It is also compatible with a social constructionist and interpretive perspective on social reality.




Monday, June 14, 2021

Paired case comparisons as an alternative to a configurational analysis (QCA or otherwise)

[Take care, this is still very much a working draft!] Criticisms and comments welcome though

The challenge

The other day I was asked for some advice on how to implement a QCA type of analysis within an evaluation that was already fairly circumscribed in its design, circumscribed both by the commissioner and by the team proposing to carry out the evaluation. The commissioner had already indicated that they wanted a case study orientated approach and had even identified the maximum number of case studies that they wanted to see (ten). While the evaluation team could see the potential use of a QCA type analysis, they were already committed to undertaking a process type evaluation, and did not want a QCA type analysis to dominate their approach. In addition, it appeared that there was already a quite developed conceptual framework that included many different factors which might be contributory causes to the outcomes of interest.

As is often the case, there seemed to be a shortage of cases and an excess of potentially explanatory variables. In addition, there were doubts within the evaluation team as to whether a thorough QCA analysis would be possible or justifiable given the available resources and priorities.

Paired case comparisons as the alternative

My first suggestion to the evaluation team was to recognise that there is some middle ground between an across-case analysis involving medium to large numbers of cases, and a within-case analysis. As described by Rihoux and Ragin (2009), a QCA analysis will use both, going back and forth, using one to inform the other, over a number of iterations. The middle ground between these two options is case comparisons, particularly comparisons of pairs of cases. Although in the situation described above there will be a maximum of 10 cases that can be explored, the number of pairs of these cases that can be compared is still quite big (45). With these sorts of numbers some strategy is necessary for making choices about the types of pairs of cases that will be compared. Fortunately there is already a large literature on case selection. My favourite summary is the one by Gerring, J., & Cojocaru, L. (2015), Case-Selection: A Diversity of Methods and Criteria.

My suggested approach was to use what is known as the Confusion Matrix as the basis for structuring the choice of cases to be compared. A Confusion Matrix is a simple truth table, showing a combination of two sets of possibilities (rows and columns), and the incidence of those possibilities (cell values). For example, as follows:


Inside the Confusion Matrix are four types of cases: 
  1. True Positives, where there are cases with attributes that fit my theory and where the expected outcome is present
  2. False Positives, where there are cases with attributes that fit my theory but where the expected outcome is absent
  3. False Negatives, where there are cases which do not have attributes that fit my theory but where nevertheless the outcome is present
  4. True Negatives, where there are cases which do not have attributes that fit my theory and where the outcome is absent as expected
Both QCA and supervised machine learning approaches are good at identifying individual (or packages of) attributes which are good predictors of when outcomes are present or when they are absent, in other words where there are large numbers of True Positive and True Negative cases, and at identifying the incidence of exceptions: the False Positives and False Negatives. But this type of cross-case-led analysis does not seem to be available as an option to the evaluation team I have mentioned above.
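As a minimal sketch of how cases might be sorted into the four cells of the Confusion Matrix, given a binary judgement about whether each case fits the theory and whether the outcome is present: the ten cases and their codings below are invented.

```python
# A minimal sketch of sorting cases into the four confusion-matrix cells,
# given a binary judgement of whether each case fits the theory and whether
# the outcome is present. The ten cases and their codings are invented.
cases = {
    # case_id: (fits_theory, outcome_present)
    "C1": (True, True),  "C2": (True, False),  "C3": (False, True),  "C4": (False, False),
    "C5": (True, True),  "C6": (False, True),  "C7": (False, False), "C8": (True, False),
    "C9": (True, True),  "C10": (False, False),
}

cells = {"True Positive": [], "False Positive": [], "False Negative": [], "True Negative": []}
for case_id, (fits, outcome) in cases.items():
    if fits and outcome:
        cells["True Positive"].append(case_id)
    elif fits and not outcome:
        cells["False Positive"].append(case_id)
    elif not fits and outcome:
        cells["False Negative"].append(case_id)
    else:
        cells["True Negative"].append(case_id)

for cell, members in cells.items():
    print(cell, members)
```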

1. Starting with True Positives

So my suggestion has been to look at the 10 cases at hand, and start by focusing on those cases where the outcome is present (first column). Focus on the case that is most similar to others with the outcome present, because findings about this case may be more likely to apply to the others (see below on measuring similarity). When examining that case, identify one or more attributes which provide the most likely explanation for the outcome being present. Note here that this initial theory is coming from a single within-case analysis, not a cross-case analysis. The evaluation team will now have a single case in the category of True Positive.

2. Comparing False Positives and True Positives

The next step in the analysis is to identify cases which can be provisionally described as False Positives. Start by finding a case which has the outcome absent. Does it have the same theory-relevant attributes as the True Positive? If so, retain it as a False Positive. Otherwise, move it to the True Negative category. Repeat this move for all remaining cases with the outcome absent. From among all those qualifying as False Positives, find one which is otherwise as similar as possible in all its other attributes to the True Positive case. This type of analysis choice is called MSDO, standing for "most similar design, different outcome" (see the de Meur reference below). Also see below on how to measure this form of similarity.

The aim here is to find how the causal mechanisms at work differ. One way to explore this question is to look for an additional attribute that is present in the True Positive case but absent in the False Positive case, despite those cases otherwise being most similar. Or, an attribute that is absent in the True Positive but present in the False Positive case. In the former case the missing attribute could be seen as a kind of enabling factor, whereas in the latter case it could be seen as more like a blocking factor. If neither can be found by comparison of the coded attributes of the cases, then a more intensive examination of raw data on the cases might still identify them, and lead to an updating/elaboration of the theory behind the True Positive case. Alternately, that examination might suggest that measurement error is the problem, and that the False Positive case needs to be reclassified as a True Positive.

3. Comparing False Negatives and True Positives

The third step in the analysis is to identify at least one relevant case which can be described as a False Negative. This False Negative case should be one that is as different as possible in all its attributes from the True Positive case. This type of analysis choice is called MDSO, standing for "most different design, same outcome".

The aim here should be to try to identify whether the same or a different causal mechanism is at work, when compared to that seen in the True Positive case. One way to explore this question is to look for one or more attributes that both the True Positive and False Negative cases have in common, despite otherwise being "most different". If such attributes are found, and if they are associated with the causal theory in the True Positive case, then the False Negative case can be reclassified as a True Positive. The theory describing the now two True Positive cases can be seen as provisionally "necessary" for the outcome, until another False Negative case is found and examined in a similar fashion. If the causal mechanism seems to be different, then the case remains a False Negative.

Both the second and third step comparisons described above will help: (a) elaborate the details, and (b) establish the limits of the scope of the theory identified in step one. This suggested process makes use of the Confusion Matrix as a kind of very simple chess board, where pieces (aka cases) are introduced onto the board one at a time, and then sometimes moved to other adjacent positions (depending on their relation to other pieces on the board), or the theory behind their chosen location is updated.

If there are only ten cases available to study, and these have an even distribution of outcomes present and absent, then this three-step process of analysis could be reiterated five times, i.e. once for each case where the outcome was present, thus involving up to 10 case comparisons out of the 45 possible.

Measuring similarity

The above process depends on the ability to make systematic and transparent judgements about similarity. One way of doing this, which I have previously built into an Excel app called EvalC3, is to start by describing each case with a string of binary coded attributes of the same kind as used in QCA and in some forms of supervised machine learning. An example set of workings can be seen in this Excel sheet, showing an imagined data set of 10 cases with 10 different attributes, and then the calculation and use of Hamming distance as the similarity measure to choose cases for the kinds of comparisons described above. That list of attributes, and the Hamming distance measure, is likely to need updating as the investigation of False Positives and False Negatives proceeds.
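Here is a minimal sketch of that similarity calculation, assuming cases coded as binary attribute strings, with Hamming distance used to pick the most similar (MSDO-style) and most different (MDSO-style) comparison cases. The attribute codings are invented; this is not the EvalC3 implementation itself.

```python
# A minimal sketch of the similarity measure described above: cases coded as
# binary attribute strings, with Hamming distance used to find the most
# similar (MSDO-style) and most different (MDSO-style) comparison cases.
# The attribute codings are invented; this is not the EvalC3 implementation.
cases = {
    "TP_case": [1, 1, 0, 1, 0, 1, 1, 0, 0, 1],   # the True Positive reference case
    "case_B":  [1, 1, 0, 1, 0, 1, 0, 0, 0, 1],   # e.g. a candidate False Positive
    "case_C":  [0, 0, 1, 0, 1, 0, 0, 1, 1, 0],   # e.g. a candidate False Negative
    "case_D":  [1, 0, 0, 1, 0, 1, 1, 0, 1, 1],
}

def hamming(a, b):
    """Number of attribute positions on which two cases differ."""
    return sum(x != y for x, y in zip(a, b))

reference = cases["TP_case"]
distances = {cid: hamming(reference, attrs) for cid, attrs in cases.items() if cid != "TP_case"}

most_similar = min(distances, key=distances.get)     # candidate for an MSDO-style comparison
most_different = max(distances, key=distances.get)   # candidate for an MDSO-style comparison
print(distances, most_similar, most_different)
```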

Incidentally, the more attributes that have been coded per case, the more discriminating this kind of approach can become. This is in contrast to cross-case analysis, where an increase in the number of attributes per case is usually problematic.

Related sources

For some of my earlier thoughts on case comparative analysis see here. These were developed for use within the context of a cross-case analysis process. But the argument above is about how to proceed when the starting point is a within-case analysis.

See also:
  • Nielsen, R. A. (2014). Case Selection via Matching
  • de Meur, G., Bursens, P., & Gottcheiner, A. (2006). MSDO/MDSO Revisited for Public Policy Analysis. In B. Rihoux & H. Grimm (Eds.), Innovative Comparative Methods for Policy Analysis (pp. 67–94). Springer US. 
  • de Meur, G., & Gottcheiner, A. (2012). The Logic and Assumptions of MDSO–MSDO Designs. In The SAGE Handbook of Case-Based Methods (pp. 208–221). 
  • Rihoux, B., & Ragin, C. C. (Eds.). (2009). Configurational Comparative Methods: Qualitative Comparative Analysis (QCA) and Related Techniques. Sage. Pages 28-32 for a description of "MSDO/MDSO: A systematic  procedure for matching cases and conditions". 
  • Goertz, G. (2017). Multimethod research, causal mechanisms, and case studies: An integrated approach. Princeton University Press.

Monday, May 24, 2021

The potential use of Scenario Planning methods to help articulate a Theory of Change


Over the past few months I have been engaged in discussions with other members of the Association of Professional Futurists (APF) Evaluation Task Force about how activities and outcomes in the field of foresight/alternative futures/scenario planning can usefully be evaluated.

Just recently the subject of Theories of Change has come up, and it struck me that there are at least three ways of looking at Theories of Change in this context:

The first perspective: A particular scenario (i.e. an elaborated view of the future) can contain within it a particular theory of change. One view of the future may imply that technological change will be the main driver of what happens. Another might emphasise the major long-term causal influence of demographic change.

The second perspective: Those organising a scenario planning exercise are also likely to have, either explicitly or implicitly or a mixture of both, a Theory of Change about how their exercise is expected to influence the participants, and about the influence those participants will have on others.

The third perspective looks in the opposite direction and raises the possibility that in other settings a Theory of Change may contain a particular type of future scenario. I'm thinking here particularly of Theories of Change as used by organisations planning economic and/or social interventions in developed and developing economies. This territory has been explored recently in a paper by Derbyshire (2019), titled "Use of scenario planning as a theory-driven evaluation tool" (Futures & Foresight Science, 1(1), 1–13). In that paper he puts forward a good argument for the use of scenario planning methods as a way of developing improved Theories of Change, improved in a number of ways: firstly, a much more detailed articulation of the causal processes involved; secondly, more adequate attention to risks and unintended consequences; thirdly, more adequate involvement of stakeholders in these two processes.

Both the task force discussions and my revisiting of the paper by Derbyshire have prompted me to think about the potential use of a ParEvo exercise as a means of articulating the contents of a Theory of Change for a development intervention. And to start to reach out to people who might be interested in testing such possibilities. The following possibilities come to mind:

1.  A ParEvo exercise could be set up to explore what happens when X project is set up in Y circumstances with Z resources and expectations.  A description of this initial setting would form the seed paragraph(s) of the ParEvo exercise. The subsequent iterations would describe the various possible developments that took place over a series of calendar periods, reflecting the expected lifespan of the intervention, and perhaps a limited period thereafter. The participants would be, or act in the role of, different stakeholders in the intervention. Commentators of the emerging storylines could be independent parties with different forms of expertise relevant to the intervention and its context. 

2.  As with all previous ParEvo exercises to date, after the final iteration there would be an evaluation stage, completed by at least the participants and the commentators, but possibly also by others in observer roles.  You can see a copy of a recent evaluation survey form here, to see the types of evaluative judgements that would be sought from those involved and observing.

3.  There seem to be at least two possible ways of using the storylines that have been generated to inform the design of a Theory of Change. One is to take whole storylines as units of analysis. For example, a storyline evaluated as both most likely and most desirable, by more participants than any other storyline, would seem an immediately useful source of detailed information about a causal pathway that should go into a Theory of Change. Other storylines identified as most likely but least desirable would warrant attention as risks that also need to be built into a Theory of Change, along with any potential means of preventing and/or mitigating those risks. Other storylines identified as least likely but most desirable would warrant attention as opportunities, also to be built into a Theory of Change, along with means of enabling and exploiting those opportunities.

4.  The second possible approach would give less respect to the existing branch structure, and focus more on the contents of individual contributions, i.e. paragraphs in the storylines. Individual contributions could be sorted into categories familiar to those developing Theories of Change: activities, outputs, outcomes, and impacts. These could then be recombined into one or more causal pathways that the participants thought were both possible and desirable: in effect, a kind of linear jigsaw puzzle. If the four categories of event types were seen as too rigid a schema (a reasonable complaint!), but still an unfortunate necessity, they could be introduced after the recombination process rather than before. Either way, it would probably be useful to include another evaluation stage, making a comparative evaluation of the different combinations of contributions that had been created, using the same metrics as are already being used with existing ParEvo exercises.


       More ideas will follow..


     The beginnings of a bibliography...

Derbyshire, J. (2019). Use of scenario planning as a theory-driven evaluation tool. FUTURES & FORESIGHT SCIENCE, 1(1), 1–13. https://doi.org/10.1002/ffo2.1
Ganguli, S. (2017). Using Scenario Planning to Surface Invisible Risks (SSIR). Stanford Social Innovation Review. https://ssir.org/articles/entry/using_scenario_planning_to_surface_invisible_risks














Sunday, March 21, 2021

Mapping the "structure of cooperation": Adding the time dimension and thinking about further analyses

 

In October 2020 I wrote the first blog post of the same name, based on some experiences with analysing the results of a ParEvo exercise. (ParEvo is a web-assisted participatory scenario planning process.)

The focus of that blog posting was a scatter plot of the kind shown below. 

Figure 1: Blue nodes = ParEvo exercise participants. Indegree and Outdegree explained below. Green lines = average indegree and average outdegree

The two axes describe two very basic aspects of network structures, including human social networks. Indegree, in the above example, is the number of other participants who built on that participant's contributions. Outdegree is the number of other participants' contributions that the participant built on. Combining these two measures we can generate (in classic consultants' 2 x 2 matrix style!) four broad categories of behavior, as labelled above. Behaviors, not types of people, because in the above instance we have no idea how generalisable the participants' behaviors are across different contexts.

There is another way of labelling two of the quarters of the scatter plot, using a distinction widely used in evolutionary theory and the study of organisational behavior (March, 1991; Wilden et al., 2019). Bridging behavior can be seen as a form of "exploitation" behavior, i.e., it involves making use of others' prior contributions, and in turn having one's contributions built on by others. Isolating behavior can be seen as a form of "exploration" behavior, i.e., building storylines with minimal help from other participants. General opinion suggests that there is no ideal balance of these two approaches; rather, it is thought to be context dependent. In stable environments exploitation is thought to be more relevant, whereas in unstable environments exploration is seen as more relevant.
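Here is a minimal sketch of how this classification can be automated from a "who built on whom" edge list, using networkx and average indegree/outdegree as the cut-off values (as done with the citation data below). The links are invented, and the quadrant labels assume that "leading" means above-average indegree only and "following" above-average outdegree only, which should be checked against the figure above.

```python
# A minimal sketch of classifying nodes into the four behaviour categories,
# using average indegree and outdegree as the cut-off values. The links are
# invented; the "leading"/"following" quadrant labels are an assumption.
import networkx as nx

# Directed edge (A, B) = participant A built on a contribution by participant B
builds_on = [("P1", "P2"), ("P1", "P3"), ("P2", "P3"), ("P3", "P2"),
             ("P4", "P3"), ("P2", "P1"), ("P5", "P2")]
g = nx.DiGraph(builds_on)

indeg = dict(g.in_degree())     # how often others built on this participant
outdeg = dict(g.out_degree())   # how often this participant built on others
mean_in = sum(indeg.values()) / len(indeg)
mean_out = sum(outdeg.values()) / len(outdeg)

for node in g.nodes:
    high_in, high_out = indeg[node] > mean_in, outdeg[node] > mean_out
    if high_in and high_out:
        label = "bridging / exploitation"
    elif not high_in and not high_out:
        label = "isolating / exploration"
    elif high_in:
        label = "leading"
    else:
        label = "following"
    print(node, indeg[node], outdeg[node], label)
```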

What interests me is the possibility of applying this updated analytical framework to other contexts, in particular to (a) citation networks and (b) systems mapping exercises. I will explore citation networks first. Here is an example of a citation network extracted from a public online bibliographic database covering the field of computer science. Any research funding programme will be able to generate such data, both from funding applications and from the research publications subsequently generated.

Figure 2: A network of published papers, linked by cited references


Looking at the indegree and outdegree attributes of all the documents within this network, the average indegree and outdegree was 3.9. When this was used as the cutoff value for identifying the four types of cooperation behavior, their distribution was as follows:

  • Isolating / exploration = 59% of publications
  • Leading = 17%
  • Following = 15%
  • Bridging / exploitation = 8%
Their location within the Figure 2 network diagram is shown below in this set of filtered views.

Figure 3: Top view = all four types, Yellow view = Bridging/Exploitation, Blue = Following, Red = Leading, Green = Isolating/Exploration

It makes some sense to find the bridging/exploitation type papers in the center of the network, and the isolating/exploration type papers more scattered and especially out in the disconnected peripheries. 

It would be interesting to see whether the apparently high emphasis on exploration found in this data set would be found in other research areas. 

The examination of citation networks suggests a third possible dimension to the cooperation structure scatter plot: time, represented in the above example as year of publication. Not surprisingly, the oldest papers have the higher indegrees and the newest papers the lower. Older papers (by definition, within an age-bounded set of papers) have lower outdegrees compared to newer papers. But what is interesting here is the potential occurrence of outliers, of two types: "rising stars" and "laggards". That is, new papers with higher than expected indegree ("rising stars") and old papers with lower than expected indegree ("laggards", or a better name??), as seen in the imagined examples (a) and (b) below.
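One simple way of flagging candidate "rising stars" and "laggards" would be to compare each paper's indegree with the average for its publication year, as in this minimal sketch; the papers, years, indegree values, and the threshold of 3 are all invented for illustration.

```python
# A minimal sketch of flagging possible "rising stars" and "laggards":
# papers whose indegree is well above or below the average for their
# publication year. The papers, years, indegree values, and the threshold
# are invented.
papers = [
    {"id": "p1", "year": 2015, "indegree": 12},
    {"id": "p2", "year": 2015, "indegree": 3},
    {"id": "p3", "year": 2019, "indegree": 9},   # newer but already well cited
    {"id": "p4", "year": 2019, "indegree": 1},
    {"id": "p5", "year": 2020, "indegree": 0},
]

# Average indegree per publication year
by_year = {}
for p in papers:
    by_year.setdefault(p["year"], []).append(p["indegree"])
year_mean = {y: sum(v) / len(v) for y, v in by_year.items()}

THRESHOLD = 3  # arbitrary cut-off for illustration only
for p in papers:
    diff = p["indegree"] - year_mean[p["year"]]
    if diff > THRESHOLD:
        print(p["id"], "possible rising star")
    elif diff < -THRESHOLD:
        print(p["id"], "possible laggard")
```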

Another implication of considering the time dimension is the possibility of tracking the pathways of individual authors over time, across the scatter plot space. Their strategies may change over time. "If we take the scientist .. it is reasonable to assume that his/her optimal strategy as a graduate student should differ considerably from his/her optimal strategy once he/she received tenure" (Berger-Tal et al., 2014). They might start by exploring, then following, then bridging, then leading.

Figure 4: Red line = Imagined career path of one publication author. A and B = "Rising Star" and "Laggard" authors


There seem to be two types of opportunities present here for further analyses:
  1. Macro-level analysis of differences in the structure of cooperation across different fields of research. Are there significant differences in the scatter plot distribution of behaviors? If so, to what extent are these differences associated with different types of outcomes across those fields? And if so, is there a plausible causal relationship that could be explored and even tested?
  2. Micro-level analysis of differences in the behavior of individual researchers within a given field. Do individuals tend to stick to one type of cooperation behavior (as categorised above), or is their behavior more variable over time? If the latter, is there any relatively common trajectory? What are the implications of these micro-level behaviors for the balance of exploration and exploitation taking place in a particular field?






Thursday, January 28, 2021

Connecting Scenario Planning and Theories of Change


This blog posting was prompted by Tom Aston's recent comment at the end of an article about theories of change and their difficulties. There he said: "I do think that there are opportunities to combine Theories Of Change with scenario planning. In particular, context monitoring and assumption monitoring are intimately connected. So, there's an area for further exploration".

Scenario planning, in its various forms, typically generates multiple narratives about what might happen in the future. A Theory of Change does something similar, but in a different way. It is usually in a diagrammatic rather than narrative form. Often it presents just one particular view of how change might happen, i.e., a particular causal pathway or package thereof. But in more complex network representations Theories of Change do implicitly present multiple views of the future, inasmuch as there are multiple causal pathways that can work through these networks.

ParEvo is a participatory approach to scenario planning which I have developed, and which has some relevance to the discussion of the relationship between scenario planning and Theories of Change. ParEvo is different from many scenario planning methods in that it typically generates a larger number of alternative narratives about the future, and these narratives precede rather than follow a more abstract analysis of the causal processes that might be at work generating them. My notion is that this narrative-first approach makes fewer cognitive demands on the participants, and is an easier activity to get participants engaged in from the beginning. Another point worth noting about the narratives is that they are collectively constructed, by different self-identified combinations of (anonymised) participants.

At the end of a ParEvo exercise participants are asked to rate all the surviving storylines in terms of their likelihood of happening in real life and their desirability. These ratings can then be displayed in a scatterplot, of the kind shown in the two examples below. The numbered points in the scatterplot are IDs for specific storylines generated in the same ParEvo exercise. Each of the two scatterplots represents a different ParEvo exercise.

 



The location of particular storylines in a scatterplot has consequences. I would argue that storylines which are in the likely but undesirable quadrant of the scatterplot deserve the most immediate attention. They constitute risks which, if at all possible, need to be forfended, or at least responded to appropriately when they do take place. The storylines in the unlikely but desirable quadrant probably justify the next lot of attention. This is the territory of opportunity. The focus here would be on identifying ways of enabling aspects of those developments to take place.

Then attention could move to the likely and desirable quadrant.  Here attention could be given to the relationship between what is anticipated in the storylines and any pre-existing Theory Of Change.  The narratives in this quadrant may suggest necessary revisions to the Theory Of Change.  Or, the Theory of Change may highlight what is missing or misconceived in the narratives. The early reflections on the risk and opportunity quadrants might also have implications for revisions to the Theory Of Change.

The fourth quadrant contains those storylines which are seen as unlikely and undesirable. Perhaps the appropriate response here is simply to periodically check and update the judgements about likelihood and desirability.
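A minimal sketch of how storylines might be sorted into these four quadrants from their mean likelihood and desirability ratings, and ordered by the attention priority argued for above. The 1-5 rating scale, its midpoint, and the ratings themselves are all assumptions made for illustration.

```python
# A minimal sketch of sorting storylines into the four quadrants from mean
# likelihood and desirability ratings (a 1-5 scale is assumed here), and
# ordering them by the attention priority argued for above. All ratings are invented.
storylines = {
    "S1": {"likelihood": 4.2, "desirability": 1.8},   # likely but undesirable -> risk
    "S2": {"likelihood": 1.9, "desirability": 4.5},   # unlikely but desirable -> opportunity
    "S3": {"likelihood": 4.0, "desirability": 4.1},   # likely and desirable
    "S4": {"likelihood": 1.5, "desirability": 2.0},   # unlikely and undesirable
}

MIDPOINT = 3.0  # assumed midpoint of the rating scale

def quadrant(ratings):
    likely = ratings["likelihood"] >= MIDPOINT
    desirable = ratings["desirability"] >= MIDPOINT
    if likely and not desirable:
        return 1, "risk: likely but undesirable"
    if not likely and desirable:
        return 2, "opportunity: unlikely but desirable"
    if likely and desirable:
        return 3, "likely and desirable: check against the ToC"
    return 4, "unlikely and undesirable: periodic review"

for sid, ratings in sorted(storylines.items(), key=lambda kv: quadrant(kv[1])[0]):
    print(sid, quadrant(ratings)[1])
```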

These four views can be likened to the different views seen from within a car. There is the front view, which is concerned with likely and desirable events, our expected and intended direction of change. Then there are two peripheral views, to the right and left, which are concerned with risks and opportunities, present in the desirable but unlikely, and undesirable but likely, quadrants. Then there is the rear view, out the back, looking at undesirable and unlikely events.

In this explanation I have talked about storylines in different quadrants, but in the actual scatterplots developed so far the picture is a bit more complex. Some storylines are way out in the corners of the scatterplot and clearly need attention, but others are more muted and mixed in their positions, so prioritising which of these to give attention to first versus later could be a challenge.

There is also a less visible third dimension to this scatterplot. Some of the participants' judgements about likelihood and desirability were not unanimous. These are the red dots in the scatterplots above. In these instances some resolution of differences of opinion about the storylines would need to be the first priority. However, it is likely that some of these differences will not be resolvable, so these particular storylines will fall into the category of "Knightian uncertainties", where probabilities are simply unknown. These types of developments can't be planned for in the same way as the others, where some judgements about likelihood could be made. This is the territory where bet hedging strategies are appropriate, a strategy seen both in evolutionary biology and in human affairs. Bet hedging is a response which will be functional in most situations but optimal in none; for example, the accumulation of capital reserves in a company, which provides insurance against unexpected shocks, but at the cost of the efficient use of capital.

There are some other opportunities for connecting thinking about Theories of Change and the multiple alternative futures that can be identified through a ParEvo process. These relate to systems-type modelling that can be done by extracting keywords from the narratives and mapping their co-occurrence in the paragraphs that make up those narratives, using social network analysis visualisation software. I will describe these in more detail in the near future, hopefully.
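As a rough sketch of what that keyword co-occurrence mapping might look like in practice: extract a (here invented) list of keywords from the storyline paragraphs, link keywords that appear in the same paragraph, and export the resulting network for visualisation. The keywords and paragraph texts are invented, and networkx stands in for whatever SNA software is actually used.

```python
# A minimal sketch of the keyword co-occurrence mapping mentioned above:
# link keywords that appear in the same storyline paragraph, producing a
# network that can be exported to social network analysis / visualisation
# software. Keyword list and paragraph texts are invented for illustration.
from itertools import combinations
import networkx as nx

keywords = {"drought", "migration", "conflict", "investment", "technology"}

paragraphs = [
    "A prolonged drought triggers migration towards the coastal cities.",
    "Migration pressures feed into local conflict over land and water.",
    "New investment in water technology eases the effects of drought.",
]

g = nx.Graph()
for text in paragraphs:
    present = {k for k in keywords if k in text.lower()}
    for a, b in combinations(sorted(present), 2):
        # increment the weight of the co-occurrence tie between two keywords
        w = g.get_edge_data(a, b, default={"weight": 0})["weight"]
        g.add_edge(a, b, weight=w + 1)

print(g.edges(data=True))
# nx.write_gexf(g, "cooccurrence.gexf")  # e.g. for opening in visualisation software
```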