Thursday, November 25, 2021

Choosing between simpler and more complex versions of a Theory of Change


Background: Over the last few months I have been involved as a member of the Evaluation Task Force convened by the Association of Professional Futurists. Futurists are people who explore alternative futures using various foresight and scenario planning methods. The intention is to help strengthen the evaluation capacity of those doing this kind of work.

One part of this work will involve the development of various forms of introductory materials and guidelines documents. These will inevitably include discussion of the use of Theories of Change, and questions about appropriate levels of detail and complexity that they should involve.

In my dialogues with other Task Force members I have recently made the following comments, which may be of wider interest:

As already noted, a ToC can take various forms, from very simple linear versions to very complex network versions. 

I have a hypothesis that may be useful when we are developing guidance on use of ToC by futurists. In fact I have two hypotheses:

H1: A simple linear ToC is more likely to be appropriate when dealing with outcomes that are closest in time to a given foresight activity of interest. Outcomes that are more distant in time, happening long after the foresight activity has finished, would be better represented in a ToC that takes a more complex network (i.e. systems map type) form.

Why so? As time passes after a foresight activity, more and more other forces, of various kinds, are likely to come into play and influence the longer-term outcome of interest. As a proportion of all influences, the foresight activity will grow progressively smaller. A type of ToC that takes into account this widening set of influences would seem essential.

H2: This need for a progressively more complex ToC, as the outcome of interest is located further away in time, can be moderated by a second variable: the social distance between those involved in the foresight activity and those involved in the outcome of interest. [Social distance is measured in social network analysis (SNA) terms by units known as "degrees", i.e. the number of person-to-person linkages needed for information to flow between one person and another.] So, if the outcome is a change in the functioning of the same organisation that the foresight exercise participants themselves belong to, this distance will be short, relative to an outcome relating to another organisation altogether, where there may be few if any direct links between the exercise participants and the staff of that organisation.

The implications of these two hypotheses could be represented graphically, for example in a scatter plot or two-by-two matrix.
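To make the idea a bit more concrete, here is a minimal sketch (not part of the Task Force materials) of the kind of decision rule such a matrix implies. The thresholds, units and labels are purely illustrative assumptions:

```python
# A minimal sketch of how H1 and H2 could be combined into a two-by-two matrix.
# The threshold values and labels are invented for illustration only.

def suggest_toc_form(months_after_activity, social_distance_degrees,
                     time_threshold=12, distance_threshold=2):
    """Suggest a Theory of Change form for a given outcome of interest.

    months_after_activity: time gap between the foresight activity and the outcome.
    social_distance_degrees: shortest path length (in SNA "degrees") between
        the foresight participants and the actors involved in the outcome.
    """
    distant_in_time = months_after_activity > time_threshold
    distant_socially = social_distance_degrees > distance_threshold

    if not distant_in_time and not distant_socially:
        return "Simple linear ToC"
    if distant_in_time and distant_socially:
        return "Complex network (systems map) ToC"
    return "Intermediate ToC: linear core plus selected contextual influences"

# Example: an outcome 24 months later, in another organisation (3 degrees away)
print(suggest_toc_form(24, 3))  # -> "Complex network (systems map) ToC"
```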


On reflection, this view probably needs some more articulation. Social distance will probably not take the form of a single pathway through a network of actors, especially given that any foresight activity will typically involve multiple participants, each with their own access to relevant networks. So there may be a third relevant dimension to think about here, which is the diversity of the participants. Greater diversity is plausibly associated with a greater range of social (and causal) pathways to the outcome of interest, and thus with the need for more complex representations of the Theory of Change.





Monday, November 01, 2021

Exploring counterfactual histories of an intervention


Background

I and others are providing technical advisory support to the evaluation of a large, complex, multilateral health intervention, one which is still underway. The intervention has multiple parts implemented by different partners, and the surrounding context is changing. The intervention design is being adapted as time moves on. Probably not a unique situation.

The proposal

As one of a number of parts of a multi-year evaluation process I might suggest the following:

1. A timeline is developed describing what the partners in the intervention see as key moments in its history, defined as moments where decisions have been taken to change, stop or continue a particular course(s) of action. 

2. Take each of those moments as a kind of case study, which the evaluation team then elaborates in detail: (a) the options that were discussed, and others that were not, (b) the rationales for choosing for and against the discussed options at the time, and (c) the current merit of those assessments, as seen in the light of subsequent events. [See more details below] 

The objective? 

To identify (a) how well the intervention has responded to changing circumstances and (b) any lessons that might be relevant to the future of the intervention, or generalisable to other similar interventions.

This seems like a form of (contemporary and micro-level) historical research, investigating actual versus possible causal pathways. It seems different from a Theory of Change (ToC) based evaluation, where the focus is on what was expected to happen and then what did happen. With this proposed historical research into decisions taken, the primary reference point is what did happen, and then what could have happened. 

It also seems different from what I understand to be a process tracing form of inquiry where, I think, the focus is on a particular hypothesised causal pathway, rather than the consideration of multiple alternative possible pathways, as would be the case within each of the series of decision-making case studies proposed here. There may be a breadth versus depth of inquiry difference here. [Though I may be over-emphasising the difference, ...I am just reading Mahoney, 2013 on the use of process tracing in historical research]

The multiple possible alternatives that could have been chosen are the counterfactuals I am referring to here.

The challenges?

As Henry Ford allegedly said, "History is just one damn thing after another." There are way too many events in most interventions where alternative histories could have taken off in a different direction. For example, at a normally trivial level, someone might have missed their train. So, to be practical but also systematic and transparent, the process of inquiry would need to focus on specific types of events, involving particular kinds of actors, preferably settings where decisions were made about courses of action, such as board meetings.

And in such individual settings, how wide should the evaluation focus be? For example, only on where decisions were made to change something, or also where decisions were made to continue doing something? And what about the absence of decisions even being considered, when they might have been expected to be considered? That is, decisions about decisions.

Reading some of the literature about counterfactual history, written by historians, there is clearly a risk of developing historical counterfactuals that stray too far from what is known to have happened, in terms of imagined consequences of consequences, etc. In response, some historians talk about the need to limit inquiries to "constrained counterfactuals" and the use of a "Minimal Rewrite Rule". [I will find out more about these]

Perhaps another way forward is to talk about counterfactual reasoning, rather than counterfactual history (Schatzberg, 2014). This seems closer to what the proposed line of inquiry might be all about, i.e. how the alternatives to what was actually decided and happened were considered (or not even considered) by the intervening agency. But even then, the evaluators' assessments of these reasonings would seem to necessarily involve some exploration of the consequences of those decisions, only some of which will have been observable, with others only conjectured.

The merits?

When compared to a ToC testing approach this historical approach does seem to have some merit. One of the problems of a ToC approach, particularly when applied to a complex intervention, is the multiplicity of possible causal pathways, relative to the limited time and resources available to an evaluation team. Choices usually need to be made, because not all avenues can be explored (unless some can be excluded by machine learning explorations or other quantitative processes of analysis).

However, on reflection, the contrast with a historical analysis of the reality of what actually happened is not so black and white. In large complex programmes there are typically many people working away in parallel, generating their own local and sometimes intersecting histories. There is not just one history from within which to sample decision-making events. In this context a well-articulated ToC may be a useful map, a means of identifying where to look for those histories in the making. 

Where next

I have since found that the evaluation team has been thinking along similar lines to my own, i.e. about the need to document and analyse the history of key decisions made. The focus now should be on elaborating questions that would be practically useful, some of which are touched on above, including:

1. How to identify and sample important decision making points

At least two options here:

1. Identify a specific type of event where it is known that relevant decisions are made, e.g. Board Meetings. This is a top-down, deductive approach. The risk here is that many decisions will be (and have to be) made outside and prior to these events, and just receive official authorisation at these meetings. Snowball sampling backwards to the original decisions may be possible...

2. Try using the HCS method to partition the total span of time of interest into smaller (and nested) periods of time. Then identify the decisions that have generated the differences observed between these periods (differences relating to the intervention strategy). This is a more bottom-up, inductive approach. 

2. How to analyse individual decisions.

The latter includes interesting issues such as how much use should be made of prior/borrowed theories about what constitutes good decision making, versus using a more inductive approach that emphasises understanding how the decisions were made within their own particular context. I am more in favor of the latter at present.

Here is a VERY provisional framework/checklist for what could be examined, when looking at each decision making event:

In this context it may also be useful to think about a wider set of relevant ideas, like the role of "path dependency" and "sunk costs".

3. How to aggregate/synthesise/summarise the analysis of multiple individual decision making cases

This is still being thought about, so caveat emptor:

Objectives were to identify: 

(a) how well the intervention has responded to changing circumstances. 

Possible summarising device? Rate each decision-making event on the degree to which it was optimal under the circumstances, backed by a rubric explaining the rating values.

Cross-tabulate these ratings against a rating of the subsequent impact of the decision that was made? An "increased/decreased potential impact" scale? Likewise supported by a rubric (i.e. an annotated scale).
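As a rough illustration of this summarising device, here is a minimal sketch using invented ratings; the rating categories and event names are assumptions, not data from the evaluation:

```python
# A minimal sketch, assuming each decision-making event has been rated (with
# rubrics) on (a) how optimal it was under the circumstances and (b) its
# subsequent effect on potential impact. All values below are invented.
import pandas as pd

decisions = pd.DataFrame({
    "event": ["Board meeting 2019-06", "Strategy review 2020-01", "Budget revision 2020-09"],
    "optimality": ["optimal", "partly optimal", "not optimal"],
    "impact_effect": ["increased potential impact", "no clear change", "decreased potential impact"],
})

# Cross-tabulation of the two ratings, as a simple summarising device
print(pd.crosstab(decisions["optimality"], decisions["impact_effect"]))
```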

(b) any lessons that might be relevant to the future of the intervention, or generalisable to other similar interventions.

Text summary of implications identified from the analysis of each decision making event, with priority to more impactful/consequential decisions?

Lot more thinking yet to be done here... 

Miscellaneous points of note hereafter...

Postscript 1: There must be a wider literature on this type of analysis, where there may be some useful experiences. "Ninety per cent of problems have already been solved in some other field. You just have to find them." McCaffrey, T. (2015), New Scientist.

Postscript 2: I just came across the idea of an "even if..." type counterfactual. As in "Even if I did catch the train, I would still not have got the job". This is where an imagined action, different from what really happened, still leads to the same outcome as when the real action took place. 


[Image: "Conceptual Thinking" by Andy McMahon. Wishmi, CC BY-SA 4.0, via Wikimedia Commons]

Sunday, August 22, 2021

Reconciling the need for both horizontal and vertical dimensions in a Theory of Change diagram

 

In their usual table form, Logical Frameworks are strong on their horizontal dimension but weak on their vertical dimension. The horizontal dimension explains what kind of data will be collected and used to measure the changes that are described. This is good for accountability. The vertical dimension explains how events at one level will connect to and cause events at another level. This is good for learning. But unfortunately LogFrames often simply provide lists of events at each level, with relatively little description of which event will connect to which, especially where multiple and mixed sets of connections might be expected between events. On the other hand, diagrammatic versions of a Theory of Change tend to be much better at explicating the various causal pathways at work, but weak on the information they provide on the horizontal dimension - on how the various events will be observed and measured. Both of these problems reflect a lack of space to do both things, and the different relative priorities pursued within that constraint.

The Donor Committee for Enterprise Development (DCED) has produced a web page based Theory of Change to explain its way of working, which I think points the way to reconciling these conflicting needs. At first glance, here is what you see when you visit this page of their website.

The different causal pathways are quite visible, more so than within a standard LogFrame table format. But another common weakness of diagrammatic versions of Theories of Change is the lack of explanation of what is going on within each of these pathways. The DCED addressed this problem by allowing visitors to click on a link and be taken to another web page, where they get a detailed text description of the available evidence, plus any assumptions, about the causal process(es) linking the events connected by that arrow.

The one weakness in this DCED ToC diagram is the lack of detail about the horizontal dimension: how the various events described in the diagram will be observed/measured, and by whom, when and where. But this is clearly resolvable by using the same approach as with the links: enable users to click on any event and be taken to a web page where this information is provided for that specific event. As shown below:
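As a way of showing what such a web-based ToC implies behind the scenes, here is a rough sketch of the underlying data structure. This is not the DCED's actual implementation; all the content is invented for illustration:

```python
# A sketch of the data structure a clickable web-based ToC implies: each event
# (node) carries its own measurement details (the horizontal dimension), and
# each arrow (edge) carries the evidence and assumptions behind the causal
# link (the vertical dimension). Content is invented.

theory_of_change = {
    "events": {
        "training_delivered": {
            "indicator": "Number of staff completing training",
            "data_source": "attendance records",
            "who_when": "project officer, quarterly",
        },
        "practices_adopted": {
            "indicator": "% of trained staff using new practices",
            "data_source": "follow-up survey",
            "who_when": "evaluation team, annually",
        },
    },
    "links": [
        {
            "from": "training_delivered",
            "to": "practices_adopted",
            "evidence": "Findings from two prior programme evaluations",
            "assumptions": ["Staff have time and resources to apply the training"],
        },
    ],
}

# Each event or link record could then be rendered as its own web page,
# reached by clicking on the corresponding node or arrow in the diagram.
print(theory_of_change["links"][0]["evidence"])
```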






Monday, July 19, 2021

Diversity and complexity? Where should we focus our attention?

This posting has been prompted by Michael Bamberger's recent two blog postings on "Building complexity into development evaluation" on the 3ie website: Evidence Matters: Towards equitable, inclusive and sustainable development

I started to make some comments underneath each of the two postings but have now decided to try to organise and extend my thoughts here. 

My starting point is an ongoing concern about how unproductive the discussion has been about complexity (especially in relation to evaluation). Like an elephant giving birth to a mouse has been my chosen metaphor in the past. There probably is some rhetorical overkill here, but it does express some of my felt frustration with the limited value of the now quite extended discussion.

Michael's blog postings have not allayed these concerns. My concerns start with the idea of measuring complexity: both how you do it and how measuring would in fact be useful. Measuring complexity is the first of the five steps in Michael's proposed "practical five-step approach to address complexity-responsive evaluation in a systematic way". A lot of ink has already been spilled on the topic of measuring complexity. A useful summary can be found in Melanie Mitchell's widely read Complexity: A Guided Tour (2009: 94-114), and in Lloyd (1998), who counted at least 40 different ways of doing so. But I can't see any references back to any of these methods, suggesting that not much is being learned from past efforts, which is a pity.  

Along with the challenge of how to do it is the question of why you would want to do it, ...how might it be useful? The second blog posting explains that "In addition to providing stakeholders with an understanding of what complexity means for their program, the checklist also helps decide whether the program is sufficiently complex to merit the additional investment of time and resources required to conduct a complexity-focused evaluation".

The second of these outcomes might be the more observable consequence, so my first question here is: where is the cut-off point, in a checklist-derived score, that would at least inform such a decision, and how is that cut-off point justified? The checklist has 25 attribute questions spread over 4 dimensions, but the cut-off point has not yet been made clear.

My next question is how the results of this measurement exercise inform the next of the five steps: "Breaking the project into evaluable components and identifying the units of analysis for the component evaluation". So far, I have not found any answers to this question either. PS 2021 07 29: Michael did reply to a comment of mine raising the same issue, and suggested that high versus low scores on rated complexity might be one way.

Another concern, which I have already written about in my comment on the blog postings, is that complexity seems to be very much “in the eye of the beholder”, i.e. depending on who you are and what you are looking for. My partner sees much more complexity in the design and behaviour of moths and butterflies than I do. A friend of mine sees much more complexity in the performance of classical music than I do. Such observations prompt me to think that perhaps we should not put too much effort into trying to objectively measure complexity. Rather, perhaps we should take a more ethnographic perspective on complexity – i.e. we should pay attention to where people are seeing complexity and where they are not, and what the implications thereof are.

If we accept this suggestion, the challenge of identifying complexity is still with us, but in a different form. So, I have another suggestion, which is to pay much more attention to diversity, as an important concept related to complexity. As Scott Page has well described, there is a close and complicated relationship between diversity and complexity. Nevertheless, there are some practically useful points to note about the concept of diversity.  

Firstly, the presence of diversity is indicative of the absence of a common constraint, and of the presence of many different causal influences. So it can be treated as a proxy, indicating the presence of complex processes.  

Secondly, there is an extensive, more internally consistent and practically useful set of ways in which diversity can be measured. These mainly have their origins in studies of biodiversity but have a much wider applicability (a minimal worked example follows after this list). Diversity can also be measured in other spheres: in human relationships (using social network analysis tools) and in how people see the world (using forms of ethnographic enquiry known as pile or card sorting).

Thirdly, diversity has some potentially global relevance as a value and as an objective. Diversity of behaviour can be indicative of choice and empowerment. 

Fourthly, diversity can also be seen as an important independent variable, enabling adaptation and creativity. 
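As the minimal worked example promised above: here is a sketch of one widely used diversity measure with origins in biodiversity studies, the Shannon index. The category counts are invented (e.g. numbers of project participants falling into different behaviour types):

```python
# Shannon diversity index: H = -sum(p_i * ln(p_i)) over categories with
# non-zero counts. Counts below are invented for illustration.
from math import log

def shannon_index(counts):
    """Shannon diversity index for a list of category counts."""
    total = sum(counts)
    proportions = [c / total for c in counts if c > 0]
    return -sum(p * log(p) for p in proportions)

print(shannon_index([10, 10, 10, 10]))  # evenly spread categories -> higher diversity (~1.39)
print(shannon_index([37, 1, 1, 1]))     # one dominant category   -> lower diversity (~0.35)
```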

All this is not to say that diversity cannot also be problematic. Some forms of diversity in the present could severely limit the extent of diversity in the future. For example, if within a population there was a wide range of different types of guns held by households and many different views on how and when they could legitimately be used. At a more mundane level, within organisations different kinds of tasks may benefit from different levels of diversity within the groups addressing those tasks. So diversity presents a useful and important problematic in a way that the concept of complexity does not. What forms of diversity do we want to see, and see sustained over time, and how can they be enabled? Where do we want choice, and where should we accept restriction?

Arguing for more attention to diversity, rather than complexity, does not mean there also needs to be a whole new school of evaluation developed around this idea (Get Brand X Evaluation, just hot off the press! Uhh... No). It is consistent with a number of ideas already found useful, including the ideas of equifinality (an outcome can arise from multiple different causes) and multifinality (a cause can have multiple different outcomes), and the idea of multiple conjunctural causation. It is also compatible with a social constructionist and interpretive perspective on social reality.




Monday, June 14, 2021

Paired case comparisons as an alternative to a configurational analysis (QCA or otherwise)

[Take care, this is still very much a working draft!] Criticisms and comments are welcome though.

The challenge

The other day I was asked for some advice on how to implement a QCA type of analysis within an evaluation that was already fairly circumscribed in its design – circumscribed both by the commissioner and by the team proposing to carry out the evaluation. The commissioner had already indicated that they wanted a case study orientated approach and had even identified the maximum number of case studies that they wanted to see (ten). While the evaluation team could see the potential use of a QCA type analysis, they were already committed to undertaking a process type evaluation, and did not want a QCA type analysis to dominate their approach. In addition, it appeared that there already was a quite developed conceptual framework that included many different factors which might be contributory causes of the outcomes of interest.

As is often the case, there seemed to be a shortage of cases and an excess of potentially explanatory variables. In addition, there were doubts within the evaluation team as to whether a thorough QCA analysis would be possible or justifiable given the available resources and priorities.

Paired case comparisons as the alternative

My first suggestion to the evaluation team was to recognise that there is some middle ground between an across-case analysis involving medium to large numbers of cases, and a within-case analysis. As described by Rihoux and Ragin (2009), a QCA analysis will use both, going back and forth, using one to inform the other, over a number of iterations. The middle ground between these two options is case comparisons – particularly comparisons of pairs of cases. Although in the situation described above there will be a maximum of 10 cases that can be explored, the number of pairs of these cases that can be compared is still quite big (45). With these sorts of numbers some strategy is necessary for making choices about the types of pairs of cases that will be compared. Fortunately there is already a large literature on case selection. My favourite summary is the one by Gerring, J., & Cojocaru, L. (2015), Case-Selection: A Diversity of Methods and Criteria. 

My suggested approach was to use what is known as the Confusion Matrix as the basis for structuring the choice of cases to be compared.  A Confusion Matrix is a simple truth table, showing a combination of two sets of possibilities (rows and columns), and the incidence of those possibilities (cell values). For example as follows:


Inside the Confusion Matrix are four types of cases: 
  1. True Positives where there are cases with attributes that fit my theory and where the expected outcome is present
  2. False Positives, where there are cases with attributes that fit my theory but where the expected outcome is absent
  3. False Negatives, where there are cases which do not have attributes that fit my theory but where nevertheless the outcome is present
  4. True Negatives, where there are cases which do not have attributes that fit my theory and where the outcome is absent as expected
Both QCA and supervised machine learning approaches are good at identifying individual (or packages of) attributes which are good predictors of when outcomes are present or when they are absent – in other words, where there are large numbers of True Positive and True Negative cases. And the incidence of exceptions: the False Positives and False Negatives. But this type of cross-case-led analysis does not seem to be available as an option to the evaluation team I have mentioned above.
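To make the four categories concrete, here is a minimal sketch of how binary-coded cases could be sorted into the four cells of the Confusion Matrix. The cases, attributes and "theory" are all invented:

```python
# A minimal sketch of the Confusion Matrix idea, using invented cases coded by
# the attributes they have, plus whether the outcome of interest is present.

cases = {
    # case: ({attributes present}, outcome_present)
    "A": ({"strong_leadership", "donor_support"}, True),
    "B": ({"strong_leadership", "donor_support"}, False),
    "C": ({"donor_support"}, True),
    "D": ({"weak_leadership"}, False),
}

theory = {"strong_leadership", "donor_support"}  # attributes claimed to produce the outcome

def classify(attributes, outcome, theory):
    """Place a case in one of the four cells of the Confusion Matrix."""
    fits_theory = theory.issubset(attributes)
    if fits_theory and outcome:
        return "True Positive"
    if fits_theory and not outcome:
        return "False Positive"
    if not fits_theory and outcome:
        return "False Negative"
    return "True Negative"

for name, (attributes, outcome) in cases.items():
    print(name, classify(attributes, outcome, theory))
# A -> True Positive, B -> False Positive, C -> False Negative, D -> True Negative
```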

1. Starting with True Positives

So my suggestion has been to look at the 10 cases that they have at hand, and start by focusing in on those cases where the outcome is present (first column). Focus on the case that is most similar to the others with the outcome present, because findings about this case may be more likely to apply to others (see below on measuring similarity). When examining that case, identify one or more attributes which provide the most likely explanation for the outcome being present. Note here that this initial theory is coming from a single within-case analysis, not a cross-case analysis. The evaluation team will now have a single case in the category of True Positive. 

2. Comparing False Positives and True Positives

The next step in the analysis is to identify cases which can be provisionally described as False Positives. Start by finding a case which has the outcome absent. Does it have the same theory-relevant attributes as the True Positive? If so, retain it as a False Positive. Otherwise, move it to the True Negative category. Repeat this move for all remaining cases with the outcome absent. From among all those qualifying as False Positives, find one which is otherwise as similar as possible in all its other attributes to the True Positive case. This type of analysis choice is called MSDO, standing for "most similar design, different outcome" – see the de Meur reference below. Also see below on how to measure this form of similarity. 

The aim here is to find how the causal mechanisms at work differ. One way to explore this question is to look for an additional attribute that is present in the True Positive case but absent in the False Positive case, despite those cases otherwise being most similar. Or, an attribute that is absent in the True Positive but present in the False Positive case. In the former case the missing attribute could be seen as a kind of enabling factor, whereas in the latter case it could be seen as more like a blocking factor. If neither can be found by comparison of the coded attributes of the cases, then a more intensive examination of raw data on the cases might still identify them, and lead to an updating/elaboration of the theory behind the True Positive case. Alternatively, that examination might suggest that measurement error is the problem and that the False Positive case needs to be reclassified as a True Positive.

3. Comparing False Negatives and True Positives

The third step in the analysis is to identify at least one most relevant case which can be described as a False Negative. This False Negative case should be one that is as different as possible in all its attributes from the True Positive case. This type of analysis choice is called MDSO, standing for "most different design, same outcome". 

The aim here should be to try to identify whether the same or a different causal mechanism is at work, when compared to that seen in the True Positive case. One way to explore this question is to look for one or more attributes that both the True Positive and False Negative cases have in common, despite otherwise being "most different". If found, and if associated with the causal theory in the True Positive case, then the False Negative case can be reclassed as a True Positive. The theory describing the now two True Positive cases can then be seen as provisionally "necessary" for the outcome, until another False Negative case is found and examined in a similar fashion. If the causal mechanism seems to be different, then the case remains a False Negative.

Both the second and third step comparisons described above will help: (a) elaborate the details, and (b) establish the limits of the scope, of the theory identified in step one. This suggested process makes use of the Confusion Matrix as a kind of very simple chess board, where pieces (aka cases) are introduced on to the board, one at a time, and then sometimes moved to other adjacent positions (depending on their relation to other pieces on the board). Or, the theory behind their chosen location is updated.

If there are only ten cases available to study, and these have an even distribution of outcomes present and absent, then this three-step process of analysis could be reiterated five times, i.e. once for each case where the outcome was present – thus involving up to 10 case comparisons, out of the 45 possible.

Measuring similarity

The above process depends on the ability to make systematic and transparent judgements about similarity. One way of doing this, which I have previously built into an Excel app called EvalC3, is to start by describing each case with a string of binary coded attributes of the same kind as used in QCA, and in some forms of supervised machine learning. An example set of workings can be seen in this Excel sheet, showing an imagined data set of 10 cases, with 10 different attributes, and then the calculation and use of Hamming distance as the similarity measure to choose cases for the kinds of comparisons described above. That list of attributes, and the Hamming distance measure, is likely to need to be updated as the investigation of False Positives and False Negatives proceeds.
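Here is a rough sketch of that similarity calculation (not the EvalC3 workings themselves); the binary codings below are invented:

```python
# Cases are coded as binary attribute strings; Hamming distance counts the
# attributes on which two cases differ. All codings are invented.

def hamming(a, b):
    """Number of positions at which two equal-length binary codings differ."""
    return sum(x != y for x, y in zip(a, b))

cases = {
    "TP1": [1, 1, 0, 1, 0, 1, 0, 0, 1, 1],  # the True Positive case of interest
    "X":   [1, 1, 0, 1, 0, 1, 1, 0, 1, 1],  # outcome absent
    "Y":   [0, 0, 1, 0, 1, 0, 1, 1, 0, 1],  # outcome present
    "Z":   [1, 0, 0, 1, 0, 1, 0, 1, 1, 0],  # outcome absent
}

distances = {name: hamming(code, cases["TP1"]) for name, code in cases.items() if name != "TP1"}
print(distances)  # -> {'X': 1, 'Y': 9, 'Z': 3}

# MSDO: among cases with a different outcome, pick the one most similar to TP1
#       (smallest distance) - here case X.
# MDSO: among cases with the same outcome, pick the one most different from TP1
#       (largest distance) - here case Y.
```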

Incidentally, the more attributes that have been coded per case, the more discriminating this kind of approach can become – in contrast to cross-case analysis, where an increase in the number of attributes per case is usually problematic.

Related sources

For some of my earlier thoughts on case comparative analysis see here. These were developed for use within the context of a cross-case analysis process. But the argument above is about how to proceed when the starting point is a within-case analysis.

See also:
  • Nielsen, R. A. (2014). Case Selection via Matching
  • de Meur, G., Bursens, P., & Gottcheiner, A. (2006). MSDO/MDSO Revisited for Public Policy Analysis. In B. Rihoux & H. Grimm (Eds.), Innovative Comparative Methods for Policy Analysis (pp. 67–94). Springer US. 
  • de Meur, G., & Gottcheiner, A. (2012). The Logic and Assumptions of MDSO–MSDO Designs. In The SAGE Handbook of Case-Based Methods (pp. 208–221). 
  • Rihoux, B., & Ragin, C. C. (Eds.). (2009). Configurational Comparative Methods: Qualitative Comparative Analysis (QCA) and Related Techniques. Sage. Pages 28-32 for a description of "MSDO/MDSO: A systematic  procedure for matching cases and conditions". 
  • Goertz, G. (2017). Multimethod research, causal mechanisms, and case studies: An integrated approach. Princeton University Press.

Monday, May 24, 2021

The potential use of Scenario Planning methods to help articulate a Theory of Change


Over the past few months I have been engaged in discussions with other members of the Association of Professional Futurists (APF) Evaluation Task Force about how activities and outcomes in the field of foresight/alternative futures/scenario planning can usefully be evaluated.

Just recently the subject of Theories of Change has come up, and it struck me that there are at least three ways of looking at Theories of Change in this context:

The first perspective: A particular scenario (i.e. an elaborated view of the future) can contain within it a particular theory of change. One view of the future may imply that technological change will be the main driver of what happens. Another might emphasise the major long-term causal influence of demographic change.

The second perspective: Those organising a scenario planning exercise are also likely to have, either explicitly or implicitly or a mixture of both, a Theory of Change about how their exercise is expected to influence the participants, and about the influence those participants will have on others.

The third perspective looks in the opposite direction and raises the possibility that in other settings a Theory of Change may contain a particular type of future scenario. I'm thinking here particularly of Theories of Change as used by organisations planning economic and/or social interventions in developed and developing economies. This territory has been explored recently in a paper by Derbyshire (2019), titled "Use of scenario planning as a theory-driven evaluation tool", FUTURES & FORESIGHT SCIENCE, 1(1), 1–13. In that paper he puts forward a good argument for the use of scenario planning methods as a way of developing improved Theories of Change – improved in a number of ways. Firstly, a much more detailed articulation of the causal processes involved. Secondly, more adequate attention to risks and unintended consequences. Thirdly, more adequate involvement of stakeholders in these two processes.

Both the task force discussions and my revisiting of the paper by Derbyshire have prompted me to think about the potential use of a ParEvo exercise as a means of articulating the contents of a Theory of Change for a development intervention. And to start to reach out to people who might be interested in testing such possibilities. The following possibilities come to mind:

1.  A ParEvo exercise could be set up to explore what happens when X project is set up in Y circumstances with Z resources and expectations.  A description of this initial setting would form the seed paragraph(s) of the ParEvo exercise. The subsequent iterations would describe the various possible developments that took place over a series of calendar periods, reflecting the expected lifespan of the intervention, and perhaps a limited period thereafter. The participants would be, or act in the role of, different stakeholders in the intervention. Commentators of the emerging storylines could be independent parties with different forms of expertise relevant to the intervention and its context. 

2.  As with all previous ParEvo exercises to date, after the final iteration there would be an evaluation stage, completed by at least the participants and the commentators, but possibly also by others in observer roles.  You can see a copy of a recent evaluation survey form here, to see the types of evaluative judgements that would be sought from those involved and observing.

3.  There seem to be at least two possible ways of using the storylines that have been generated to inform the design of a Theory of Change. One is to take whole storylines as units of analysis. For example, a storyline evaluated as both most likely and most desirable, by more participants than any other storyline, would seem an immediately useful source of detailed information about a causal pathway that should go into a Theory of Change. Other storylines identified as most likely but least desirable would warrant attention as risks that also need to be built into a Theory of Change, along with any potential means of preventing and/or mitigating those risks. Other storylines identified as least likely but most desirable would warrant attention as opportunities, also to be built into a Theory of Change, along with means of enabling and exploiting those opportunities.

4.  The second possible approach would give less respect to the existing branch structure, and focus more on the contents of individual contributions, i.e. paragraphs in the storylines. Individual contributions could be sorted into categories familiar to those developing Theories of Change: activities, outputs, outcomes, and impacts. These could then be recombined into one or more causal pathways that the participants thought were both possible and desirable – in effect, a kind of linear jigsaw puzzle. If the four categories of event types were seen as too rigid a schema (a reasonable complaint!), but still an unfortunate necessity, they could be introduced after the recombination process, rather than before. Either way, it probably would be useful to include another evaluation stage, making a comparative evaluation of the different combinations of contributions that had been created, using the same metrics as are already being used with existing ParEvo exercises.


       More ideas will follow..


     The beginnings of a bibliography...

Derbyshire, J. (2019). Use of scenario planning as a theory-driven evaluation tool. FUTURES & FORESIGHT SCIENCE, 1(1), 1–13. https://doi.org/10.1002/ffo2.1
Ganguli, S. (2017). Using Scenario Planning to Surface Invisible Risks (SSIR). Stanford Social Innovation Review. https://ssir.org/articles/entry/using_scenario_planning_to_surface_invisible_risks














Sunday, March 21, 2021

Mapping the "structure of cooperation": Adding the time dimension and thinking about further analyses

 

In October 2020 I wrote the first blog of the same name, based on some experiences with analysing the results of a ParEvo exercise. (ParEvo is a web assisted participatory scenario planning process).

The focus of that blog posting was a scatter plot of the kind shown below. 

Figure 1: Blue nodes = ParEvo exercise participants. Indegree and Outdegree explained below. Green lines = average indegree and average outdegree

The two axes describe two very basic aspects of network structures, including human social networks. Indegree, in the above example, is the number of other participants who built on that participant's contributions. Outdegree is the number of other participants' contributions that participant built on. Combining these two measures we can generate (in classic consultants' 2 x 2 matrix style!) four broad categories of behavior, as labelled above. Behaviors, not types of people, because in the above instance we have no idea how generalisable the participants' behaviors are across different contexts. 

There is another way of labelling two of the quarters of the scatter plot, using a distinction widely used in evolutionary theory and the study of organisational behavior (March, 1991; Wilden et al, 2019). Bridging behavior can be seen as a form of "exploitation" behavior, i.e. it involves making use of others' prior contributions, and in turn having one's contributions built on by others. Isolating behavior can be seen as a form of "exploration" behavior, i.e. building storylines with minimal help from other participants. General opinion suggests that there is no ideal balance of these two approaches; rather, it is thought to be context-dependent. In stable environments exploitation is thought to be more relevant, whereas in unstable environments exploration is seen as more relevant.

What does interest me is the possibility of applying this updated analytical framework to other contexts. In particular to: (a) citation networks, (b) systems mapping exercises. I will explore citation networks first. Here is an example of a citation network extracted from a public online bibliographic database covering the field of computer science. Any research funding programme will be able to generate such data, both from funding applications and subsequent research generated publications.

Figure 2: A network of published papers, linked by cited references


Looking at the indegree and outdegree attributes of all the documents within this network, the average indegree (and outdegree) was 3.9. When this was used as a cutoff value for identifying the four types of cooperation behavior, their distribution was as follows: 

  • Isolating / exploration = 59% of publications
  • Leading = 17%
  • Following = 15%
  • Bridging / exploitation = 8%
Their location within the Figure 2 network diagram is shown below in this set of filtered views.

Figure 3: Top view = all four types, Yellow view = Bridging/Exploitation, Blue = Following, Red = Leading, Green = Isolating/Exploration

It makes some sense to find the bridging/exploitation type papers in the center of the network, and the isolating/exploration type papers more scattered and especially out in the disconnected peripheries. 
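For anyone wanting to try this on their own data, here is a rough sketch of the classification used above, applied to any directed network (participants building on contributions, or papers citing papers). The edges are invented, and the mapping of quadrant labels to high/low indegree and outdegree is my reading of the scatter plot described earlier:

```python
# A rough sketch of the four-way cooperation classification, using networkx.
# An edge (a, b) means "a built on / cited b", so a node's indegree counts how
# often others built on it, and its outdegree how often it built on others.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([("p1", "p2"), ("p1", "p3"), ("p2", "p3"), ("p4", "p1"), ("p3", "p2")])

mean_in = sum(d for _, d in g.in_degree()) / g.number_of_nodes()
mean_out = sum(d for _, d in g.out_degree()) / g.number_of_nodes()

def cooperation_type(node):
    """Classify a node using the average indegree/outdegree as cutoffs."""
    high_in = g.in_degree(node) > mean_in    # others build on this node a lot
    high_out = g.out_degree(node) > mean_out  # this node builds on others a lot
    if high_in and high_out:
        return "Bridging / exploitation"
    if high_in and not high_out:
        return "Leading"
    if not high_in and high_out:
        return "Following"
    return "Isolating / exploration"

for node in g.nodes():
    print(node, cooperation_type(node))
```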

It would be interesting to see whether the apparently high emphasis on exploration found in this data set would be found in other research areas. 

The examination of citation networks suggests a third possible dimension to the cooperation structure scatter plot. This is time, represented in the above example by year of publication. Not surprisingly, the oldest papers have higher indegree and the newest papers lower. Older papers (by definition, within an age-bounded set of papers) have lower outdegree compared to newer papers. But what is interesting here is the potential occurrence of outliers, of two types: "rising stars" and "laggards". That is, new papers with higher than expected indegree ("rising stars") and old papers with lower than expected indegree ("laggards", or a better name??), as seen in the imagined examples (a) and (b) below.

Another implication of considering the time dimension is the possibility of tracking the pathways of individual authors over time, across the scatter plot space. Their strategies may change over time. "If we take the scientist ... it is reasonable to assume that his/her optimal strategy as a graduate student should differ considerably from his/her optimal strategy once he/she received tenure" (Berger-Tal et al, 2014). They might start by exploring, then following, then bridging, then leading.

Figure 4: Red line = Imagined career path of one publication author. A and B = "Rising Star" and "Laggard" authors


There seem to be two types of opportunities present here for further analyses:
  1. Macro-level analysis of differences in the structure of cooperation across different fields of research. Are there significant differences in the scatter plot distribution of behaviors? If so, to what extent are these differences associated with different types of outcomes across those fields? And if so, is there a plausible causal relationship that could be explored and even tested?  
  2. Micro-level analysis of differences in the behavior of individual researchers within a given field. Do individuals tend to stick to one type of cooperation behavior (as categorised above), or is their behavior more variable over time? If the latter, is there any relatively common trajectory? What are the implications of these micro-level behaviors for the balance of exploration and exploitation taking place in a particular field?






Thursday, January 28, 2021

Connecting Scenario Planning and Theories of Change


This blog posting was prompted by Tom Aston’s recent comment at the end of an article about theories of change and their difficulties. There he said: “I do think that there are opportunities to combine Theories Of Change with scenario planning. In particular, context monitoring and assumption monitoring are intimately connected. So, there’s an area for further exploration.”

Scenario planning, in its various forms, typically generates multiple narratives about what might happen in the future. A Theory of Change does something similar but in a different way. It is usually in a more diagrammatic rather than narrative form. Often it is simply about one particular view of how change might happen, i.e. a particular causal pathway or package thereof. But in more complex network representations Theories of Change do implicitly present multiple views of the future, in as much as there are multiple causal pathways that can work through these networks.

ParEvo is a participatory approach to scenario planning which I have developed and which has some relevance to discussion of the relationship between scenario planning and Theories of Change. ParEvo is different from many scenario planning methods in that it typically generates a larger number of alternative narratives about the future, and these narratives precede, rather than follow, any more abstract analysis of the causal processes that might be at work generating them. My notion is that this narrative-first approach makes fewer cognitive demands on the participants, and is an easier activity to get participants engaged in from the beginning. Another point worth noting about the narratives is that they are collectively constructed, by different self-identified combinations of (anonymised) participants.

At the end of a ParEvo exercise participants are asked to rate all the surviving storylines in terms of their likelihood of happening in real life and their desirability. These ratings can then be displayed in a scatterplot, of the kind shown in the two examples below. The numbered points in each scatterplot are IDs for specific storylines generated in the same ParEvo exercise. Each of the two scatterplots represents a different ParEvo exercise.

 



The location of particular storylines in a scatterplot has consequences. I would argue that storylines which are in the likely but undesirable quadrant of the scatterplot deserve the most immediate attention. They constitute risks which, if at all possible, need to be forestalled, or at least responded to appropriately when they do take place. The storylines in the unlikely but desirable quadrant probably justify the next lot of attention. This is the territory of opportunity. The focus here would be on identifying ways of enabling aspects of those developments to take place.  

Then attention could move to the likely and desirable quadrant.  Here attention could be given to the relationship between what is anticipated in the storylines and any pre-existing Theory Of Change.  The narratives in this quadrant may suggest necessary revisions to the Theory Of Change.  Or, the Theory of Change may highlight what is missing or misconceived in the narratives. The early reflections on the risk and opportunity quadrants might also have implications for revisions to the Theory Of Change.

The fourth quadrant contains those storylines which are seen as unlikely and undesirable. Perhaps the appropriate response here is simply to periodically check and update the judgements about likelihood and desirability.
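A minimal sketch of this quadrant logic, using invented ratings on an assumed 1-5 scale, might look like this:

```python
# Classify storylines by their average likelihood and desirability ratings.
# Ratings are invented and assumed to be on a 1-5 scale, with 3 as the midpoint.

storylines = {
    1: {"likelihood": 4.2, "desirability": 1.8},
    2: {"likelihood": 1.9, "desirability": 4.5},
    3: {"likelihood": 4.0, "desirability": 4.1},
    4: {"likelihood": 1.5, "desirability": 1.6},
}

def quadrant(ratings, midpoint=3.0):
    likely = ratings["likelihood"] >= midpoint
    desirable = ratings["desirability"] >= midpoint
    if likely and not desirable:
        return "Risk: attend to first (forestall or prepare responses)"
    if not likely and desirable:
        return "Opportunity: look for ways of enabling it"
    if likely and desirable:
        return "Compare against the existing Theory of Change"
    return "Periodically re-check the likelihood and desirability judgements"

for sid, ratings in storylines.items():
    print(sid, quadrant(ratings))
```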

These four views can be likened to the different views seen from within a car. There is the front view, which is concerned with likely and desirable events, our expected and intended direction of change. Then there are two peripheral views, to the right and left, which are concerned with risks and opportunities, present in the desirable but unlikely, and undesirable but likely, quadrants. Then there is the rear view, out the back, looking at undesirable and unlikely events.

In this explanation I have talked about storylines in different quadrants, but in the actual scatterplots developed so far the picture is a bit more complex. Some storylines are way out in the corners of the scatterplot and clearly need attention, but others are more muted and mixed in their positions, so prioritising which of these to give attention to first versus later could be a challenge.

There is also a less visible third dimension to this scatterplot. Some of the participants' judgements about likelihood and desirability were not unanimous. These are the red dots in the scatterplot above. In these instances some resolution of the differences of opinion about the storylines would need to be the first priority. However, it is likely that some of these differences will not be resolvable, so these particular storylines will fall into the category of "Knightian uncertainties", where probabilities are simply unknown. These types of developments can't be planned for in the same way as the others, where some judgements about likelihood could be made. This is the territory where bet-hedging strategies are appropriate, a strategy seen in both evolutionary biology and human affairs. Bet hedging is a response which will be functional in most situations but optimal in none. For example, the accumulation of capital reserves in a company, which provides insurance against unexpected shocks, but at the cost of the efficient use of capital.

There are some other opportunities for connecting thinking about Theories of Change with the multiple alternative futures that can be identified through a ParEvo process. These relate to systems-type modelling that can be done by extracting keywords from the narratives and mapping their co-occurrence in the paragraphs that make up those narratives, using social network analysis visualisation software. I will describe these in more detail in the near future, hopefully.


Tuesday, December 15, 2020

The implications of complex program designs: Six proposals worth exploring?

Last week I was involved in a seminar discussion of a draft CEDIL paper reviewing methods that can be used to evaluate complex interventions. That discussion prompted me to the following speculations, which could have practical implications for the evaluation of complex interventions.

Caveat: As might be expected, any discussion in this area will hinge upon the definition of complexity. My provisional definition of complexity is based on a network perspective, something I've advocated for almost two decades now (Davies, 2003). That is, the degree of complexity depends on the number of nodes (e.g. people, objects or events), and the density and diversity of types of interactions between them. Some might object and say what you have described here is simply something which is complicated rather than complex. But I think I can be fairly confident in saying that as you move along this scale of increasing complexity (as I have defined it here) the behaviour of the network will become more unpredictable. I think unpredictability, or at least difficulty of prediction, is a fairly widely recognised characteristic of complex systems (But see Footnote).

The proposals:

Proposal 1. As the complexity of an intervention increases, the task of model development (e.g. a Theory of Change), especially model specification, becomes increasingly important relative to that of model testing. This is because there are more and more parameters that could make a difference or be "wrongly" specified.

Proposal 2. When the confident specification of model parameters becomes more difficult, then perhaps model testing will become more like an exploratory search of a combinatorial space than focused hypothesis testing. This probably has some implications for the types of methods that can be used – for example, more attention to the use of simulations, or predictive analytics.

Proposal 3. In this situation where more exploration is needed, where will all the relevant empirical data come from, to test the effects of different specifications? Might it be that as complexity increases there is more and more need for monitoring / time-series data, relative to evaluation / once-off data?

Proposal 4. And if a complex intervention may lead to complex effects – in terms of behaviour over time – then the timing of any collection of relevant data becomes important. A once-off data collection would capture the state of the intervention+context system at one point in an impact trajectory that could actually take many different shapes (e.g. linear, sinusoidal, exponential, etc.). The conclusions drawn could be seriously misleading.
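A small illustration of this point, using invented trajectories: three interventions can be ranked quite differently depending on when the single measurement is taken.

```python
# Three invented impact trajectories measured every 6 months over 24 months.
# At month 6 the sinusoidal trajectory looks best; by month 18 it is back near
# zero while the linear one looks best - a once-off measurement would mislead.
import math

def linear(t):      return 5.0 * t / 12
def plateau(t):     return 5.0 * (1 - math.exp(-t / 4))          # early change, then levelling off
def sinusoidal(t):  return 2.5 + 2.5 * math.sin(2 * math.pi * t / 24)

for label, f in [("linear", linear), ("plateau", plateau), ("sinusoidal", sinusoidal)]:
    series = [round(f(t), 1) for t in range(0, 25, 6)]  # months 0, 6, 12, 18, 24
    print(f"{label:10s} -> {series}")
```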

Proposal 5. And going back to model specification, what sort of impact trajectory is the intervention aiming for? One where change happens then plateaus, or one where there is an ongoing increase? This needs specification because it will affect the timing and type of data collection needed.

Proposal 6. And there may be implications for the process of model building. As the intervention gets more complex – in terms of nodes in the network – there will be more actors involved, each of whom will have a view on how the parts, and perhaps the whole package, is and should be working, and on the role of their particular part in that process. Participatory, or at least consultative, design approaches would seem to become more necessary.

Are there any other implications that can be identified? Please use the Comment facility below.

Footnote: Yes, I know you can also find complex (as in difficult to predict) behaviour in relatively simple systems, like the logistic equation used to model population dynamics. And there may be some quite complex systems (by my definition) that are relatively stable. My definition of complexity is more probabilistic than deterministic.

Friday, December 11, 2020

"If you want to think outside of the box, you first need to find the box" - some practical evaluative thinking about Futures Literacy




Over the last two days, I have participated in a Futures Literacy Lab, run by Riel Miller and organised as part of UNESCO's Futures Literacy Summit. Here are some off-the-cuff reflections.

Firstly, the definition of futures literacy. I could not find a decent one, but my search was brief, so I expect readers of this blog posting will quickly come up with one. Until then, this is my provisional interpretation. Futures literacy includes two types of skills, both of which need to be mastered, although some people will be better at one type than the other:


1. The ability to generate many different alternative views of what might happen in the future.


2. The ability to evaluate a diversity of alternative views of the future, using a range of potentially relevant criteria.

There is probably also a third skill, i.e. the ability to extract useful implications for action from the above two activities.

The process that I took part in highlighted to me (perhaps not surprising because I'm an evaluator) the importance of the second type of skill above - evaluation. There are two reasons I can think of for taking this view:


1. The ability to critically evaluate one's ideas (e.g. multiple different views of the possible future) is a metacognitive skill which is essential. There is no value in being able to generate many imagined futures if one is then incapable of sorting the "wheat from the chaff" - however that may be defined.


2. The ability to evaluate a diversity of alternative views of the future, can actually have a useful feedback effect, enabling us to improve the way we search for other imagined futures


Here is my argument for the second claim. In the first part of the exercise yesterday each participant was asked to imagine a possible future development in the way that evaluation will be done, and the role of evaluators, in the year 2050. We were asked to place these ideas on Post-It Notes on an online whiteboard, on a linear scale that ranged between Optimistic and Pessimistic. 

Then a second and orthogonal scale was introduced, which ranged from "I can make a difference" to "I can't make a difference". When that second axis was introduced we were asked to adjust our Post-It Notes into a new position that represented our view of both its possibility and our ability to make a difference to that event. These two steps can be seen as a form of self-evaluation of our own imagined futures. Here is the result (don't bother trying to read the note details).


Later on, as the process proceeded, we were encouraged to "think out of the box". But how do you do that... how do you know what is "out of the box"? Unless you deliberately go to extremes, with the associated risk that whatever you come up with will be less useful (however defined).

Looking back at that task now, it strikes me that what the above scatterplot does is show you where the box is, so to speak. And, by contrast, where outside the box is also located. "Inside the box" is the part of the scatterplot where the biggest concentration of posts is located. The emptiest area, and thus the most "out of the box" area, is the top right quadrant. There is only one Post-it Note there. So, if more out-of-the-box thinking is needed in this particular exercise setting, then perhaps we should start brainstorming about optimistic future possibilities of a kind where I think "I can't make a difference" - now there is a challenge!

The above example can be considered as a kind of toy model, a simple version of a larger and more complex range of possible applications. That is, any combination of evaluative dimensions will generate a combinatorial space, which will be densely populated with ideas about possible futures in some areas and empty in others. To explore those kinds of areas we will need to do some imaginative thinking at a higher level of abstraction, i.e. about the different kinds of evaluative dimensions that might be relevant. My impression is that this meta-territory has not yet been explored very much. When you look at the futures/foresight literature the most common evaluative dimensions are those of "possibility" and "desirability" (and ones I have used myself, within the ParEvo app). But there must be others that are also relevant in various circumstances.
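Here is a toy sketch of that "find the box" idea: place imagined futures in a grid defined by two (or more) evaluative dimensions, count how many fall in each cell, and treat the emptiest cells as the most "out of the box" territory. The ideas and ratings are invented:

```python
# Count imagined futures per cell of a grid of evaluative dimensions, and flag
# the empty cells as candidate "out of the box" areas to brainstorm in next.
from collections import Counter
from itertools import product

ideas = {
    "idea1": {"optimism": "optimistic",  "agency": "can make a difference"},
    "idea2": {"optimism": "optimistic",  "agency": "can make a difference"},
    "idea3": {"optimism": "pessimistic", "agency": "can make a difference"},
    "idea4": {"optimism": "pessimistic", "agency": "can't make a difference"},
}

dimensions = {
    "optimism": ["optimistic", "pessimistic"],
    "agency": ["can make a difference", "can't make a difference"],
}

counts = Counter((i["optimism"], i["agency"]) for i in ideas.values())
for cell in product(*dimensions.values()):
    status = "out of the box?" if counts[cell] == 0 else f"{counts[cell]} idea(s)"
    print(cell, "->", status)
# Here the (optimistic, can't make a difference) cell is empty - the place to brainstorm next.
```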

Postscript 2020 12 11: This afternoon we had a meeting to review the Futures Literacy Lab experience. In that meeting one of the facilitators produced this definition of Futures Literacy, which I have visibly edited, to improve it :-)



 Lots more to be discussed here, for example:

1. Different search strategies that can be used to find interesting alternate futures. For example, random search and "adjacent possible" searches are two that come to mind.

2. Ways of getting more value from the alternate futures already identified e.g. by recombination 

3. Ways of mapping the diversity of alternate futures that have already been identified, e.g. using network maps of the kind I discussed earlier on this blog (Evaluating Innovation).

4. The potential worth of getting independent third parties to review/evaluate the (a) contents generated by participants, and (b) participants' self-evaluations of their content


For an earlier discussion of mine that might be of interest, see 

"Evaluating the Future"Podcast and paper prepared with and for the EU Evaluation Support Services Unit, 2020