Thursday, August 26, 2010

Meta-narratives, evaluation and complexity

A meta-narrative is a story about stories. Some evaluations take this form, especially those using participatory approaches to obtain qualitative data from a diversity of sources. Even more conventional expert-led evaluations have an element of storytelling to them as they attempt to weave information obtained from various sources, often opportunistically, into a coherent and plausible overall picture of what happened, and what might happen in future.


Recently I have come across two examples of evaluations that were very much about creating a story about stories. They raised interesting questions about method: how can it be done well? 


Stories about Culture


The first evaluation was of a multiplicity of small arts projects in developing countries, funded by DOEN, a Dutch funding agency. Claudia Fontes used the Most Significant Change technique to elicit and analyse 95 stories from a sample of different kinds of participants in these projects. The aim was to identify what DOEN’s cultural intervention meant to the primary stakeholders. What particularly interested me was one part of the MSC process, which can be a useful step when faced with a large number of stories. This involved the participants categorising the stories into different groupings, according to their commonalities. It was from each of these groupings that the participants then went on to select, through intensive discussion, what they saw as the most significant changes of all. In one country five categories of stories were identified: Personal Development and Growth, Professional Development, Exposure, Change Of Perception And Attitude Towards Art And Artists, and Validation Of Self-Expression. Later on, at the report writing stage, Claudia looked at the contents of these groupings, especially the MSC stories within each, and produced an interpretation of how these groups of stories linked together. In other words, a meta-narrative. 


“For the primary stakeholders in XXXX these categories of change relate to each other in that the personal and professional development of artists and other professionals who support the artists’ work results in a validation of the self-expression of direct (artists) and indirect (public in general) users. This process of affirmation and recovery of ownership of self-expression contributes in turn to a change in society’s perception of art and artists with the potential to make the whole cycle of change sustainable for the sector. Strategies of exposure have a key role in contributing across these changes, and towards the profiling of the sector in general” (italics added) 

In commenting on the report I suggested that in future it might be possible and useful to take a participatory approach to the same task of producing a meta-narrative. Faced with the five groupings (and knowledge of their contents), each participant could be asked to identify the causal connections they expect between the different groupings, and give some explanation of these views. This can be done through a simple card sorting exercise. The results from multiple participants can then be aggregated, and the result will take the form of a network of relationships between groupings, some stronger than others (stronger in the sense that more participants highlighted a particular causal linkage). This emergent structure can then be visualised using network software. Once visualised in this manner, the structure could be the subject of discussion, and perhaps some revision. One important virtue of this kind of process is that it will not necessarily produce a single dominant narrative. Minority and majority views will be discernible. And using network visualization software, the potential complexity would be manageable. Network views can be filtered on multiple variables, such as the strength of the causal linkages.
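To make that aggregation step more concrete, here is a minimal sketch in Python using the networkx library. The five category names come from the evaluation described above, but the participants and their card-sort responses are entirely hypothetical, and the threshold used for filtering is just one possible choice.

    # Aggregate participants' card-sorted causal links into a weighted network.
    from collections import Counter

    import networkx as nx

    categories = [
        "Personal Development and Growth",
        "Professional Development",
        "Exposure",
        "Change of Perception and Attitude Towards Art and Artists",
        "Validation of Self-Expression",
    ]

    # Hypothetical card-sort results: each participant lists (cause, effect) pairs.
    participant_links = [
        [("Exposure", "Professional Development"),
         ("Professional Development", "Validation of Self-Expression")],
        [("Exposure", "Professional Development"),
         ("Validation of Self-Expression",
          "Change of Perception and Attitude Towards Art and Artists")],
        [("Personal Development and Growth", "Validation of Self-Expression")],
    ]

    # Count how many participants nominated each causal link.
    link_counts = Counter(link for links in participant_links for link in links)

    g = nx.DiGraph()
    g.add_nodes_from(categories)
    for (cause, effect), weight in link_counts.items():
        g.add_edge(cause, effect, weight=weight)

    # Filter the view to the majority links only (e.g. named by 2+ participants).
    strong = [(u, v) for u, v, d in g.edges(data=True) if d["weight"] >= 2]
    print("Links named by two or more participants:", strong)

The resulting weighted graph can then be laid out in any network visualization package, with minority links kept visible but drawn more faintly than majority ones.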


Stories about Conflict


The second evaluation, by Lundy and McGovern, was of community-based approaches to post-conflict "truth telling" in Northern Ireland. I was sent this and other related papers by Ken Bush, who is exploring methods for evaluating story-telling as a peace-building methodology. His draft conceptual framework notes that a "survey of the literature highlighted the lack of an agreed and effective evaluation tool for story-telling in peace-building despite the near universality of the practice and the huge monetary investment by the EU and others in story-telling projects."

Lundy and McGovern's paper is a good read, because it explores the many important complications of storytelling in a conflicted society: not only important issues like the appropriate sampling of storytellers, but also how the storytelling project's intentions were framed and how the results were presented. The primary product of the project was a publication called "Ardoyne: The Untold Truth", containing testimonies based on 300 interviews. The purpose of Lundy and McGovern's assessment of the project was "to assess the impacts and benefits of community based 'truth telling'". This was done by interviewing 50 people from five different stakeholder groups. The results were then written up in their paper.


What we have here is daunting in its complexity: (a) There are the “original” stories, as compiled in the book, (b) then the stories of people’s reactions to these stories and how they were collected and disseminated, (c) and then the authors’ own story about how they collected these stories and  their interpretation of them as a whole. And of course behind all this we have the complex (as colloquially used) context of Northern Ireland!

When reading what might be called Lundy and McGovern's meta-meta-narrative (i.e. the interpreted results of the interviews) I looked for information on how sources were cited. These are the sorts of phrases I found: "according to respondents", "many", "there was evidence", "most", "the vast majority", "It was felt", "respondents", "in the main", "for many", "many people", "there was a very strong opinion", "it was felt", "there was a consensus", "for the majority of participants", "without exception", "many interviews", "overwhelmingly", "for others", "some", "for these respondents", "one of the most frequently mentioned", "it was further suggested", "most respondents", "the view", "it was further suggested", "in general respondents were of the view that", "the experience of those involved…would seem to suggest", "some respondents", "the overwhelming majority", "responses from Union representatives were", "for some", "a representative of the community sector", "that said, others were", "by another interviewee", "it was", and "a significant section of mainly nationalist interviewees".


I list these here with some hesitation, knowing how often during evaluations I have resorted to using the same vocabulary when faced with making sense of many different comments by different sources in a limited period of time. However, there are important issues here, made even more important by how often we have to deal with situations like this. How people see things, like their reactions to the Ardoyne stories, matters. How many people see things in a given way matters, who those groups of people are matters, and how the views of different groups overlap also matters. In Lundy and McGovern's paper we only get glimpses of this kind of underlying social structure. We sometimes get a sense of majority or minority, occasionally which particular group holds a view, and sometimes that a group sharing one view also thinks that…


How could it be done differently? The views of a set of respondents can be summarised in a "two-mode" matrix, with respondents listed in the rows and descriptions of views listed in the columns, and cell values indicating what is known about a person's view on a listed issue: for example, agreement/disagreement, degree of agreement, or not known. By itself this data is not easy to analyse, other than through frequency counts (e.g. # of people supporting x view, or # of views expressed by x person). But it is possible to convert this data into two different kinds of one-mode matrix, showing: (a) how different people are connected to each other (by their shared views), and (b) how different views are connected to each other (by the same people holding those views). The network structure of the data in these matrices can be seen and further manipulated using network visualization software.
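For readers who want to see the mechanics of that conversion, here is a minimal sketch in Python using numpy. The respondents, views and cell values are hypothetical; in practice the two one-mode matrices would then be exported to network visualization software.

    import numpy as np

    respondents = ["R1", "R2", "R3", "R4"]   # hypothetical respondents
    views = ["View A", "View B", "View C"]   # hypothetical views

    # Two-mode (respondent x view) matrix: 1 = agrees, 0 = disagrees or not known.
    A = np.array([
        [1, 1, 0],   # R1 holds views A and B
        [1, 0, 1],   # R2 holds views A and C
        [0, 1, 1],   # R3 holds views B and C
        [1, 1, 1],   # R4 holds all three views
    ])

    # (a) respondent x respondent: each cell counts the views two people share.
    person_by_person = A @ A.T

    # (b) view x view: each cell counts the people who hold both views.
    view_by_view = A.T @ A

    print(person_by_person)
    print(view_by_view)

The two projections are simply the matrix multiplied by its own transpose, one way round for people-by-people and the other way round for views-by-views.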

As in many evaluations, Lundy and McGovern were constrained by a confidentiality commitment. Individuals can be anonymised by being categorised into types of people, but this may have its limits if the number of respondents is small and the identity of participants is known to others (if not their specific views). This means the potential to make use of the first kind of network visualization (i.e. a) may be limited, even if the network visualization showed the relationships between types of respondents. However, the second type (i.e. b) should remain an option. To recap, this would show a network of opinions, some strongly linked to many others because they were shared by many other respondents, and others with weaker links because they were shared with few, if any, other respondents. The next step would be the development of a narrative commentary explaining the highlights of the network structure. This would usefully focus on the contents of the different clusters of opinions, and the nature of any bridges between them, especially where clusters expressed contrasting views.


There are two significant hurdles in front of this approach. Typically not all respondents will express views on all topics, and the number who do will vary across topics. One option would be to filter out the views with the fewest respondents. The other, which I have never tried but would be interesting to explore, would be to build a supplementary question into interviews, along the lines of "…and how many people do you think would feel the same way as you on this issue?". The answers would be important in themselves, possibly affecting how the same people might act on their own views. But they could also provide a weighting mechanism for views in an otherwise small sub-sample.

The second hurdle is that the network description of the relationships between the participants' views is a snapshot in time. But an evaluation usually requires comparison with a prior state. This is a problem if the questions asked by Lundy and McGovern were about current opinions, but not if they were about changes in people's views.

Let's return to the layer below, the stories collected in the original "Ardoyne: The Untold Truth" publication. Stories beget stories. The telling of one can prompt the telling of another. If stories can be seen as linked in this way, then as the number of stories recounted grows we could end up with a network of stories. Some stories in that network may be told more often than others, because they are connected to many others in the minds of the storytellers. These stories might be what complexity science people call "attractors". Although storytellers may start off telling various different stories, there is a likelihood that many of them will end up telling this particular story, because of its connectedness, its position in the network. If these stories are negative, in the sense of provoking antipathy towards others in the same community, then this type of structure may be of concern. Ideally the attractors, the highly connected stories in the network, would be positive stories, encouraging peace and cooperation with others. This network structure of stories could be explored by an evaluator asking questions like "What other stories does this story most remind you of?" or "Which of these stories does that story most remind you of?", or versions thereof. When comparing changes over time, the evaluator's focus would then be on the changing contents of the strongly connected versus weakly connected stories.
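As a minimal sketch of this idea, the following Python snippet (using networkx) treats each answer to a "reminds you of" question as a link between two stories and then ranks stories by how connected they are. The story labels and links are entirely hypothetical, and degree is only the simplest of several possible measures of a story's "attractor"-like position.

    import networkx as nx

    # Hypothetical answers to "What other stories does this story most remind you of?"
    reminds_of = [
        ("Story 1", "Story 3"),
        ("Story 2", "Story 3"),
        ("Story 4", "Story 3"),
        ("Story 5", "Story 6"),
        ("Story 6", "Story 3"),
    ]

    g = nx.Graph()
    g.add_edges_from(reminds_of)

    # Highly connected stories are candidate "attractors" in the sense used above:
    # stories that many other stories lead back to in the tellers' minds.
    by_degree = sorted(g.degree, key=lambda pair: pair[1], reverse=True)
    print("Most connected stories:", by_degree[:3])

Repeating the exercise at two points in time would then let the evaluator compare whether the most connected stories had shifted from divisive to more conciliatory content.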


In the discussion above I have outlined how a network approach could help us construct various types of aggregated (network) views of multiple stories. Because they are built up out of the views of individuals, it would be possible to see where there were varying degrees of agreement within those structures. They would not be biased towards a single narrative that excludes others, a concern of many people using story-telling approaches, including some of the originators of the term meta-narrative.

And complex histories


My final comments relate to another form of story-telling, that is, grand narrative as done by historians. Yesterday I read with interest Niall Ferguson's Complexity and Collapse: Empires on the Edge of Chaos (originally in Foreign Affairs). In this article Niall describes the ways some historians have sought to explain the rise and fall of empires, in terms of sequences of events taking place over long periods. In his view they suffer from what Nassim Taleb calls "the narrative fallacy": they construct psychologically satisfying stories on the principle of post hoc, ergo propter hoc ("after this, therefore because of this"). That is, the propensity to over-explain major historical events, to create a long and coherent story where in fact there was none. His alternative view is couched in terms of complexity theory ideas. Given the complexity of modern societies, "In reality, the proximate triggers of a crisis are often sufficient to explain the sudden shift from a good equilibrium to a bad mess." He then qualifies the notion of equilibrium: "a complex economy is characterized by the interaction of dispersed agents, a lack of central control, multiple levels of organization, continual adaptation, incessant creation of new market niches, and the absence of general equilibrium." Within such systems small changes can have catastrophic (i.e. non-linear) effects, because of the nature of the connectivity involved. Ferguson then goes on to recount examples of the rapidity of decline in some major empires.

One point which he does not make, but which I think is implicit in his description of how change can happen in complex systems, is that more than one type of small change can trigger the same kind of large scale change. Consider the assassination of Archduke Franz Ferdinand of Austria in Sarajevo in June 1914. Would World War 1 not have happened if that event had not taken place? Not speaking as a historian, my guess is that there are quite a few other events that could have triggered the start of a war thereafter.


Niall Ferguson's complexity-based view is in a sense a technocrat's objection to grand narratives, but perhaps also another kind of grand narrative in its own right. Nevertheless his view does seem to have practical relevance to the writing of evaluation stories: it highlights the need for caution about excessive internal coherence in any story of change and its causes. A network view of causal relationships between types of events, constructed by participants with differing views, might help mitigate this risk when it needs to be reduced to a text description.

PS: "In recent years, however, advancements in cognitive neuroscience have suggested that memories unfold across multiple areas of the cortex simultaneously, like a richly interconnected network of stories, rather than an archive of static files." in The Fully Immersive Mind of Oliver Sacks

PS 25 October 2010. Please also see  Networks of self-categorised stories

Friday, August 20, 2010

Cynefin Framework versus Stacey Matrix versus network perspectives

  
Cynefin

Lots of people seem to like the Cynefin Framework. Jess Dart and Patricia Rogers are among the friends and colleagues of mine who have expressed a liking for it. It was one of the subjects of discussion at the recent Evaluation Revisited conference in Utrecht in May. Why don't I like it? There are three reasons...

Usually matrix classifications of possible states are based on the intersection of two dimensions. They can provide good value because combining two dimensions to generate four (or more) possible states is a compact and efficient way of describing things. Matrix classifications have parsimony.

But whenever I look at descriptions of the Cynefin Framework I can never see, or identify, what the two dimensions are which give the framework its 2 x 2 structure, and from which the four states are generated. If they were more evident I might be able to use them to identify which of the four states best described the particular conditions I was facing at a given time. But up to now I have just had to make a best guess, based on the description of each state. PS: I have been told by someone recently that Dave Snowden says this is not a 2x2 matrix, but if so, why is it presented like one?

My second concern is the nature of the connection between this fourfold classification and other research on complexity, beyond the field of management studies and consultancy work. IMHO, there is not much in the way of a theoretical or empirical basis for it, especially when Dave's fifth state of "disorder" is placed in the centre. This may be the reason why the two axes of the matrix I mentioned above have not been specified... because they have not yet been found.

My third concern is that I don't think the fourfold classification has much discriminatory power. Most of the situations I face, as an evaluator, could probably be described as complex. I don't see many really chaotic ones, like gyrating stock markets or changeable weather patterns, nor do I see many that could be described as simple, or just complicated, except perhaps when dealing with a single person's task, not involving interactions with others. Given the prevalence of complex situations, I would prefer to see a matrix that helped me discriminate between different forms of complexity, and their possible consequences.

Stacey


This brings me to Stacey's matrix, which does have two identifiable dimensions, shown above: certainty (i.e. the predictability of events) and the degree of agreement over those events. Years before I had heard of "Stacey's matrix" I had found the same kind of 2 x 2 matrix a useful means of describing four different kinds of possible development outcomes, which had different implications for what sort of M&E tools would be most relevant. For example, by definition you cannot use predefined indicators to monitor unpredictable outcomes (regardless of whether we agree or disagree on their significance). However, methods like MSC can be used to monitor these kinds of change. And a good case could be made for more attention to the use of historians' skills, especially to respond to unexpected events whose significance is disputed. More recently I argued that weighted checklists are probably the most suitable for tracking outcomes that are predictable but where there is not necessarily any agreement about their significance. A quote from Patton could be hijacked and used here: "These distinctions help with situation recognition so that an evaluation approach can be selected that is appropriate to a particular situation and intervention, thereby increasing the likely utility - and actual use - of the evaluation" (page 85, Developmental Evaluation).
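As a minimal sketch of what such a weighted checklist might look like (not the version in the MandE NEWS posting mentioned in the post script below, and with purely hypothetical outcomes, scores and weights), the same checklist of predictable outcomes can be scored under different stakeholder groups' weightings, which makes disagreement about significance explicit rather than hiding it in a single number:

    # Fixed checklist of predictable outcomes, with achievement scores (0 to 1).
    checklist = {
        "Outcome 1": 1.0,   # fully achieved
        "Outcome 2": 0.5,   # partly achieved
        "Outcome 3": 0.0,   # not achieved
    }

    # Each stakeholder group assigns its own importance weights to the same items.
    stakeholder_weights = {
        "Funder":    {"Outcome 1": 3, "Outcome 2": 1, "Outcome 3": 2},
        "Community": {"Outcome 1": 1, "Outcome 2": 3, "Outcome 3": 3},
    }

    # Weighted score for each group: sum of weight x achievement, as a share of
    # the maximum possible under that group's weights.
    for group, weights in stakeholder_weights.items():
        total = sum(weights[item] * score for item, score in checklist.items())
        maximum = sum(weights.values())
        print(f"{group}: {total / maximum:.2f}")

The interesting evaluation material is often the gap between the groups' scores, which points straight back to where agreement is lacking.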

Post script: Here is an example of how I have used it for this kind of purpose, in a posting on MandE NEWS about weighted checklists.


From what I have read I think Ralph Stacey also produced the following more detailed version of his matrix:


This has then been simplified by Brenda Zimmerman, as follows


In this version simple, complicated, complex and anarchy (chaos) are in effect part of a continuum, involving different mixes of agreement and certainty. Interestingly, from my point of view, the category taking up the most space in the matrix is that of complexity, echoing my gut level feeling expressed above. This feeling was supported when I read Patton's three examples of simple, complicated and complex (page 92, ibid), based on Zimmerman. The simple and complicated examples were both about making materials do what you wanted (cake mix and rocket components), whereas the complex example was about child rearing, i.e. getting people to do what you wanted. More interesting still, the complex example was raising a couple of children in a family, in other words a small group of people. So anything involving more people is probably going to be a whole lot more complex. PS: And interestingly, along the same lines, the difference between simple and complicated was between a physical task involving one person (following a recipe) and one involving large numbers of people (sending a rocket into space).

Another take on this is given by Chris Rodgers' comments on Stacey's views:
Although the framework, which Stacey had developed in the mid-1990s, regularly crops up in blogs, on websites and during presentations, he no longer sees it as valid and useful.  His comment explains why this is the case, and the implications that this has for his current view of complexity and organizational dynamics.  In essence, he argues that
  • life is complex all the time, not just on those occasions which can be characterized as being “far from certainty” and “far from agreement” …
  • this is because change and stability are inextricably intertwined in the everyday conversational life of the organization …
  • which means that, even in the most ordinary of situations, something unexpected might happen that generates far-reaching and unexpected outcomes …
  • and so, from this perspective, there are no “levels of complexity” …
  • nor levels in human action that might usefully be thought of as a “system”.
Well, maybe… but this is beginning to sound a bit too much like the utterances of a Zen master to me :-) Like Rodgers, I hope we can still make some kind of useful distinctions re complexity.

Back to Snowden

Which brings me back to a more recent statement by Dave Snowden, which to me seems more useful than his earlier Cynefin Framework. In his presentation at the Gurteen Knowledge Cafe, in early 2009, as reported by Conrad Taylor, "Dave presented three system models: ordered, chaotic and complex. By ‘system’ he means networks that have coherence, though that need not imply sharp boundaries. ‘Agents’ are defined as anything which acts within a system. An agent could be an individual person, or a grouping; an idea can also be an agent, for example the myth-structures which largely determine how we make decisions within the communities and societies within which we live."
  • "Ordered systems are ones in which the actions of agents are constrained by the system, making the behavior of the agents predictable. Most management theory is predicated on this view of the organisation."
  • "Chaotic systems are ones in which the agents are unconstrained and independent of each other. This is the domain of statistical analysis and probability. We have tended to assume that markets are chaotic; but this has been a simplistic view."
  • "Complex systems are ones in which the agents are lightly constrained by the system, and through their mutual interactions with each other and with the system environment, the agents also modify the system. As a result, the system and its agents ‘co-evolve’. This, in fact, is a better model for understanding markets, and organisations.”

This conceptualization is simpler (i.e. has more economy) and seems more connected with prior research on complexity. My favorite relevant source here is Stuart Kauffman's book At Home in the Universe: The Search for the Laws of Complexity (p86-92), where he describes the behavior of electronic models of networks of actors (with on/off behavior states for each actor) moving from simple to complex to chaotic patterns, depending on the number of connections between them. As I read it, few connections generate ordered (stable) network behavior, many connections generate chaotic (apparently unrepeating) behavior, and medium numbers (where N actors = N connections) generate complex cyclical behavior. (See more on Boolean networks.)
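For anyone who wants to experiment with this, here is a minimal sketch of such a random Boolean network in Python. It is my own toy re-implementation of the kind of model Kauffman describes, not his code; the parameter values and seed are illustrative only, and the cycle length returned will vary from run to run.

    import random

    def run_boolean_network(n=20, k=2, seed=1, max_steps=10_000):
        rng = random.Random(seed)
        # Each actor reads k randomly chosen actors (its "inputs").
        inputs = [rng.sample(range(n), k) for _ in range(n)]
        # Each actor has a random Boolean rule: a lookup table over the 2**k
        # possible combinations of its inputs' on/off states.
        tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]

        state = tuple(rng.randint(0, 1) for _ in range(n))
        seen = {state: 0}
        for step in range(1, max_steps + 1):
            state = tuple(
                tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
                for i in range(n)
            )
            if state in seen:              # a state has recurred: we are on a cycle
                return step - seen[state]  # cycle length
            seen[state] = step
        return None                        # no cycle found within max_steps

    for k in (1, 2, 5):
        print("K =", k, "cycle length:", run_boolean_network(k=k))

Broadly, runs with few inputs per actor tend to settle into short, orderly cycles, while runs with many inputs per actor tend to wander through very long, chaotic-looking ones, which is the ordered-to-chaotic spectrum referred to above.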

This relates back to a conversation that I had with Dave Snowden in 2009 about the value of a network perspective on complexity, in which he said (as I remember) that relationships within networks can be seen as constraints. So, as I see it, in order to differentiate forms of complexity we should be looking at the nature of the specific networks in which actors are involved: their number, the structure of relationships, and perhaps the extent to which the actors have their own individual autonomy, i.e. responses which are not specific to particular relationships (an attribute not granted to "actors" in the electronic model described).

My feeling is that with this approach it might even be possible to link this kind of analysis back to Stacey's 2x2 matrix. Predictability might be primarily a function of connectedness, and therefore more problematic in larger networks, where the number of possible connections is much higher. The possibility of agreement, Stacey's second dimension, might be further dependent on the extent to which actors have some individual autonomy within a given network structure.

To be continued…

PS 1: Michael Quinn Patton's book on Developmental Evaluation has a whole chapter on "Distinguishing Simple, Complicated, and Complex". However, I was surprised to find that despite the book's focus on complexity, there was not a single reference in the Index to "networks". There was one example of a network model (Exhibit 5.3), contrasted with a "Linear Program Logic Model" (Exhibit 5.2), in the chapter on Systems Thinking and Complexity Concepts. [I will elaborate further here]

Regarding the simple, complicated and complex distinction, on page 95 Michael describes these as "sensitising concepts, not operational measurements". This worried me a bit, but it is an idea with a history (Look here for other views on this idea). But he then says "The purpose of making such distinctions is driven by the utility of situation recognition and responsiveness. For evaluation this means matching the evaluation to the nature of the situation". That makes sense to me, and is how I have tried to use the simple version of the Stacey Matrix (using its dimensions only). However, Michael then goes on to provide, perhaps unintentionally, evidence of how useless these distinctions are in this respect, at least in their current form. He describes working with a group of 20 experienced teachers to design an evaluation of an innovative reading program: "They disagreed intensely about the state of knowledge concerning how children learn to read... Different preferences for evaluation flowed from different definitions of the situation. We ultimately agreed on a mixed methods design that incorporated aspects of both sets of preferences". Further on in the same chapter, Bob Williams is quoted reporting the same kind of result (i.e. conflicting interpretations) in a discussion with health sector workers. PS 25/8/2010 - Perhaps I need to clarify here - in both cases participants could not agree on whether the situation under discussion was simple, complicated or complex, and thus these distinctions could not inform their choices of what to do. As I read it, in the first case the mixed method choice was a compromise, not an informed choice.

PS 2: I have also just pulled Melanie Mitchell's "Complexity: A Guided Tour" off the shelf, and re-scanned her Chapter 7 on "Defining and Measuring Complexity". She notes that about 40 different measures of complexity have been proposed by different people. Her conclusion, 17 pages later, is that "The diversity of measures that have been proposed indicates that the notions of complexity that we're trying to get at have many different interacting dimensions and probably can't be captured by a single measurement scale". This is not a very helpful conclusion. But I noticed that she does cite earlier what seem to be three categories of measures that cover many of the 40 or so: 1. How hard is the object or process to describe? 2. How hard is it to create? 3. What is its degree of organisation?

PS 3: I have followed up John Caddell's advice to read a blog post by Cynthia Kurtz (a co-author of the IBM Systems Journal paper on Cynefin) recalling some of the early work around the framework. In that post was the following version of the Cynefin Framework included in the oft-mentioned "The new dynamics of strategy: Sense-making in a complex and complicated world" published in the IBM SYSTEMS JOURNAL, VOL 42, NO 3, 2003.
In her explanation of the origins of this version she says it had two axes: "the degree of imposed order" and "the degree of self-organization." This I found interesting because these dimensions have the potential to be measurable. If they are measurable, then the actual behavior of the four identified systems could be compared. And we could then ask "Does their behavior differ in ways that have consequences for managers or evaluators?" I have previously speculated that there might be network measures that could describe these two dimensions: network density and network centrality. Network centrality could be the x axis, being low on the left and high on the right, and network density could be the y axis, low at the bottom and high at the top. How well the differences in these four types of network structures might capture our day-to-day notion of complexity is not yet clear to me. As mentioned way above, density does seem to be linked to differences between simple, complex and chaotic behavior. Maybe differences in centrality moderate/magnify the consequences of different levels of network density?
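As a minimal sketch of how those two candidate measures could actually be computed, the following Python snippet (using networkx) calculates density and a Freeman-style degree centralization for a few toy networks. The example networks are illustrative only, not data from any of the cases discussed, and other centralization measures could equally be used.

    import networkx as nx

    def degree_centralization(g):
        # Freeman's degree centralization: how far the network is from a star.
        n = g.number_of_nodes()
        degrees = [d for _, d in g.degree()]
        max_d = max(degrees)
        return sum(max_d - d for d in degrees) / ((n - 1) * (n - 2))

    examples = {
        "star (high centralization)": nx.star_graph(9),
        "ring (low centralization)": nx.cycle_graph(10),
        "complete (high density)": nx.complete_graph(10),
    }

    for name, g in examples.items():
        print(f"{name}: density={nx.density(g):.2f}, "
              f"centralization={degree_centralization(g):.2f}")

Note that the complete graph scores high on density but zero on centralization, which illustrates why the two measures could plausibly serve as independent axes.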

PS 4 (April 2015): For more reading on this subject that may be of interest, see Diversity and Complexity by Scott E Page, Princeton, 2011.

PS 5 (June 2020): Please view Andy Stirling's video'd take on risk, uncertainty, ambiguity and ignorance, a slightly different take on the two dimensions also present in the Stacey matrix.
