From my point of view, one of the most interesting and important challenges is how to create useful representations of large, complex, dynamic structures, especially as seen by participants in those structures: for example, multi-stakeholder processes operating at national and international levels. Behind this view is an assumption that better representations will give us more informed choices about how to respond to that complexity. Note that the key word here is respond to, not manage; the scale of ambition is more modest. Management of complexity only seems feasible on a small scale, such as the children's play group example cited by Dave Snowden (DS).
I have had a long-standing interest in one particular set of tools for producing representations of complex structures: social network analysis (SNA) methods and their associated software. During the workshop Steve Waddell provided a good introduction to SNA and related tools.
DS's presentations on the sense-making approach provided a useful complementary perspective. This was all about making use of large sets of qualitative data of a kind that cannot easily be used by SNA tools. Much of this data was about people's voices, values and concerns, all in the form of fairly unstructured and impromptu responses to questions asked by their peers (who were trained to do so). These responses are called "micro-narratives" (MNs).
DS's sense-making process (and associated software) is innovative in at least three respects. The first is its huge scale: up to 30,000 items of text collected and analysed in one application. In many cases this would be more like a census than a sample survey. I have never heard of qualitative data being collected on this scale before, nor as promptly, including the time spent on analysis, in the case of the Pakistan example. The second, related to this, is the sophistication and apparent user-friendliness of the bespoke software and hardware that was used.
The third, and to me the most interesting and important, was the decision to ask respondents to "self-signify" the qualitative information they had provided. This was done by asking respondents to describe their own MNs using two different kinds of scales, rating the presence of different attributes already identified by the researchers as being of concern. Because respondents provided this meta-data, every MN could be given a location in a three-dimensional space (in fact in a number of different three-dimensional spaces, if many self-signifiers were used). Within that space the researcher could then look for clusters of MNs. Of special interest were clusters of MNs that were outliers, i.e. those that sat away from the centre of the overall distribution of MNs.
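To make this concrete, here is a minimal sketch (in Python) of the general idea as I understand it, not of the actual SenseMaker data model: each MN gets a location from its self-signified ratings, and outliers are then flagged by their distance from the centre of the distribution. The signifier values, the 0-1 scaling and the distance-based cut-off are all my own assumptions for illustration.

```python
import numpy as np

# Hypothetical data: each row is one micro-narrative (MN), each column is the
# respondent's own rating of that MN on one of three signifier scales (0-1).
# The values are invented; the real self-signifiers are chosen by the researchers.
rng = np.random.default_rng(42)
mn_points = rng.random((1000, 3))          # 1,000 MNs located in a 3-D space

# Flag outliers as MNs far from the centre of the overall distribution.
centre = mn_points.mean(axis=0)
distances = np.linalg.norm(mn_points - centre, axis=1)
threshold = distances.mean() + 2 * distances.std()   # arbitrary cut-off, my assumption
outlier_ids = np.where(distances > threshold)[0]

print(f"{len(outlier_ids)} of {len(mn_points)} MNs flagged as outliers")
```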
There are echoes here of the expectation that the collection and analysis of Most Significant Change (MSC) stories will help organisations identify "the edges of experience": experiences they would want to see more of in future (if positive), or less of (if negative). The difference is DS's use of quantitative data to make these outliers identifiable in a more transparent manner.
As far as I understand it, an additional purpose of using self-signifiers to identify clusters of MNs is to prevent the researcher from prematurely closing down the process of interpretation, and thus to strengthen the trustworthiness of the analysis that is made.
On the first day of the workshop I had two reservations about the approach that had been described. The first was about the "fitness landscape" drawn within the three-dimensional space: how it was constructed, and why it was needed, was unclear to me. My understanding now is that this surface is a mathematical projection from the 30,000 data points in that 3-D space (in the Pakistan example), a bit like a regression line in a 2-D graph. One advantage of this constructed landscape is that it enables observers to see more clearly how these numerous MNs relate to each other on the three dimensions; this is much harder to do when they are simply dots hanging in space.
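If that reading is right, the landscape is something like a smoothed surface fitted over the cloud of self-signified points. The sketch below illustrates that interpretation with a kernel density estimate over two hypothetical signifier dimensions; the actual software may well use a different projection, so this is only my stand-in for it.

```python
import numpy as np
from scipy.stats import gaussian_kde
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the 3-D projection)

# Hypothetical MN locations on two self-signifier dimensions (0-1 each).
rng = np.random.default_rng(1)
points = rng.beta(2, 5, size=(2, 2000))   # 2,000 invented MNs, skewed for visible structure

# Fit a smooth surface over the scattered points: my stand-in for the "fitness landscape".
density = gaussian_kde(points)
xs, ys = np.mgrid[0:1:100j, 0:1:100j]
surface = density(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)

fig, ax = plt.subplots(subplot_kw={"projection": "3d"})
ax.plot_surface(xs, ys, surface, cmap="viridis")
ax.set_xlabel("signifier 1")
ax.set_ylabel("signifier 2")
ax.set_zlabel("density of MNs")
plt.show()
```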
I also wondered why "peak" locations were designated as peaks and not troughs, and vice versa. This seems to be a matter of researcher choice, which is fine if the landscape has no more significance than a visual aid, as suggested above. But in some complexity studies peaks in landscapes are presented as unstable locations, and troughs as stable points acting as "attractors". Is it likely that any pole of any of the self-signifying scales will show this type of behaviour? If not, might it be better not to talk about fitness landscapes at all, or at least to be very careful about not giving them more apparent significance than they merit? A related claim seemed to be made when DS said "Fitness landscapes show people where change is possible". But is this really the case? I can't see how it can be, unless desirable/undesirable attributes are built into the self-signifying scales chosen to create the 3-D space. There is a risk that the technical language being used imputes more independent analytic capacity than the software has in reality.
My other concern was about who chooses the scales used to self-signify. I do think it is reasonable to derive these from a relevant academic field, or from the concerns of the client for the research. But might it provide an even more independent structuring of the MN data if these scales were somehow also derived from the respondents themselves? On reflection, there seems to be no way of doing this when the sense-maker approach is applied on a large scale.
But on a much smaller scale I think there may be ways of doing this, by using a reiterated process of inquiry rather than a one-off process. I can provide an example using data borrowed from a stakeholder consultation process held in rural Australia a few years ago. In the first stage respondents generated the equivalent of MNs: short statements about how they expected a new fire prevention programme to help them and their community. These statements were in effect informal "objectives", written in ordinary day-to-day language on small filing cards. In the next stage the same individual stakeholders were each asked to sort these statements into a number of groups (of their own choosing), each group describing a different kind of expectation. Each of these groups was then labelled by the respondent who created it. The data from these card-sorting exercises was then aggregated into a single cards x cards matrix, where each cell value records how often the row card was placed in the same group as the column card.
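A small sketch of that aggregation step, using invented card IDs and groupings rather than the actual consultation data: each participant's sort is a list of groups, and the matrix simply counts how often each pair of cards ends up in the same group.

```python
from itertools import combinations
from collections import defaultdict

# Hypothetical card sorts: each participant grouped card IDs as they saw fit.
# (Invented data; the real exercise used around 30 statement cards.)
sorts = [
    [[1, 2, 3], [4, 5], [6]],        # participant A's groups
    [[1, 2], [3, 4, 5, 6]],          # participant B's groups
    [[2, 3], [1, 4], [5, 6]],        # participant C's groups
]

# Build the cards x cards co-occurrence counts: cell value = number of times
# the row card and the column card were placed in the same group.
co_occurrence = defaultdict(int)
for groups in sorts:
    for group in groups:
        for a, b in combinations(sorted(group), 2):
            co_occurrence[(a, b)] += 1

for (a, b), count in sorted(co_occurrence.items()):
    print(f"cards {a} and {b}: placed together {count} time(s)")
```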
Here the card-sorting exercise was in effect another means of self-signifying: it generated meta-data, statements (group labels) about statements (individual expectations). Unlike the tripolar and bipolar scales used in DS's sense-making approach, it did not enable a 3-D space to be generated in which all 30 statements could be given a specific location. However, the cards x cards matrix is a data set that many SNA software tools can easily use to construct a network diagram, which is a 2-D presentation of complex structures. The structure that was generated is shown below. Each node is a card; each link between two cards represents the fact that those two cards were placed in the same group one or more times (with line thickness showing how often). Clusters of cards all linked to each other were all placed in the same group one or more times. When using one software package (Visualyzer), a "mouseover" on any node shows not only the original card contents (the expectation), but also the labels of the one or more groups that the card was later placed in. In this adapted use of self-signifiers, the process of grouping cards adds qualitative information and meaning to that already present in the card contents.
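For readers without access to Visualyzer, the same kind of diagram can be sketched with a general-purpose network library; the co-occurrence counts below are invented, and the layout and colours are arbitrary choices, not those of the original diagram.

```python
import networkx as nx
import matplotlib.pyplot as plt

# Invented co-occurrence counts, as produced by a card-sorting aggregation
# like the one sketched above.
co_occurrence = {(1, 2): 2, (2, 3): 2, (1, 3): 1, (4, 5): 2,
                 (5, 6): 2, (3, 4): 1, (1, 4): 1, (4, 6): 1, (3, 6): 1, (3, 5): 1}

# Each node is a card; each weighted edge records how often the two cards
# were placed in the same group. Thicker lines = more co-occurrences.
G = nx.Graph()
for (a, b), count in co_occurrence.items():
    G.add_edge(a, b, weight=count)

pos = nx.spring_layout(G, seed=7)
widths = [G[u][v]["weight"] for u, v in G.edges()]
nx.draw_networkx(G, pos, width=widths, node_color="gold")
plt.axis("off")
plt.show()
```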
As well as identifying respondent-defined clusters of statements, we can also sometimes see links between these clusters. The links are like a more skeletal version of the landscape surface discussed above. The "peaks" of that landscape are the nodes connected by strong links (i.e. the two cards were placed in the same groups multiple times). These can be made easier to identify by applying a filter to screen out the weaker links: the metaphorical equivalent of raising the sea level and covering the lower levels of the landscape.
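The "sea level" filter can be as simple as a threshold on link weights. The sketch below keeps only links above the average weight (as in the image key further down); the graph and its counts are invented for illustration.

```python
import networkx as nx

# A small weighted graph of card co-occurrences (invented counts).
G = nx.Graph()
G.add_weighted_edges_from([(1, 2, 3), (2, 3, 3), (1, 3, 1),
                           (3, 4, 1), (4, 5, 2), (5, 6, 1)])

# "Raise the sea level": keep only links whose weight is above the average,
# so the strongly co-occurring clusters stand out as the remaining "peaks".
weights = [d["weight"] for _, _, d in G.edges(data=True)]
cutoff = sum(weights) / len(weights)

strong = nx.Graph()
strong.add_edges_from((u, v, d) for u, v, d in G.edges(data=True)
                      if d["weight"] > cutoff)

print("average weight:", cutoff)
print("links kept:", list(strong.edges(data="weight")))
```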
The virtue of this network approach to analysing MNs is its very participative nature. Its limitation is its modest scalability. The literature on sorting methods suggests an upper limit of around 50 cards (I will investigate this further). While this is much less than 30,000, many structured stakeholder consultation processes involve a smaller number of participants than this.
Key: Numbers represent the IDs of each card. Links indicate that the two cards were placed in the same group one or more times; thicker links = placed in the same group more often. Yellow nodes = the most conspicuous cliques of cards (all frequently co-occurring). This image shows the strongest links only (i.e. those above the average number). The mouseover function is not available for this image copy.
My final set of comments is about some of the risks and possible limitations of DS's sense-making approach. The first concern is transparency of method. To newcomers, the complexity terminology used when introducing the method was challenging, to say the least. At worst I wonder whether it is an unnecessary obstruction, and whether a shorter route to understanding the method would exist if less complexity-science terminology were used. The proprietary nature of the associated software is a related concern, though I have been told that there is an intention to make an open source version available. Open source means open to critique and open to improvement through collective effort, which is what the progress of science is ideally all about. The extensive use of complexity-science terms also seems to make the approach vulnerable to corruption and possible ridicule, as people decide to "pick and mix" the bits and pieces of complexity ideas they are interested in, without understanding the basics of the whole idea of complexity.
Another issue is commensurate benefits. After seeing the scale of the data gathering involved and the sophistication of the software used, both of which are impressive, I did wonder whether the benefits obtained from the analysis were commensurate with the costs and efforts invested, at least in the examples we were told about. Other concerns are not exclusive to the sense-making approach. What about the stories not told? Perhaps with almost census-like coverage of some groups of concern this is less of a worry than with other large-scale ethnographic inquiries. What about unexpected stories? Is the search for outliers leading to the discovery of views which are a surprise to the clients of the research, and of possible consequence to their plans for how to relate to the respondents in the future? And are these surprises numerous enough, or dramatic enough, to counterbalance the resources invested in finding them?
------------------
"At the heart of all major discoveries in the physical sciences is the discovery of novel methods of representation" Stephen Toulmin
Very interesting post, Rick. Where the filters come from is a key question. I and colleagues have been experimenting with a three-stage process using the Cognitive Edge techniques to derive filters: (1) "pump-priming" narrative collection (2) run a series of workshops to identify patterns in the narratives, from which filters can be derived - the MSC selection process would be an equivalent stage I suppose (3) cycle back through the narrative base to apply the filters or start collecting self-signified narratives using the filters. We're currently mid-way through an open project on how organisations leverage their expertise, using this approach (http://usingexpertise.blogspot.com and http://usingexpertise.wikispaces.com).
See Dave Snowden's initial response on his blog Cognitive Edge, called "Most Significant Chance: interesting possibilities", dated 2 December 2009.
Ian Abbott-Donnelly (e-mail abbottia@uk.ibm.com) made this comment on another blog of mine:
Comment: See http://www.mefeedia.com/watch/24063573 You might like this connexion between aid monitoring and sensemaking tools. An idea that emerged at the World Water Forum with akvo.org after a UN monitoring workshop. Best Regards