Saturday, May 18, 2019

Evaluating innovation...



Earlier this week I sat in on a very interesting UKES 2019 Evaluation Conference presentation "Evaluating grand challenges and innovation" by Clarissa Poulson and Katherine May (of IPE Tripleline).

The difficulty of measuring and evaluating innovation reminded me of similar issues I struggled with many decades ago when doing the Honours year of my Psychology degree at ANU. I had a substantial essay to write on the measurement of creativity! My faint memory of that paper is that I did not make much progress on the topic.

After the conference, I did a quick search to find how innovation is defined and measured. One distinction that is often made is between invention and innovation. It seems that innovation = invention + use.  The measurement of the use of an invention seems relatively unproblematic. But if the essence of the invention aspect of innovation is newness or difference, then how do you measure that?

While listening to the conference presentation I thought there were some ideas that could be usefully borrowed from work I am currently doing on the evaluation and analysis of scenario planning exercises. I made a presentation on that work at this year's UKES conference (PowerPoint here).

In that presentation, I explained how participants' text contributions to developing scenarios (developed in the form of branching storylines) could be analysed in terms of their diversity. More specifically, in terms of three dimensions of diversity, as conceptualised by Stirling (1998):
  • Variety: Numbers of types of things 
  • Balance: Numbers of cases of each type 
  • Disparity: Degree of difference between each type 
Disparity seemed to be the hardest to measure, but there are measures used within the field of Social Network Analysis (SNA) that can help. In SNA, the distance between actors or other kinds of nodes in a network is measured in terms of the "geodesic", i.e. the number of links on the shortest path between any two nodes of interest. There are various forms of distance measure, but one simple one is "Closeness", which is the sum of the geodesic distances from a given node to all other nodes in that network (Borgatti et al., 2018). This suggested to me one possible way forward in the measurement of the newness aspect of an innovation.
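
To make that Closeness idea concrete, here is a minimal sketch in Python, using the networkx library rather than UCINET and an invented eight-node network. It simply sums the geodesic (shortest path) distances from each node to all other nodes; a larger total means a more "distant", i.e. more different, node.

```python
# Minimal sketch of the Closeness measure described above: for each node,
# the sum of geodesic (shortest path) distances to every other node.
# The edge list is invented purely for illustration.
import networkx as nx

G = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 1), (2, 6), (4, 7), (6, 8)])

def summed_geodesic_distance(graph):
    """Sum of geodesic distances from each node to all other nodes."""
    lengths = dict(nx.all_pairs_shortest_path_length(graph))
    return {node: sum(dists.values()) for node, dists in lengths.items()}

for node, total in sorted(summed_geodesic_distance(G).items()):
    print(node, total)   # larger total = more distant from the rest of the network
```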

Perhaps counter-intuitively, one would ask the inventor/owner of an innovation to identify which other product, in a particular population of products, their product is most similar to. All other, unnamed, products would be, by definition, more different. Repeating this question for all owners of the products in the population would generate what SNA people call an "adjacency matrix", where a cell value (1 or 0) tells us whether or not a specific row item is seen as most similar to a specific column item. Such a matrix can then be visualised as a network structure, and closeness values can be calculated for all nodes in that network using SNA software (I use UCINET/Netdraw). Some nodes will be less close to the other nodes than others, and that closeness value is a measure of their difference or "disparity".
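
As a hedged sketch of how such survey answers could become an adjacency matrix and then a network, here is one possible workflow in Python (the product names and nominations are invented; in practice I would use UCINET/Netdraw):

```python
# A sketch of turning "most similar to" nominations into the adjacency
# matrix described above. Product names and nominations are invented.
import numpy as np
import networkx as nx

products = ["A", "B", "C", "D", "E"]
# Each owner names the one other product their own product is most similar to.
most_similar_to = {"A": "B", "B": "A", "C": "B", "D": "B", "E": "D"}

index = {name: i for i, name in enumerate(products)}
adjacency = np.zeros((len(products), len(products)), dtype=int)
for product, nominee in most_similar_to.items():
    adjacency[index[product], index[nominee]] = 1   # 1 = "seen as most similar to"

# Treat the nominations as ties in an undirected network and sum geodesic distances.
G = nx.from_numpy_array(adjacency)
farness = {products[node]: sum(dists.values())
           for node, dists in nx.all_pairs_shortest_path_length(G)}
print(farness)   # the highest value is the most "disparate" product
```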

Here is a simulated example, generated using UCINET. The blue nodes are the products. Larger blue nodes are more distant, i.e. more different, from all the other nodes. Node 7 has the largest Closeness measure (28), i.e. is the most different, whereas node 6 has the smallest Closeness measure (18), i.e. is the least different.

There are two other advantages to this kind of network perspective. The first is that it is possible to identify the level of diversity in the population as a whole. SNA software can calculate the average closeness of all nodes in a network to all others. Here is an example of a network where the nodes are much more distant from each other than in the example above.
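
A minimal sketch of that whole-population measure, again using networkx on two invented five-node networks, one compact and one chain-like: the more dispersed network produces the higher average.

```python
# Average of the summed geodesic distances across all nodes, as a rough
# indicator of how diverse (dispersed) the whole population of products is.
import networkx as nx

def average_summed_distance(graph):
    lengths = dict(nx.all_pairs_shortest_path_length(graph))
    totals = [sum(dists.values()) for dists in lengths.values()]
    return sum(totals) / len(totals)

compact = nx.Graph([(1, 2), (1, 3), (1, 4), (1, 5), (2, 3), (3, 4), (4, 5)])
chain = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 5)])   # nodes strung out in a line
print(average_summed_distance(compact))   # lower average: nodes are close together
print(average_summed_distance(chain))     # higher average: a more dispersed network
```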


The second advantage is that a network visualisation, like the first one above, makes it possible to identify any clusters of products, i.e. products that are each most similar to each other. No example is shown here, but you can imagine one!
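
Since no example is shown, here is just one possible way of surfacing such clusters programmatically, using a standard community detection routine in networkx on an invented set of ties (UCINET/Netdraw offer their own clustering and visualisation options):

```python
# One possible approach: community detection on the similarity network.
# The ties below are invented and form two loose clusters joined by one bridge.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph([("A", "B"), ("B", "C"), ("C", "A"),
              ("D", "E"), ("E", "F"), ("F", "D"),
              ("C", "D")])   # a single bridging tie between the two triangles

for i, community in enumerate(greedy_modularity_communities(G), start=1):
    print(f"Cluster {i}: {sorted(community)}")
```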

So, three advantages of this measurement approach:
1. Identification of how relatively different a given product or process is
2. Identification of diversity in a whole population of products
3. Identification of types of differences (clusters of mutually similar products) within that population.

Having identified a means of measuring degrees of newness or difference (and perhaps of categorising types of these), we could then explore the correlation between these measures and different forms of product usage.
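
As a very rough sketch of that last step, with invented numbers: once each product has a disparity score (e.g. its summed geodesic distance) and a usage measure (e.g. number of adopters), a simple rank correlation can be calculated.

```python
# Exploring the relationship between novelty (disparity) and usage.
# All numbers below are invented purely for illustration.
from scipy.stats import spearmanr

disparity = [18, 20, 22, 24, 28]   # e.g. summed geodesic distance per product
usage = [50, 42, 30, 25, 10]       # e.g. number of users of each product
rho, p_value = spearmanr(disparity, usage)
print(f"Spearman's rho = {rho:.2f}, p = {p_value:.3f}")
```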

PS: I will add a few related papers of interest here:

Measuring multidimensional novelty


Sometimes the new entity may be novel in multiple respects, but in each respect only when compared to a different entity. For example, I have recently reviewed how my participatory scenario planning app ParEvo is innovative with respect to (a) its background theory, (b) how it is implemented, and (c) how the results are represented. In each area, there was a different "most similar" comparator practice.

The same network visualisation approach as above can be taken. The difference is that the new entity will have links to multiple existing entities, not just one, and each of those links will have a varying "weight", reflecting the number of attributes the new entity shares with that entity. The aggregate value of the link weights for a novel new entity will be lower than those of the other, existing entities.
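
A small sketch of this weighted-links idea, with invented entities and attributes: link weight = number of shared attributes, and the entity with the lowest total weight shares least with the rest of the population, i.e. is the most novel.

```python
# Weighted links based on shared attributes. Entities and attributes are invented.
entities = {
    "New app":    {"participatory", "web-based", "branching storylines"},
    "Practice X": {"participatory", "workshop-based", "single storyline"},
    "Practice Y": {"workshop-based", "single storyline", "expert-led"},
    "Practice Z": {"participatory", "workshop-based", "expert-led"},
}

def total_link_weight(name):
    """Sum, over all other entities, of the number of attributes shared with them."""
    return sum(len(entities[name] & attributes)
               for other, attributes in entities.items() if other != name)

for name in entities:
    print(name, total_link_weight(name))
# The entity with the lowest total shares the fewest attributes overall,
# i.e. is the most novel in this invented population.
```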

Information on the nature of the shared attributes can be identified in at least two ways:
(a) content analysis of the entities, if they are bodies of text (as in my own recent examples)
(b) card/pile sorting of the entities by multiple respondents

In both cases, this will generate a matrix of data, known as a two-mode network. Rows will represent entities, and columns will represent their attributes (as identified via content analysis) or their pile membership.
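
A small sketch of what that two-mode matrix might look like in code, using pandas and invented coding results (with card/pile sorting, the columns would be pile memberships rather than content-analysis attributes):

```python
# Building a two-mode (entity x attribute) presence/absence matrix.
# The entities, attributes and codings are invented for illustration.
import pandas as pd

codings = {
    "Entity 1": {"attribute A", "attribute B"},
    "Entity 2": {"attribute B", "attribute C"},
    "Entity 3": {"attribute C", "attribute D"},
}
attributes = sorted(set().union(*codings.values()))
two_mode = pd.DataFrame(
    [[1 if attribute in coded else 0 for attribute in attributes]
     for coded in codings.values()],
    index=list(codings.keys()), columns=attributes)
print(two_mode)
```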

Novelty and Most Significant Change

The Most Significant Change (MSC) technique is a participatory approach to impact monitoring and evaluation, described in detail in the 2005 MSC Guide. The core of the approach is a question that asks "In your opinion, what was the most significant change that took place in ...[location]...over the last ...[time period]?" This is then followed up by questions seeking both descriptive details and an explanation of why the respondent thinks the change is most significant to them.

A common (but not essential) part of MSC use is a subsequent content analysis of the collected MSC stories of change. This involves the identification of different themes running through the stories of change, then the coding of the presence of these themes in each MSC story. One of the outputs will be a matrix, where rows = MSC stories, columns = different themes, and cell values = the presence or absence of a particular column theme in a particular row story.

Such matrices can be easily imported into network analysis and visualisation software (e.g. UCINET & Netdraw) and displayed as a network structure. Here the individual nodes represent individual MSC stories and individual themes, and the links show which story has which theme present (i.e. a two-mode network). The matrix can also be converted into two different types of one-mode matrix, in which (a) stories are connected to stories by the number of themes they have in common, and (b) themes are connected to themes by the number of stories they have in common.
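
Both conversions can be sketched with simple matrix multiplication on an invented stories-by-themes matrix: multiplying the matrix by its transpose links stories by the number of themes they share, and multiplying the transpose by the matrix links themes by the number of stories they share.

```python
# Converting a two-mode (stories x themes) matrix into the two one-mode
# matrices described above. The coding below is invented for illustration.
import numpy as np

M = np.array([[1, 1, 0, 0],    # Story 1: themes A, B
              [0, 1, 1, 0],    # Story 2: themes B, C
              [1, 0, 1, 1]])   # Story 3: themes A, C, D

story_by_story = M @ M.T   # cell = number of themes shared by two stories
theme_by_theme = M.T @ M   # cell = number of stories shared by two themes
print(story_by_story)      # diagonal = number of themes in each story
print(theme_by_theme)      # diagonal = number of stories containing each theme
```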

Returning to the focus on novelty: within the one-mode story-by-story network, our attention should be on (a) story nodes on the periphery of the network, and (b) story nodes with a low total number of shared themes with other nodes (found by adding up their link values). Network software usually enables filtering by multiple means, including by link values, so this will help focus attention on nodes that have both characteristics.
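
Continuing the same kind of invented example (re-stated here so it runs on its own), a rough sketch of that filtering step: sum each story's link values in the story-by-story matrix and flag the stories with the lowest totals as candidates for novel forms of change.

```python
# Flagging MSC stories that share few themes with all the other stories.
# The stories x themes coding below is invented for illustration.
import numpy as np

M = np.array([[1, 1, 0, 0, 0],    # Story 1
              [1, 1, 1, 0, 0],    # Story 2
              [0, 1, 1, 1, 0],    # Story 3
              [0, 0, 0, 0, 1]])   # Story 4: shares no themes with the others

story_by_story = M @ M.T
np.fill_diagonal(story_by_story, 0)          # ignore each story's self-link
shared_totals = story_by_story.sum(axis=1)   # total shared themes per story

threshold = 1   # an arbitrary cut-off for this illustration
for i, total in enumerate(shared_totals, start=1):
    if total <= threshold:
        print(f"Story {i} shares only {total} theme(s) with the other stories")
```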

I think this kind of analysis could add a lot of value to the use of MSC as a means of searching for significant forms of change, in addition to the participatory analytic process already built into the MSC process.

2 comments:

  1. See the following article that I published in Evaluation Matters, the New Zealand evaluation society journal: https://pdfs.semanticscholar.org/85d5/ce2038c5c3fed796dcc4a62be407afdd1991.pdf
    It challenges the single narrative concerning innovation.
    Bob Picciotto

  2. Thanks very much for a copy of your paper, which I'm reading with interest right now. The UKES conference paper which I gave earlier this year was all about a web app called ParEvo, which I have developed and which I have described as a "web-assisted participatory scenario planning process". See https://mscinnovations.wordpress.com/ This is all about the participatory development of multiple alternative narratives about what has happened or about what could happen. The app includes an evaluation opportunity, where participants identify which of the various storylines that have been developed are, in their opinion, (a) most likely to happen, (b) least likely to happen, (c) most desirable, and (d) least desirable. So the app has two parts: (a) a participatory search process, and (b) a participatory evaluation opportunity... I will now keep reading your paper. Can I post it on the MandE NEWS website, and if so how should it be referenced?
