Saturday, May 18, 2019

Evaluating innovation...



Earlier this week I sat in on a very interesting presentation at the UKES 2019 Evaluation Conference, "Evaluating grand challenges and innovation", by Clarissa Poulson and Katherine May (of IPE Tripleline).

The difficulty of measuring and evaluating innovation reminded me of similar issues I struggled with many decades ago, during the Honours year of my Psychology degree at ANU, when I had a substantial essay to write on the measurement of creativity! My faint memory of that paper is that I did not make much progress on the topic.

After the conference, I did a quick search to find how innovation is defined and measured. One distinction that is often made is between invention and innovation. It seems that innovation = invention + use.  The measurement of the use of an invention seems relatively unproblematic. But if the essence of the invention aspect of innovation is newness or difference, then how do you measure that?

While listening to the conference presentation I thought there were some ideas that could be usefully borrowed from work I am currently doing on the evaluation and analysis of scenario planning exercises. I made a presentation on that work in this year's UKES conference (PowerPoint here).

In that presentation, I explained how participants' text contributions to developing scenarios (developed in the form of branching storylines) could be analyzed in terms of their diversity. More specifically, in terms of three dimensions of diversity, as conceptualised by Stirling (1998):
  • Variety: Numbers of types of things 
  • Balance: Numbers of cases of each type 
  • Disparity: Degree of difference between each type 
Disparity seemed to be the hardest to measure, but there are measures used within the field of Social Network Analysis (SNA) that can help. In SNA, distance between actors (or other kinds of nodes) in a network is measured in terms of geodesic distance, i.e. the number of links on the shortest path between any two nodes of interest. There are various forms of distance measure, but one simple one is "Closeness", which is the sum of geodesic distances from a node in a network to all other nodes in that network (Borgatti et al, 2018). This suggested to me one possible way forward in the measurement of the newness aspect of an innovation.

Perhaps counter-intuitively, one would ask the inventor/owner of an innovation to identify which other product, in a particular population of products, their product is most similar to. All other unnamed products would be, by definition, more different. Repeating this question for all owners of the products in the population would generate what SNA people call an "adjacency matrix", where a cell value (1 or 0) tells us whether or not a specific row item is seen as most similar to a specific column item. Such a matrix can then be visualised as a network structure, and closeness values can be calculated for all nodes in that network using SNA software (I use UCINET/Netdraw). Some nodes will be less close to all the other nodes than others, and that Closeness value is a measure of their difference, or "disparity".
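As a rough sketch of what this step could look like in code (using Python and the networkx package rather than UCINET/Netdraw, and with entirely made-up products and nominations):

```python
# Hedged sketch: hypothetical "most similar" nominations, one per product owner.
# Requires the networkx package.
import networkx as nx

most_similar_to = {  # product -> the existing product it is judged most similar to
    "P1": "P2", "P2": "P3", "P3": "P2", "P4": "P2",
    "P5": "P6", "P6": "P3", "P7": "P6",
}

# Each nomination becomes a link in the adjacency matrix / network
G = nx.Graph(list(most_similar_to.items()))

# Closeness in the sense used above (Borgatti et al, 2018): the sum of
# geodesic distances from a node to all other reachable nodes.
# A larger sum = a more distant, i.e. more different, product.
farness = {
    node: sum(nx.single_source_shortest_path_length(G, node).values())
    for node in G.nodes
}
for product, score in sorted(farness.items(), key=lambda kv: -kv[1]):
    print(product, score)
```

Note that networkx's built-in closeness_centrality function returns a normalised reciprocal of this sum, which is why the sketch adds up the geodesic distances directly.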

Here is a simulated example, generated using UCINET. The blue nodes are the products. Larger blue nodes are more distant from, i.e. more different from, all the other nodes. Node 7 has the largest Closeness measure (28), i.e. it is the most different, whereas node 6 has the smallest Closeness measure (18), i.e. it is the least different.

There are two other advantages to this kind of network perspective. The first is that it is possible to identify the level of diversity in the population as a whole. SNA software can calculate the average closeness of all nodes in a network to all others. Here is an example of a network where nodes are much more distant from each other than in the example above.
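Again as a rough sketch (networkx standing in for UCINET, same made-up network as above), one simple whole-network summary is the mean geodesic distance between all pairs of nodes:

```python
import networkx as nx

# Hypothetical comparison network (same illustrative edges as the sketch above)
G = nx.Graph([("P1", "P2"), ("P2", "P3"), ("P4", "P2"),
              ("P5", "P6"), ("P6", "P3"), ("P7", "P6")])

# Mean geodesic distance across all pairs of nodes: a higher value suggests
# a more dispersed, i.e. more diverse, population of products
if nx.is_connected(G):
    print("Mean geodesic distance:", nx.average_shortest_path_length(G))
```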


The second advantage is that a network visualisation, like the first one above, makes it possible to identify any clusters of products, i.e. groups of products that are most similar to each other. No example is shown here, but you can imagine one!
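In practice such clusters could be spotted by eye in Netdraw, but for what it is worth, a community detection algorithm can do the grouping automatically. A sketch, again on the made-up network used above:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Same hypothetical "most similar" network as in the sketches above
G = nx.Graph([("P1", "P2"), ("P2", "P3"), ("P4", "P2"),
              ("P5", "P6"), ("P6", "P3"), ("P7", "P6")])

# Modularity-based community detection: each frozenset is one candidate
# cluster of products that are most similar to each other
for i, cluster in enumerate(greedy_modularity_communities(G), start=1):
    print(f"Cluster {i}: {sorted(cluster)}")
```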

So, three advantages of this measurement approach:
1. Identification of how relatively different a given product or process is
2. Identification of diversity in a whole population of products
3. Identification of types of differences (clusters of self-similar products) within that population.

Having identified a means of measuring degrees of newness or difference (and perhaps categorising types of these), the correlation between these and different forms of product usage could then be explored.

PS: I will add a few related papers of interest here:

Measuring multidimensional novelty


Sometimes the new entity may be novel in multiple respects, but in each respect only when compared to a different entity. For example, I have recently reviewed how my participatory scenario planning app ParEvo is innovative, with respect to (a) its background theory, (b) how it is implemented, and (c) how the results are represented. In each area, there was a different "most similar" comparator practice.

The same network visualisation approach as above can be taken. The difference is that the new entity will have links to multiple existing entities, not just one, and the link to each entity will have a varying "weight", reflecting the number of attributes the new entity shares with that entity. The aggregate value of the link weights for novel new entities will be less than those of the existing entities.

Information on the nature of the shared attributes can be identified in at least two ways:
(a) content analysis of the entities, if they are bodies of text (as in my own recent examples)
(b) card/pile sorting of the entities by multiple respondents

In both cases, this will generate a matrix of data, known as a two-mode network: rows will represent entities and columns will represent their attributes (via content analysis) or their pile membership (via pile sorting).
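As a hedged illustration with made-up data (Python and pandas), the weighted links described above fall out of a simple matrix multiplication on the two-mode matrix, and the row totals give each entity's aggregate link weight:

```python
import numpy as np
import pandas as pd

# Hypothetical two-mode matrix: rows = entities, columns = attributes found
# by content analysis or pile sorting (1 = attribute present)
two_mode = pd.DataFrame(
    [[0, 0, 0, 1, 1],   # the new entity
     [1, 1, 1, 0, 0],
     [1, 1, 0, 1, 0],
     [1, 0, 1, 0, 1]],
    index=["New entity", "Entity A", "Entity B", "Entity C"],
    columns=["Attr 1", "Attr 2", "Attr 3", "Attr 4", "Attr 5"],
)

# Weighted one-mode projection: cell (i, j) = number of attributes shared
# by entities i and j, i.e. the link "weight" discussed above
shared = two_mode.dot(two_mode.T)

# Aggregate link weight per entity, excluding self-links; a lower total
# suggests a more novel entity
aggregate = shared.sum(axis=1) - np.diag(shared)
print(aggregate.sort_values())
```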

Novelty and Most Significant Change

The Most Significant Change (MSC) technique is a participatory approach to impact monitoring and evaluation, described in detail in the 2005 MSC Guide. The core of the approach is a question that asks "In your opinion, what was the most significant change that took place in ...[location]...over the last ...[time period]?" This is then followed up by questions seeking both descriptive details and an explanation of why the respondent thinks the change is most significant to them.

A common (but not essential) part of MSC use is a subsequent content analysis of the collected MSC stories of change. This involves the identification of different themes running through the stories of change, then the coding of the presence of these themes in each MSC story. One of the outputs will be a matrix, where rows = MSC stories and columns = different themes and cell values = the presence or absence of a particular column theme in a particular row story.

Such matrices can be easily imported into network analysis and visualisation software (e.g. UCINET & Netdraw) and displayed as a network structure. Here the individual nodes represent individual MSC stories and individual themes, and the links show which story has which theme present (i.e. a two-mode network). The matrix can also be converted into two different types of one-mode matrix, where (a) stories are connected to stories by the number of themes they have in common, and (b) themes are connected to themes by the number of stories they have in common.

Returning to the focus on novelty: in the story-by-story network, our attention should be on (a) story nodes on the periphery of the network, and (b) story nodes with a low total number of shared themes with other stories (found by summing their link values). Network software usually enables filtering by multiple means, including by link values, which will help focus attention on nodes that have both characteristics.
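As a rough sketch of these steps in Python (networkx standing in for UCINET/Netdraw, with invented stories and themes), the two-mode data can be projected into the two one-mode networks described above, and stories with a low total of shared themes then stand out:

```python
import networkx as nx
from networkx.algorithms import bipartite

# Hypothetical coding results: which themes were present in which MSC story
stories = ["Story 1", "Story 2", "Story 3", "Story 4"]
themes_present = {
    "Story 1": ["Livelihoods", "Health"],
    "Story 2": ["Livelihoods", "Participation"],
    "Story 3": ["Health", "Participation"],
    "Story 4": ["Conflict"],
}
all_themes = sorted({t for ts in themes_present.values() for t in ts})

# Build the two-mode (story x theme) network
B = nx.Graph()
B.add_nodes_from(stories, bipartite=0)
B.add_nodes_from(all_themes, bipartite=1)
for story, themes in themes_present.items():
    B.add_edges_from((story, theme) for theme in themes)

# One-mode projections: stories linked by shared themes, and themes linked
# by shared stories (link weight = number shared)
story_net = bipartite.weighted_projected_graph(B, stories)
theme_net = bipartite.weighted_projected_graph(B, all_themes)
print(sorted(theme_net.edges(data="weight")))

# Candidate "novel" stories: a low total number of shared themes with all
# other stories (here, the isolated Story 4 scores zero)
totals = dict(story_net.degree(weight="weight"))
print(sorted(totals.items(), key=lambda kv: kv[1]))
```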

I think this kind of analysis could add a lot of value to the use of MSC as a means of searching for significant forms of change, in addition to the participatory analytic process already built into the MSC process.



Thursday, March 28, 2019

Where there is no (decent / usable) Theory of Change...



I have been reviewing a draft evaluation report in which two key points are made about the relevant Theory of Change:

  • A comprehensive assessment of the extent to which expected outcomes were achieved (effectiveness) was not carried out, as the xxx TOC defines these only in broad terms.
  •  ...this assessment was also hindered by the lack of a consistent outcome monitoring system.
I am sure this situation is not unique to this program. 

Later in the same report, I read about the evaluation's sampling strategy. As with many other evaluations I have seen, the aim was to sample a diverse range of locations in a way that was maximally representative of the diversity of how and where the program was working. This is quite a common approach, and a reasonable one at that.

But it did strike me later on that this intentionally diverse sample was an underexploited resource. If 15 different locations were chosen, one could imagine a 15 x 15 matrix. Each of the cells in the matrix could be used to describe how a row location compared to a column location. In practice, only half the matrix would be needed, because each relationship would otherwise be covered twice: row location A and its relation to column location J would also be covered by row location J and its relation to column location A.

What sort of information would go in such cells? Obviously, there could be a lot to choose from. But one option would be to ask key stakeholders, especially those funding and/or managing any two compared locations. I would suggest they be asked something like this:
  • "What do you think is the most significant difference between these two locations/projects, in the ways they are working?"
And then ask a follow-up question...
  • "What difference do you think this difference will make?"
The answers are potential (if...then...) hypotheses, worth testing by an evaluation team. In a matrix generated by a sample of 15 locations, this exercise could generate ((15*15)-15)/2 = 105 potentially useful hypotheses, which could then be subject to a prioritisation/filtering exercise that includes consideration of their evaluability (Davies, 2013): more specifically, how they relate to any Theory of Change, whether there is relevant data available, and whether any stakeholders are interested in the answers.
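As a small illustrative calculation (Python, with hypothetical location names), the cells of the half-matrix are simply all unordered pairs of locations:

```python
from itertools import combinations

locations = [f"Location {i}" for i in range(1, 16)]  # 15 hypothetical locations

# Each unordered pair of locations is one cell of the half-matrix, and so one
# candidate "difference that might make a difference" comparison
pairs = list(combinations(locations, 2))
print(len(pairs))   # 105, i.e. ((15*15)-15)/2
print(pairs[0])     # ('Location 1', 'Location 2')
```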

Doing so might also help address a more general problem, which I have noted elsewhere (Davies, 2018) and which was also a characteristic of the evaluation mentioned above: the prevalence in evaluation ToRs of open-ended evaluation questions, rather than hypothesis-testing questions:
" While they may refer to the occurrence of specific outcomes or interventions, their phrasings do not include expectations about the particular causal pathways that are involved. In effect these open-ended questions imply either that those posting the questions either know nothing, or they are not willing to put what they think they know on the table as testable propositions. Either way this is bad news, especially if the stakeholders have any form of programme funding or programme management responsibilities. While programme managers are typically accountable for programme implementation it seems they and their donors are not being held accountable for accumulating testable knowledge about how these programmes actually work. Given the decades-old arguments for more adaptive programme management, it’s about time this changed (Rondinelli, 1993; DFID, 2018).  (Davies, 2018)



Saturday, March 09, 2019

On using clustering algorithms to help with sampling decisions



I have spent the last two days in a training workshop run by BigML, a company that provides very impressive, coding-free, online machine learning services. One of the sessions was on the use of clustering algorithms, an area I have some interest in but have not done much with over the last year or so. The whole two days were very much centered around data and the kinds of analyses that could be done using different algorithms, and how these could be combined into more aggregated workflows.

Independently, over the previous two weeks, I have had meetings with the staff of two agencies in two different countries, both at different stages of carrying out an evaluation of a large set of their funded projects. By large, I mean 1000+ projects. One is at the early planning stage, the other is now in the inception stage. In both evaluations, the question of what sort of sampling strategy to use was a real concern.

My immediate inclination was to think of using a stratified sampling process, where the first unit of analysis would be the country, and then the projects within each country. In one of the two agencies, the projects were all governance related, so an initial country-level sampling process seemed to make a lot of sense. Otherwise, the governance projects would risk being decontextualized. There were already some clear distinctions between countries in terms of how these projects were being put to work within the agency's country strategy. These differences could have consequences. The articulation of any expected consequences could provide some evaluable hypotheses, giving the evaluation a useful focus, beyond the usual endless list of open-ended questions typical of so many evaluation Terms of Reference.

This led me to speculate on other ways of generating such hypotheses, such as getting key staff managing these projects to do pile/card sorting exercises to sort countries, and then projects, into pairs of groups separated by a difference that might make a difference. These distinctions could reflect ideas embedded in an overarching theory of change, or more tacit and informal theories in the heads of such staff, which may nevertheless still be influential because they are operating (but perhaps untested) assumptions. They would provide another source of what could be evaluable hypotheses.

However, regardless of whether it was the result of a systematic project document review or of pile sorting exercises, you could easily end up with many different attributes that could be used to describe projects and then be used as the basis of a stratified sampling process. One evaluation team seemed to be facing this challenge right now: struggling to decide which attributes to choose. (PS: this problem can arise either from having too many theories or no theory at all.)

This is where clustering algorithms, like k-means clustering, could come in handy. On the BigML website you can upload a data set (e.g. projects with their attributes) and then do a one-click cluster analysis. This will find clusters of projects, and the analysis has a number of interesting features: (a) similarity within clusters is maximised; (b) dissimilarity between clusters is maximised, and visualised; (c) it is possible to identify what are called "centroids", i.e. the specific attributes that are most central to the identity of a cluster.
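BigML itself requires no coding, but as a hedged sketch of the same general idea using scikit-learn (with entirely made-up project attributes), k-means produces both the cluster memberships and the cluster centres, i.e. the "centroids":

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical project-attribute table (rows = projects, columns = attributes);
# in practice these attributes might come from document review or pile sorting
projects = pd.DataFrame(
    {"budget_k": [120, 450, 80, 300, 95, 510],
     "duration_months": [12, 36, 9, 24, 12, 48],
     "n_partners": [1, 4, 1, 3, 2, 5]},
    index=[f"Project {i}" for i in range(1, 7)],
)

# Standardise attributes, then cluster (k chosen here purely for illustration)
X = StandardScaler().fit_transform(projects)
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

projects["cluster"] = model.labels_
print(projects)
print("Cluster centres (standardised units):")
print(model.cluster_centers_)  # the attribute profile most central to each cluster
```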

These features are relevant to sampling decisions. A sample from within a cluster will have a high level of generalisability within that cluster because all cases within that cluster are maximally similar. Secondly, other clusters can be found which range in their degree of difference from that cluster. This is useful if you want to find two contrasting clusters that might capture a difference that makes a difference.

I can imagine two types of analysis that might be interesting here:
1. Find a maximally different pair of clusters (A and B) and see if a set of attributes found to be associated with an outcome of interest in A is also present in B. This might be indicative of how robust that association is.
2. Find a maximally similar pair of clusters (A and C) and see if incremental alterations to the set of attributes associated with an outcome in A mean the outcome is no longer found in C. This might be indicative of how significant each attribute is.

These two strategies could be read as (1) vary the context, and (2) vary the intervention.

For more information, check out this BigML video tutorial on cluster analysis. I found it very useful.

PS: I have also been exploring BigML's Association Rule facility. This could be very helpful as another means of analysing the contents of a given cluster of cases. The analysis will generate a list of attribute associations, ranked by different measures of their significance. Examining such a list could help evaluators widen their view of the possible causal configurations that are present.
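Purely as a sketch of the general idea outside BigML (using the mlxtend package and invented case attributes), association rules can be mined from a one-hot coded case-by-attribute table like this:

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Hypothetical one-hot table: rows = cases (e.g. projects in one cluster),
# columns = coded attributes, including an outcome of interest
cases = pd.DataFrame(
    [[1, 1, 0, 1],
     [1, 1, 0, 1],
     [0, 1, 1, 1],
     [1, 1, 1, 1],
     [1, 0, 0, 0]],
    columns=["local_partner", "training", "advocacy", "outcome_achieved"],
).astype(bool)

# Frequent attribute combinations, then candidate rules ranked by confidence
itemsets = apriori(cases, min_support=0.4, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])
```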



Saturday, July 14, 2018

Two versions of the Design Triangle - for choosing evaluation methods


Here is one version, based on Stern et al. (2012), Broadening the Range of Designs and Methods for Impact Evaluations:


A year later, in a review of the literature on the use of evaluability assessments, I proposed a similar but different version:



In this diagram "Evaluation Questions" are subsumed within the wider category of "Stakeholder demands". "Programme Attributes" have been disaggregated into "Project Design" (especially Theory of Change) and "Data Availability". "Available Designs" in effect disappears into the background and, if there were a 3D version, would sit behind "Evaluation Design".

Wednesday, July 19, 2017

Transparent Analysis Plans


Over the past few years, I have read quite a few guidance documents on how to do M&E. Looking back at this literature, one thing that strikes me is how little attention is given to data analysis, relative to data collection. There are gaps both in (a) guidance on "how to do it" and (b) guidance on how to be transparent and accountable for what you planned to do and then actually did. In this post, I want to provide some suggestions that might help fill that gap.

But first a story, to provide some background. In 2015 I did some data analysis for a UK consultancy firm. They had been managing a "Challenge Fund", a grant-making facility funded by DFID, for the previous five years, and in the process had accumulated lots of data. When I looked at the data I found approximately 170 fields. There were many different analyses that could be made from this data, even bearing in mind the one approach we had discussed and agreed on: the development of some predictive models concerning the outcomes of the funded projects.

I resolved this by developing a "data analysis matrix", seen below. The categories in the left column and top row refer to different sub-groups of fields in the data set. Each cell represents the possibility of analyzing the relationship between its row sub-group of data and its column sub-group of data. The colored cells are those data relationships the stakeholders decided would be analyzed, and the initials in a cell identify the stakeholder wanting that analysis. Equally importantly, the blank cells indicate what will not be analyzed.

We added a summary row at the bottom and a summary column to the right. The cells in the summary row signal the relative importance given to the events in each column. The cells in the summary column signal the relative confidence in the quality of the data available in the row sub-groups. Other forms of meta-data could also be provided in such summary rows and columns, which could help inform stakeholders' choices of which relationships in the data should be analyzed.
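Such a matrix is also easy to mock up in code. A hedged sketch, with invented sub-group names, stakeholder initials and quality ratings (pandas):

```python
import pandas as pd

# Hypothetical sub-groups of fields in the data set
subgroups = ["Applicant profile", "Grant characteristics",
             "Implementation record", "Outcomes"]

# Blank "data analysis matrix": each cell can record which stakeholder (if any)
# asked for the row x column relationship to be analysed
plan = pd.DataFrame("", index=subgroups, columns=subgroups)

# Illustrative stakeholder choices, recorded by their initials
plan.loc["Grant characteristics", "Outcomes"] = "AB"
plan.loc["Implementation record", "Outcomes"] = "CD"

# Illustrative summary column: confidence in the quality of each row sub-group's data
plan["Data quality"] = ["High", "Medium", "Low", "Medium"]

print(plan)
```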



A more general version of the same kind of matrix can be used to show the different kinds of analysis that can be carried out with any set of data. In the matrices below, the row and column letters refer to different variables / attributes / fields in a data set. There are three main types of analysis illustrated in these matrices, and three sub-types:
  • Univariate - looking at one measure only
  • Bivariate - looking at the relationships between two measures
  • Multivariate - looking at the relationship between multiple measures
But within the multivariate option there are three alternatives, to look at:
    • Many to one relationships
    • One to many relationships
    • Many to many relationships

On the right side of each matrix below, I have listed some of the forms of each kind of analysis.
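As a brief illustration of the three main types of analysis, using made-up data in Python:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Made-up data set: four fields, fifty cases
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(50, 4)), columns=["A", "B", "C", "D"])

# Univariate: one measure at a time
print(df["A"].describe())

# Bivariate: relationships between pairs of measures
print(df.corr())

# Multivariate (many-to-one): how well do A, B and C together predict D?
model = LinearRegression().fit(df[["A", "B", "C"]], df["D"])
print(dict(zip(["A", "B", "C"], model.coef_.round(2))))
```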

What I am proposing is that studies or evaluations that involve data collection and analysis should develop a transparent analysis plan, using a "data analysis matrix" of the kind shown above. At a minimum, cells should contain data about which relationships will be investigated.  This does not mean investigators can't change their mind later on as the study or evaluation progresses.  But it does mean that both original intentions and final choices will be more visible and accountable.


Postscript: For details of the study mentioned above, see Learning from the Civil Society Challenge Fund: Predictive Modelling, Briefing Paper, September 2015.