Tuesday, June 23, 2015

Is QCA its own worst enemy?

[As you may have read elsewhere on this blog] QCA stands for Qualitative Comparative Analysis. It is a method that is finding increased use as an evaluation tool, especially for exploring claims about the causal role of interventions of various kinds. What I like about it is its ability to recognize and analyse complex causal configurations, which have some fit with the complexity of the real world as we know it.

What I don't like about it is its complexity: it can sometimes be annoyingly obscure and excessively complicated. This is a serious problem if you want to see the method used more widely, and if you want the results to be effectively communicated and properly understood. I have seen instances recently where this has been such a problem that it threatened to derail an ongoing evaluation.

In this blog post I want to highlight where the QCA methodology is unnecessarily complex and suggest some ways to avoid this type of problem. In fact I will start with the simple solution, then explain how QCA manages to make it more complex.

Let me start with a relatively simple perspective. QCA analyses fall into the broad category of "classifiers". These include a variety of algorithmic processes for deciding which category various instances belong to. For example, which types of projects were successful or not in achieving their objectives.

I will start with a two by two table, a Truth Table, showing the various possible results that can be found, by QCA and other methods. Configuration X here is a particular combination of conditions that an analysis has found to be associated with the presence of an outcome. The Truth Table helps us identify just how good that association is, by comparing the incidences where the configuration is present or absent with the incidences where the outcome is present or absent.


As I have explained in an earlier blog, one way of assessing the adequacy of the result shown in such a matrix is by using a statistical test such as Chi-square, to see if the distribution is significantly different from what a chance distribution would look like. There are only two possible results when the outcome is present: the association is statistically significant or it is not.
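As a minimal sketch of this test, the Chi-square statistic for a 2x2 truth table can be computed directly in a few lines of Python (the function name is mine, and the 3.841 critical value is the standard one for one degree of freedom at p = .05; a statistics library would normally be used instead):

```python
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for a 2x2 truth table with cells
    A (config present, outcome present), B (config present, outcome absent),
    C (config absent, outcome present), D (config absent, outcome absent)."""
    n = a + b + c + d
    # Expected counts under independence: (row total * column total) / n
    expected = [
        (a + b) * (a + c) / n,  # expected A
        (a + b) * (b + d) / n,  # expected B
        (c + d) * (a + c) / n,  # expected C
        (c + d) * (b + d) / n,  # expected D
    ]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# All cases in cells A and D: as strong an association as possible
stat = chi_square_2x2(10, 0, 0, 10)
significant = stat > 3.841  # critical value for df=1 at p = .05
```

With the cases spread evenly across all four cells the statistic falls to zero, i.e. no association at all.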

However, if you import the ideas of Necessary and/or Sufficient causes the range of interesting results increases. The matrix can now show four possible types of results when the outcome is present:

  1. The configuration of conditions is Necessary and Sufficient for the outcome to be present. Here cells C and B would be empty of cases
  2. The configuration of conditions is Necessary but Insufficient for the outcome to be present. Here cell C would be empty of cases
  3. The configuration of conditions is Unnecessary but Sufficient for the outcome to be present. Here cell B would be empty of cases
  4. The configuration of conditions is Unnecessary and Insufficient for the outcome to be present. Here no cells would be empty of cases
The interesting thing about the first three options is that they are easy to disprove. There only needs to be one case found in the cell(s) meant to be empty, for that claim to be falsified.

And we can provide a lot more nuance to the type 4 results, by looking at the proportion of cases found in cells B and C, relative to cell A. The proportion A/(A+B) tells us about the consistency of the results, in the simple sense of consistency found via an examination of a QCA Truth Table. The proportion A/(A+C) tells us about the coverage of the results, as in the proportion of all present outcomes that were identified by the configuration. 
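The four categorical readings, plus the consistency and coverage proportions, can be captured in one small function (a sketch, with a function name of my own choosing):

```python
def classify_configuration(a, b, c):
    """Categorical Necessity/Sufficiency reading of a truth table, where
    A = config present & outcome present, B = config present & outcome absent,
    C = config absent & outcome present."""
    sufficient = b == 0  # no case has the configuration without the outcome
    necessary = c == 0   # no outcome occurs without the configuration
    consistency = a / (a + b) if (a + b) else 0.0
    coverage = a / (a + c) if (a + c) else 0.0
    return {"necessary": necessary, "sufficient": sufficient,
            "consistency": consistency, "coverage": coverage}

# Configuration present in 8 cases with the outcome, 2 without;
# the outcome also occurs in 4 cases lacking the configuration
result = classify_configuration(a=8, b=2, c=4)
# -> neither Necessary nor Sufficient; consistency 0.8, coverage 0.67
```

A single case landing in a supposedly empty cell flips the corresponding categorical judgement to False, which is the falsification logic described above.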

So how does QCA deal with all this? Well, as far as I can see, it does so in a way that makes it more complex than necessary. Here I am basing my understanding mainly on Schneider and Wagemann's account of QCA.
  1. Firstly, they leave aside the simplest notions of Necessity and Sufficiency as described above, which are based on a categorical notion of Necessity and Sufficiency, i.e. a configuration either is or is not Sufficient, etc. One of the arguments I have seen for doing this is that these types of results are rare, and that part of this may be due to measurement error, so we should take a more generous/less demanding view of what constitutes Necessity and Sufficiency
  2. Instead they focus on Truth Tables with results as shown below (classed as 4. Unnecessary and Insufficient above). They then propose ways of analyzing these in terms of having degrees of Necessity and Sufficiency conditions. This involves two counter-intuitive mirror-opposite ways of measuring the consistency and coverage of the results, according to whether the focus is on analyzing the extent of Sufficiency or Necessity conditions (see Chapter 5 for details)
  3. Further complicating the analysis is the introduction of minimum thresholds for the consistency of Necessity and Sufficiency conditions (because the more basic categorical idea has been put aside). There is no straightforward basis for defining these levels. It is suggested that they depend on the nature of the problem being examined.

  Configuration X contains conditions which are neither Necessary nor Sufficient 

Using my strict interpretation of Sufficiency and Necessity there is no need for a consistency measure where a condition (or configuration) is found to be Sufficient but Unnecessary, because there will be no cases in cell B. Likewise, there is no need for a coverage measure where a condition (or configuration) is found to be Necessary but Insufficient, because there will be no cases in cell C.

We do need to know the consistency where a condition (or configuration) is Necessary but Insufficient, and the coverage where a condition (or configuration) is found to be Sufficient but Unnecessary.

Monday, May 25, 2015

Characterising purposive samples



In some situations it is not possible to develop a random sample of cases to examine for evaluation purposes. There may be more immediate challenges, such as finding enough cases with sufficient information and sufficient quality of information.

The problem then is knowing to what extent, if at all, the findings from this purposive sample can be generalised, even in the more informal sense of speculating on the relevance of findings to other cases in the same general population.

One way this process can be facilitated is by "characterising" the sample, a term I have taken from elsewhere. It means to describe the distinctive features of something. This could best be done using attributes or measures that can be, and probably already have been, used to describe the wider population the sample came from. For example, the sample of people could be described as having an average age of 35, versus 25 in the wider population, and being 35% women, versus 55% in the wider population. This seems a rather basic idea, but it is not always applied.

Another, more holistic, way of doing so is to measure the diversity of the sample. This is relatively easy to do when the data set associated with the sample is in binary form, as for example is used in QCA analysis (i.e. cases are rows, columns are attributes, and cell values of 0 or 1 indicate whether the attribute was absent or present).

As noted in earlier blog postings, Simpson's Reciprocal Index is a useful measure of diversity. This takes into account two aspects of diversity: (a) richness, which in a data set could be seen in the number of unique configurations of attributes found across all the cases (think metaphorically of organisms = cases, chromosomes = configurations, and genes = attributes), and (b) evenness, which could be seen in the relative number of cases having particular configurations. When the number of cases is evenly distributed across all configurations this is seen as being more diverse than when the number of cases per configuration varies.
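For a binary case-by-attribute matrix, Simpson's Reciprocal Index is 1 divided by the sum of the squared proportions of cases holding each configuration. A minimal sketch (the function name is mine):

```python
from collections import Counter

def simpsons_reciprocal(rows):
    """Simpson's Reciprocal Index for a binary case-by-attribute matrix.
    rows: one tuple of 0/1 values per case. Higher = more diverse."""
    counts = Counter(tuple(r) for r in rows)  # distinct configurations (richness)
    n = len(rows)
    # 1 / sum of squared proportions; maximised when cases are spread evenly
    return 1 / sum((c / n) ** 2 for c in counts.values())

# Four cases spread evenly over two configurations -> index = 2.0
even = [(1, 0), (1, 0), (0, 1), (0, 1)]
# All four cases sharing one configuration -> index = 1.0 (minimum diversity)
```

The index equals the number of distinct configurations when cases are perfectly evenly spread, and falls towards 1 as the distribution becomes more lopsided, capturing both richness and evenness in one number.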

The degree of diversity in a data set can have consequences. Where a data set has little diversity in terms of "richness" there is a possibility that configurations identified by QCA or other algorithm-based methods will have limited external validity, because they may easily be contradicted by cases outside the sample data set that differ from the already encountered configurations. A simple way of measuring this form of diversity is to calculate the number of unique configurations in the sample data set as a percentage of the total number possible, given the number of binary attributes in the sample data set (which is 2 to the power of the number of attributes). The higher the percentage, the less risk that the findings will be contradicted by configurations found in new sets of data (all other things being constant).
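That richness calculation is a one-liner in practice (a sketch, with a function name of my own):

```python
def richness_percentage(rows):
    """Unique configurations observed, as a percentage of all configurations
    possible given the number of binary attributes (2 ** n_attributes)."""
    n_attributes = len(rows[0])
    unique = len({tuple(r) for r in rows})
    return 100 * unique / (2 ** n_attributes)

# 3 distinct configurations over 3 binary attributes: 3 of 8 possible = 37.5%
sample = [(1, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 1)]
```

Note that the denominator grows exponentially with the number of attributes, so with many attributes even a large sample will cover only a tiny fraction of the possible configurations.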

Where a data set has little diversity in terms of "evenness" it will be more difficult to assess the consistency of any configuration's association with an outcome, compared to others, because there will be more cases associated with some configurations than with others. Where there are more cases of a given configuration there will be more opportunities for its consistency of association with an outcome to be challenged by contrary cases.

My suggestion therefore is that when results are published from the analysis of purposive samples there should be adequate characterisation of the sample, both in terms of: (a) simple descriptive statistics available on the sample and wider population, and (b) the internal diversity of the sample, relative to the maximum scores possible on the two aspects of diversity.



Wednesday, May 20, 2015

Evaluating the performance of binary predictions


(Updated 2015 06 06)

Background: This blog posting has its origins in a recent review of a QCA oriented evaluation, in which a number of hypotheses were proposed and then tested using a QCA type data set. In these data sets, cases (projects) are listed row by row and the attributes of these projects are listed in columns. Additional columns to the right describe associated outcomes of interest. The attributes of the projects may include features of the context as well as the interventions involved. The cell values in the data sets were binary (1 = attribute present, 0 = not present), though there are other options.

When such a data set is available a search can be made for configurations of conditions that are strongly associated with an outcome of interest. This can be done inductively or deductively. Inductive searches involve the use of systematic search processes (aka algorithms), of which there are a number available. QCA uses the Quine–McCluskey algorithm. Deductive searches involve the development of specific hypotheses from a body of theory, for example about the relationship between the context, intervention and outcome.

Regardless of which approach is used, the resulting claims of association need evaluation. There are a number of different approaches to doing this that I know of, and probably more. All involve, in the simplest form, the analysis of a truth table in this form:


In this truth table the cell values refer to the number of cases that have each combination of configuration and outcome. For further reference below I will label each cell as A and B (top row) and C and D (bottom row)

The first approach to testing is a statistical approach. I am sure that there are a number of ways of doing this, but the one I am most familiar with is the widely used Chi-square test. Results will be seen as most statistically significant when all cases are in the A and D cells. They will be least significant when they are equally distributed across all four cells.

The second approach to testing is the one used by QCA. There are two performance measures. One is Consistency, which is the proportion of all cases where the configuration is present and the outcome is also present (=A/(A+B)). The other is Coverage, which is the proportion of all outcomes that are associated with the configuration (=(A/(A+C)).

When some of the cells have 0 values three categorical judgements can also be made. If only cell B is empty then it can be said that the configuration is Sufficient but not Necessary. Because there are still values in cell C this means there are other ways of achieving the outcome in addition to this configuration.

If only cell C is empty then it can be said that the configuration is Necessary but not Sufficient. Because there are still values in cell B this means there are other additional conditions that are needed to ensure the outcome.

If cells B and C are empty then it can be said that the configuration is both Necessary and Sufficient

In all three situations there only needs to be one case to be found in a previously empty cell(s) to disprove the standing proposition. This is a logical test, not a statistical test.

The third approach is one used in the field of machine learning, where the above matrix is known as a Confusion Matrix. Here there is a profusion of performance measures available (at least 13). Some of the more immediately useful measures are:
  • Accuracy: (A+D)/(A+B+C+D), which is similar to but different from the Chi-square measure above
  • Precision (also called positive predictive value): A/(A+B), which corresponds to QCA consistency
  • Recall (also called sensitivity, or the true positive rate): A/(A+C), which corresponds to QCA coverage
  • Specificity (the true negative rate): D/(B+D)
  • False positive rate: B/(B+D)
  • False negative rate: C/(A+C)
  • Negative predictive value: D/(C+D)
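These measures all fall out of the four cell counts. A sketch, using the standard machine-learning mapping for this matrix (A = true positive, B = false positive, C = false negative, D = true negative; the function name and dictionary keys are mine, and I assume no margin is empty, since empty margins would divide by zero):

```python
def confusion_measures(a, b, c, d):
    """Performance measures for a 2x2 truth table, treating the configuration
    as the prediction and the outcome as the actual class:
    A = true positive, B = false positive, C = false negative, D = true negative."""
    return {
        "accuracy":    (a + d) / (a + b + c + d),
        "precision":   a / (a + b),  # = QCA consistency
        "recall":      a / (a + c),  # = QCA coverage
        "specificity": d / (b + d),  # true negative rate
        "npv":         d / (c + d),  # negative predictive value
        "fpr":         b / (b + d),  # false positive rate
        "fnr":         c / (a + c),  # false negative rate
    }

m = confusion_measures(a=8, b=2, c=4, d=6)
# accuracy 0.7, precision 0.8, recall ~0.67, specificity 0.75
```

Note that precision and recall here are exactly the QCA Consistency and Coverage measures from the second approach, which is what makes the two frameworks easy to use side by side.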
In addition to these three types of tests there are three other criteria that are worth taking into account as well: simplicity, diversity and similarity

Simplicity: Essentially the same idea as that captured in Occam's Razor. This is that simpler configurations are preferable, all other things being equal. For example: A+F+J leads to D is a simpler hypothesis than A+X+Y+W+F leads to D. Complex configurations can have a better fit with the data, but at the cost of being poor at generalising to other contexts. In Decision Tree modelling this is called "over-fitting" and the solution is "pruning", i.e. cutting back on the complexity of the configuration. Simplicity has practical value when it comes to applying tested hypotheses in real life programmes: they are easier to communicate and to implement. Simplicity can be measured at two levels: (a) the number of attributes in a configuration that is associated with an outcome, and (b) the number of configurations needed to account for an outcome.

Diversity: The diversity of configurations is simply the number of different specific configurations in a data set. It can be made into a comparable measure by calculating it as a percentage of the total number possible. The total number possible is 2 to the power of N, where N = the number of attributes in the data set. A bigger percentage = more diversity.

If you want to find how "robust" a hypothesis is, you could calculate the diversity present in the configurations of all the cases covered by the hypothesis (i.e. not just the attributes specified by the hypotheses, which will be all the same). If that percentage is large this suggests the hypothesis works in a greater diversity of circumstances, a feature that could be of real practical value.

This notion of diversity is to some extent implicit in the Coverage measure. More coverage implies more diversity of circumstances. But two hypotheses with the same coverage (i.e. proportion of cases they apply to) could be working in circumstances with quite different degrees of diversity (i.e. the cases covered were much more diverse in their overall configurations).

Similarity: Each row in a QCA like data set is a string of binary values. The similarity of these configurations of attributes can be measured in a number of ways:
  • Jaccard index, the number of positions where both configurations have the binary value 1 (i.e. the same attribute is present), as a proportion of the positions where either of them does
  • Hamming distance, the number of positions at which the corresponding values in two configurations are different. This includes the values 0 and 1, whereas Jaccard only looks at 1 values
These measures are relevant in two ways, which are discussed in more detail further down this post:
  • If you want to find a "representative" case in a data set, you would look for the case with the lowest average Hamming distance to all other cases in the data set
  • If you wanted to compare the two most similar cases, you would look for the pair of cases with the lowest Hamming distance.
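Both distance measures, and the "representative case" idea, can be sketched in a few lines (function names are mine, and ties between equally central cases are broken arbitrarily):

```python
def hamming(x, y):
    """Number of positions at which two binary configurations differ."""
    return sum(a != b for a, b in zip(x, y))

def jaccard(x, y):
    """Shared 1s as a proportion of positions where either configuration has a 1."""
    both = sum(a and b for a, b in zip(x, y))
    either = sum(a or b for a, b in zip(x, y))
    return both / either if either else 1.0

def representative_case(rows):
    """Index of the case with the lowest total Hamming distance to all others."""
    return min(range(len(rows)),
               key=lambda i: sum(hamming(rows[i], r) for r in rows))

cases = [(1, 1, 0), (1, 0, 0), (1, 1, 1)]
# (1, 1, 0) differs by only one position from each of the others,
# so it is the most "representative" case here
```

The most similar pair of cases would be found the same way, by taking the minimum Hamming distance over all pairs rather than over one case's average.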
Similarity can be seen as a third facet of diversity, a measure of the distance between any two types of cases. Stirling (2007) used the term disparity to describe the same thing.

Choosing relevant criteria: It is important to note that the relevance of these different association tests and criteria will depend on the context. A surgeon would want a very high level of consistency, even if it was at the cost of low coverage (i.e. applicable only in a limited range of situations). However, a stock market investor would be happy with a consistency of 0.55 (i.e. 55%), especially if it had wide coverage. Even more so if that wide coverage contained a high level of diversity. Returning to the medical example, a false positive might have different consequences to a false negative, e.g. unnecessary surgery versus unnecessary deaths. In other, non-medical, circumstances false positives may be more expensive mistakes than false negatives.

Applying the criteria: My immediate interest is in the use of these kinds of tests for two evaluation purposes. The first is selective screening of hypotheses about causal configurations that are worth more time intensive investigations, an issue raised in a recent blog.
  • Configurations that are Sufficient and not Necessary or Necessary but not Sufficient. 
    • Among these, configurations which were Sufficient but not Necessary, and with high coverage should be selected, 
    • And configurations which were Necessary but not Sufficient, and with high consistency, should also be selected. 
  • Plus all configurations that were Sufficient and Necessary (which are likely to be less common)
The second purpose is to identify implications for more time consuming within-case investigations. These are essential, in order to identify the causal mechanisms at work that connect the conditions that are associated in a given configuration. As I have argued elsewhere, associations are a necessary but insufficient basis for a strong claim of causation. Evidence of mechanisms is like muscles on the bones of a body, enabling it to move.

Having done the filtering suggested above, the following kinds of within-case investigations would seem useful:
  • Are there any common causal mechanisms underlying all the cases found to be Necessary and Sufficient, i.e. those within cell A? 
    • A good starting point would be a case within this set of cases that had the lowest average Hamming distance, i.e. one with the highest level of similarity with all the other cases. 
    • Once one or more plausible mechanisms were discovered in that case, a check could be made to see if they are present in other cases in that set. This could be done in two ways: (a) incrementally, by examining adjacent cases, i.e. cases with the lowest Hamming distance from the representative case; (b) by partitioning the rest of the cases, and examining a case with a median-level Hamming distance, i.e. half way between the most similar and most different cases.
  • Where the configuration is Necessary but not Sufficient, how do the cases in cell B differ from those in cell A, in ways that might shed more light on how the same configuration leads to different outcomes? This is what has been called a Most Similar Different Outcome (MSDO) comparison.
    • If there are many cases this could be quite a challenge, because the cases could differ on many dimensions (i.e. on many attributes). But using the Hamming distance measure we could make this problem more manageable by selecting a case from cell A and B that had the lowest possible Hamming distance. Then a within-case investigation could find additional undocumented differences that account for some or all of the difference in outcomes. 
      • That difference could then be incorporated into the current hypothesis (and data set), enabling more cases from cell B to now be found in cell A, i.e. Consistency would be improved
  • Where the configuration is Sufficient but not Necessary, in what ways are the cases in cell C the same as those in cell A, in ways that might shed more light on how the same outcome is achieved by different configurations? This is what has been called a Most Different Similar Outcome (MDSO) comparison.
    • As above, if there are many cases this could be quite a challenge. Here I am less clear, but de Meur et al (page 72) say the correct approach is "...one has to look for similarities in the characteristics of initiatives that differ the most from each other; firstly the identification of the most differing pair of cases and secondly the identification of similarities between those two cases" The within-case investigation should look for undocumented similarities that account for some of the similar outcomes. 
      • That similarity could then be incorporated into the current hypothesis (and data set), enabling more cases from cell C to now be found in cell A, i.e. Coverage would be improved
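The pair-selection step that both comparisons start from can be sketched as a search for the lowest cross-cell Hamming distance (the function names are mine; `min` simply returns the first of any equally close pairs):

```python
from itertools import product

def hamming(x, y):
    """Number of positions at which two binary configurations differ."""
    return sum(a != b for a, b in zip(x, y))

def closest_cross_pair(cell_a, cell_b):
    """The most similar pair of cases drawn one from each cell (lowest
    Hamming distance) -- a starting point for an MSDO-style comparison."""
    return min(product(cell_a, cell_b), key=lambda pair: hamming(*pair))

cell_a = [(1, 1, 0, 1), (0, 1, 1, 1)]  # configuration present, outcome present
cell_b = [(1, 1, 1, 1), (0, 0, 0, 0)]  # configuration present, outcome absent
pair = closest_cross_pair(cell_a, cell_b)
# -> ((1, 1, 0, 1), (1, 1, 1, 1)), differing in just one position
```

For the MDSO variant described by de Meur et al, the same function could be run with `max` in place of `min` to find the most differing pair before looking for their shared characteristics.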


Tuesday, May 19, 2015

How to select which hypotheses to test?



I have been reviewing an evaluation that has made use of QCA (Qualitative Comparative Analysis). An important part of the report is the section on findings, which lists a number of hypotheses that have been tested and the results of those tests. All of these are fairly complex, involving a configuration of different contexts and interventions, as you might expect in a QCA oriented evaluation. There were three main hypotheses, which in the results section were dis-aggregated into six more specific hypotheses. The question for me, which has much wider relevance, is how do you select hypotheses for testing, given the limited time and resources available in any evaluation?

The evaluation team have developed three different data sets, each with 11 cases, and with 6, 6 and 9 attributes of these cases (shown in columns), known as "conditions" in QCA jargon. This means there are 2^6 + 2^6 + 2^9 = 640 possible combinations of these conditions that could be associated with and cause the outcome of interest. Each of the hypotheses being explored by the evaluation team represents one of these configurations. In this type of situation, the task of choosing an appropriate hypothesis seems a little like looking for a needle in a haystack.

It seems there are at least three options, which could be combined. The first is to review the literature and find what claims (supported by evidence) are made there about "what works" and select from these those that are worth testing e.g. one that seems to have wide practical use, and/or one that could have different and significant program design implications if it is right or wrong. This seems to be the approach that the evaluation team has taken, though I am not so sure to what extent they have used the programming implications as an associated filter.

The second approach is to look for constituencies of interest among the staff of the client who has contracted the evaluation. There have been consultations, but it is not clear what sort of constituencies each of the tested hypotheses has. There were some early intimations that some of the hypotheses that were selected are not very understandable. That is clearly an important issue, potentially limiting the use of the evaluation findings.

The third approach is an inductive search, using QCA or other software, for configurations of conditions associated with an outcome that have both a high level of consistency (i.e. they are always associated with the presence (or the absence) of an outcome) and coverage (i.e. they apply to a large proportion of the outcomes of interest). In their barest form these configurations can be considered as hypotheses. I was surprised to find that this approach had not been used, or at least reported on, in the evaluation report I read. If it had been used but no potentially useful configurations found then this should have been reported (as a fact, not a fault).

Somewhat incidentally, I have been playing around with the design of an Excel worksheet and managed to build in a set of formulas for automatically testing how well different configurations of conditions of particular interest (aka hypotheses) account for a set of outcomes of interest, for a given data set. The tests involve measures taken from QCA (consistency and coverage, as above) and from machine learning practice (known as a Confusion Matrix). This set-up provides an opportunity to do some quick filtering of a larger number of hypotheses than an evaluation team might initially be willing to consider (i.e. the 6 above). It would not be as efficient a search as the QCA algorithm, but it would be a search that could be directed according to specific interests. Ideally this directed search process would identify configurations that are both necessary and sufficient (for more than a small minority of outcomes). A second best result would be those that are necessary but insufficient, or vice versa. (I will elaborate on these possibilities and their measurement in another blog posting)
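The same kind of quick screening could equally be done outside Excel. A sketch of the underlying logic (the function name is mine, and a hypothesis is represented here simply as a set of attribute positions that must all be present):

```python
def truth_table_cells(rows, outcomes, hypothesis):
    """Tally the four truth-table cells for a hypothesis (a set of attribute
    indices that must all equal 1) against binary case rows and a parallel
    list of binary outcomes."""
    a = b = c = d = 0
    for row, outcome in zip(rows, outcomes):
        predicted = all(row[i] == 1 for i in hypothesis)
        if predicted and outcome:
            a += 1  # configuration present, outcome present
        elif predicted:
            b += 1  # configuration present, outcome absent
        elif outcome:
            c += 1  # configuration absent, outcome present
        else:
            d += 1  # configuration absent, outcome absent

    return a, b, c, d

rows = [(1, 1, 0), (1, 1, 1), (1, 0, 0), (0, 1, 0)]
outcomes = [1, 1, 0, 1]
a, b, c, d = truth_table_cells(rows, outcomes, hypothesis={0, 1})
consistency = a / (a + b)  # and coverage = a / (a + c)
```

Looping this over many candidate attribute sets gives exactly the kind of directed, interest-driven screening described above, without needing the full Quine–McCluskey minimisation.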

The wider point to make here is that with the availability of a quick screening capacity the evaluation team, in its consultations with the client, should then be able to broaden the focus of useful discussions away from what are currently quite specific hypotheses, and towards a menu of a limited number of conditions that can make up not only these hypotheses but also other alternative versions. It is the choice of these particular conditions that will really make the difference to the scale and usability of the results of a QCA oriented evaluation. More optimistically, the search facility could even be made available online, for continued use by those interested in the evaluation results and their possible variants.

The Excel file for quick hypotheses testing is here: http://wp.me/afibj-1ux




Monday, April 20, 2015

In defense of the (careful) use of algorithms and the need for dialogue between tacit (expertise) and explicit (rules) forms of knowledge



This blog posting is a response to the following paper now available online
Greenhalgh, T., Howick, J., Maskrey, N., 2014. Evidence based medicine: a movement in crisis? BMJ 348, http://www.bmj.com/content/348/bmj.g3725
Background: Chris Roche passed this very interesting paper on to me, received via "Kate", who posted a comment on Chris's posting "What has cancer taught me about the links between medicine and development?", which can be found on Duncan Green's "From Poverty to Power" blog. 

The paper is interesting in the first instance because both the debate and practice about evidence based policy and practice seems to be much further ahead in the field of medicine than it is in the field of development aid (...broad generalisation that this is...).

It is also of interest to reflect on the problems and solutions copied below and to think how many of these kinds of issues can also be seen in development aid programs.

 According to the paper, the problems with the current version of evidence based medicine include:

  1. Distortion of the evidence based brand ("The first problem is that the evidence based “quality mark” has been misappropriated and distorted by vested interests. In particular, the drug and medical devices industries increasingly set the research agenda. They define what counts as disease ... They also decide which tests and treatments will be compared in empirical studies and choose (often surrogate) outcome measures for establishing “efficacy.”")
  2. Too much evidence: "The second aspect of evidence based medicine’s crisis (and yet, ironically, also a measure of its success) is the sheer volume of evidence available. In particular, the number of clinical guidelines is now both unmanageable and unfathomable. One 2005 audit of a 24 hour medical take in an acute hospital, for example, included 18 patients with 44 diagnoses and identified 3679 pages of national guidelines (an estimated 122 hours of reading) relevant to their immediate care"
  3. Marginal gains and a shift from disease to risk: "Large trials designed to achieve marginal gains in a near saturated therapeutic field typically overestimate potential benefits (because trial samples are unrepresentative and, if the trial is overpowered, effects may be statistically but not clinically significant) and underestimate harms (because adverse events tend to be under detected or under reported)."
  4. Overemphasis on following algorithmic rules: "Well intentioned efforts to automate use of evidence through computerised decision support systems, structured templates, and point of care prompts can crowd out the local,individualised, and patient initiated elements of the clinical consultation"
  5. Poor fit for multi-morbidity. "Multi-morbidity (a single condition only in name) affects every person differently and seems to defy efforts to produce or apply objective scores, metrics, interventions, or guidelines"
The paper's proposed solutions or ways forward include:
  1. Individualised for the patient: Real evidence based medicine has the care of individual patients as its top priority, asking, “what is the best course of action for this patient, in these circumstances, at this point in their illness or condition?” It consciously and reflexively refuses to let process (doing tests, prescribing medicines) dominate outcomes (the agreed goal of management in an individual case). 
  2. Judgment not rules. Real evidence based medicine is not bound by rules.  
  3. Aligned with professional, relationship based care.  Research evidence may still be key to making the right decision—but it does not determine that decision. Clinicians may provide information, but they are also trained to make ethical and technical judgments, and they hold a socially recognised role to care, comfort, and bear witness to suffering.
  4. Public health dimension . Although we have focused on individual clinical care, there is also an important evidence base relating to population level interventions aimed at improving public health (such as pricing and labelling of consumables, fluoridation of water, and sex education). These are often complex, multifaceted programmes with important ethical and practical dimensions, but the same principles apply as in clinical care. 
  5. Delivering real evidence based medicine. To deliver real evidence based medicine, the movement’s stakeholders must be proactive and persistent. Patients (for whose care the movement exists) must demand better evidence, better presented, better explained, and applied in a more personalised way with sensitivity to context and individual goals.
  6. Training must be reoriented from rule following. Critical appraisal skills—including basic numeracy, electronic database searching, and the ability systematically to ask questions of a research study—are prerequisites for competence in evidence based medicine. But clinicians need to be able to apply them to real case examples.
  7. Evidence must be usable as well as robust. Another precondition for real evidence based medicine is that those who produce and summarise research evidence must attend more closely to the needs of those who might use it
  8. Publishers must raise the bar. This raises an imperative for publishing standards. Just as journal editors shifted the expression of probability from potentially misleading P values to more meaningful confidence intervals by requiring them in publication standards, so they should now raise the bar for authors to improve the usability of evidence, and especially to require that research findings are presented in a way that informs individualised conversations.
  9. ...and more
While many of these complaints and claims make a lot of sense, I think there is also a risk of "throwing the baby out with the bathwater" if care is not taken with some of them. I will focus on a couple of ideas that run through the paper.

The risk lies in seeing two alternative modes of practice as exclusive choices. One is rule based, focused on average effects when trying to meet common needs in populations; the other is expertise focused on the specific and often unique needs of individuals. Parallels could be drawn between different types of aid programs, e.g. centrally planned and nationally rolled out services meeting basic needs like water supply or education, and much more person centred participatory rural development programs.

Alternatively, one can see these two approaches as having complementary roles that can help and enrich each other. The authors describe one theory of learning which probably applies in many fields, including medicine: "...beginning with the novice who learns the basic rules and applies them mechanically with no attention to context. The next two stages involve increasing depth of knowledge and sensitivity to context when applying rules. In the fourth and fifth stages, rule following gives way to expert judgments, characterised by rapid, intuitive reasoning informed by imagination, common sense, and judiciously selected research evidence and other rules". During this process a lot of explicit knowledge becomes tacit, and almost automated, with conscious attention left free for the more case-specific features of a situation. It is an economical use of human cognitive powers. Michael Polanyi wrote about this process years ago (1966, The Tacit Dimension).

The other side of this process is when tacit knowledge gets converted into explicit knowledge. That's what some anthropologists and ethnographers do. They seek to get into the inner world of their subjects and to make it accessible to others. One practitioner whose work interests me in particular is Christina Gladwin, who wrote a book on Ethnographic Decision Trees in 1989. This was all about eliciting how people, like small farmers in west Africa, made decisions about what crops to plant. The result was a decision tree model that summarised all the key choices farmers could make, and the final outcomes those different choices would lead to. This was not a model of how they actually thought, but a model of how different combinations of choices were associated with different outcomes of interest. These decision trees are not so far removed from those used in medical practice today.
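A Gladwin-style decision tree can be written down as a short sequence of nested choices. The sketch below is purely illustrative: the crops, conditions, and thresholds are invented for this post, not taken from Gladwin's fieldwork, but the structure (each answer dictating the next choice, each path ending in an outcome) is the same.

```python
# A hypothetical ethnographic decision tree, in the spirit of Gladwin (1989).
# All crops and conditions here are invented for illustration; a real model
# would be elicited from farmers' own stated decision criteria.

def choose_crop(has_irrigation: bool, rains_early: bool, can_afford_fertiliser: bool) -> str:
    """Walk the tree of choices and return the planting decision it leads to."""
    if has_irrigation:
        # An irrigated plot supports a higher-value, thirstier crop.
        return "rice"
    if rains_early:
        # Early rains allow a long-season crop, if inputs are affordable.
        return "maize" if can_afford_fertiliser else "sorghum"
    # Late rains: fall back on a short-season, drought-tolerant crop.
    return "millet"

print(choose_crop(False, True, False))  # -> sorghum
```

Written this way, the tree is both communicable (a new farmer could follow it) and testable (we can check whether each path really does lead to the predicted outcome).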

A new farmer coming into the same location could arguably make use of such a decision tree to decide what crops to plant. Alternatively they could work with one of the farmers for a number of seasons, which might then cover all the eventualities in the decision tree, and learn from that direct experience. But this would take much more time. In this type of setting explicit rule based knowledge is an easier and quicker means of transferring knowledge between people. Rule based knowledge that can be quickly and reliably communicated is also testable knowledge. Following the same pattern of rules may or may not always lead to the expected outcome in another context.

And now a word about algorithms. An algorithm is a clearly defined sequence of steps that will lead to a desired end, sometimes involving some iteration until that end state gets closer. A sequence of choices in a decision tree is an algorithm. At each choice point the answer will dictate what choice is to be made next. These are the rules mentioned in the paper above. There are also algorithms for constructing such algorithms. On this blog I have made a number of postings about QCA and (automated) Decision Tree models, both of which are means of constructing testable causal models. Both involve computerised processes for finding rules that best predict outcomes of interest. I think they have a lot of potential in the field of development aid.
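To make the idea of "algorithms for constructing algorithms" concrete, here is a toy rule-induction sketch in the style of a OneR learner: for each attribute it builds the rule "predict the majority outcome for each value", then keeps the attribute whose rule makes the fewest errors. The project cases and attribute names are invented for illustration; real QCA and decision tree software do something considerably richer than this single-attribute search.

```python
from collections import Counter

# Invented project cases: two candidate conditions and an outcome.
cases = [
    {"participation": "high", "funding": "high", "outcome": "success"},
    {"participation": "high", "funding": "low",  "outcome": "success"},
    {"participation": "low",  "funding": "high", "outcome": "failure"},
    {"participation": "low",  "funding": "low",  "outcome": "failure"},
    {"participation": "high", "funding": "low",  "outcome": "failure"},
]

def one_rule(cases, attributes, target="outcome"):
    """Return (attribute, rule, errors) for the best single-attribute rule."""
    best = None
    for attr in attributes:
        # For each value of this attribute, predict the majority outcome.
        rule = {}
        for value in {c[attr] for c in cases}:
            outcomes = Counter(c[target] for c in cases if c[attr] == value)
            rule[value] = outcomes.most_common(1)[0][0]
        errors = sum(1 for c in cases if rule[c[attr]] != c[target])
        if best is None or errors < best[2]:
            best = (attr, rule, errors)
    return best

attr, rule, errors = one_rule(cases, ["participation", "funding"])
print(attr, rule, errors)  # participation is the better predictor here
```

On these made-up cases, "participation" wins: high participation mostly predicts success, low participation predicts failure, with one case misclassified. The point is simply that the search for the best rule is itself a mechanical, repeatable procedure.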

But returning to the problems of evidence based medicine, it is very important to note that algorithms are means of achieving specific goals. Deciding which goals need to be pursued remains a very human choice. Even within the use of both QCA and (automated) Decision Tree modeling, users have to decide the extent to which they want to focus on finding rules that are very accurate, or rules that are less accurate but apply to a wider range of circumstances (usually simple rather than complex rules).
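This trade-off can be made concrete with the two measures QCA analysts use: consistency (how reliably a configuration predicts the outcome when it is present) and coverage (how much of the outcome the configuration accounts for). The cases below are invented; each pair records whether the configuration and the outcome were present.

```python
# Toy illustration of QCA's consistency and coverage measures.
# Each invented case is (configuration_present, outcome_present).
cases = [
    (True, True), (True, True), (True, True), (True, False),
    (False, True), (False, True), (False, False), (False, False),
]

both = sum(1 for cfg, out in cases if cfg and out)
cfg_present = sum(1 for cfg, out in cases if cfg)
outcome_present = sum(1 for cfg, out in cases if out)

consistency = both / cfg_present      # how reliably the rule predicts the outcome
coverage = both / outcome_present     # how much of the outcome the rule explains

print(f"consistency={consistency:.2f}, coverage={coverage:.2f}")
```

A highly specific configuration tends to score well on consistency but poorly on coverage, and vice versa; choosing where to sit on that spectrum is exactly the kind of judgment the algorithm cannot make for us.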

So, in summary, in any move towards evidence based practice, we need to ensure that tacit and explicit forms of knowledge build upon each other rather than getting separated as different and competing forms of knowledge. And while we should develop, test and use good algorithms, we should remember they are always means to an end, and we remain responsible for choosing the ends we are trying to achieve.

Postscript 2015 05 04: Please also read this recent cautionary analysis of the use of algorithms for the purposes of public policy implementation. The author points out that algorithms can embody and perpetuate cultural biases. How is that possible? It is possible because all evidence-based algorithms are developed using historical data, i.e. data sets of what has happened in the past. Those data sets, e.g. of arrest and conviction data in a given city, reflect historical practice by human institutions in that city, with all their biases, conscious and not so conscious. They don't reflect ideal practice, simply the actual practice at the time. Where an algorithm is not based on analysis of historical data, it may have its origins in a more ethnographic study of the practice of human experts in the domain of interest. Their practice, and their interpretations of their practice, are equally subject to cultural biases. The analysis by Virginia Eubanks includes four useful suggestions to counter these risks, one of which is that "We need to learn more about how policy algorithms work" by demanding more transparency about the design of a given algorithm and its decisions. But this may not be possible, or in some cases publicly desirable. One alternative method of interest is the algorithmic audit.