Tuesday, December 15, 2020

The implications of complex program designs: Six proposals worth exploring?

Last week I was involved in a seminar discussion of a draft CEDIL paper reviewing methods that can be used to evaluate complex interventions. That discussion prompted the following speculations, which could have practical implications for the evaluation of complex interventions.

Caveat: As might be expected, any discussion in this area will hinge on the definition of complexity. My provisional definition of complexity is based on a network perspective, something I've advocated for almost two decades now (Davies, 2003). That is, the degree of complexity depends on the number of nodes (e.g. people, objects or events) and on the density and diversity of the types of interactions between them. Some might object that what I have described here is simply complicated rather than complex. But I can be fairly confident in saying that as you move along this scale of increasing complexity (as I have defined it here), the behaviour of the network will become more unpredictable. Unpredictability, or at least difficulty of prediction, is a fairly widely recognised characteristic of complex systems (but see the Footnote below).
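
To make that network framing concrete, here is a minimal sketch in Python (using the networkx library) of how such a complexity score might be computed. The composite formula, the edge-type labels, and the example network are my own illustrative assumptions, not part of the definition above.

```python
# A minimal sketch of the network view of complexity described above.
# The scoring formula is an illustrative assumption, not a standard metric:
# the score grows with node count, edge density, and the diversity of
# interaction types (measured as Shannon entropy of edge-type labels).
import math
from collections import Counter

import networkx as nx

def complexity_score(g: nx.MultiGraph) -> float:
    n_nodes = g.number_of_nodes()
    density = nx.density(g)
    type_counts = Counter(data.get("type", "unknown")
                          for _, _, data in g.edges(data=True))
    total = sum(type_counts.values())
    entropy = -sum((c / total) * math.log2(c / total)
                   for c in type_counts.values()) if total else 0.0
    return n_nodes * (1 + density) * (1 + entropy)

# A hypothetical three-actor intervention network with diverse interaction types.
g = nx.MultiGraph()
g.add_edge("donor", "NGO", type="funding")
g.add_edge("NGO", "community", type="training")
g.add_edge("community", "NGO", type="feedback")
print(round(complexity_score(g), 2))
```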

The proposals:

Proposal 1. As the complexity of an intervention increases, the task of model development (e.g. a Theory of Change), especially model specification, becomes increasingly important relative to that of model testing. This is because there are more and more parameters that could make a difference, or could be "wrongly" specified.

Proposal 2. When the confident specification of model parameters becomes more difficult, then perhaps model testing will become more like an exploratory search of a combinatorial space than focused hypothesis testing. This probably has some implications for the types of methods that can be used: for example, more attention to the use of simulations, or predictive analytics. A sketch of what such a search might look like follows below.
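
As a minimal sketch of that kind of exploratory search, the Python fragment below enumerates every combination of a small parameter grid and runs each through a toy outcome model. The simulate function and the parameter names are entirely hypothetical stand-ins for a real intervention simulation.

```python
# A minimal sketch of exploratory search over a combinatorial space of
# model specifications. The outcome model is hypothetical: a stand-in for
# a real simulation of an intervention.
from itertools import product

def simulate(coverage, intensity, lag):
    """Toy outcome model: higher coverage and intensity help, lag hurts."""
    return coverage * intensity / (1 + lag)

grid = {
    "coverage":  [0.2, 0.5, 0.8],
    "intensity": [1, 2, 3],
    "lag":       [0, 1, 2],
}

# Every combination of parameter values is one candidate specification.
results = [(combo, simulate(*combo)) for combo in product(*grid.values())]
print(len(results), "specifications explored")  # 3 x 3 x 3 = 27

best_combo, best_value = max(results, key=lambda r: r[1])
print("Best specification:", dict(zip(grid, best_combo)), "->", round(best_value, 2))
```

Even this toy grid yields 27 specifications; each additional parameter multiplies the space, which is why exhaustive testing quickly gives way to simulation and heuristic search as complexity grows.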

Proposal 3. In this situation, where more exploration is needed, where will all the relevant empirical data come from to test the effects of different specifications? Might it be that as complexity increases there is more and more need for monitoring (time-series data), relative to evaluation (once-off data)?

Proposal 4. And if a complex intervention may lead to complex effects – in terms of behaviour over time – then the timing of any collection of relevant data becomes important. A once-off data collection would capture the state of the intervention+context system at one point in an impact trajectory that could actually take many different shapes (e.g. linear, sinusoidal, exponential, etc.). The conclusions drawn could be seriously misleading, as the sketch below illustrates.
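
The fragment below is a minimal sketch of that risk. Three hypothetical trajectories are deliberately constructed to pass through (almost) the same value at one measurement point, so a once-off data collection at that moment could not distinguish them, even though they diverge sharply afterwards.

```python
# A minimal sketch of why once-off measurement can mislead: three hypothetical
# impact trajectories nearly coincide at t = 5 but diverge by t = 10.
import math

def linear(t):     return 2.0 * t                       # steady growth
def plateau(t):    return 10.0 * (1 - math.exp(-t))     # rises, then levels off
def sinusoidal(t): return 10.0 + 3.0 * math.sin(t - 5)  # oscillates around 10

trajectories = [("linear", linear), ("plateau", plateau), ("sinusoidal", sinusoidal)]

for t in (5, 10):  # t = 5 is the once-off measurement point
    print(f"t = {t}:")
    for name, f in trajectories:
        print(f"  {name:10s}: {f(t):6.2f}")
```

At t = 5 all three report roughly 10, but at t = 10 they read about 20, 10, and 7 respectively: three very different stories about impact.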

Proposal 5. And going back to model specification: what sort of impact trajectory is the intervention aiming for? One where change happens and then plateaus, or one where there is an ongoing increase? This needs specification because it will affect the timing and type of data collection needed.

Proposal 6. And there may be implications for the process of model building. As the intervention gets more complex – in terms of nodes in the network – there will be more actors involved, each of whom will have a view on how the parts, and perhaps the whole package, are and should be working, and on the role of their particular part in that process. Participatory, or at least consultative, design approaches would seem to become more necessary.

Are there any other implications that can be identified? Please use the Comment facility below.

Footnote: Yes, I know you can also find complex (as in difficult to predict) behaviour in relatively simple systems, such as the logistic map used to model population dynamics, or the equations describing interactions between predator and prey populations. And there may be some quite complex systems (by my definition) that are relatively stable. My definition of complexity is more probabilistic than deterministic.
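
A minimal sketch of that first point: the logistic map, x(n+1) = r·x(n)·(1 − x(n)), is about as simple as a model can get, yet for values of r near 4 two almost identical starting points diverge rapidly. The parameter values below are illustrative.

```python
# The logistic map: a very simple system with hard-to-predict behaviour.
def logistic_trajectory(x0, r=3.9, steps=30):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)  # differs only in the sixth decimal place

for n in (0, 10, 20, 30):
    print(f"step {n:2d}: {a[n]:.4f} vs {b[n]:.4f}")
```

By around step 20 the two runs have visibly drifted apart, and by step 30 they bear no resemblance to each other: simple rules, unpredictable behaviour.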
