Two recent documents have prompted me to do some thinking on this subject
- Aston, T. (2020). Quality of Evidence Rubrics. Focusing on single cases and their internal validity
- Puttick, R. (2018). Mapping the Standards of Evidence used in UK social policy. Alliance for Useful Evidence. Looking at 18 different standards from a wide range of fields
If we view Most Significant Change (MSC) stories as evidence of change (and of what people think about those changes), what should we look for in terms of quality? What are the attributes of a good-quality MSC story?
Some suggestions that others might like to edit or add to, or even delete...
1. There is clear ownership of an MSC story, and of the storyteller's reasons for selecting it. Without this, there is no possibility of clarifying any elements of the story and its meaning, let alone of more detailed investigation/verification
2. There was some protection against random/impulsive choice. The person who told the story was asked to identify a range of changes that had happened, before being asked to identify the one which was most significant
3. There was some protection against interpreter/observer error. If another person recorded the story, did they read back their version to the storyteller, to enable them to make any necessary corrections?
4. There has been no violation of ethical standards: Confidentiality has been offered and then respected. Care has been taken not only with the interests of the storyteller but also with those of people mentioned in a story.
5. Have any intended sources of bias been identified and explained? Sometimes it may be appropriate to ask about "most significant changes caused by ...x..." or "most significant changes of ...x... type"
6. Have any unintended sources of bias been anticipated and responded to? For example, by also asking about "most significant negative changes" or "any other changes that are most significant"?
7. There is transparency of sources. If stories were solicited from a number of people, we know how these people were identified, and who was excluded and why so. If respondents were self-selected, we know how they compare to those who did not self-select.
8. There is transparency of the selection process: If multiple stories were initially collected, and the most significant of these were then selected, reported, and used elsewhere, the details of the selection process should be available, including (a) who was involved, (b) how choices were made, and (c) the reasons given for the final choice(s) made
9. Fidelity: Has the written account of why a selection panel chose a story as most significant done the participants' discussion justice? Was it sufficiently detailed, as well as being truthful?
10. Have potential biases in the selection processes been considered? Do most of the finally selected most significant change stories come from people of one kind rather than another, e.g. men rather than women, or one ethnic or religious group rather than others? Relatedly, is the membership of the selection panel transparent? (thanks to Maleeha below).
11. Your thoughts here... (add them using the Comment facility below).
Please note:
1. In focusing here on "quality of evidence" I am not suggesting that the only use of MSC stories is to serve as forms of evidence. Often the process of dialogue is immensely important, and it is the clarification of values, and of who values what and why so, that matters most. There are bound to be other purposes served as well
2. (Perhaps the same point, expressed in another way.) The above list is intentionally focused on minimal rather than optimal criteria. As noted above, a major part of the MSC process is about the discovery of what is of value to the participants.
For more on the MSC technique, see the resources here.
Hi Rick. These are great points. However, I find that one must also highlight the limitations faced in selecting the stories, as story selection may not only be motivated by bias but also constrained by limitations in generating true MSC stories that are specific to the particular project and context. For example, sometimes multiple projects can share the same group of beneficiaries, and then generating a true MSC story for your specific project can be a challenge/limitation
Re "...be motivated by bias": this needs unpacking. Bias is clearly a "bad thing", but what if we substituted the word "values", and then tried to ensure that the selection process makes these values as explicit as possible, through dialogue between different participants?
I would also like to know what you mean by "true MSC stories"... do you believe there is some objective, independent standard of what constitutes a "true MSC story"?
Great list. I'd add transparency about the composition of the selection panel itself - if the MSC story is a biography, then the selection panel is the author, and just as important as the respondent/s!
Good point about transparency of the composition of the selection panel. I will edit the text above accordingly.
Re your metaphor of the selection panel as the author: I would change that to read that the selection panel is the editor/sub-editor (depending on whether they made the final selection). They don't change the original text, they simply add extra commentary - i.e. why we selected this change as the most significant of all