Comments on Rick On the Road: "Why I am sick of (some) Evaluation Questions!"
Blog author: Rick Davies. 18 comments; feed last updated 27 March 2024.

Rick Davies (17 March 2016):
Hi Jonathan,
Thanks for your comment; it was good to get a DFID perspective here (while recognizing that views may vary widely even within DFID).
Regards, Rick

Anonymous (17 March 2016):
It is hard to disagree that multiple evaluation questions covering an entire programme are a recipe for disaster (and a waste of money), likely to result in an evaluation report impressive in equal measure for its breadth and its shallowness.
Here at DFID we have attempted to deal with the problem by reducing the number of evaluation questions to a bare minimum (commensurate with the scale and budget of the evaluation) and then verifying exactly how the evidence generated against each question will be used.
Part of the problem, of course, is the consultation process that evaluation terms of reference go through as they develop, which involves multiple advisers, programme managers and others all wanting to add a question of interest to them. The net result can quickly become the long lists of evaluation questions that various commentators have referred to. This is not to suggest that consultations should necessarily be pared down, but rather that evaluation commissioners need to be more savvy in discarding, truncating and editing suggestions for additional evaluation questions.
I like the idea of setting and testing hypotheses, which forces the commissioner to focus on and test particular aspects of the theory of change that underpins the intervention.

Shubh K. Range (16 March 2016):
I find evaluation commissioners, especially the UN agencies I've worked with, are not only attached to the long list of evaluation questions they have posed, but also wary of any deviation or exploration around them, e.g. asking contribution-related questions on results. Gender integration in evaluation is an example: a separate section of women's-participation questions was posed in an evaluation I did recently, and all they wanted was a 'rubber stamp' on their narrow view of gender integration.

Rick Davies (15 March 2016):
Hi Keri,
I think hybrid is the way to go. I would not throw out open-ended questions, but I would probably put them in as second priority, after eliciting and testing hypotheses reflecting the beliefs of the donors and implementers about how their programs are working.
In other recent contexts I have suggested that, after having started with open-ended questions (and perhaps being required to), the evaluation team tries to convert these into hypotheses by asking various stakeholders what they think the answers to these questions are, and then getting the questions shaped into a form that can be tested through systematic inquiry during an evaluation.

Keri Culver (http://www.wednesdaymissives.com) (15 March 2016):
Hi Jim,
We've gotten traction with the need for fewer questions working with the USAID Mission in Colombia. As a small number of questions is recommended in USAID guidance now, we have something to back up our assertion that this is a wise way to go. See: 1.usa.gov/1QVORdL, 1.usa.gov/2532Y8m, 1.usa.gov/1UerpvD. (Okay, sometimes they go to six EQs. But we're 16 task orders and three years in, and I can think of only one that went above that.) However, I have to say I'm intrigued by Rick's idea that a hypothesis makes a bolder starting point than some of these wide-ranging, open-ended questions. I might put it in front of our open-minded USAID client and see if we can try a different angle, maybe a hybrid to start.

Jon Kurtz (11 March 2016):
This resonates with my experience as well. I work for an INGO that both commissions and conducts evaluations. Reflecting on why we stretch ourselves, and the evaluators we hire, so thin, I often feel the OECD-DAC evaluation criteria play a role. The five criteria (relevance, effectiveness, etc.), plus several others for evaluations of humanitarian interventions (coherence, connectedness, etc.), lead to dozens of evaluation questions. I believe some donors' evaluation policies state that the DAC criteria should be followed in all (final) evaluations. But even when we have more flexibility to define our own evaluation questions, I find program managers default to including most or all of the DAC criteria in their evaluation SOWs. ALNAP makes it clear in their guidance to "only use the criteria that relate to the questions you want answered". I'd like to see a similar warning label for the DAC criteria on the OECD's materials.

Rick Davies (10 March 2016):
Apologies. The authorship of the post above was intended to be made visible. It was made by:
Ricardo Wilson-Grau Consultoria em Gestão Empresarial Ltda
Evaluation | Outcome Harvesting
Rua Marechal Marques Porto 2/402, Tijuca, Rio de Janeiro, CEP 20270-260, Brasil
Telephone: 55 21 2284 6889; Skype: ricardowilsongrau
Direct via VOIP, dialing locally from USA or Canada: 1 347 404 5379

Ricardo Ramirez (10 March 2016):
Rick: we avoid this problem by narrowing down who will use and own the evaluation, then challenging them to define PURPOSES that are specific; for each purpose we have them draft Key Evaluation Questions (all of this is part of Utilization-focused Evaluation). We spend a lot of time with them editing the Key Evaluation Questions. More on this at http://evaluationandcommunicationinpractice.net
Cheers, Ricardo Ramirez

Anonymous (10 March 2016):
I certainly identify with the rant and the comments, although I would not substitute hypotheses for questions. Applying the notion of evaluation utility, I find that the solution is negotiating a manageable number of USEFUL questions that, in the AEA's words, "ensure that an evaluation will serve the information needs of intended users".
It raises another issue, however: the prevalent tendering madness of launching terms of reference that are non-negotiable. I propose a solution that comes from my own experience hiring staff: an evaluator-centred commissioning process.
1) Recruit evaluator(s) rather than call for proposals with pre-determined TORs.
2) Engage with the best candidates based on their potential match for you.
3) Consult with references.
4) Hire the best-qualified candidate and develop the terms of reference together.

Pablo Rodriguez-Bilella (10 March 2016):
Hi Rick, following Jim Rugh's line of thought, the mini-book "Actionable Evaluation Basics" by Jane Davidson came to mind. She states there that almost any evaluation needs around five main questions (give or take two). Compared with the cases being mentioned here, well... ;-)
http://www.amazon.com/Actionable-Evaluation-Basics-important-questions/dp/1480102695/ref=asap_bc?ie=UTF8
Best, Pablo

Rick Davies (9 March 2016):
Hi Anon,
Thanks for your link above, which I have now followed up.
Regards, Rick

Anonymous (9 March 2016):
'Evaluation questions' are indeed useless in providing value, as any private-sector management consultant can tell you (they used the approach in the 1950s-1960s, but abandoned it because it was inefficient, made for meaningless/boring findings, and failed to yield evidence-based recommendations).
Since then, they have used a hypothesis-driven approach, as you suggest (see: http://www.consultantsmind.com/2012/09/26/hypothesis-based-consulting/).
It's imperfect, and takes more effort and skill to pull off successfully. But it's greatly superior to the 'evaluation questions' approach most evaluators still use. Why don't evaluators learn from what the private sector already learned half a century ago?

Lisa M Howes (9 March 2016):
I whole-heartedly agree. I think the use of too many non-specific questions (often drawn from 'master lists' in the public domain, because 'that's what everyone else uses') has led to many meaningless evaluations. Often, in fact usually(!), there is not the resource envelope to accompany the huge list of questions and issues the evaluator is asked to look at, and what results is a broad-based evaluation that tries to 'tick boxes' but has little depth or utilisation in terms of real learning and subsequent change. I think this issue of better evaluation questions ties in closely with recent debates about over-ambitious evaluations and the inadequate resources to implement them.
I also worry that we are not using institutional memory or expertise and experience adequately: if previous M&E and reflection/evaluation hasn't fitted with the current 'modes of favour', it is treated as not robust enough to be worthy of consideration. Thanks for your thoughts. I think rants can be both cathartic and helpful to others, as they realise they are not alone in struggling or being frustrated with certain aspects of practice.

Anonymous (9 March 2016):
Thank you Rick for raising fundamental questions here. I hope we will have a rich debate that may shape the way evaluation questions are constructed. In my opinion it is time evaluation commissioners became specific about what they want to measure. I would propose that the indicators set out in the initial project design be turned into specific questions; that is, it is time we began to ask ourselves whether we were able to achieve this or that, based on the indicators we set out. Most evaluation questions are framed around the objectives, but the objectives had measurable indicators. Those indicators should form the basis of the evaluation questions, and in addition to answering them we can add issues of spillover. End of ranting.

Jim Rugh (9 March 2016):
On a related rant, having to do with the NUMBER of questions in an evaluation ToR, much less the purpose of the questions themselves: I was helping develop training materials for an agency. We agreed that ideally an evaluation should be focused on no more than five-or-so MAIN questions. We tried to find actual examples of evaluation ToRs to use as case studies. It was hard to find any that had fewer than 30 questions; some had as many as 125 (like the pages of questions David McDonald referred to above). You are right, Rick, in your main point: there is something wrong if the agencies and implementers didn't already have pretty good answers to some of their questions. And they need to be both realistic and strategic in identifying the questions on which the evaluative effort should really be focused.

Rick Davies (9 March 2016):
maiwada zubairu has left a new comment on your post "Why I am sick of Evaluation Questions!":
I agree with David: the title is misleading. One would have thought we were throwing away evaluation questions and alternatives were being offered. Agreed, what is needed is re-phrasing the evaluation questions to meet the two fundamentals of evaluation: accountability and learning.

Rajan Alexander (9 March 2016):
A totally sensible view. Sometimes the questions listed go on for pages, and they still expect the report within 20 pages. That's completely nonsensical. It's time some hue and cry was raised, and I'm glad to see you are leading the charge!

David McDonald (8 March 2016):
What a delightful rant, Rick!
Though perhaps you could have titled it 'Why I am sick of particular types of evaluation questions' or 'Why I am sick of particular types of evaluation commissioners'.
The evaluations I have done lately have commenced with the commissioners and me sitting down together to develop the evaluation protocol, including the evaluation questions. What a fantastic improvement over when I used to do evaluations for UN agencies, in which I would have pages (literally!) of evaluation questions foisted on me, most of them to do with largely meaningless minutiae that missed the mark.