Late last year, a new CECAN syllabus was put together as a training module for policy analysts and evaluators. The syllabus outlines ten two-hour sessions, which can be incorporated into a broader postgraduate programme.
In development studies, an evaluation team must face up to the challenge of combining surveys with semi-structured interview data. In this seminar, Wendy explained and demonstrated how these linkages are made, using concrete data about villages in Bangladesh and India. The problem is that complex data can overwhelm the interpreting team; the solutions Wendy offered are methodologically sound.
Professionals in evaluation pay a lot of attention to how to evaluate. We are less thoughtful about exactly what to evaluate, why, and how we define ‘success’. Stakeholders’ interests and questions tend to determine what will be useful to focus on at a particular point in time. Alternatively, we use a predetermined set of criteria, usually the so-called DAC criteria - relevance, efficiency, effectiveness, impact and sustainability - to direct and focus evaluations. Yet this widely accepted practice can prompt us to neglect important issues that should be on our agenda if we are serious about the role of evaluation in supporting sustainable development, and in understanding whether we are on track to achieve the SDGs. Failure to focus our evaluations appropriately can lull us into dangerous complacency about accomplishment and success.