Dr Barbara Befani, CECAN Research Fellow, has expanded and partially redeveloped the CAEM tool, originally published by BOND, to include, among other things, more complexity-appropriate methodologies.
Click here to read the full report, ‘Choosing Appropriate Evaluation Methods – A Tool for Assessment and Selection (Version Two)’, published November 2020.
Click here to download the Excel-based tool.
The report focuses on the logic and practice of appropriate methodological choice for policy evaluation, addressing how this process is grounded in a more general theory of choice and how methodological appropriateness is only one of several dimensions of evaluation quality.
The paper is also a user guide for the Excel-based tool, which now includes the following methods:
- Experimental and Non-Experimental: Randomised Controlled Trials, Difference-in-Difference, Statistical Matching, Instrumental Variables
- Systems-Based and Complexity-Appropriate: Outcome Mapping, Most Significant Change, Causal Loop Diagrams, Soft Systems Modelling, Participatory Systems Mapping, Agent-Based Modelling, Bayesian Belief Networks
- Explanatory and Mechanisms-Based: Realist Evaluation, Qualitative Comparative Analysis, (Bayesian) Process Tracing, and Contribution Analysis.
The tool provides insight into how this range of methods can match the characteristics of a specific evaluation process: the user expresses a range of preferences for answering evaluation questions and for reaching other goals with the evaluation, and also provides information on how far the evaluation process can meet a series of conditions that constitute requirements for one or more of the included methods.
Once the user has entered this information, the tool returns three rankings of the 15 methods, based on appropriateness scores and considerations for the three dimensions of choice (preference for questions, preference for other goals, and opportunities and constraints in terms of meeting methodological requirements).
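The ranking logic can be sketched in code. This is only an illustrative sketch, not the tool's actual formulas: the scoring rules and the numeric scores below are invented for demonstration, and only the method names come from the report.

```python
# Hypothetical sketch of the three-dimensional ranking: each method gets a
# score (0-1) on each dimension of choice, and the tool returns one ranking
# per dimension. Scores here are illustrative, not taken from the CAEM tool.

DIMENSIONS = ("questions", "other_goals", "requirements")

def rank_methods(scores):
    """Return, for each dimension, the methods sorted by descending score."""
    return {
        dim: sorted(scores, key=lambda method: scores[method][dim], reverse=True)
        for dim in DIMENSIONS
    }

# Invented scores for three of the 15 methods, purely for illustration.
scores = {
    "Randomised Controlled Trials": {"questions": 0.9, "other_goals": 0.4, "requirements": 0.3},
    "Process Tracing":              {"questions": 0.7, "other_goals": 0.6, "requirements": 0.8},
    "Outcome Mapping":              {"questions": 0.5, "other_goals": 0.9, "requirements": 0.7},
}

rankings = rank_methods(scores)
# Each dimension produces a different front-runner, mirroring how the tool's
# three rankings can point users towards different methods.
```

In this toy run, Randomised Controlled Trials tops the "questions" ranking, Outcome Mapping tops "other goals", and Process Tracing tops "requirements", which illustrates why the tool reports three separate rankings rather than a single aggregate.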
The aim of the tool is to help users develop a greater understanding of a wide range of evaluation methods and their characteristics; to help inform the choice of evaluation methods for a specific intervention; and to help inform the design of an intervention so as to increase its “evaluability”.
Just like its predecessor, the tool has been developed under a Creative Commons license with the intention that it can be further developed and improved upon by anyone. Please send comments and feedback to Barbara Befani via email@example.com.