Complexity High on the Agenda at the EES 2016 Biennial Conference


Dione Hills, Tavistock Institute

Maastricht was the location of this year’s European Evaluation Society (EES) conference over a sunny week in late September. At the end of the first day, we were treated to a civic reception in the building in which the Maastricht Treaty was signed in 1992, bringing up mixed emotions for some of us.

700 participants from across the globe attended over 200 presentations and workshops, ranging from high-quality discussions of the current state of evaluation theory to practical accounts of interesting - and innovative - evaluations ‘on the ground’.

With complexity and its challenges a theme that ran through many of the sessions (even when not specifically mentioned in the title), I could only attend a small proportion of those with relevance to CECAN’s work. The sessions on this topic that I did attend explored refinements of some of the more established approaches used over the last 20 years or so to evaluate complex interventions: theory of change, realist evaluation, contribution analysis and the use of systems theory. Some explored ways of making the process of logic or theory of change mapping more ‘complexity sensitive’, while others proposed methods of homing evaluation activities in on specific aspects of a complex situation without losing sight of the bigger picture.

I particularly enjoyed a panel discussion on the final day, ‘Dealing with complexity in theory based evaluation: Do’s and Don’ts’. Barbara Befani’s contributions to the discussion included the importance of being open to multiple ways of estimating causality, ensuring that contribution claims and theory development are tied closely to evidence, and keeping sight of factors other than the specific intervention being evaluated. I hope we can invite Barbara to present her paper again, nearer to home.

Newer approaches to complexity also made an appearance, with discussions about the use of Developmental Evaluation (Michael Quinn Patton joining the conference from the USA by Skype) and Big Data. Also presented throughout the conference was a proliferation of new methods and tools (or refinements of older ones): qualitative impact protocols, adaptation pathways, an emergent realist approach, most significant change, theory of action tracking and a keystone note approach, the use of rubrics, pattern-spotting tools and causal loop diagrams, to mention a few.

There were, however, to my knowledge, few contributions to the conference closely informed by the work of academics in the complexity field such as those we have gathered together in CECAN, and little discussion of the role that approaches such as agent-based modelling, currently being explored in CECAN, might play in evaluation. There were also relatively few presentations from those working in the environmental sector.

There were also sessions on governance, ethics and professionalism (evaluator capacity building), and on ways of communicating evaluation results that help support change - all issues we have begun to touch on in CECAN’s work going forward (although unfortunately, given the number of parallel sessions, I was unable to attend many of these). There is a searchable copy of the programme on the conference website http://ees2016.eu for anyone interested in finding out more about the presentations on any of these topics.

A key ‘take away’ from the conference for me is a renewed appreciation of the breadth and depth of activity taking place around the issue of complexity in evaluation, both outside the UK and within other (non-CECAN) sectors such as international development, health and social care. EES’s own thematic working group on sustainable development regularly discusses the issue of complexity, and it is worth keeping an eye on developments elsewhere: in the USA, a new book by Michael Bamberger and colleagues on Dealing with Complexity in Development Evaluation; in Canada, an evaluation centre for complex health interventions; in Australia, a recent conference on complexity in evaluation; and in New Zealand, interesting work evaluating complex sustainable farming and waste minimisation programmes.

However, in light of some of the gaps identified earlier, I felt that there was real potential for CECAN to engage in a valuable dialogue with this wider, global community of evaluators, including those from other sectors, bringing our own specific expertise to the table. One good way to do this would be for more of us to attend - and hopefully present papers or run workshops at - the next EES conference in September 2018, in whichever country this happens to be.