One Researcher’s Anecdote is Another Researcher’s Data

Mar 9, 2017 | Blog

Frances Rowe, Newcastle University

A couple of days ago, a DEFRA policy official told me that the uncertainty over EU exit was creating a fertile environment for evaluation, as champions try to ensure their favoured policies have a place in the forthcoming post-Brexit landscape. This struck me as interesting, and I made a note of it. While some may call this an anecdote, for a qualitative researcher it is data: incomplete and uncorroborated, yes, but data nonetheless. In a future analysis of evaluation uptake it might prove to be gold dust, an insight that unlocks others, a necessary factor in assessing evaluation effectiveness. Who knows?

The point I’m trying to make is that the fact that one person’s data is another’s anecdote cuts to the heart of a ‘problem’ of qualitative methodologies in evaluation research. Compared with quantitative methodologies, with their insistence on generalizability, replicability and reliability – factors that have become increasingly important in an era of evidence-based policy making – qualitative methodologies can be seen as methods of last resort: of dubious utility, and used only when other data are unavailable. It is fair to say that in fields such as the arts, qualitative approaches have tended to be seen as uniquely able to capture the nuances of artistic experience and impact, and yet they are not regarded as the kind of ‘hard’ evidence that most economists and other more science-based analysts favour when weighing up priorities for government spending.

Setting aside for a moment the politics of evidence-based policy making and what counts as evidence (a topic for a future blog, perhaps), a group of us* working on the CECAN project felt it was time to rehabilitate qualitative research methodologies by asking how they can help with evaluating complexity in the Nexus in new and different ways. Our aim is to create a dialogue over the importance – and necessity – of making sure that qualitative approaches are part and parcel of the complex evaluation methodological repertoire, as proposed by CECAN.

At this point it’s probably helpful to make a couple of distinctions. The first is between qualitative methodologies or approaches, and quantitative ones. Broadly, with qualitative approaches we’re talking about methods which capture stakeholder/user views and experiences such as:

  • Narrative accounts of experiences
  • Ethnographic field notes and observations
  • Interview transcripts
  • Focus groups
  • Surveys (where there is scope to capture open-ended responses)
  • Visual methods

Quantitative research, on the other hand, generally looks to confirm or disprove a prior hypothesis or theory, using the tools of statistical sampling and analysis to assess evaluation outcomes. Qualitative research typically involves data that are essentially non-numerical, using words rather than numbers (quantitative data are the opposite). Examples might be interview transcripts, oral histories, focus group conversations or ethnographic field observations captured in a research diary. However, and this is where it gets complicated, qualitative data may be used in certain quantitative approaches and vice versa; numerical summaries and measures can underpin some forms of qualitative method. So the distinction between quantitative and qualitative research can quickly become muddy.
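To make that crossover concrete, here is a minimal sketch in Python, using made-up survey responses and an invented keyword codebook, of qualitative data (free text) being coded into themes and then summarised numerically. Real coding is interpretive rather than keyword matching; this only shows the direction of travel from words to numbers.

```python
from collections import Counter

# Open-ended survey responses: non-numerical, word-based data
# (hypothetical examples, not real study data).
responses = [
    "The scheme helped us collaborate with neighbouring farms.",
    "Too much paperwork; we nearly withdrew from the scheme.",
    "Collaboration was the real benefit, not the payments.",
]

# An invented codebook mapping keyword stems to themes. In practice
# coding is an interpretive act, not simple keyword matching.
codebook = {
    "collaborat": "partnership working",
    "paperwork": "administrative burden",
    "payment": "financial incentives",
}

# Qualitative step: assign themes to each response.
coded = [
    {theme for stem, theme in codebook.items() if stem in text.lower()}
    for text in responses
]

# Quantitative step: count how often each theme occurs.
print(Counter(theme for themes in coded for theme in themes))
```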

Perhaps a helpful way of thinking about qualitative methodologies is to distinguish between those that remain qualitative and those that start out as qualitative on their way to somewhere else analytically.

For example, with methodologies such as Agent-Based Modelling, which try to model complex reality in order to assess policy impacts, the starting point is the nuanced description of the real world that qualitative research provides. Similarly, with Qualitative Comparative Analysis, which identifies sets of conditions that may lead to a particular policy outcome, the starting point is a rich qualitative data set. With both of these, the analytical journey becomes increasingly sparse and quantitative, so that statistical analysis can be undertaken. Conversely, some surveys include open-ended questions whose data can be analysed qualitatively. And it gets even more complicated when you come to ‘big data’ methods, which are arguably neither quant nor qual – rather they are both. Some kinds of text mining, for instance, are massively quantitative even though the maths behind them seeks to ‘qualify’ the ‘quantities’ to get at the meanings of the text.
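To illustrate that journey for Qualitative Comparative Analysis, here is a minimal sketch in Python of crisp-set QCA’s first formal step: collapsing case knowledge into a truth table. The cases, conditions and outcome are entirely hypothetical, chosen only to show how rich case material becomes a sparse, analysable structure.

```python
from collections import defaultdict

# Each case: binary judgements (derived from qualitative evidence) on
# whether a condition was present (1) or absent (0), plus the outcome.
# All names and values here are invented for illustration.
cases = {
    "case A": {"stakeholder_buyin": 1, "stable_funding": 1, "outcome": 1},
    "case B": {"stakeholder_buyin": 1, "stable_funding": 0, "outcome": 1},
    "case C": {"stakeholder_buyin": 0, "stable_funding": 1, "outcome": 0},
    "case D": {"stakeholder_buyin": 0, "stable_funding": 0, "outcome": 0},
    "case E": {"stakeholder_buyin": 1, "stable_funding": 0, "outcome": 1},
}

conditions = ["stakeholder_buyin", "stable_funding"]

# Truth table: group cases by their configuration of conditions.
rows = defaultdict(list)
for name, data in cases.items():
    config = tuple(data[c] for c in conditions)
    rows[config].append(data["outcome"])

# For each configuration, report how many cases share it and how
# consistently it leads to the outcome.
for config, outcomes in sorted(rows.items(), reverse=True):
    consistency = sum(outcomes) / len(outcomes)
    print(config, f"n={len(outcomes)}", f"consistency={consistency:.2f}")
```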

In advocating a place for qualitative methodologies in evaluating complex policy, it’s worth stressing that qualitative research has its own canon of analytical approaches (such as grounded theory analysis or thematic analysis) that, used correctly, are in their own way just as rigorous as those used in quantitative research. The point is that they provide different kinds of analysis and information.
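As one illustration of that rigour, here is a minimal sketch in Python of a common quality check in thematic analysis: inter-coder agreement, measured with Cohen’s kappa, between two researchers who have independently coded the same interview excerpts. The codes and codings are invented for the example.

```python
# Two researchers' independent codings of the same six excerpts
# (hypothetical codes for illustration).
coder_1 = ["trust", "cost", "trust", "access", "cost", "trust"]
coder_2 = ["trust", "cost", "access", "access", "cost", "trust"]

def cohens_kappa(a, b):
    """Observed agreement corrected for agreement expected by chance."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    labels = set(a) | set(b)
    expected = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (observed - expected) / (1 - expected)

print(f"kappa = {cohens_kappa(coder_1, coder_2):.2f}")  # 0.75 here
```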

So what are our next steps? The first is to decide which, among many qualitative methodologies, could provide innovative approaches for evaluating complexity, and why. These may not be new methodologies per se, but new to the task of evaluating complexity in the Nexus. One benefit of qualitative approaches is that they more easily, although not uniquely, facilitate stakeholder participation in evaluation research, and that in itself can be innovative. Secondly, there may be particular applications or new developments in qualitative methodologies that lend themselves well to complexity but are underexplored. We’ll be unpacking some of the topics raised in this blog in subsequent posts over the next few months. Please do join in the debate!

*Adam Hejnowicz, Jeremy Phillipson, Amy Proctor, Fran Rowe, Emma Uprichard
