The problem with think-tanks: Transparency

Running alongside the issues of quality and independence already discussed here is transparency. This comes into play at a number of levels for the think-tank but, in a broad sense, it is the outside world’s way of establishing the veracity of the work.

It helps us to understand the contribution and usefulness of the work done, and also the position taken in terms of independence and bias. It is important to be upfront and honest about why research is being undertaken: who commissioned it and why. It is equally important to be clear and open about how the data was collected, not just from where (and from whom) but how it was derived. Issues of method and analysis matter to our understanding of what research is trying to say.

Publicly funded academic research usually requires the datasets to be published in an online repository. How many think-tanks do this too, even when their research has been publicly funded? Some do, but more should consider it.

It might be as simple as publishing raw survey data in Excel or SPSS file formats for other researchers to use.
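As a minimal sketch of what publishing raw data could look like, the snippet below writes hypothetical, anonymised survey responses to a CSV file, an open format that Excel, SPSS, R and Python can all read. The field names and responses are invented for illustration; they are not from any real survey.

```python
import csv

def export_survey(rows, path):
    """Write raw survey responses to CSV so other researchers can re-analyse them.

    `rows` is a list of dicts sharing the same keys (one dict per respondent).
    All field names and values here are hypothetical examples.
    """
    fieldnames = list(rows[0])
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)

# Hypothetical anonymised responses
responses = [
    {"respondent": 1, "age_band": "25-34", "trusts_parliament": "no"},
    {"respondent": 2, "age_band": "55-64", "trusts_parliament": "yes"},
]
export_survey(responses, "survey_raw.csv")
```

Publishing a file like this alongside the report lets anyone rerun or extend the analysis; exporting native SPSS (.sav) files would need a third-party library, but CSV alone already makes the data reusable.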

This can also be useful for checking the veracity of the findings. That is nothing to be concerned about if you have followed good principles: just because I have re-analysed your data and come to a different conclusion, it does not mean your own analysis is wrong; it just means I have interpreted it differently.

Surely this is a good thing as it adds to the intellectual debate?

Remember, research is subjective. This is where the researcher’s bias shows, and why disclosing the method and the philosophical or analytical lens matters: show me your data, tell me how you approached it, justify your conclusions.

I don’t have to agree with them, although I might. What think-tanks do should contribute to a wider intellectual policy debate beyond narrow ideology, and for that to happen the source of the work must not be obfuscated.

I’m also going to make a slightly tangential argument here about the over-reliance on quantitative research. It’s useful, of course, but there is an overly strong bias towards statistical analyses and so-called scientific method that can obstruct good qualitative research.

This is unfortunate. Take the Hansard Society’s Audit of Political Engagement, for example. This is a valuable annual survey series, largely quantitative in nature (and the qualitative parts are not analysed in any depth or with any methodological rigour).

It tells us what people think about UK politics but is of no help in understanding why they think it (and makes no claim to be). This is where an in-depth qualitative research project (which is now being done) comes in: it helps us to understand not just what or how many, but why and how.

For policy development the implications of failing to broaden the research scope should be clear. It is not enough to know ‘what’, we must discover the ‘why’ too, even though this can be much, much harder to do.

About Andy Williamson

Dr Andy Williamson is a Digital Strategist and commentator. His work focuses on digital engagement, e-campaigning and the strategic use of social media. A former advisor to the New Zealand Government, his work influences digital policy and practice in a number of continents, including the UK.
