Questioning the specious quantification of cultural impacts by Eric Jensen

The problem of arts impact evaluation is one dear to my heart. My first ever published article (reworked from my MA dissertation, and still my most cited paper) was a critique of the methodological approach to impact measurement that was prevalent in the late 1990s. A few years later, I wrote another piece lamenting the empty celebratory rhetoric of socio-economic impact, its disconnection from any robust evidence, and the systematic and intentional misuse of statistics that dominated policy rhetoric, which I dubbed ‘statisticulation’ – following statistician Darrell Huff, author of the classic ‘How to Lie with Statistics’ (1954). My critiques were developed from a Humanities perspective, but I have always felt that when methodological flaws in impact evaluation are so apparent and significant that even a non-specialist can spot them, we have a serious problem. This post by my colleague Dr Eric Jensen, of the Sociology Department at Warwick University, is therefore important because it offers a critique of the recent DCMS report ‘Quantifying the Social Impacts of Culture and Sport’ that is accessible yet rooted in a rigorous understanding of social science research methods and their correct application. There is much to learn from Eric’s post, although it is rather depressing for me to observe the extent to which statisticulation is *still* alive and well in cultural policy discourse.

***

A report entitled ‘Quantifying the Social Impacts of Culture and Sport’ was just published by the UK’s Department for Culture, Media & Sport (DCMS). The DCMS commissioned a set of economics-oriented researchers at the LSE to conduct secondary analysis on survey data from the ‘Understanding Society’ large-scale population survey.

The report focuses on ‘the social and wellbeing impacts of cultural engagement and sport participation’ (p. 6). This brief commentary offers a critical reflection on this report, followed by a discussion of possible solutions to the challenging issue of evaluating the impacts of arts and culture participation.

 

Critical Reflection on the ‘Quantifying Impacts’ report

While the authors of this report preface it with a reasonably good disclaimer (essentially saying this is the best they could manage with this kind of data), I think it is worth underlining the point that this is not in fact an impact study, despite what it says on the tin. In my judgement as a social scientist specialising in this field, the findings should not be used to make claims about the impacts of arts or sport participation. That is my headline point, meant as a helpful warning to those who may be tempted to pull quotes from this report. Those willing to wade through some details about the report’s core limitation can read on…

The report’s findings are based on data from a cross-sectional (one point in time) survey. The statistical analyses that are conducted throughout are regression analyses, which are based (in statistical terms) on correlation. While the authors largely acknowledge the data’s limitations, they don’t fully acknowledge that the analyses in this report cannot legitimately support causal inferences (other than to say there is a correlation between two variables). That is, any relationships reported between two variables (e.g. arts attendance and health) could involve causation going in either direction. Despite this, the use of the term ‘impact’ throughout the report implies causation going in the direction of arts/sport participation having impacts (i.e. having an effect) on health and other outcome variables. Statements such as ‘attendance at arts events has an effect on health’ could be misleading to readers who did not appreciate that the report is referring to a statistical effect (i.e. there is a measurable relationship between variables that is unlikely to be due purely to chance) not to a ‘cause-effect’ relationship between these two variables. In fact, the more plausible explanation for some of the correlations reported is probably in the other causal direction, for example, for relationships between variables such as arts event attendance and self-reported health outcomes. That is, whether you feel in good health could plausibly have an effect on whether you go to an arts event. The causal pathway for the opposite (going to an arts event affecting whether you feel in good health) is arguably less plausible.
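
To make this concrete, here is a minimal simulation of my own (a purely illustrative sketch in Python, not drawn from the report or from the Understanding Society data). In it, health causes arts attendance and attendance has no causal effect on health whatsoever, yet a regression of health on attendance, the kind of cross-sectional analysis the report relies on, still produces a positive, statistically ‘significant’ coefficient:

```python
# Hypothetical illustration: a cross-sectional regression cannot tell
# "arts attendance improves health" apart from "healthier people attend more".
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000

# Data-generating process: health drives attendance; the reverse effect is zero.
health = rng.normal(size=n)                      # latent self-reported health
attendance = 0.5 * health + rng.normal(size=n)   # attendance caused BY health

# Analyst's regression, mirroring the report's framing: health ~ attendance.
model = sm.OLS(health, sm.add_constant(attendance)).fit()
print(model.params)   # positive slope on attendance...
print(model.pvalues)  # ...and a tiny p-value, despite zero causal effect on health
```

The regression output looks identical whichever way the causation actually runs; only a design that manipulates or tracks exposure over time (or a credible identification strategy) can separate the two stories.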

The authors give away the game in this passage, where they report their result showing a negative correlation between arts participation and health outcomes (i.e. as arts participation increases, self-reported health outcomes decrease). They explain this discrepant finding away by suggesting the causal direction may be the opposite:

‘Participation in art was actually found to have a negative impact on health, although this may be explained to some extent by reverse causality: that is, unhealthy people may be more likely to engage in arts’.

Clearly, this explanation could equally be offered for the positive correlations that are reported.

In sum, given that it is perfectly plausible that those who feel in better health would attend arts events and participate in sport more frequently than those who feel in poor health (etc.), there is no legitimate basis for drawing a causal inference from the data adduced for this report. The authors are using statistical language that could be confusing for readers without a statistics background. There are plenty of other problems in the report. For example, some of the assumptions behind the ‘indicative value’ calculations are ludicrously optimistic. The authors acknowledged that self-reporting an intention to attend further education is a poor predictor of actual participation in further education.

We had to assume that all people who reported being ‘very likely’ to go onto further education did in fact go on to university education, when we know that intentions are in fact a notoriously poor predictor of behaviour (explaining only about 3% of the variance in health behaviour change, for example).

(Fujiwara et al., 2014, p. 7)

Yet, the calculations of the value of arts participation not only assumed self-reports would be followed through, but also rounded up to assume they would be doing a full undergraduate degree because there are established figures for the wage premium of an undergraduate education.

We need to assume that people who say they are ‘very likely’ to go actually do go. Second, further education could be non-university/non-degree education, but we will look at the returns to degree education, for which some evidence exists. Therefore, we will assume here that people who say they will go on to further education go on to degree education. (p. 22)

While there are many enormous leaps here, one factor that undermines the assumption that someone attending arts activities could benefit from the added wage premium of a degree is the highly skewed distribution of educational qualifications amongst arts audiences to begin with: many already hold degrees. A recent study I conducted of visitors to the National Gallery found that ‘76% (n=642) of the overall sample had at least a bachelor’s degree’. Obviously, then, calculations of the economic value of arts participation based on such unsound assumptions are far from believable.
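
To illustrate why these assumptions matter, here is a purely hypothetical back-of-envelope sketch. The function and every number in it are placeholders of my own, not figures from Fujiwara et al. (2014); the point is simply that the headline ‘indicative value’ scales linearly with the assumed rate at which stated intentions become completed degrees, so assuming 100% follow-through rather than, say, 10% inflates the figure tenfold:

```python
# Hypothetical sensitivity check (all inputs invented for illustration only).

def indicative_value(n_participants: int,
                     share_very_likely: float,
                     follow_through_rate: float,
                     degree_wage_premium: float) -> float:
    """Value attributed to arts participation via the further-education pathway."""
    expected_graduates = n_participants * share_very_likely * follow_through_rate
    return expected_graduates * degree_wage_premium

n_participants = 10_000          # placeholder number of arts participants
share_very_likely = 0.10         # placeholder share 'very likely' to continue education
degree_wage_premium = 100_000    # placeholder lifetime degree wage premium (GBP)

# Report-style assumption: every stated intention becomes a completed degree.
print(indicative_value(n_participants, share_very_likely, 1.0, degree_wage_premium))
# A more conservative assumption: only one in ten follow through.
print(indicative_value(n_participants, share_very_likely, 0.1, degree_wage_premium))
```

Add to this the fact that most arts attenders already hold degrees (and so could not gain the graduate wage premium again), and the headline figures become harder still to defend.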

However, the cause-effect problem I discussed is the most fundamental: the edifices constructed in the rest of the report rest on this very weak foundation. Thus, those making specious impact claims based on this report do so at their peril.

 

Impact Evaluation in the Arts and Culture Sector

Clearly there is a wider issue here than this one report. The most helpful intervention I have seen in this space in recent years is the AHRC Cultural Value project, which is facing up to the challenges honestly and exploring a wide range of possible answers.

I am aware that the vast majority of people working in this field (including as consultants) will not have had the social scientific research methods training required to produce valid impact evaluations, or to be critical consumers of the visitor research and impact evaluations conducted by others. This no doubt helps to explain the ubiquity of very poor quality visitor research and evaluation in this field (which the recent Evaluating Evaluation project encountered when they went looking for ‘nuggets’ or ‘golden rules’ within this grey literature), including in the published empirical museum studies literature. The most basic principles of survey and research design are routinely violated in evaluations conducted by (and commissioned by) institutions large and small.

A couple of years back, I tried out one option to help the situation by offering a series of training seminars on the impact evaluation of public engagement and informal learning, with support from the Wellcome Trust and the British Science Association.

While a lot of people came to these sessions, I don’t think it was a solution (mainly because it takes time to learn how to design and then actually conduct valid data collection and analysis, and many practitioners are simply too pressed with other priorities). So most recently, I have explored an alternative that could be a real solution: using the latest technologies to design evaluation systems that are fully automated, so that they can be used by institutions and practitioners without requiring any skills in social scientific analysis. This was explored through the Qualia project (funded by the Digital R & D Fund for the Arts), which has just finished. The goal was to build a high-quality open source system that could be used by arts and culture institutions across the UK, with a bare minimum of customisation required, to deliver automated evaluation results.

I subjected the different technologies to rigorous testing (it was piloted at each of the Cheltenham Festivals last year) and in the end I do think it is a workable system. If this kind of system is refined and widely adopted over time, then it would be feasible to build up sector-wide (and even cross-sector) data on the impacts of arts and culture experiences that are valid at the individual level. This could also be achieved using conventional social scientific methods (indeed, I just completed such research at the National Gallery, the Cheltenham Literature Festival and the University of Cambridge Festival of Ideas), but at a much higher on-going cost. Ideally, if the cost burden of on-going evaluation reporting for normal audiences could be removed, the sector would then be able to focus on strategic investment in some in-depth rigorous research on visitor impacts conducted with diverse audiences (including current non-visitors) – for example, perhaps pooling together resources across institutions to do more interesting, valid and robust evaluations. There is also theoretical work to be done on the causal pathway from individual arts and culture experiences to longer term impact and social change. I have begun this work in a recently published book entitled Culture & Social Change: Transforming Society through the Power of Ideas.

Robust evidence of impact on those who participate (and do not participate) in arts and culture activities is needed before we can validly build up to making claims about society-wide economic, social and health impacts. At the same time, it would be nice to see funders invest in a large-scale longitudinal study that tracks cohorts of individuals from diverse backgrounds from childhood through their first encounters with arts and culture etc. A subset of the cohort could be randomly selected for outreach interventions that provide them with a range of arts and culture experiences they would not otherwise have had in order to evaluate impacts. Such a study would be very expensive and may only yield really useful results after 5-10 years, but it is precisely the kind of thing arts funders should have invested in many years ago.

Ultimately, I would hold funders responsible for the current state of evaluation in the arts and culture sector. They clearly have not sufficiently prioritised methodological rigour, supported good research training or otherwise gotten out in front of this problem. Now there is a very challenging political and funding climate for arts and culture, which makes discussions of impact all the more fraught.

 

BIO

Dr Eric Jensen is Associate Professor (Senior Lecturer) in the Department of Sociology at the University of Warwick. Jensen is a widely published researcher in the field of public engagement, and is an expert in public engagement and impact evaluation methodology. His research in this domain has included studies of the impacts of museums, galleries, universities, research groups and festivals seeking to engage publics with particular ideas. He has conducted research commissioned by the National Gallery, Natural History Museum, University of Cambridge Museums, ZSL London Zoo and many other institutions. Recent projects include ‘Public Engagement with Research Online’ (funded by JISC), the Qualia project developing an evaluation and feedback app for the arts and culture sector (funded by the Digital R & D Fund for the Arts: Nesta, Arts & Humanities Research Council and the Arts Council), a project on ‘The role of technology in evaluating the non-economic impacts of arts and culture’ (funded by the Arts & Humanities Research Council) and another AHRC project starting in August 2014 entitled: ‘Using Social Media to Identify and Leverage Engagement (SMILE) with Arts and Culture’. In terms of theory, Eric has linked these research interests to a new model of social change, developed in his recently published book: Culture & Social Change: Transforming Society through the Power of Ideas. Eric has a forthcoming book under contract with Cambridge University Press entitled Making the Most of Public Engagement Events and Festivals and a forthcoming research methods textbook for SAGE entitled Doing Real Research. Eric has a PhD in Sociology from the University of Cambridge. 

Discussion

One Response to “Questioning the specious quantification of cultural impacts by Eric Jensen”

  1. DCMS, with its limited budget and visibility, has taken the step to join the national conversation about wellbeing and public policy. The ONS has done it. Treasury has explored it (not sure how far or for how long). And recently Gus O’Donnell co-edited a report on the need for new ways of assessing policy, endorsing this type of methodology.

    It seems to me that instead of the very predictable “more research needs to be done” response to this report from academics, they should welcome it and then ask the more challenging question of “how are you, policy maker, using these findings to inform your work?”.

    I think it is problematic for academics in the cultural policy world to continue to engage in this debate from behind the methodological barricade. We will always need more “robust evidence of impact on those who participate (and do not participate) in arts and culture activities” before we can “validly build up to making claims about society-wide economic, social and health impacts.” Yes, that is a given. That is the line that should always be added to the case for impact when seeking additional funding from the AHRC.

    But a different challenge should be given to policy makers: you have invested in this research, so what? What are you doing with this information? How are you using it to inform policy making? How are you letting your policy colleagues know about this work?

    Fundamental structural changes are taking place in society in Europe (e.g., rapid demographic shifts which are leading to very different perceptions about what government is all about). They are re-shaping the way government works and funds. Academics supporting government to make better policy decisions to face these challenges will help. Lobbying for more research funding will not.

    Posted by Javier Stanziola | May 7, 2014, 12:39 pm
