Emotional Contagion, Big Data and Research Ethics

by Kevin Gillan | 15 Jul 2014 | The Blog

The results of a recent psychology experiment on ‘emotional contagion’ were heavily reported in the mainstream media last week and, rather unusually, the discussion focused on methodology and research ethics. The experiment involved manipulating the ‘News Feed’ of around 700,000 Facebook profiles. Given widespread concern about the privacy of online life since Snowden’s revelations about the NSA’s spying programmes, it is unsurprising that Facebook’s use of its users as unwitting research subjects has been met with anger. For scholars it may be more puzzling that such research was conducted at all, given the ethical review processes now so seemingly ubiquitous in the academy. So, what might we learn from this episode?

First, it is worth noting that the research published from the Facebook experiment [1. Kramer et al. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, doi:10.1073] could be of interest to social movement scholars. Manipulating the emotional tenor of Facebook News Feeds produced small but statistically significant changes in viewers’ inclination to post positively or negatively charged updates of their own. Given that movement scholars are often interested both in the emotional content of movement activity and in movements’ uses of social media websites, there’s clearly something to think about here. Twitter, Facebook and so on are often written about as conduits for spreading information, but here’s evidence that feelings can spread too, without face-to-face interaction, and regardless of the informational content of the emotionally charged posts. The experiment was simplified to ‘positive’ or ‘negative’ posts, since an automated text interpreter was being used, but one can imagine how emotional contagion might be one way of thinking about what’s actually occurring when we witness a ‘widespread feeling of anger’ contributing to the mobilisation of a movement. Even without the social media angle, the phenomenon of emotional contagion makes for an interesting potential link between social psychology and other disciplines interested in movements[2. Klandermans et al. (2004). The demand and supply of participation: Social-psychological correlates of participation in social movements. The Blackwell Companion to Social Movements, 360–379.].
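As a brief aside on method: the ‘automated text interpreter’ amounted to checking each post for words from positive and negative word lists, which is why the study could only distinguish broadly ‘positive’ from ‘negative’ updates rather than anything about their informational content. The snippet below is a minimal toy sketch of that kind of word-count classifier; the word lists and tie-breaking rule here are invented purely for illustration, and the published study relied on a much larger, validated dictionary rather than anything this crude.

```python
# Toy sketch of a word-count sentiment tagger, illustrating the general idea
# of the automated classification described above. The word lists below are
# invented for illustration only; they are not those used in the study.
POSITIVE_WORDS = {"happy", "great", "love", "wonderful", "excited"}
NEGATIVE_WORDS = {"sad", "angry", "terrible", "hate", "upset"}

def classify_post(text: str) -> str:
    """Label a post 'positive', 'negative' or 'neutral' by counting emotion words."""
    words = [w.strip(".,!?'\"") for w in text.lower().split()]
    pos = sum(w in POSITIVE_WORDS for w in words)
    neg = sum(w in NEGATIVE_WORDS for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(classify_post("So happy and excited about the weekend!"))  # -> positive
print(classify_post("Feeling sad and upset today."))             # -> negative
```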

Nevertheless, the research ethics lesson here is simply that this finding does not justify the methods used in discovering it. This experiment was carried out without anything approaching an adequate level of informed consent. Under most sensible research ethics guidance, informed consent may only be circumvented when there is very good reason – there must be no other way of researching a topic and that topic must be of profound significance. In relation to the first condition, it is arguable that the effect of knowing that one was participating in this experiment would have contaminated the results – though, frankly, so much social science has to live with this possibility, and try to find ways to control for it, that I don’t think this is enough to let the researchers off the ethical hook. More importantly, perhaps, we might ask about the second condition – how significant is this finding? As the authors note in their literature review, emotional contagion is already a well-established psychological phenomenon but had not yet been observed without face-to-face interaction. There is something original in this finding, then, but it is hardly startlingly unexpected and neither is it likely to have huge consequences. It is difficult to know where the line is that allows originality to justify carrying out research covertly (either observational or experimental), but this particular topic is nowhere near it. This experiment involved the covert manipulation of people’s emotional state – the effect might have been small but the whole purpose was to demonstrate that it exists across a large population. If the general public is to be manipulated then this could only really be justified where the research findings could have a kind of significance that can easily be explained to and appreciated by that same public. In this case, Charlie Brooker’s scathing assessment of significance is probably telling:

“Emotional contagion is what we used to call ‘empathy’. In other words, the fine folk at Facebook are so hopelessly disconnected from ground-level emotional reality they have to employ a team of scientists to run clandestine experiments on hundreds of thousands of their ‘customers’ to discover that human beings get upset when other human beings they care about are unhappy.”

It is not simply the manipulative aspects of this particular study that are troubling, however. There is a more general lesson here: the current trend for academics to seek ‘big data’ is complicating the research ethics landscape, and this could lead to significant doubts over the legitimacy and integrity of academic research in general. Many of the news articles covering this story saw it as a Facebook privacy issue, which fits in nicely with the ongoing privacy narrative I hinted at above. As far as Facebook is concerned, the experiment was communicated badly but was covered by its Terms & Conditions, which mention the possibility of research being done on users’ accounts. That might be legally satisfactory for the corporation, but such a general statement of conditions – one a user might have thoughtlessly signed over a decade previously – does not constitute adequately informed consent in the eyes of any professional scholar. The Editor in Chief of the Proceedings of the National Academy of Sciences has now issued an ‘Editorial Expression of Concern’, which worries ‘that the collection of the data by Facebook may have involved practices that were not fully consistent with the principles of obtaining informed consent’ (Verma, 2014). That statement essentially blames Facebook and thus absolves the journal of any wrongdoing in publication. To be fair, while the publishing journal does have a duty to ensure accuracy via peer review, it is not currently a journal’s role to carry out ethical review, since this should be done by a funding body or by the University employing the authors. But in this case the lead author was affiliated with the ‘Core Data Science Team, Facebook Inc.’. The other two authors are from Cornell University, but Cornell performed no institutional review because this was seen as Facebook’s ‘internal’ experiment, with the academics stepping in to analyse the data created as a result.

This shouldn’t be considered a one-off issue, or indeed just a problem for Facebook. ‘Big data’ is creating ‘huge excitement’ (not to mention a ‘massive bandwagon’) in the social sciences because, let’s face it, if you’ve got millions of data points you definitely ought to be able to discover some interesting patterns. It’s not just the online world that is subject to constant data gathering either – supermarket loyalty cards and health interactions are just two other areas raising excitement about possible research applications. Getting access to that data always involves working with insiders, on projects that they approve of – after all, ‘big data’ is collected for a reason and is owned by the corporations who collect it, who recognise its financial value and are likely to want to control the uses to which it is put. Moreover, the big tech firms have the hardware and the know-how to analyse these huge datasets, and are obviously essential if we want to actively manipulate people’s online interactions, as was the case in this experiment. The expertise these firms have should be a very welcome addition to the research environment, not least because, as both Twitter’s Data Grants project and Facebook’s in-house social scientists have shown, some individuals within those companies seem to recognise genuine social benefits in the academic study of the data they control. But the very idea of owning the ‘data’ that describes millions of people’s personal interactions signals the culture clash between corporate and academic standards. And academia has no leverage over private corporations when it comes to the ethical use of their customers’ data. Nevertheless, as scholars we must attempt to uphold the highest possible standards. The integrity of our professional practices ought to be one of the defining characteristics of Universities. We claim to serve purposes that are higher than profit, and we are disappointed whenever our managers contradict that ideal. We demand exacting standards of professional honesty, and we ostracise those (like plagiarists) who do not fulfil them. Academic research often relies on the good will of research participants (as well as editors, peer reviewers and many others), and that good will in turn relies on the fact that Universities are, by and large, well respected and highly trusted institutions. That is all far too important to squander for quick access to dirty data.

Final plug: if you’re interested in the research ethics of studying social movements, check out our special issue at Social Movement Studies, soon to be published in library-friendly book format by Routledge[3. Gillan & Pickerill, eds, forthcoming, Research Ethics and Social Movements: Scholarship, Activism & Knowledge Production.].



Kevin Gillan

Kevin Gillan is a lecturer in sociology at the University of Manchester and Editor in Chief of the journal Social Movement Studies. His interests lie at the intersection of social movements scholarship and economic sociology.