Dianthus Medical Blog Archive

Rules are for the little people

One of the most fundamental principles of the ethics of research involving human subjects is that of informed consent. This applies in clinical research and in social science research. If you are going to experiment on people, you need their permission first.

And note the word "informed" in the phrase "informed consent". It's not enough that someone ticks a box on a form: for consent to be ethical, it must be properly informed. The people giving consent to be experimented on must know exactly what they are letting themselves in for.

So I was pretty shocked to see a study in the respected journal PNAS describing a huge social science experiment in which Facebook experimented on 689,003 of its users without first obtaining informed consent. The experiment aimed to discover whether emotions could be transmitted through social networks in a manner analogous to contagion. The paper claimed that they could, though the effect sizes were absolutely tiny, and would no doubt have gone undetected in any study without a sample size in the hundreds of thousands. And whether the study was actually measuring emotions is open to question.
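To put the size of those effects in perspective, here is a minimal power-calculation sketch in Python, using the statsmodels library and assuming an effect size of Cohen's d of about 0.02 (roughly the top end of what the paper reports; the exact figure here is my assumption for illustration):

    # Minimal sketch: how many participants a conventional two-group
    # comparison would need to detect a tiny effect (assumed d ~ 0.02,
    # in the ballpark of the effect sizes reported in the PNAS paper).
    from statsmodels.stats.power import TTestIndPower

    n_per_group = TTestIndPower().solve_power(
        effect_size=0.02,  # assumed Cohen's d (tiny)
        alpha=0.05,        # conventional significance level
        power=0.8,         # conventional 80% power
    )
    print(f"Participants needed per group: {n_per_group:,.0f}")
    # Prints roughly 39,000 per group, i.e. tens of thousands of users
    # per arm before an effect this small is even reliably detectable.

In other words, an effect this small only becomes statistically visible with a sample in the tens to hundreds of thousands, which is precisely why a study on this scale could report it as significant.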

Now, I was going to write quite a detailed blogpost about this, but I see David Gorski has already done a pretty good job of that, so I'm just going to confine myself to a few points here. Go and read David's post if you want some more detail.

The authors of the paper claim that they had informed consent on the grounds that all users had agreed to Facebook's Data Use Policy. This is, to use a technical term, utter bollocks. I don't know what proportion of Facebook users have actually read the Data Use Policy, but it must surely be very small. So even if the policy said that you would be entered into a randomised experiment, the consent could hardly be considered informed.

But the policy certainly does not make it clear that you are going to be experimented on if you sign up to Facebook. The relevant part of the policy seems to be the statement that your data may be used "for internal operations, including troubleshooting, data analysis, testing, research and service improvement."

Well, I would question whether a research study that's published in a major journal could be considered an "internal operation", but the context of this statement makes it look like only routinely collected data will be used. Generating specific data by experimentally manipulating how you interact with Facebook is quite another thing, to which users most definitely did not consent.

It is also unclear whether there was any ethical oversight of the study: none was mentioned in the paper. That, in itself, is pretty shocking. A paper describing research involving human subjects should always describe the process for ethical review. From reading these articles on Forbes and The Atlantic, it seems that the study may have had approval from the Cornell Institutional Review Board, but only after the data had been collected.

This all seems very murky.

It might be argued that obtaining consent would have been inappropriate, because participants who knew they were being studied would have behaved less naturally and skewed the results. I don't buy that argument. It is reasonably well accepted in psychology research that sometimes you don't tell participants everything at the start of an experiment, so that you don't cause them to behave differently. Sometimes it's even OK to deceive participants, for example by telling them that research is for one purpose when it's actually for another.

However, it's not OK not to tell people that they are entering an experiment in the first place. That's a completely different thing. I can't see why it wouldn't have been possible to invite users to take part in an experiment in which their news feed would be manipulated (without specifying the details of how) and give them the opportunity to accept or decline.

It's also an important principle in psychology research that, particularly when participants may not be given all the information at the start of a study, they receive a full debriefing at the end of it. This doesn't seem to have happened here, adding another nail in the coffin of this study's claim to have followed ethical standards.

But so what, I hear you cry? We're not testing powerful drugs on people, we're just playing with their Facebook feeds? Well, it would be wrong to assume that this carries zero risk of harm, as is explained neatly in this blogpost.

I have no doubt that this study fell well below commonly accepted ethical standards. This is a big deal. A respected journal like PNAS really should not be publishing this sort of stuff.

What worries me here is that the research was conducted by Facebook, a hugely powerful organisation. Facebook has an annual turnover of about $8 billion, roughly equivalent to the GDP of a small country such as Malta.

Perhaps Facebook believes that rules are for the little people, and that if you are a huge and powerful mega-corporation, you can ignore them.

What scares me is that they are probably right.
