A study was just published in PNAS assessing the effect of the emotional content of posts displayed in a user’s news feed on the emotional content of that user’s own posts.
Get off Facebook. Get your family off Facebook. If you work there, quit. They're fucking awful.— Erin Kissane (@kissane) June 28, 2014
I do human subjects research every day as part of my dissertation work, so I wanted to comment on the ethics of this study from a scientist’s perspective.
Whenever human subjects research is conducted, potential participants are supposed to go through a process called informed consent. There’s a lot of history behind this that I won’t go into; suffice it to say that it’s a key part of any research involving humans that was created because of past abuse of study participants by researchers.
There is a list of what you are supposed to tell a potential participant before enrolling them into a study. Informed consent means that the participant both understands this information and agrees to participate.
This brings us to what this study did. Kramer et al. write:
…no text was seen by the researchers. As such, it was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research.
I have no idea what Facebook’s Data Use Policy says. Maybe it has all the elements of informed consent, maybe it doesn’t.
But it doesn’t matter. Informed consent requires that participants actually understand what they’re agreeing to, and virtually no one reads the Data Use Policy. Agreement without understanding is not informed consent.
Based on the information in the PNAS paper, I don’t think these researchers met this ethical obligation.
Ethical review boards
The silly thing is, this study probably did not need informed consent. There are exceptions where informed consent is not necessary because participating in the study has minimal risk, or the study could not feasibly be conducted with informed consent.1
Realistically, what are the risks to human subjects posed by this study? The release of confidential data as a result of the study seems unlikely (the data is already on Facebook, after all). The other risk I can think of would be related to the emotional manipulation of users.
What if facebook's emotional manipulation had led to harm of others or self? http://t.co/S1zYXJJam4— Keith Vertrees (@thepriceisright) June 28, 2014
This seems pretty unlikely to me. Fortunately, as a researcher I don’t have to make decisions about what constitutes a risk to my participants. Ethical review boards (in the US, institutional review boards, or IRBs) exist specifically to review studies involving human subjects. They set the parameters for what is required in terms of informed consent, and they can exempt studies from informed consent when the risk to human subjects is minimal.
The researchers should have taken their study protocol to an IRB. It probably would have been exempt, they could have noted this in their paper, and they would have avoided this sticky ethical situation.2
Update: Apparently a PNAS editor told The Atlantic that the study was reviewed by the authors’ “local institutional review board”. It’s common practice to provide this information in the body of any manuscript reporting results of a study involving human subjects, including the location of the IRB (was it “local” to Facebook or part of a university?). I’m actually surprised the paper was published without this.
Using a different study design
John Coxon (via Marco Arment) suggested that Facebook should have used a different study design to avoid manipulating users’ news feeds. I agree with the spirit of this idea, but it is scientifically problematic for two reasons.
First, a different study design doesn’t release the researchers from their ethical obligation to obtain informed consent or have their study exempted by an IRB.
Second, changing the study design would make the findings substantially less valid. The published study was a randomized controlled trial, where the “treatment” was randomly assigned and the participants were unaware of which treatment they received (and unaware they were in a study at all in this case).
The design suggested by John is an observational study. Observational studies can yield convincing results if conducted properly, are often cheaper and easier to run than RCTs, and are more appropriate in some situations (e.g. you cannot randomize participants to smoke cigarettes to see whether smoking is associated with lung cancer).
The problem with observational studies is that extraneous factors can influence the association between the exposure and the outcome. For example, time of day could “confound” the association between the emotional content of a user’s news feed and the emotional content of their posts: if users tend to both read and write more negative posts late at night, feed emotion and post emotion would be associated even if neither caused the other.
Confounding can be controlled for in a variety of ways, but doing so complicates the analysis, and it is impossible to control for everything. Uncontrolled confounding is one reason why observational studies generally identify only associations, not causation.
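To make the confounding point concrete, here is a minimal simulation with made-up numbers (hypothetical, not from the paper): time of day drives both feed negativity and post negativity, with no direct feed-to-post effect at all, yet the crude comparison still shows an association, and that association disappears once you stratify by time of day.

```python
import random

random.seed(0)

# Hypothetical setup: evening users see more negative feeds AND write more
# negative posts, but a user's post depends only on time of day, never on
# the feed itself.
n = 10_000
rows = []
for _ in range(n):
    evening = random.random() < 0.5                               # confounder
    feed_negative = random.random() < (0.7 if evening else 0.3)   # exposure
    post_negative = random.random() < (0.7 if evening else 0.3)   # outcome
    rows.append((evening, feed_negative, post_negative))

def rate(subset):
    """Fraction of users in `subset` who wrote a negative post."""
    return sum(post for _, _, post in subset) / len(subset)

exposed = [r for r in rows if r[1]]
unexposed = [r for r in rows if not r[1]]

# Crude comparison: exposed users look more negative, purely via time of day.
print("crude:", round(rate(exposed), 2), "vs", round(rate(unexposed), 2))

# Stratify by time of day: within each stratum, the association vanishes.
for ev in (True, False):
    stratum = [r for r in rows if r[0] == ev]
    e = [r for r in stratum if r[1]]
    u = [r for r in stratum if not r[1]]
    print("evening" if ev else "daytime", round(rate(e), 2), round(rate(u), 2))
```

Stratification works here only because we know the single confounder; in a real observational study of news feeds there could be many, which is the advantage of randomizing the exposure instead.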
In this case, I think it would be difficult to conduct a properly controlled observational study.
If there’s not a substantial difference in risk to human subjects between two study designs, the design that will produce more convincing findings should be used to “make the most” of the risk participants are taking on. Again, an ethical review board can (and should) help navigate these decisions.
The latter exception is pretty rare. It’s generally used for studies involving emergency medicine, where it’s logistically impossible to get consent. In these cases, researchers will sometimes get the blessing of community leaders as a surrogate for consent. Ethical review boards will help researchers navigate this. ↩
Facebook’s users would probably be equally pissed with or without IRB review though. ↩