Facebook Announces New Research Policies

By Michelle Meyer

A WSJ reporter just tipped me off to this news release by Facebook regarding the changes it has made in its research practices in response to public outrage about its emotional contagion experiment, published in PNAS. I had only a brief window in which to respond with comments, so these are rushed first reactions, but for what they’re worth, here’s what I told her (plus links and minus a couple of typos):

There’s a lot to like in this announcement. I’m delighted that, despite the backlash it received, Facebook will continue to publish at least some of its research in peer-reviewed journals and to post reprints of that research on its website, where everyone can benefit from it. It’s also encouraging that the company acknowledges the importance of user trust and has expressed a commitment to better communicate its research goals and results.

As for Facebook’s promise to subject future research to more extensive review by a wider and more senior group of people within the company, with an enhanced review process for research that concerns, say, minors or sensitive topics, it’s impossible to assess whether this is ethically good or bad without knowing much more about both the people who make up the panel and their review process (including, but not limited to, Facebook’s policy on when, if ever, the default requirements of informed consent may be modified or waived). It’s tempting to conclude that more review is always better. But research ethics committees (IRBs) can and do err in both directions: by approving research that should not have gone forward and by unreasonably thwarting important research. Do Facebook’s law, privacy, and policy people have any training in research ethics? Is there any appeal process for Facebook’s data scientists if the panel arbitrarily rejects their proposal? These questions are the tip of the iceberg of challenges that academic IRBs continue to face, and I fear that we are unthinkingly exporting an unhealthy system into the corporate world. Discussion is just beginning among academic scientists, corporate data scientists, and ethicists about the ethics of mass-scale digital experimentation (see, ahem, here and here). It’s theoretically possible, but unlikely, that in its new but still unclear guidelines and review process Facebook has struck the optimal balance among the competing values and interests this work involves.

Most alarming is Facebook’s suggestion that it may retreat from experimental methods in favor of what are often second-best methods, resorted to only when randomized, controlled studies are impossible. Academics, including those Facebook references in its announcement, often have to fall back on non-experimental methods in studying social media because they lack access to corporate data and algorithms. “Manipulation” has a negative connotation outside of science, but it is at the heart of the scientific method and the best way of inferring causation. Studies have found that people perceive research to be riskier when it is described with words like “experiment” or “manipulation” rather than “study,” but it is not always the case that randomized, controlled studies pose more risk than observational studies do. The incremental risk that a study, of whatever type, imposes on users is what is ethically relevant, and that is what we should focus on, not this crude proxy for risk. I would rather see Facebook and other companies conduct ethical experiments than retreat from the scientific method.

It’s also unclear to me why the guidelines require more extensive review when the work involves a collaboration with someone in the academic community.
