Two Cheers for Corporate Experimentation

[Image: Rubin's vase]

By Michelle Meyer

I have a new law review article out, Two Cheers for Corporate Experimentation: The A/B Illusion and the Virtues of Data-Driven Innovation, arising out of last year’s terrific Silicon Flatirons annual tech/privacy conference at Colorado Law, the theme of which was “When Companies Study Their Customers.”

This article builds on, but goes well beyond, my prior work on the Facebook experiment in Wired (mostly a wonky regulatory explainer of the Common Rule and OHRP engagement guidance as applied to the Facebook-Cornell experiment, albeit with hints of things to come in later work) and Nature (a brief mostly-defense of the ethics of the experiment co-authored with 5 ethicists and signed by an additional 28, which was necessarily limited in breadth and depth by both space constraints and the need to achieve overlapping consensus).

Although I once again turn to the Facebook experiment as a case study (and also to new discussions of the OkCupid matching algorithm experiment and of 401(k) experiments), the new article aims at answering a much broader question than whether any particular experiment was legal or ethical. Here is how the abstract begins:

“Practitioners”—whether business managers, lawmakers, clinicians, or other actors—are constantly innovating, in the broad sense of introducing new products, services, policies, or practices. In some cases (e.g., new drugs and medical devices), we’ve decided that the risks of such innovations require that they be carefully introduced into small populations, and their safety and efficacy measured, before they’re introduced into the general population. But for the vast majority of innovations, ex ante regulation requiring evidence of safety and efficacy neither does—nor feasibly could—exist. In these cases, how should practitioners responsibly innovate?

My short answer to this question is that responsible innovators should inculcate a culture of continuous testing of their products, services, policies, and practices, and that it is a kind of moral-cognitive mistake (which I dub the “A/B illusion”) for the rest of us to respond to these laudable (and sometimes morally obligatory) experimental efforts by viewing it as more morally suspicious for innovators to randomize users to one of two (or more) conditions than to simply roll out one of those conditions, untested, for everybody. The long answer, of course, is in the article. (The full abstract, incidentally, explains the relevance of the image that accompanies this post.)

Thanks to Paul Ohm and conference co-sponsor Ryan Calo for inviting me to participate, to the editors of the Colorado Technology Law Journal, and to James Grimmelmann for being a worthy interlocutor over the past almost-year and for generously and unfailingly tweeting my work on Facebook despite our sometimes divergent perspectives. James's contribution to the symposium issue is here; I don't know how many other conference participants chose to write, but issue 13.2 will appear fully online here at some point.

If you would rather hear than read me drone on about the Facebook and OkCupid experiments (and some other recent digital research, including Apple's ResearchKit and the University of Michigan's Facebook app-based GWAS, "Genes for Good," as well as learning healthcare systems and the future of human subjects research), you may do so by listening to episode 9 of Nic Terry and Frank Pasquale's terrific new weekly podcast, This Week in Health Law.
