Of Data Challenges

By Leslie Francis

Cross-posted from the HealthLawProfs blog.

Challenges designed to spur innovative uses of data are springing up frequently. These contests, sponsored by a mix of government agencies, industry, foundations, not-for-profit groups, and even individuals, offer prize money or other incentives for people or teams to come up with solutions to a wide range of problems. In addition to grand prizes, they often offer many smaller prizes or networking opportunities. The latest such challenge to come to my attention was announced August 19 by the Knight Foundation: $2 million for answers to the question “how can we harness data and information for the health of communities?” Companion prizes of up to $200,000 are also being offered by the Robert Wood Johnson Foundation and the California Healthcare Foundation.

Such challenges are also a favorite of the Obama administration. From promoting Obamacare among younger Americans (over 100 prizes of up to $30,000, a contest Karl Rove’s Crossroads group has now entered) to arms control and identification of sewer overflows, the federal government has gone in for challenges big time. Check out challenge.gov to see the impressive list. Use of information and technological innovation feature prominently in the challenges, but there is also a challenge for “innovative communications strategies to target individuals who experience high levels of involuntary breaks (‘churn’) in health insurance coverage” (from SAMHSA), a challenge to design posters to educate kids about concussions (from CDC), a challenge to develop a robot that can retrieve samples (from NASA), and a challenge to use technology for atrocity prevention (from USAID and Humanity United). All in all, some 285 challenges sponsored by the federal government are currently active, although for some the submission period has closed.

Thoughts on Myriad

By Ryan Abbott

While awaiting the torrent of academic commentary on this case that is no doubt forthcoming, for now I thought I’d highlight a few interesting aspects of today’s unanimous Supreme Court decision in Association for Molecular Pathology v. Myriad Genetics, 569 U. S. ____ (2013).

Briefly, this case concerned whether genes can be patented. The company Myriad Genetics held several patents related to two genes: BRCA1 and BRCA2. When mutations of these genes are present, it may indicate that a woman is at a high risk for getting breast or ovarian cancer. Myriad also held patents on a proprietary test to evaluate for the presence of BRCA gene mutations that costs over $3,000. This screening has been in the news recently due to Angelina Jolie’s decision to undergo preventive double mastectomy after testing positive for BRCA mutations.

In today’s case, Myriad’s patents were being challenged because they limited competition from other companies and researchers that could have independently tested for the same gene mutations. The outcome of this case has been critically anticipated for years because of its impact on patient access to medicines and funding medical research and development. Thousands of human genes have been patented in the U.S. over the past 30 years.

Before reaching the Supreme Court, a U.S. District judge in New York invalidated Myriad’s patents in 2010, ruling that genes were ineligible for patent protection as “products of nature.” However, the Court of Appeals for the Federal Circuit disagreed, holding that genes were eligible for patent protection because DNA isolated from the body is “markedly different” in chemical structure than DNA as it exists inside the body. The Supreme Court remanded the decision back to the Federal Circuit in light of its decision in Prometheus, and the Federal Circuit affirmed its decision that DNA was patent eligible.


Public Policy Considerations for Recent Re-Identification Demonstration Attacks on Genomic Data Sets: Part 1 (Re-Identification Symposium)

By Michelle Meyer

This post is part of Bill of Health‘s symposium on the Law, Ethics, and Science of Re-Identification Demonstrations. We’ll have more contributions throughout the week. Background on the symposium is here. You can call up all of the symposium contributions by clicking here. —MM

Daniel C. Barth-Jones, M.P.H., Ph.D., is an HIV and infectious disease epidemiologist. His work in the area of statistical disclosure control and implementation of the HIPAA Privacy Rule provisions for de-identification focuses on the importance of properly balancing the competing goals of protecting patient privacy and preserving the accuracy of scientific research and statistical analyses conducted with de-identified data. You can follow him on Twitter at @dbarthjones.

Re-identification Rain-makers

The media’s “re-identification rain-makers” have been hard at work in 2013 ceremoniously drumming up the latest anxiety-inducing media storms. In January, a new re-identification attack providing “surname inferences” from genomic data was unveiled and the popular press and bloggers thundered, rattled and raged with headlines ranging from the more staid and trusted voices of major newspapers (like the Wall Street Journal’s: “A Little Digging Unmasks DNA Donor Names. Experts Identify People by Matching Y-Chromosome Markers to Genealogy Sites, Obits; Researchers’ Privacy Promises ‘Empty’”) to near “the-sky-is-falling” hysteria in the blogosphere where headlines screamed: “Your Biggest Genetic Secrets Can Now Be Hacked, Stolen, and Used for Target Marketing” and “DNA hack could make medical privacy impossible”. (Now, we all know that editors will sometimes write sensational headlines in order to draw in readers, but I have to just say “Please, Editors… Take a deep breath and maybe a Xanax”.)
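The mechanics of the surname-inference attack are worth seeing concretely. Below is a deliberately toy Python sketch of the general idea (patrilineally inherited Y-STR markers, like surnames, can be matched against public genealogy databases); every surname, marker value, and database entry here is invented for illustration and is not drawn from the actual study:

```python
# Hypothetical public genealogy database mapping Y-STR haplotypes to surnames.
# Marker names are real Y-STR loci, but all values and surnames are made up.
genealogy_db = {
    ("DYS19=14", "DYS390=24", "DYS391=11"): "Doe",
    ("DYS19=15", "DYS390=22", "DYS391=10"): "Roe",
}

# An "anonymous" genomic record, published along with coarse metadata.
anonymous_record = {
    "y_str": ("DYS19=14", "DYS390=24", "DYS391=11"),
    "age": 58,
    "state": "CA",
}

def infer_surname(record, db):
    """Return the surname associated with the record's Y-STR profile, if any."""
    return db.get(record["y_str"])

surname = infer_surname(anonymous_record, genealogy_db)
print(surname)  # -> Doe
```

In the real demonstration, an inferred surname was then combined with published metadata such as age and state to narrow the candidate pool, via obituaries and people-search sites, to specific individuals.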

The more complicated reality is that, while this recent re-identification demonstration provided some important warning signals for future potential health privacy concerns, it was not likely to have been implemented by anyone other than an academic re-identification scientist; nor would it have been nearly so successful had its authors not carefully selected targets who were particularly susceptible to re-identification.

As I’ve written elsewhere, from a public policy standpoint, it is essential that re-identification scientists and the media communicate re-identification risk research accurately, because public opinion should, and does, play an important role in setting priorities for policy-makers. There is no “free lunch”. Considerable costs come with incorrectly evaluating the true risks of re-identification, because de-identification practice directly affects the scientific accuracy and quality of the healthcare decisions made on the basis of research using de-identified data. Properly balancing disclosure risks and statistical accuracy is crucial because some popular de-identification methods can unnecessarily, and often undetectably, degrade the accuracy of de-identified data for multivariate statistical analyses. Poorly conducted de-identification may fail to protect privacy, and overuse of de-identification methods in cases where they do not produce meaningful privacy protections can quickly lead to undetected, life-threatening distortions in research and produce damaging health policy decisions.

So, what is the realistic magnitude of re-identification risk posed by the “Y-STR” surname inference re-identification attack methods developed by Yaniv Erlich’s lab? Should *everyone* really be fearful that this “DNA Hack” has now made their “medical privacy impossible”?

An Open Letter From a Genomic Altruist to a Genomic Extrovert (Re-Identification Symposium)

By Michelle Meyer

This post is part of Bill of Health‘s symposium on the Law, Ethics, and Science of Re-Identification Demonstrations. You can call up all of the symposium contributions here. We’ll continue to post contributions throughout the week. —MM

Dear Misha:

In your open letter to me, you write:

No one is asking you to be silent, blasé or happy about being cloned (your clone, however, tells me she is “totally psyched”).

First things first: I have an ever-growing list of things I wish I had done differently in life, so let me know when my clone has learned how to read, and I’ll send it on over; perhaps her path in life will be sufficiently similar to mine that she’ll benefit from at least a few items on the list.

Moving on to substance, here’s the thing: some people did say that PGP participants have no right to complain about being re-identified (and, by logical extension, about any of the other risks we assumed, including the risk of being cloned). It was my intention, in that post, to articulate and respond to three arguments that I’ve encountered, each of which suggests that re-identification demonstrations raise few or no ethical issues, at least in certain cases. To review, those arguments are:

  1. Participants who are warned by data holders of the risk of re-identification thereby consent to be re-identified by third parties.
  2. Participants who agree to provide data in an open access format for anyone to do with it whatever they like thereby gave blanket consent that necessarily included consent to using their data (combined with other data) to re-identify them.
  3. Re-identification is benign in the hands of scholars, as opposed to commercial or criminal actors.

I feel confident in rejecting the first and third arguments. (As you’ll see from the comments I left on your post, however, I struggled, and continue to struggle, with how to respond to the second argument; Madeleine also has some great thoughts.) Note, however, two things. First, none of my responses to these arguments was meant to suggest that I or anyone else had been “sold a bill of goods” by the PGP. I’m sorry that I must have written my post in such a way that it lent itself to that interpretation. All I intended to say was that, in acknowledging the PGP’s warning that re-identification by third parties is possible, participants did not give third parties permission to re-identify them. I was addressing the relationship between re-identification researchers and data providers more than that between data providers and data holders.

Second, even as to re-identification researchers, it doesn’t follow from my rejection of these three arguments that re-identification demonstrations are necessarily unethical, even when conducted without participant consent. Exploring that question is the aim, in part, of my next post. What I tried to do in the first post was clear some brush and push back against the idea that under the PGP model — a model that I think we both would like to see expand — participants have given permission to be re-identified, “end of [ethical] story.”

Reidentification as Basic Science (Re-Identification Symposium)

By Michelle Meyer

This post is part of Bill of Health‘s symposium on the Law, Ethics, and Science of Re-Identification Demonstrations. You can call up all of the symposium contributions here. We’ll continue to post contributions into next week. —MM

Arvind Narayanan (Ph.D. 2009) is an Assistant Professor of Computer Science at Princeton. He studies information privacy and security and has a side-interest in technology policy. His research has shown that data anonymization is broken in fundamental ways, for which he jointly received the 2008 Privacy Enhancing Technologies Award. Narayanan is one of the researchers behind the “Do Not Track” proposal. His most recent research direction is the use of Web measurement to uncover how companies are using our personal information.

Narayanan is an affiliated faculty member at the Center for Information Technology Policy at Princeton and an affiliate scholar at Stanford Law School’s Center for Internet and Society. You can follow him on Twitter at @random_walker.

By Arvind Narayanan

What really drives reidentification researchers? Do we publish these demonstrations to alert individuals to privacy risks? To shame companies? For personal glory? If our goal is to improve privacy, are we doing it in the best way possible?

In this post I’d like to discuss my own motivations as a reidentification researcher, without speaking for anyone else. Certainly I care about improving privacy outcomes, in the sense of making sure that companies, governments and others don’t get away with mathematically unsound promises about the privacy of consumers’ data. But there is a quite different goal I care about at least as much: reidentification algorithms. These algorithms are my primary object of study, and so I see reidentification research partly as basic science.


Reflections of a Re-Identification Target, Part I: Some Information Doesn’t Want To Be Free (Re-Identification Symposium)

This post is part of Bill of Health‘s symposium on the Law, Ethics, and Science of Re-Identification Demonstrations. You can call up all of the symposium contributions here. Please note that Bill of Health continues to have problems receiving some comments. If you post a comment to any symposium piece and do not see it within half an hour or so, please email your comment to me at mmeyer @ law.harvard.edu and I will post it. —MM

By Michelle N. Meyer

I wear several hats for purposes of this symposium, in addition to organizer. First, I’m trained as a lawyer and an ethicist, and one of my areas of scholarly focus is research regulation and ethics, so I see re-identification demonstrations through that lens. Second, as a member of the advisory board of the Social Science Genetics Association Consortium (SSGAC), I advise data holders about ethical and regulatory aspects of their research, including issues of re-identification. I may have occasion to reflect on this role later in the symposium. For now, however, I want to put on my third hat: that of data provider to (a.k.a. research participant in) the Personal Genome Project (PGP), the most recent target of a pair of re-identification “attacks,” as even re-identification researchers themselves seem to call them.

In this first post, I’ll briefly discuss my experience as a target of a re-identification attack. In my discussions elsewhere about the PGP demonstrations, some have suggested that re-identification requires little or no ethical justification where (1) participants have been warned about the risk of re-identification; (2) participants have given blanket consent to all research uses of the data they make publicly available; and/or (3) the re-identification researchers are scholars rather than commercial or criminal actors.

In explaining below why I think each of these arguments is mistaken, I focus on the PGP re-identification demonstrations. I choose the PGP demonstrations not to single them out, but rather for several other reasons. First, the PGP attacks are the case studies with which, for obvious reasons, I’m most familiar, and I’m fortunate to have convinced so many other stakeholders involved in those demonstrations to participate in the symposium and help me fill out the picture with their perspectives. I also focus on the PGP because some view it as an “easy” case for re-identification work, given the features I just described. Therefore, if nonconsensual re-identification attacks on PGP participants are ethically problematic, then much other nonconsensual re-identification work is likely to be as well. Finally, although today the PGP may be somewhat unusual in being so frank with participants about the risk of re-identification and in engaging in such open access data sharing, both of these features, and especially the first, shouldn’t be unusual in research. To the extent that we move towards greater frankness about re-identification risk and broader data sharing, trying to achieve clarity about what these features of a research project do — and do not — mean for the appropriateness of re-identification demonstrations will be important.

Having argued here about how not to think about the ethics of re-identification studies, in a later post, I plan to provide some affirmative thoughts about an ethical framework for how we should think about this work.


Data Sharing vs. Privacy: Cutting the Gordian Knot (Re-Identification Symposium)

PGP participants and staff at the 2013 GET Conference. Photo credit: PersonalGenomes.org, license CC-BY

This post is part of Bill of Health‘s symposium on the Law, Ethics, and Science of Re-Identification Demonstrations. You can call up all of the symposium contributions here. —MM

By Madeleine Ball

Scientists should share. Methods, samples, and data — sharing these is a foundational aspect of the scientific method. Sharing enables researchers to replicate, validate, and build upon the work of colleagues. As Isaac Newton famously wrote: “If I have seen further it is by standing on the shoulders of giants.”

When scientists study humans, however, this impulse to share runs into another motivating force — respect for individual privacy. Clinical research has traditionally been conducted using de-identified data, and participants have been assured privacy. As digital information and computational methods have increased the ability to re-identify participants, researchers have become correspondingly more restrictive with sharing. Solutions are proposed in an attempt to maximize research value while protecting privacy, but these can fail — and, as Gymrek et al. have recently confirmed, biological materials themselves contain highly identifying information through their genetic material alone.

When George Church proposed the Personal Genome Project in 2005, he recognized this inherent tension between privacy and data sharing. He proposed an extreme solution: cutting the Gordian knot by removing assurances of privacy:

If the study subjects are consented with the promise of permanent confidentiality of their records, then the exposure of their data could result in psychological trauma to the participants and loss of public trust in the project. On the other hand, if subjects are recruited and consented based on expectation of full public data release, then the above risks to the subjects and the project can be avoided.

Church, GM “The Personal Genome Project” Molecular Systems Biology (2005)

Thus, the first ten PGP participants — the PGP-10 — identified themselves publicly.


Re-Identification Is Not the Problem. The Delusion of De-Identification Is. (Re-Identification Symposium)

By Michelle Meyer

This is the second post in Bill of Health‘s symposium on the Law, Ethics, and Science of Re-Identification Demonstrations. We’ll have more contributions throughout the week, extending at least into early next week. Background on the symposium is here. You can call up all of the symposium contributions by clicking here (or by clicking on the “Re-Identification Symposium” category link at the bottom of any symposium post). —MM

By Jen Wagner, J.D., Ph.D.

Before I actually discuss my thoughts on the re-identification demonstrations, I think it would be useful to provide a brief background on my perspective.

Identification≠identity

My genome is an identifier. It can be used in lieu of my name, my visible appearance, or my fingerprints to describe me sufficiently for legal purposes (e.g., a “Jane Doe” search or arrest warrant specifying my genomic sequence). Nevertheless, my genome is not me. It is not the gist of who I am, past, present, or future. In other words, I do not believe in genetic essentialism.

My genome is not my identity, though it contributes to my identity in varying ways (directly and indirectly; consciously and subconsciously; discretely and continuously). Not every individual defines his/her self the way I do. There are genomophobes who may shape their identity in the absence of their genomic information and even in denial of and/or contradiction to their genomic information. Likewise, there are genomophiles who may shape their identity with considerable emphasis on their genomic information, in the absence of non-genetic information and even in denial of and/or contradiction to their non-genetic information (such as genealogies and origin beliefs).

My genome can tell you probabilistic information about me, such as my superficial appearance, health conditions, and ancestry. But it won’t tell you how my phenotypes have developed over my lifetime or how they may have been altered (e.g. the health benefits I noticed when I became vegetarian, the scar I earned when I was a kid, or the dyes used to hide the grey hairs that seem proportional to time spent on the academic job market). I do not believe in genetic determinism. My genomic data is of little research value without me (i.e. a willing, able, and honest participant), my phenotypic information (e.g. anthropometric data and health status), and my environmental information (e.g. data about my residence, community, life exposures, etc). Quite simply, I make my genomic data valuable.

As a PGP participant, I did not detach my name from the genetic data I uploaded into my profile. In many ways, I feel that the value of my data is maximized and the integrity of my data is better ensured when my data is humanized.


Applying Information Privacy Norms to Re-Identification Demonstrations (Re-Identification Symposium)

This is the first post in Bill of Health‘s symposium on the Law, Ethics, and Science of Re-Identification Demonstrations. We’ll have more contributions throughout the week. Background on the symposium is here. You can call up all of the symposium contributions by clicking here (or by clicking on the “Re-Identification Symposium” category link at the bottom of any symposium post). —MM

By Stephen Wilson

I’m fascinated by the methodological intersections of technology and privacy – or rather the lack of intersection, for it appears that a great deal of technology development occurs in blissful ignorance of information privacy norms. By “norms” in the main I mean the widely legislated OECD Data Protection Principles (see Graham Greenleaf, Global data privacy laws: 89 countries, and accelerating, Privacy Laws & Business International Report, Issue 115, Special Supplement, February 2012).

Standard data protection and information privacy regulations world-wide are grounded in a reasonably common set of principles; these include, amongst other things, that personal information should not be collected if it is not needed for a core business function, and that personal information collected for one purpose should not be re-used for unrelated purposes without consent. These sorts of privacy formulations tend to be technology neutral; they don’t much care about the methods of collection but focus instead on the obligations of data custodians regardless of how personal information has come to be in their systems. That is, whether you collect personal information from the public domain, from a third party, or by synthesising it from other data sources, you are generally accountable under the Collection Limitation and Use Limitation principles in the same way as if you had collected that personal information directly from the individuals concerned.

I am aware of two distinct re-identification demonstrations that have raised awareness of the issues recently. In the first, Yaniv Erlich used what I understand are new statistical techniques to re-identify a number of subjects who had donated genetic material anonymously to the 1000 Genomes project. He did this by correlating genes in the published anonymous samples with genes in named samples available from genealogical databases. The 1000 Genomes consent form reassured participants that re-identification would be “very hard”. In the second notable demo, Latanya Sweeney re-identified volunteers in the Personal Genome Project using her previously published method of extracting a few demographic values (such as date of birth, sex, and postal code) from the otherwise anonymous records.
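Sweeney’s quasi-identifier approach is easy to sketch. The following toy Python example (all records and names are invented; this is not her actual code or data) links “de-identified” records to a hypothetical public roster via the triple of date of birth, sex, and postal code:

```python
# Invented "de-identified" health records: no names, but quasi-identifiers remain.
deidentified_records = [
    {"dob": "1965-03-14", "sex": "F", "zip": "02139", "diagnosis": "..."},
    {"dob": "1980-07-02", "sex": "M", "zip": "02139", "diagnosis": "..."},
]

# Hypothetical public roster (e.g. a voter roll) with the same fields plus names.
voter_roll = [
    {"name": "Alice Example", "dob": "1965-03-14", "sex": "F", "zip": "02139"},
    {"name": "Bob Example",   "dob": "1991-11-23", "sex": "M", "zip": "02139"},
]

def link(records, roster):
    """Re-identify any record whose (dob, sex, zip) matches exactly one roster entry."""
    index = {}
    for person in roster:
        key = (person["dob"], person["sex"], person["zip"])
        index.setdefault(key, []).append(person["name"])
    matches = {}
    for i, rec in enumerate(records):
        names = index.get((rec["dob"], rec["sex"], rec["zip"]), [])
        if len(names) == 1:  # a unique match is a re-identification
            matches[i] = names[0]
    return matches

print(link(deidentified_records, voter_roll))  # -> {0: 'Alice Example'}
```

The attack needs no special access: both inputs are, by hypothesis, publicly available, which is precisely why the third-party obligations discussed below matter.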

A great deal of the debate around these cases has focused on the consent forms and the research subjects’ expectations of anonymity. These are important matters for sure, yet for me the ethical issue in re-identification demonstrations is more about the obligations of third parties doing the identification, who had nothing to do with the original informed consent arrangements. The act of recording a person’s name against erstwhile anonymous data represents a collection of personal information. The implications for genomic data re-identification are clear.


HIPAA and the Medical Records of Deceased Nursing Home Patients

By Leslie Francis

[this is a cross post from HealthLawProf]

Warning: some of this post is HIPAA-wonky. But read on: the punch line is that HIPAA does not protect the living or the dead from blanket release of medical records to their personal representatives—unless state law provides otherwise or patients have thought to specify in advance that they do not want anyone to see the record or parts of it and state law gives them this opportunity. This means that the default position is that personal representatives may see highly sensitive health information, including mental health records or sexual or reproductive histories: veritable skeletons in family medical closets.

In an important recent decision, the 11th Circuit has held that the federal Health Insurance Portability and Accountability Act (HIPAA) preempts a Florida statute that gave spouses and other enumerated parties the right to request the medical records of deceased nursing home residents. Opis Management Resources v. Secretary, Florida Agency for Health Care Administration, 2013 U.S. App. LEXIS 7194 (April 9, 2013). The nursing homes had refused to respond to requests for records made by spouses and attorneys-in-fact, arguing that these requesters were not “personal representatives” under Florida law. The requesters filed complaints with HHS’s Office for Civil Rights, which determined that the refusals were consistent with HIPAA. The Florida Agency for Health Care Administration issued citations against the homes for violating Florida law, and the homes went to court seeking a declaratory judgment that the Florida statute was preempted by HIPAA.
