
Data-driven Medicine Needs a New Profession: Health Information Counseling

By Barbara Prainsack, Alena Buyx, and Amelia Fiske

Have you ever clicked ‘I agree’ to share information about yourself on a health app on your smartphone? Wondered whether the results of a new therapy reported on a patient community website were accurate? Considered altering a medical device to better meet your own needs, but had doubts about how the changes might affect its function?

While these kinds of decisions are increasingly routine, there is no clear path for getting information about health-related devices, advice on what data to collect, guidance on how to evaluate medical information found online, or answers to concerns one might have about data sharing on patient platforms.

It’s not only patients who are facing these questions in the age of big data in medicine. Clinicians are also increasingly confronted with diverse forms of molecular, genetic, lifestyle, and digital data, and the quality, meaning, and actionability of these data are often unclear.

The difficulties of interpreting unstructured data, such as symptom logs recorded on personal devices, add another layer of complexity for clinicians trying to decide which course of action would best meet their duty of beneficence and enable the best possible care for patients.

Read More

Meditation? There’s an (almost FDA-approved) app for that

Headspace is paving the way for the first FDA-approved prescription meditation app.

Developers behind the mindfulness smartphone app, which has over 30 million users, are creating a new product under Headspace Health that will begin clinical trials this summer, in hopes of clearing FDA approval by 2020. The team is investigating how the app can help treat 12 mental and physical conditions.

Read More

Emergent Medical Data

By Mason Marks

In this brief essay, I describe a new type of medical information that is not protected by existing privacy laws. I call it Emergent Medical Data (EMD) because at first glance, it has no relationship to your health. Companies can derive EMD from your seemingly benign Facebook posts, a list of videos you watched on YouTube, a credit card purchase, or the contents of your e-mail. A person reading the raw data would be unaware that it conveys any health information. Machine learning algorithms must first massage the data before its health-related properties emerge.
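The "massaging" step Marks describes is, at its core, supervised learning over seemingly benign signals. The toy sketch below is purely illustrative and not any company's actual system: the posts, the insomnia label, and the scoring rule are all invented for this example. It shows how word counts from innocuous text can, once correlated with a health outcome, yield a score in which a health-related property "emerges."

```python
# Toy illustration (hypothetical data, not any real platform's pipeline):
# a tiny word-count model that surfaces a health-related signal from
# seemingly benign posts once labels correlating with a condition exist.
from collections import Counter

# Hypothetical training data: posts paired with whether the author later
# reported an insomnia diagnosis. No post mentions health directly.
POSTS = [
    ("up again at 3am scrolling", 1),
    ("coffee number four before noon", 1),
    ("quiet night early start tomorrow", 0),
    ("great morning run with the dog", 0),
]

def train(posts):
    """Count word frequencies per label — the 'massaging' step."""
    counts = {0: Counter(), 1: Counter()}
    for text, label in posts:
        counts[label].update(text.lower().split())
    return counts

def score(model, text):
    """Crude score: positive values lean toward the health-related label."""
    return float(sum(model[1][w] - model[0][w] for w in text.lower().split()))

model = train(POSTS)
print(score(model, "another 3am scrolling"))  # positive: a health signal emerges
print(score(model, "early morning run"))      # negative: reads as benign
```

A reader scanning the raw posts sees nothing medical; only after the counting and scoring does the health inference appear, which is exactly why such data falls outside HIPAA's reach.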

Unlike medical information obtained by healthcare providers, which is protected by the Health Insurance Portability and Accountability Act (HIPAA), EMD receives little to no legal protection. A common rationale for maintaining health data privacy is that it promotes full transparency between patients and physicians. HIPAA assures patients that the sensitive conversations they have with their doctors will remain confidential. The penalties for breaching confidentiality can be steep. In 2016, the Department of Health and Human Services recorded over $20 million in fines resulting from HIPAA violations. When companies mine for EMD, they are not bound by HIPAA or subject to these penalties.

Read More

Democratized Diagnostics: Why Medical Artificial Intelligence Needs Vetting

Pancreatic cancer is one of the deadliest illnesses out there.  The five-year survival rate of patients with the disease is only about 7%.  This is, in part, because few observable symptoms appear early enough for effective treatment.  As a result, by the time many patients are diagnosed the prognosis is poor.  There is an app, however, that is attempting to change that.  BiliScreen was developed by researchers at the University of Washington, and it is designed to help users identify pancreatic cancer early with an algorithm that analyzes selfies.  Users take photos of themselves, and the app’s artificially intelligent algorithm detects slight discolorations in the skin and eyes associated with early pancreatic cancer.
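BiliScreen's actual pipeline (color calibration, sclera segmentation, and regression against measured bilirubin levels) is far more involved and not reproduced here. The sketch below is only a hedged illustration of the core intuition: elevated bilirubin yellows the sclera, and yellowness in an RGB image shows up as red/green values exceeding blue. The pixel values and the threshold are invented for the example.

```python
# Hypothetical sketch of the idea behind a jaundice-screening app: a
# yellow-tinted sclera has high red/green relative to blue. This is NOT
# BiliScreen's algorithm, just the underlying color intuition.

def yellowness(sclera_pixels):
    """Mean of (R+G)/2 - B over sclera pixels: a rough yellow index."""
    total = sum((r + g) / 2 - b for r, g, b in sclera_pixels)
    return total / len(sclera_pixels)

def screen(sclera_pixels, threshold=30.0):
    """Flag for clinical follow-up when the index exceeds an assumed threshold."""
    return yellowness(sclera_pixels) > threshold

healthy = [(200, 205, 210)] * 4    # bluish-white sclera: index below zero
jaundiced = [(210, 190, 120)] * 4  # yellow-tinted sclera: index well above threshold
```

A real system must also correct for lighting and camera differences, which is why the UW researchers had users calibrate with paper glasses or a color card; a raw ratio like this one would be far too noisy on its own.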

Diagnostic apps like BiliScreen represent a huge step forward for preventive health care.  Imagine a world in which the vast majority of chronic diseases are caught early because each of us has the power to screen ourselves on a regular basis.  One of the big challenges for the modern primary care physician is convincing patients to get screened regularly for diseases that have relatively good prognoses when caught early.

I’ve written before about the possible impacts of artificial intelligence and algorithmic medicine, arguing that both medicine and law will have to adapt as machine-learning algorithms surpass physicians in their ability to diagnose and treat disease.  These pieces, however, primarily consider artificially intelligent algorithms licensed to and used by medical professionals in hospital or outpatient settings.  They are about the relationship between a doctor and the sophisticated tools in her diagnostic toolbox — and about how relying on algorithms could decrease the pressure physicians feel to order unnecessary tests and procedures to avoid malpractice liability.  There was an underlying assumption that these algorithms had already been evaluated and approved for use by the physician’s institution, and that the physician had experience using them.  BiliScreen does not fit this mold — the algorithm is not a piece of medical equipment used by hospitals, but rather part of an app that could be downloaded and used by anyone with a smartphone.  Accordingly, apps like BiliScreen fall into a category of “democratized” diagnostic algorithms. While this democratization has the potential to drastically improve preventive care, it also has the potential to undermine the financial sustainability of the U.S. health care system.

Read More

FitBits Be Free: General Wellness Products Are Not (Generally) Medical Devices

By Nicolas Terry

The FDA has issued a final guidance on low risk wellness devices, and it is refreshingly clear. Rather than applying regulatory discretion as we have seen in the medical app space, the agency has made a broader decision (all usual caveats about non-binding guidances aside) not to even examine large swathes of wellness products to determine whether they are Section 201(h) devices. As such, this guidance more closely resembles the 2013 guidance that declared Personal Sound Amplification Products (PSAPs) not to be medical devices (aka hearing aids).

The FDA approach to defining excluded products breaks no new ground. First, they must be intended for only general wellness use and, second, present a low risk. As to the former, the FDA's approach turns on whether a product references specific diseases or conditions. Make no such reference and your product will sail through as a general wellness product. Thus, claims to promote relaxation, to boost self-esteem, to manage sleep patterns, etc., are clearly exempt. On the other hand, the agency will clearly regulate products that claim to treat or diagnose specific conditions. Read More

Legal Dimensions of Big Data in the Health and Life Sciences

By Timo Minssen

Please find below my welcome speech at last week's mini-symposium on “Legal Dimensions of Big Data in the Health and Life Sciences – From Intellectual Property Rights and Global Pandemics to Privacy and Ethics” at the University of Copenhagen (UCPH). The event was organized by our Global Genes – Local Concerns project, with support from the UCPH Excellence Programme for Interdisciplinary Research.

The symposium, which was inspired by the wonderful recent PFC & Berkman Center Big Data conference, featured enlightening speeches by former PFC fellows Nicholson Price on incentives for the development of black box personalized medicine and Jeff Skopek on privacy issues. In addition, we were lucky to have Peter Yu speaking on “Big Data, Intellectual Property and Global Pandemics” and Michael J. Madison on “Big Data and Commons Challenges”. The presentations and recordings of the session will soon be made available on our Center’s webpage.

Thanks everybody for your dedication, inspiration, great presentations and an exciting panel discussion.

“Legal Dimensions of Big Data in the Health and Life Sciences – From Intellectual Property Rights and Global Pandemics to Privacy and Ethics”

Read More

Two Cheers for Corporate Experimentation

By Michelle Meyer

I have a new law review article out, Two Cheers for Corporate Experimentation: The A/B Illusion and the Virtues of Data-Driven Innovation, arising out of last year’s terrific Silicon Flatirons annual tech/privacy conference at Colorado Law, the theme of which was “When Companies Study Their Customers.”

This article builds on, but goes well beyond, my prior work on the Facebook experiment in Wired (mostly a wonky regulatory explainer of the Common Rule and OHRP engagement guidance as applied to the Facebook-Cornell experiment, albeit with hints of things to come in later work) and Nature (a brief mostly-defense of the ethics of the experiment co-authored with 5 ethicists and signed by an additional 28, which was necessarily limited in breadth and depth by both space constraints and the need to achieve overlapping consensus).

Although I once again turn to the Facebook experiment as a case study (and also to new discussions of the OkCupid matching algorithm experiment and of 401(k) experiments), the new article aims at answering a much broader question than whether any particular experiment was legal or ethical. Read More

Facebook Rumored To Be Planning Foray Into the Online Health Space

By Michelle Meyer

Reuters broke the story on Friday, citing anonymous sources:

The company is exploring creating online “support communities” that would connect Facebook users suffering from various ailments. . . . Recently, Facebook executives have come to realize that healthcare might work as a tool to increase engagement with the site. One catalyst: the unexpected success of Facebook’s “organ-donor status initiative,” introduced in 2012. The day that Facebook altered profile pages to allow members to specify their organ donor-status, 13,054 people registered to be organ donors online in the United States, a 21 fold increase over the daily average of 616 registrations . . . . Separately, Facebook product teams noticed that people with chronic ailments such as diabetes would search the social networking site for advice, said one former Facebook insider. In addition, the proliferation of patient networks such as PatientsLikeMe demonstrate that people are increasingly comfortable sharing symptoms and treatment experiences online. . . . Facebook may already have a few ideas to alleviate privacy concerns around its health initiatives. The company is considering rolling out its first health application quietly and under a different name, a source said.

I’m quoted in this International Business Times article about Facebook’s rumored plans. After the jump is the full statement I provided to the reporter (links added).  Read More

Big Data, Predictive Analytics, Health Care, Law, and Ethics

Update: The Moore Foundation has generously paid to make my article available as open access on their website here. Today I am speaking at Health Affairs’ “Using Big Data to Transform Health Care” event in DC, which will also launch its new issue devoted to the topic. I have a co-authored paper in the volume entitled “The Legal And Ethical Concerns That Arise From Using Complex Predictive Analytics In Health Care” that has just been released. Ironically, the article is behind a paywall (while data wants to be free, I guess big data is different!). Here is the abstract.

Predictive analytics, or the use of electronic algorithms to forecast future events in real time, makes it possible to harness the power of big data to improve the health of patients and lower the cost of health care. However, this opportunity raises policy, ethical, and legal challenges. In this article we analyze the major challenges to implementing predictive analytics in health care settings and make broad recommendations for overcoming challenges raised in the four phases of the life cycle of a predictive analytics model: acquiring data to build the model, building and validating it, testing it in real-world settings, and disseminating and using it more broadly. For instance, we recommend that model developers implement governance structures that include patients and other stakeholders starting in the earliest phases of development. In addition, developers should be allowed to use already collected patient data without explicit consent, provided that they comply with federal regulations regarding research on human subjects and the privacy of health information.

I will also have a related paper on mobile health coming out later this summer that I will blog about when it comes out…

Patient Empowerment and 23andMe

For those interested in the FDA’s decision to regulate 23andMe’s direct-to-consumer genetic testing service, it is worth reading a recent comment in Nature by Robert Green and Nita Farahany.  The piece raises two core objections to the FDA’s decision that deserve further attention.

One objection is that the FDA’s decision runs contrary to “the historical trend of patient empowerment that brought informed-consent laws, access to medical records and now direct access to electronic personal health data.”  Green and Farahany suggest that 23andMe and manufacturers of other consumer medical products (such as mobile health apps) “democratize health care by enabling individuals to make choices that maximize their own health,” and that we must not “stifle all innovations that do not fit into the traditional model of physician-driven health care.”

While I agree with Green and Farahany that we should not be locked into physician-driven health care, I am not sure that the information provided by 23andMe and medical apps is unambiguously “patient-empowering” and “democratizing” (a framing of personalized medicine that pervades both marketing materials and academic journals). Read More