
Should We Regulate Direct-to-Consumer Health Apps?

By Sara Gerke and Chloe Reichel

According to one estimate, over 318,000 health apps are available in app stores, and over 200 health apps are added each day. Of these, only a fraction are regulated by the U.S. Food and Drug Administration (FDA): those classified as “medical devices,” which typically pose a moderate to high risk to user safety.

In this final installment of our In Focus Series on Direct-to-Consumer Health Apps, we asked our respondents to reflect on this largely unregulated space in health tech.

Specifically, we asked: How can/should regulators deal with the assessment of health apps? And should apps not currently regulated by the FDA undergo some kind of review, such as an evaluation of whether they are helpful for consumers?

Read their answers below, and explore the following links for their responses to other questions in the series.

Mitigating Bias in Direct-to-Consumer Health Apps

By Sara Gerke and Chloe Reichel

Recently, Google announced a new direct-to-consumer (DTC) health app powered by artificial intelligence (AI) to diagnose skin conditions.

The company met with criticism for the app because the AI was primarily trained on images from people with darker white skin, light brown skin, and fair skin. This means the app may end up over- or under-diagnosing conditions for people with darker skin tones.
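
One common engineering response is to stop reporting a single aggregate accuracy and instead audit a model separately for each skin-tone group. The Python sketch below illustrates such a disaggregated audit; the function, the Fitzpatrick-type groupings, and the sample data are all hypothetical illustrations, not anything drawn from Google’s actual app.

```python
# Hypothetical sketch: auditing a skin-condition classifier for performance
# gaps across skin-tone groups. All labels and data here are made up.
from collections import defaultdict

def accuracy_by_skin_type(records):
    """records: iterable of (fitzpatrick_group, true_label, predicted_label)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, prediction in records:
        total[group] += 1
        correct[group] += int(truth == prediction)
    return {g: correct[g] / total[g] for g in total}

# Illustrative data only: types V-VI (darker skin) are underrepresented,
# mirroring the training-set imbalance described above.
sample = [
    ("I-II", "eczema", "eczema"), ("I-II", "melanoma", "melanoma"),
    ("III-IV", "eczema", "eczema"), ("III-IV", "psoriasis", "eczema"),
    ("V-VI", "melanoma", "eczema"),  # a missed diagnosis
]
print(accuracy_by_skin_type(sample))
```

A gap like the one this toy data produces (perfect accuracy for types I-II, zero for types V-VI) is exactly the kind of signal that should prompt retraining on more representative images before deployment.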

This prompts the questions: How can we mitigate biases in AI-based health care? And how can we ensure that AI improves health care, rather than augmenting existing health disparities?

That’s what we asked of our respondents to our In Focus Series on Direct-to-Consumer Health Apps. Read their answers below, and check out their responses to the other questions in the series.

Who Owns the Data Collected by Direct-to-Consumer Health Apps?

By Sara Gerke and Chloe Reichel

Who owns the data that are collected via direct-to-consumer (DTC) health apps? Who should own those data?

We asked our respondents to answer these questions in the third installment of our In Focus Series on Direct-to-Consumer Health Apps. Learn about the respondents and their views on data privacy concerns in the first installment of this series, and read their thoughts on consumer access to DTC health app data in the second installment.

Should Users Have Access to Data Collected by Direct-to-Consumer Health Apps?

By Sara Gerke and Chloe Reichel

Should consumers have access to the data (including the raw data) that are collected via direct-to-consumer (DTC) health apps? What real-world challenges might access to these data introduce, and how might those challenges be addressed?

In this second installment of our In Focus Series on Direct-to-Consumer Health Apps, that’s what we asked our respondents. Learn about the respondents and their views on data privacy concerns in the first installment of this series. Read on for their thoughts on whether and how consumers should gain access to the data that direct-to-consumer health apps collect.

Perspectives on Data Privacy for Direct-to-Consumer Health Apps

By Sara Gerke and Chloe Reichel

Direct-to-consumer (DTC) health apps, such as apps that manage our diet, fitness, and sleep, are becoming ubiquitous in our digital world.

These apps provide a window into some of the key issues in the world of digital health — including data privacy, data access, data ownership, bias, and the regulation of health technology.

To better understand these issues, and ways forward, we contacted key stakeholders representing a range of perspectives in the field of digital health for their brief answers to five questions about DTC health apps.

Ethical and legal issues of ingestible electronic sensors

This post originally appeared in Device and Materials Engineering.

By Sara Gerke & I. Glenn Cohen

In our new paper, we discuss the ethical challenges of ingestible electronic sensors (IESs; also called “smart pills”) and examine the legal regulation of such sensors in the United States and Europe.

IESs are increasingly being developed to improve health outcomes. One such use is monitoring and promoting medication adherence. Once swallowed, an IES connects with a wearable sensor that can detect and record valuable data, including behavioral and physiological metrics or the time of drug intake. The wearable sensor then sends the collected data to a computing device (e.g., a smartphone) that processes and displays the information. The display function can also be linked with a cloud database to share data with the patient’s doctor or family.
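
As a rough illustration of that pipeline, the Python sketch below models the flow from ingestion event to wearable sensor to smartphone to optional cloud sharing. Every class, field, and helper name here is an illustrative assumption, not any manufacturer’s actual API.

```python
# Minimal sketch of the IES data flow described above:
# ingestible sensor -> wearable sensor -> smartphone -> optional cloud share.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class IngestionEvent:
    """A reading recorded when the swallowed IES activates."""
    drug_name: str
    ingested_at: datetime
    heart_rate_bpm: int  # one example of a physiological metric

@dataclass
class Smartphone:
    """Processes and displays data; can optionally share it onward."""
    share_with_care_team: bool = False  # opt-in sharing with doctor/family
    log: list = field(default_factory=list)

    def receive(self, events):
        self.log.extend(events)
        for e in events:
            print(f"{e.drug_name} taken at {e.ingested_at:%H:%M}, "
                  f"HR {e.heart_rate_bpm} bpm")
        if self.share_with_care_team:
            upload_to_cloud(self.log)  # hypothetical cloud-sharing helper

@dataclass
class WearableSensor:
    """Detects the IES signal and forwards collected data to a phone."""
    events: list = field(default_factory=list)

    def detect(self, event: IngestionEvent):
        self.events.append(event)  # record the ingestion

    def sync(self, phone: Smartphone):
        phone.receive(self.events)  # forward collected data
        self.events = []

def upload_to_cloud(log):
    # Stand-in for sharing with the patient's doctor or family via a cloud database.
    print(f"Sharing {len(log)} event(s) with the care team")

wearable = WearableSensor()
phone = Smartphone(share_with_care_team=True)
wearable.detect(IngestionEvent("aspirin", datetime.now(), heart_rate_bpm=72))
wearable.sync(phone)
```

Even a toy model like this makes one design question concrete: whether cloud sharing is on by default or opt-in, which is where many of the ethical issues the paper discusses begin.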

Is Data Sharing Caring Enough About Patient Privacy? Part II: Potential Impact on US Data Sharing Regulations

A recent US lawsuit highlights crucial challenges at the interface of data utility, patient privacy & data misuse

By Timo Minssen (CeBIL, UCPH), Sara Gerke & Carmel Shachar

Earlier, we discussed the new suit filed against Google, the University of Chicago (UC), and UChicago Medicine, focusing on the disclosure of patient data from UC to Google. This piece goes beyond that background to consider the lawsuit’s potential impact in the U.S. and to place it in the context of other trends in data privacy and security.

Is Data Sharing Caring Enough About Patient Privacy? Part I: The Background

A recent US lawsuit highlights crucial challenges at the interface of data utility, patient privacy & data misuse

By Timo Minssen (CeBIL, UCPH), Sara Gerke & Carmel Shachar

The huge promise of artificial intelligence and machine learning (ML), as well as the increasing trend toward public-private partnerships in biomedical innovation, underscores the importance of effective governance and regulation of data sharing in the health and life sciences. Cutting-edge biomedical research demands high-quality data to ensure safe and effective health products. It is often argued that greater access to individual patient data collections stored in hospitals’ medical records systems could considerably advance medical science and improve patient care. However, as public and private actors seek access to such high-quality data to train their advanced algorithms, a number of sensitive ethical and legal issues must also be carefully considered. Besides giving rise to safety, antitrust, trade secrets, and intellectual property issues, such practices have raised serious concerns about patient privacy, confidentiality, and the commitments made to patients through appropriate informed consent processes.
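
One concrete safeguard that recurs in this debate is pseudonymization: replacing direct identifiers before records leave the data custodian. The Python sketch below shows a minimal version of the idea; the record fields and salt handling are hypothetical, and genuine de-identification under regimes such as HIPAA requires far more than this.

```python
# Hypothetical sketch: pseudonymizing a patient record before sharing.
# Fields and salt handling are illustrative assumptions only.
import hashlib

SECRET_SALT = b"keep-this-out-of-shared-data"  # held by the data custodian only

def pseudonymize(record: dict) -> dict:
    """Replace the patient identifier with a salted one-way hash."""
    token = hashlib.sha256(SECRET_SALT + record["patient_id"].encode()).hexdigest()
    shared = {k: v for k, v in record.items() if k != "patient_id"}
    shared["patient_token"] = token[:16]  # stable, non-reversible join key
    return shared

print(pseudonymize({"patient_id": "MRN-0042", "diagnosis": "E11.9", "age": 57}))
```

Even this step offers only partial protection: quasi-identifiers such as age and diagnosis can still enable re-identification when combined with outside data, which is one reason the informed consent commitments discussed above matter so much.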

A User-Focused Transdisciplinary Research Agenda for AI-Enabled Health Tech Governance

By David Arney, Max Senges, Sara Gerke, Cansu Canca, Laura Haaber Ihle, Nathan Kaiser, Sujay Kakarmath, Annabel Kupke, Ashveena Gajeele, Stephen Lynch, Luis Melendez

A new working paper from participants in the AI-Health Working Group, based out of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School and the Berkman Klein Center for Internet & Society at Harvard University, sets forth a research agenda for stakeholders (researchers, practitioners, entrepreneurs, policy makers, etc.) to proactively collaborate on designing AI technologies that work with users to improve their health and wellbeing.

Along with sections on Technology and a Healthy Good Life, as well as Data, the authors devote a section to Nudging, a concept that “alters people’s behavior in a predictable way without forbidding any options,” and tie nudging into AI technology in the healthcare context.

The Tricky Task of Defining AI in the Law

By Sara Gerke and Joshua Feldman

Walking her bike across an Arizona road, a woman stares into the headlights of an autonomous vehicle as it mistakenly speeds towards her. In a nearby health center, a computer program analyzes images of a diabetic man’s retina to detect damaged blood vessels and suggests that he be referred to a specialist for further evaluation – his clinician did not need to interpret the images. Meanwhile, an unmanned drone zips through Rwandan forests, delivering life-saving vaccines to an undersupplied hospital in a rural village.

From public safety to diagnostics to the global medical supply chain, artificial intelligence (AI) systems are increasingly making decisions about our health. Legislative action will be required to address these innovations and ensure they improve wellbeing safely and fairly.

In order to draft new national laws and international guidelines, we will first need a definition of what constitutes artificial intelligence. While the examples above underscore the need for such a definition, they also illustrate the difficulty of the task: what, uniquely, do self-driving cars, diagnostic tools, and drones have in common?