Is Data Sharing Caring Enough About Patient Privacy? Part II: Potential Impact on US Data Sharing Regulations

A recent US lawsuit highlights crucial challenges at the interface of data utility, patient privacy & data misuse

By Timo Minssen (CeBIL, UCPH), Sara Gerke & Carmel Shachar

Earlier, we discussed the new suit filed against Google, the University of Chicago (UC), and UChicago Medicine, focusing on the disclosure of patient data from UC to Google. This piece goes beyond that background to consider the lawsuit's potential impact in the U.S. and to place it in the context of other trends in data privacy and security.



Is Data Sharing Caring Enough About Patient Privacy? Part I: The Background

By Timo Minssen (CeBIL, UCPH), Sara Gerke & Carmel Shachar

A recent US lawsuit highlights crucial challenges at the interface of data utility, patient privacy & data misuse

The enormous promise of artificial intelligence and machine learning (ML), together with the growing trend toward public-private partnerships in biomedical innovation, underscores the importance of effective governance and regulation of data sharing in the health and life sciences. Cutting-edge biomedical research depends on high-quality data to ensure safe and effective health products. It is often argued that greater access to the individual patient data stored in hospitals' medical records systems could considerably advance medical science and improve patient care. However, as public and private actors seek access to such high-quality data to train their advanced algorithms, a number of sensitive ethical and legal questions must also be carefully considered. Besides raising safety, antitrust, trade secret, and intellectual property issues, such practices have prompted serious concerns about patient privacy, confidentiality, and the commitments made to patients through informed consent processes.



Please and Thank You: Do We Have Moral Obligations Towards Emotionally Intelligent Machines?

By Sonia Sethi

Do you say “thank you” to Alexa (or your preferred AI assistant)?

An informal poll of my social media network revealed that 51 percent of 76 participants thank their artificial intelligence (AI) assistant some or all of the time. When asked why they do or do not express thanks, people gave a myriad of interesting, often entertaining, responses. Common themes emerged: saying thanks because it's polite or out of habit, withholding thanks because "it's just a database and not a human," and the ever-present paranoia about a robot apocalypse.

But do you owe Alexa your politeness? Do you owe it any moral consideration whatsoever?

What Should Happen to our Medical Records When We Die?

By Jon Cornwall

In the next 200 years, at least 20 billion people will die. A good proportion of them will have electronic medical records, which raises the question: what are we going to do with all this posthumous medical data? Although applying medical data from deceased persons to research and healthcare seems logical and inevitable, both now and in the future, it remains unclear how best to manage posthumous medical records.

Large medical data sets already exist and have their uses, though they largely contain 'anonymous' data. In the future, if medicine is to deliver on the promise of truly 'personalized' care, then electronic medical records will potentially have increasing value and relevance for generations of our descendants. This will, however, require the public to consider how much privacy and anonymity they are willing to give up with regard to information arising from their medical records. After all, our medical records cannot be used to inform personalized medicine for our descendants without knowing who we, or our descendants, actually are.

On Social Suicide Prevention, Don't Let the Perfect Be the Enemy of the Good

In a piece in The Guardian and a forthcoming article in the Yale Journal of Law and Technology, Bill of Health contributor Mason Marks recently argued that Facebook's suicide prediction algorithm is dangerous and ought to be subject to rigorous regulation and transparency requirements. Some of his suggestions (in particular, his calls for more data, and those proposals that concern how we treat potentially suicidal people rather than how we identify them) are powerful and unobjectionable.

But Marks's core argument, that Facebook's suicide prediction algorithm is morally problematic unless it is subject to the regulatory regime of medicine and operated on an opt-in basis, is misguided and alarmist.


Sex Robots Are Here, But Can the Law Keep Up With the Ethics and Privacy Issues?

The robots are here. Are the “sexbots” close behind?

From the Drudge Report to The New York Times, sex robots are rapidly becoming a part of the national conversation about the future of sex and relationships. Behind the headlines, a number of companies are currently developing robots designed to provide humans with companionship and sexual pleasure – with a few already on the market.

Unlike sex toys and dolls, which are typically sold in off-the-radar shops and hidden in closets, sexbots may become mainstream. A 2017 survey suggested almost half of Americans think that having sex with robots will become a common practice within 50 years.

As a scholar of artificial intelligence, neuroscience and the law, I'm interested in the legal and policy questions that sex robots pose. How do we ensure they are safe? How will intimacy with a sex robot affect the human brain? Would sex with a childlike robot be ethical? And what exactly is a sexbot anyway?

A User-Focused Transdisciplinary Research Agenda for AI-Enabled Health Tech Governance

By David Arney, Max Senges, Sara Gerke, Cansu Canca, Laura Haaber Ihle, Nathan Kaiser, Sujay Kakarmath, Annabel Kupke, Ashveena Gajeele, Stephen Lynch, Luis Melendez

A new working paper from participants in the AI-Health Working Group, based at the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School and the Berkman Klein Center for Internet & Society at Harvard University, sets forth a research agenda for stakeholders (researchers, practitioners, entrepreneurs, policy makers, etc.). The agenda calls on these stakeholders to proactively collaborate in designing AI technologies that work with users to improve their health and wellbeing.

Along with sections on Technology and a Healthy Good Life as well as Data, the authors devote a section to Nudging, a concept that "alters people's behavior in a predictable way without forbidding any options," and tie nudging to AI technology in the healthcare context.

The Tricky Task of Defining AI in the Law

By Sara Gerke and Joshua Feldman

Walking her bike across an Arizona road, a woman stares into the headlights of an autonomous vehicle as it mistakenly speeds towards her. In a nearby health center, a computer program analyzes images of a diabetic man’s retina to detect damaged blood vessels and suggests that he be referred to a specialist for further evaluation – his clinician did not need to interpret the images. Meanwhile, an unmanned drone zips through Rwandan forests, delivering life-saving vaccines to an undersupplied hospital in a rural village.

From public safety to diagnostics to the global medical supply chain, artificial intelligence (AI) systems are increasingly making decisions about our health. Legislative action will be required to address these innovations and ensure they improve wellbeing safely and fairly.

In order to draft new national laws and international guidelines, we will first need a definition of what constitutes artificial intelligence. While the examples above underscore the need for such a definition, they also illustrate the difficulty of the task: what do self-driving cars, diagnostic tools, and drones uniquely have in common?

Machine Learning in Medicine: Addressing Ethical Challenges

Machine learning in medicine is accelerating at an incredible rate, bringing a new era of ethical and regulatory challenges to the clinic.

In a new paper published in PLOS Medicine, Effy Vayena, Alessandro Blasimme, and I. Glenn Cohen spell out these ethical challenges and offer suggestions for how Institutional Review Boards (IRBs), medical practitioners, and developers can ethically deploy machine learning in medicine (MLm).


Artificial Intelligence for Suicide Prediction

Suicide is a global problem that claims 800,000 lives per year worldwide. In the United States, suicide rates rose by 25 percent over the past two decades, and suicide now kills 45,000 Americans each year, more than either auto accidents or homicides.

Traditional methods of predicting suicide, such as questionnaires administered by doctors, are notoriously inaccurate. Hoping to save lives by predicting suicide more accurately, hospitals, governments, and internet companies are developing artificial intelligence (AI)-based prediction tools. This essay analyzes the under-explored risks these systems pose to safety, privacy, and autonomy.

Two parallel tracks of AI-based suicide prediction have emerged.

The first, which I call "medical suicide prediction," uses AI to analyze patient records. Medical suicide prediction is not yet widely used, aside from one program at the Department of Veterans Affairs (VA). Because medical suicide prediction occurs within the healthcare context, it is subject to federal laws such as HIPAA, which protects the privacy and security of patient information, and the Common Rule, which protects human research subjects.

My focus here is on the second track of AI-based suicide prediction, which I call "social suicide prediction." Social suicide prediction, though essentially unregulated, uses behavioral data mined from consumers' digital interactions. The companies involved, which include large internet platforms such as Facebook and Twitter, are generally not subject to HIPAA's privacy regulations, principles of medical ethics, or rules governing research on human subjects.
