Sex Robots Are Here, But Can the Law Keep Up With the Ethics and Privacy Issues?

The robots are here. Are the “sexbots” close behind?

From the Drudge Report to The New York Times, sex robots are rapidly becoming a part of the national conversation about the future of sex and relationships. Behind the headlines, a number of companies are currently developing robots designed to provide humans with companionship and sexual pleasure – with a few already on the market.

Unlike sex toys and dolls, which are typically sold in off-the-radar shops and hidden in closets, sexbots may become mainstream. A 2017 survey suggested almost half of Americans think that having sex with robots will become a common practice within 50 years.

As a scholar of artificial intelligence, neuroscience and the law, I’m interested in the legal and policy questions that sex robots pose. How do we ensure they are safe? How will intimacy with a sex robot affect the human brain? Would sex with a childlike robot be ethical? And what exactly is a sexbot anyway? Read More

A User-Focused Transdisciplinary Research Agenda for AI-Enabled Health Tech Governance

By David Arney, Max Senges, Sara Gerke, Cansu Canca, Laura Haaber Ihle, Nathan Kaiser, Sujay Kakarmath, Annabel Kupke, Ashveena Gajeele, Stephen Lynch, Luis Melendez

A new working paper from participants in the AI-Health Working Group, run out of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School and the Berkman Klein Center for Internet & Society at Harvard University, sets forth a research agenda for stakeholders (researchers, practitioners, entrepreneurs, policy makers, etc.) to collaborate proactively on designing AI technologies that work with users to improve their health and wellbeing.

Along with sections on Technology and a Healthy Good Life as well as Data, the authors devote a section to Nudging, a concept that “alters people’s behavior in a predictable way without forbidding any options,” and tie nudging into AI technology in the healthcare context. Read More

The Tricky Task of Defining AI in the Law

By Sara Gerke and Joshua Feldman

Walking her bike across an Arizona road, a woman stares into the headlights of an autonomous vehicle as it mistakenly speeds towards her. In a nearby health center, a computer program analyzes images of a diabetic man’s retina to detect damaged blood vessels and suggests that he be referred to a specialist for further evaluation – his clinician did not need to interpret the images. Meanwhile, an unmanned drone zips through Rwandan forests, delivering life-saving vaccines to an undersupplied hospital in a rural village.

From public safety to diagnostics to the global medical supply chain, artificial intelligence (AI) systems are increasingly making decisions about our health. Legislative action will be required to address these innovations and ensure they improve wellbeing safely and fairly.

In order to draft new national laws and international guidelines, we will first need a definition of what constitutes artificial intelligence. While the examples above underscore the need for such a definition, they also illustrate the difficulty of the task: What do self-driving cars, diagnostic tools, and drones uniquely have in common? Read More

Machine Learning in Medicine: Addressing Ethical Challenges

Machine learning in medicine is accelerating at an incredible rate, bringing a new era of ethical and regulatory challenges to the clinic.

In a new paper published in PLOS Medicine, Effy Vayena, Alessandro Blasimme, and I. Glenn Cohen spell out these ethical challenges and offer suggestions for how Institutional Review Boards (IRBs), medical practitioners, and developers can ethically deploy machine learning in medicine (MLm). Read More

Artificial Intelligence for Suicide Prediction

Suicide is a global problem that causes 800,000 deaths per year worldwide. In the United States, suicide rates rose by 25 percent in the past two decades, and suicide now kills 45,000 Americans each year – more than die in auto accidents or homicides.

Traditional methods of predicting suicide, such as questionnaires administered by doctors, are notoriously inaccurate. Hoping to save lives by predicting suicide more accurately, hospitals, governments, and internet companies are developing artificial intelligence (AI)-based prediction tools. This essay analyzes the under-explored risks these systems pose to safety, privacy, and autonomy.

Two parallel tracks of AI-based suicide prediction have emerged.

The first, which I call “medical suicide prediction,” uses AI to analyze patient records. Medical suicide prediction is not yet widely used, aside from one program at the Department of Veterans Affairs (VA). Because medical suicide prediction occurs within the healthcare context, it is subject to federal laws, such as HIPAA, which protects the privacy and security of patient information, and the Common Rule, which protects human research subjects.

My focus here is on the second track of AI-based suicide prediction, which I call “social suicide prediction.” Essentially unregulated, social suicide prediction uses behavioral data mined from consumers’ digital interactions. The companies involved, which include large internet platforms such as Facebook and Twitter, are not generally subject to HIPAA’s privacy regulations, principles of medical ethics, or rules governing research on human subjects.

Read More

Four Roles for Artificial Intelligence in the Medical System

How will artificial intelligence (AI) change medicine?

AI, powered by “big data” in health, promises to transform medical practice, but the specifics remain inchoate. Reports that AI performs certain tasks at the level of specialists stoke worries that AI will “replace” physicians. These worries are probably overblown; AI is unlikely to replace many physicians in the foreseeable future. A more productive set of questions considers how AI and physicians should interact: how AI can improve the care physicians deliver, how AI can best enable physicians to focus on the patient relationship, and how physicians should review AI’s recommendations and predictions. Answering those questions requires clarity about the larger function of AI: not just what tasks AI can do or how it can do them, but what role it will play in the context of physicians, patients, and other providers within the overall medical system.

Medical AI can improve care for patients and improve the practice of medicine for providers—as long as its development is supported by an understanding of what role it can and should play.

Four different roles each have the possibility to be transformative for providers and patients: AI can push the frontiers of medicine; it can replicate and democratize medical expertise; it can automate medical drudgery; and it can allocate medical resources.

Read More