By Rachele Hendricks-Sturrup
On April 21, 2021, the European Commission released a “first-ever” legal framework on artificial intelligence (AI) in an attempt to address societal risks associated with AI implementation.
The EU has now effectively set the global stage for AI regulation, as the first union of member states to create a legal framework specifically intended to address or mitigate the potentially harmful effects of broad AI implementation.
Within the proposed framework, the Commission touched on a variety of considerations and “high-risk” AI system scenarios. The Commission defined high-risk AI systems as those that pose significant (material or immaterial) risks to the health and safety or fundamental rights of persons.
This post outlines four key considerations in the proposal with regard to health: 1) prioritizing emergency health care; 2) law enforcement profiling as a social determinant of health; 3) immigrant health risk screening; and 4) AI regulatory sandboxes and a health data space to support AI product commercialization and public health innovation.
Prioritizing Emergency Health Care
AI systems have been studied, including during the COVID-19 pandemic, as tools to assist health care providers in a variety of emergency and non-emergency settings, despite the general absence of regulatory oversight of health care AI.
The framework specifically calls out scenarios in which AI is or can be used to triage or prioritize patients requiring emergency health care services (e.g., firefighters, ambulances, and other emergency medical aid).
National regulators like the U.S. Food and Drug Administration could thus consider this framework as they strengthen their oversight of AI-based software as a medical device. Ethical and institutional review boards could also treat the framework as another tool for assessing the ethical and practical implications of AI, including AI used to triage or prioritize patients in emergency and non-emergency situations.
Law Enforcement Profiling as a Social Determinant of Health
Social determinants, including the rule of law, significantly shape both individual and public health. The Commission considered the use of AI for law enforcement profiling, and also discussed environmental hazards that might cause or lead to death or serious damage to a person’s health.
Law enforcement uses of AI are not new; lie detector tests that rely on algorithms have been around for decades. However, if law enforcement relies heavily on AI that is only partially accurate, due to bias in its training data, to profile or identify individuals, that reliance could provoke more frequent altercations between members of civil society and law enforcement, altercations that might be frightening, brutal, or even fatal.
Immigrant Health Risk Screening
Public immigration authorities have used, or may consider implementing, AI to assess the health risks of persons who intend to enter, or who have already entered, their jurisdictions.
The Commission specifically referred to such scenarios, designating AI systems used by immigration authorities as “high risk” and citing health risks as a key concern (the other risks listed are security risks and the risk of irregular or otherwise illegal immigration).
In February 2019, the government of Canada released the initial version of its Directive on Automated Decision-Making, and it has already implemented AI with the intent of making its immigration process more efficient. Under this Directive, the Canadian government uses an “Algorithmic Impact Assessment,” a roughly 60-question survey, to gauge an automated decision system’s potential impact level across several factors, including the health or well-being of individuals or communities.
Governments like Canada’s that are implementing AI systems that might fall under the proposed EU framework’s definition of “high-risk AI systems” have an opportunity to consider, adopt, or adapt certain of its provisions.
AI Regulatory Sandbox and Health Data Space
The Commission provides some details about developing or supporting the creation of AI regulatory sandboxes to help address “public safety and public health, including disease prevention, control and treatment” and a health data space to facilitate “non-discriminatory access to health data and the training of artificial intelligence algorithms on those datasets, in a privacy-preserving, secure, timely, transparent and trustworthy manner, and with an appropriate institutional governance.”
In addition to supporting public health innovation, the health data space is intended to help catalyze AI-driven health innovations led by small and medium-sized enterprises and startups.
Given recent discussions about the inherent limitations of AI models, such as deep phenotyping models trained on biased or skewed datasets; the questionable reliability of direct-to-consumer medical artificial intelligence/machine learning applications; and the unresolved tensions around the lack of transparency in how AI software systems are built, the proposed regulatory sandboxes and health data space could be valuable resources for developers in both public health and private industry.
Looking Forward: AI Regulation in Health with Guidance from the Proposed Framework
At a high level, the EU’s proposed framework offers a risk-based approach to AI regulation and highlights a variety of health-relevant scenarios to consider. As civil society, law enforcement, policymakers, developers, enterprises, and users in countries like the U.S., Canada, and the EU member states continue to grapple with thorny questions and tensions around the use and deployment of AI across public and private contexts, it is important to highlight and digest these key considerations that are consequential to health.
Dr. Rachele Hendricks-Sturrup is the Health Policy Counsel and Lead at the Future of Privacy Forum in Washington, DC. The views herein do not necessarily reflect those of Future of Privacy Forum supporters or board members.