
Regulatory Gap in Health Tech: Resource Allocation Algorithms

By Jenna Becker

Hospitals use artificial intelligence and machine learning (AI/ML) not only in clinical decision-making, but also to allocate scarce resources.

These resource allocation algorithms have received less regulatory attention than clinical decision-making algorithms, but nevertheless pose similar concerns, particularly with respect to their potential for bias.

Without regulatory oversight, the risks associated with resource allocation algorithms are significant. Health systems must take particular care when implementing these solutions.

What Are Resource Allocation Algorithms?

Health care operations cover a wide range of staffing, scheduling, supply chain management, financial, and other administrative workflows. AI/ML-based tools are increasingly used to aid with health care operations — allocating or otherwise optimizing resources.

Hospitals often operate on relatively slim margins (especially since the onset of the pandemic), so using AI to increase efficiency and optimize the use of scarce resources makes sense.

Optimizing scarce resources is particularly important, as allocation decisions can be fraught and consume time that hospital staff may not have. These decisions are generally governed by medical ethics, and ethical approaches to resource allocation have proliferated since the onset of the COVID-19 pandemic.

Algorithms may recommend who should receive limited assets, like hospital beds, or limited personnel resources, like clinician time. For example, a hospital may use an algorithm that predicts patient deterioration from COVID-19 to allocate ventilators. Or, a health system may use an algorithm that predicts a patient’s risk of severe illness to allocate care management resources.
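As a rough illustration, many of these tools reduce to ranking patients by a model's risk score and allocating scarce units from the top of the list. The Python sketch below is hypothetical: the patient fields, scores, and ranking rule are assumptions for illustration, not any vendor's actual implementation.

```python
# Hypothetical sketch of a resource allocation algorithm: rank patients by a
# model's predicted risk and allocate scarce units to the highest-risk cases.
# The patient fields and risk scores are invented for illustration.
from dataclasses import dataclass

@dataclass
class Patient:
    patient_id: str
    predicted_risk: float  # e.g., model-estimated probability of deterioration

def allocate(patients: list[Patient], available_units: int) -> list[str]:
    """Return the IDs of patients recommended to receive a scarce resource."""
    ranked = sorted(patients, key=lambda p: p.predicted_risk, reverse=True)
    return [p.patient_id for p in ranked[:available_units]]

patients = [
    Patient("A", 0.91),
    Patient("B", 0.42),
    Patient("C", 0.78),
]
print(allocate(patients, available_units=2))  # ['A', 'C']
```

The simplicity is the point: any systematic error in the underlying risk model flows directly into who receives, and who is denied, the resource.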

Risks and Impacts of Resource Allocation Algorithms

Health care algorithms used to allocate resources do not carry the same risk to individual patient health as, say, an algorithm that diagnoses breast cancer based on MRI scans. However, the use of resource allocation algorithms does impact access to care and the level of care patients receive. When an algorithm is biased, hospitals may deny patients needed resources.

In a well-known example, an algorithm designed to allocate care management resources underestimated the care needs of Black patients and overestimated the care needs of white patients. Black patients were therefore less likely to receive enhanced care management than white patients.
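To make that failure mode concrete, the sketch below (using invented numbers, not data from the study) compares a model's predicted need against a measure of realized need within each group. Systematically higher realized need at the same predicted score is the signature of the underestimation described above.

```python
# Hypothetical audit sketch: compare predicted need against realized need,
# grouped by patient population. If one group's realized need is consistently
# higher at the same predicted score, the model underestimates that group.
import pandas as pd

# Column names and values are illustrative, not data from any real system.
df = pd.DataFrame({
    "group":          ["black", "black", "black", "white", "white", "white"],
    "predicted_need": [0.30, 0.30, 0.60, 0.30, 0.30, 0.60],
    "realized_need":  [0.55, 0.50, 0.85, 0.30, 0.35, 0.55],
})

# Mean realized need at each predicted-score level, per group.
calibration = (
    df.groupby(["group", "predicted_need"])["realized_need"]
      .mean()
      .unstack("group")
)
print(calibration)  # large gaps between columns suggest group-level miscalibration
```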

On a population level, biased or improperly used resource allocation algorithms can worsen existing inequities in health care. If an algorithm is less likely to recommend enhanced resources to Black and Brown patients, health systems may withhold more resources from already underserved patient populations.

Despite their potential for harm, these algorithms remain unregulated.

The Regulatory Gap

Unlike clinical algorithms, resource allocation algorithms cannot be regulated by the FDA as medical devices. These algorithms fall well outside the definition of a medical device because they address the provision and administration of health care rather than the diagnosis or treatment of disease.

Hospitals may decide to use resource allocation algorithms out of concern about biased clinician decision-making. Providers may prefer that an algorithm make these frequent but difficult decisions. Health systems may view these tools as impartial when in fact they can encode the existing biases in health care.

But without a regulatory framework, health care organizations are ill-equipped to quantify the risk of implementing these algorithms. Resource allocation algorithm vendors are not required to share development information with regulators or with purchasing health care organizations. Hospitals therefore may not know whether an algorithm was trained on a diverse data set or has been sufficiently tested. And despite their experience navigating the complex ethics of resource allocation, hospitals may still fail to catch algorithmically encoded bias.

Hospital Oversight

Without regulatory oversight, health care organizations should take particular care when implementing resource allocation algorithms. Health systems should work with algorithm developers to ensure that these tools were trained on diverse data sets. They should also ask developers to outline each algorithm's inputs and consider whether those factors could lead to bias. Finally, implementing health systems should test the algorithms on their own patient populations, paying particular attention to whether an algorithm over- or under-recommends additional resources for specific groups.
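As one concrete form such local testing might take, the hypothetical sketch below computes how often an algorithm recommends additional resources for each patient population. The field names and data are assumptions; a real audit would also condition on clinical need, since raw rate differences alone do not establish bias.

```python
# Sketch of one local test: compare how often an algorithm recommends
# enhanced resources across patient populations. Field names and values
# are assumptions, not any vendor's actual schema.
import pandas as pd

def recommendation_rates(df: pd.DataFrame) -> pd.Series:
    """Share of each group recommended for additional resources."""
    return df.groupby("group")["recommended"].mean()

audit = pd.DataFrame({
    "group":       ["black", "black", "white", "white", "hispanic", "hispanic"],
    "recommended": [0, 1, 1, 1, 0, 1],
})
print(recommendation_rates(audit))
# A large disparity between groups with similar clinical need warrants
# review before the algorithm is deployed.
```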

Health systems that implement resource allocation algorithms without sufficient oversight may worsen the inequities that they intend to alleviate.

Jenna Becker

Jenna Becker is a 2L at Harvard Law School with a background in health care software.
