By John Tingle
Artificial intelligence (AI) is making an impact on health law in England.
The growing presence of AI in law has been chronicled by organizations such as the Law Society, which published a horizon-scanning paper on artificial intelligence and the legal profession in 2018.
The report identifies several key emerging strands of AI development and use, including Q&A chatbots, document analysis, document delivery, legal adviser support, case outcome prediction, and clinical negligence analysis. These applications of AI already show promise: one algorithm developed by researchers at University College London, the University of Sheffield, and the University of Pennsylvania was able to predict case outcomes with 79% accuracy.
More recently, the NHS AI Lab Skunkworks published a report about a negligence claims prediction project. For this project, Skunkworks partnered with NHS Resolution, the NHS body whose wide remit includes managing clinical negligence claims against NHS hospitals and other members of its claims indemnity schemes.
In early 2021, the organizations began a rapid feasibility study to investigate whether machine learning could be used to predict the number of claims a trust or hospital is likely to receive, and to learn what drives these claims, in order to improve patient safety.
They developed a rapid delivery plan to:
- create a machine learning model to predict claims, and
- produce a code pipeline to prepare input data, and then train and run the chosen model.
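The two-part plan above — prepare input data, then train and run a model — can be sketched in a few lines. The sketch below is purely illustrative: it uses synthetic data, and the feature names (number of specialties, mean waiting time) and the choice of a Poisson regression model for claim counts are assumptions for the sake of example, not the project's actual design.

```python
# Minimal sketch of a claims-prediction pipeline: prepare data, train, predict.
# Synthetic data only; features and model choice are illustrative assumptions.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
n_trusts = 200

# Hypothetical per-trust features: specialties offered, mean waiting time (weeks).
X = np.column_stack([
    rng.integers(5, 40, n_trusts),   # number of specialties
    rng.uniform(2, 30, n_trusts),    # mean waiting time
])
# Synthetic annual claim counts, loosely tied to the features.
y = rng.poisson(0.5 * X[:, 0] + 0.3 * X[:, 1])

# "Code pipeline to prepare input data, then train and run the chosen model".
model = Pipeline([
    ("scale", StandardScaler()),
    ("poisson", PoissonRegressor()),  # count outcomes suit a Poisson model
])
model.fit(X, y)

# Predicted expected claims for a hypothetical trust: 20 specialties, 10-week waits.
pred = float(model.predict([[20, 10.0]])[0])
print(round(pred, 1))
```

A real deployment would, of course, draw on NHS Resolution's claims data rather than synthetic counts, and would need the validation, data-security, and governance steps the report describes.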
The report goes into some detail on automated machine learning and testing methods used, constraints that impacted the project, data security, outcomes, and next steps.
Some of the initial findings include:
- a positive correlation between the presence of specialties in a trust and the predicted rate of claims.
- a positive correlation between longer waiting times and the predicted rate of claims (this varied, however, with specialty).
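Correlations of the kind reported above can be expressed as a simple Pearson coefficient. The sketch below uses synthetic, hypothetical per-trust data to show the calculation; the variable names and figures are assumptions for illustration only.

```python
# Pearson correlation between waiting times and claim rates (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
waiting_weeks = rng.uniform(2, 30, 100)                    # hypothetical mean waits
claim_rate = 0.2 * waiting_weeks + rng.normal(0, 1, 100)   # loosely linked rates

r = np.corrcoef(waiting_weeks, claim_rate)[0, 1]
print(f"Pearson r = {r:.2f}")  # positive by construction in this synthetic example
```

In the real analysis the strength of this relationship varied with specialty, so a single pooled coefficient would mask specialty-level differences.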
The project provides a good foundation for further research in the area. That research may also draw on insights from a recent global report on AI in health issued by the World Health Organization (WHO).
The report offers six guiding principles for the design and use of AI in health:
- Protecting human autonomy.
- Promoting human well-being and safety and the public interest.
- Ensuring transparency, explainability and intelligibility.
- Fostering responsibility and accountability.
- Ensuring inclusiveness and equity.
- Promoting AI that is responsive and sustainable.
In section eight of the report, several key legal issues relating to AI and health are discussed, including the issues of fault and liability, and considerations for low- and middle-income countries.
AI is here to stay in patient safety and clinical negligence litigation in England, and elements of it are already in play. Keeping legal, ethical, and patient safety issues at the forefront is key, and the WHO guidance is a welcome fulcrum for further discussion.