From Regulation to Innovation: The Impact of the EU AI Act on XR and AI in Healthcare

By Marcelo Corrales Compagnucci

Extended Reality (XR) technologies like Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) are revolutionizing healthcare. These tools, powered by artificial intelligence (AI), are enhancing how medical professionals work across specialties such as cardiology, pharmacy, and neuroscience, improving precision and efficiency in ways previously unimaginable. Systems such as IBM Watson and DeepMind's models are already in use, with current applications in diagnosis, predictive analytics, and personalized treatment. Near-future advancements include AI in surgical robotics and real-time patient monitoring through wearables.

Looking further ahead, technologies like XR for brain health promise significant breakthroughs in neurological treatments. For example, neurosurgeons are using XR to visualize a patient’s brain in 3D from various angles, and XR hand controllers let them rehearse the procedure, allowing for precise preoperative planning. Beyond clinical applications, XR also enhances teamwork and patient education by providing clear, interactive visualizations of medical procedures, which help reduce patient anxiety and improve surgical outcomes.

However, integrating XR and AI into healthcare comes with ethical, legal, and liability challenges. The EU has responded with new regulatory frameworks that aim to manage these risks by establishing clear guidelines for AI and XR deployment. This includes addressing specific AI-related risks (such as bias, discrimination, and data poisoning), as well as ensuring data security and privacy and delineating liability for failures or harm caused by these technologies.

The EU AI Act: A Blueprint for Safe AI Systems

The EU’s AI Act, approved by the European Parliament in March 2024, sets a new benchmark for AI regulation with a risk-based classification system, ranging from unacceptable to minimal risk, that dictates correspondingly stringent safety measures and upholds fundamental rights. For high-risk AI systems, the Act sets rigorous standards, emphasizing robust data quality, detailed record-keeping, and clear user information to ensure transparency and accountability. It requires human oversight to allow interventions and mandates that systems be accurate and secure against cyber threats.

This human-centric framework is especially crucial in healthcare, where many XR solutions are categorized as high-risk because of their significant implications for patient care. These AI systems must therefore undergo strict quality management and conformity assessments before market deployment. For compliance, stakeholders must thoroughly identify and document risks, audit their systems continually, and keep abreast of technical standards, ensuring these tools are used safely and responsibly in the healthcare sector.

Understanding and Embracing the “Duality of Risk”

In the field of information systems and risk management, Claudio Ciborra introduced the concept of the “duality of risk,” which treats risk not only as a potential barrier but also as a catalyst for innovation. Grasping and addressing the complexities and risks of XR and AI in healthcare is therefore crucial for technological advancement and for adapting to the digital landscape. Effective management of these technologies requires understanding their specific risks, including the severity of an AI model’s potential impact and the likelihood that it causes harm. These assessments are vital: they guide regulatory measures and help formulate tailored risk management strategies.

Recommended International Standards and Frameworks to Manage Risks

To effectively manage and mitigate the risks posed by AI and XR technologies in healthcare, the following standards developed by the International Organization for Standardization (ISO) are recommended for stakeholders:

  • ISO/IEC 23053 provides a framework for describing AI systems that use machine learning, while ISO/IEC 23894 offers guidance on AI risk management.
  • ISO/IEC 42001 focuses on creating a management system for responsible AI use.
  • ISO 31000 and ISO/IEC Guide 51 set out broad risk management guidelines and integrate safety considerations across product lifecycles.

In addition, frameworks developed by the Institute of Electrical and Electronics Engineers (IEEE), such as IEEE 7000-2021, address ethical concerns in the design and development of AI systems. Similarly, the NIST AI Risk Management Framework (NIST AI RMF), created by the National Institute of Standards and Technology (NIST) in the U.S., offers a structured approach to managing risks associated with AI technologies. Both frameworks are highly recommended for ensuring responsible and ethical AI deployment.

Finally, HUDERIA (Human Rights, Democracy, and Rule of Law Impact Assessment) is a framework designed to ensure AI systems respect human rights and democratic principles. Developed by the Council of Europe’s Ad Hoc Committee on AI (CAHAI), it aims to standardize the assessment of AI’s societal impacts, promoting accountability and transparency. HUDERIA builds on traditional impact assessments to address the specific risks and benefits of AI technologies, ensuring they are ethically and legally compliant.

Incorporating these standards and frameworks into an organization’s AI Governance Framework is essential for effectively managing risks associated with AI technologies. This process involves documenting all AI applications and implementing a risk classification framework. Such a structured approach not only enhances risk management but also ensures compliance with ethical standards and regulatory requirements. As we incorporate advanced technologies, a one-size-fits-all solution is impractical. Each application requires a tailored approach, integrating various standards to address specific needs.
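
To make that concrete, here is a minimal sketch in Python of how an organization might document its AI applications against the AI Act’s risk tiers. The tier names reflect the Act’s classification; everything else (the class names, fields, and example entries) is hypothetical and purely illustrative, not a compliance tool.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    """The four tiers of the EU AI Act's risk-based classification."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # e.g., many medical and XR applications
    LIMITED = "limited"            # transparency obligations apply
    MINIMAL = "minimal"            # no additional obligations


@dataclass
class AIApplication:
    """One entry in a (hypothetical) organizational AI inventory."""
    name: str
    purpose: str
    tier: RiskTier
    risks: list[str] = field(default_factory=list)        # documented risks
    mitigations: list[str] = field(default_factory=list)  # planned controls

    def needs_conformity_assessment(self) -> bool:
        # Under the Act, high-risk systems must pass a conformity
        # assessment before market deployment.
        return self.tier is RiskTier.HIGH


# Illustrative entries only; classifying a real system requires legal analysis.
inventory = [
    AIApplication(
        name="XR surgical planning assistant",
        purpose="3D preoperative visualization and rehearsal",
        tier=RiskTier.HIGH,
        risks=["model bias", "data poisoning"],
        mitigations=["human oversight", "audit logging"],
    ),
    AIApplication(
        name="patient education chatbot",
        purpose="interactive explanation of upcoming procedures",
        tier=RiskTier.LIMITED,
        risks=["inaccurate explanations"],
        mitigations=["disclose that the user is interacting with AI"],
    ),
]

for app in inventory:
    status = ("conformity assessment required"
              if app.needs_conformity_assessment()
              else "standard controls")
    print(f"{app.name}: {app.tier.value} risk -> {status}")
```

In practice, such an inventory would feed the quality management, audit, and conformity assessment processes described above; the point is simply that recording each application with its tier, documented risks, and mitigations is a tractable first step toward a governance framework.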

Legal Reforms on AI Liability and Alignment with the Medical Device Regulation

In addition to establishing effective governance and risk management for XR and AI in healthcare, it is crucial to stay abreast of evolving legal liability rules. The legal landscape regarding the applicability of liability law to AI is changing rapidly. The EU has introduced two proposals aimed at simplifying liability and compensation processes for AI-related harm: the revision of the Product Liability Directive and the AI Liability Directive. These reforms streamline liability claims and establish clearer accountability for AI-generated issues. They are critical for providing clear pathways to compensation and for ensuring rigorous compliance with safety standards.

While the European Commission believes that these proposed directives modernize liability rules, ensuring robust consumer protection and encouraging innovation and investment in AI technologies, some scholars argue they fall short of providing comprehensive clarity and uniformity. Notably, these directives leave potential liability gaps, especially for injuries caused by opaque, black-box medical AI systems.

Additionally, it is yet to be determined how the new AI Act will synchronize with other legislative proposals like the European Health Data Space (EHDS), and existing EU regulations such as the General Data Protection Regulation (GDPR) and the Medical Device Regulation (MDR). These frameworks need to be harmonized to prevent overlap and confusion, ensuring that AI-driven medical technologies remain both innovative and safe.

The potential global influence of the EU’s regulatory framework, often referred to as the “Brussels effect,” may shape the future trajectory of AI development worldwide. As we continue to integrate these technologies, maintaining a focus on human-centric designs will be essential to build trust and ensure the ethical deployment of AI.

Marcelo Corrales Compagnucci is Associate Professor & Associate Director at the Center for Advanced Studies in Bioscience Innovation Law (CeBIL), Faculty of Law, University of Copenhagen in Denmark; and Inter-CeBIL Research Affiliate, Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School. 
