AI in the NHS: Can the Law Keep Up?
By Jacqueline Anderson – posted on 23 April 2025
Jacqueline recently graduated from our LLM in Internet Law and Policy and has a special interest in the law around artificial intelligence (and related emerging technologies) in healthcare. In this post, she considers some topical issues in the field.
The AI Opportunity in Healthcare
Imagine your next medical diagnosis is shaped not just by your doctor’s expertise, but by an algorithm trained on millions of data points. Sounds futuristic? In parts of the NHS, it’s already happening.
The UK Government’s AI Opportunities Action Plan (January 2025) signals an ambitious push to lead global innovation in artificial intelligence. Healthcare is a key target, with promises of faster diagnoses, personalised treatments, and more efficient service delivery.
And yet, despite a thriving health-tech sector, adoption of AI in the NHS remains limited. Why?
The answer lies in law and governance.
The potential of AI to transform the NHS has captured the imagination of policymakers, and attracted considerable resources, driving the development of ever more powerful solutions. However, this is being met with growing concern over how to manage novel risks.
AI slips through the cracks of existing legal frameworks. It blurs lines of accountability and strains the regulatory frameworks that do apply, specifically medical device and data protection rules.
The most pressing legal challenges of AI, however, lie in its transformative potential.
AI can disrupt established relationships and practices, raising novel questions about safety and fairness that can undermine fundamental principles of ethics and human rights.
Unlocking AI’s full potential in healthcare requires stronger legal foundations; these are a prerequisite for the governance frameworks needed to support adoption.
This post explores the key legal and ethical challenges and considers what must change to close the governance gap.
Barriers to Adoption: The Governance Gap
Realising the benefits of AI in the NHS requires widespread adoption and effective integration with established clinical practice. This requires the trust of clinicians and policymakers, which can only be achieved by establishing clear governance frameworks.
This is challenging because AI operates in ways that are fundamentally different from conventional software, creating ambiguity over the applicability of existing legal and regulatory frameworks.
AI’s adaptability and autonomy create legal uncertainty, as existing laws and regulations are not adequate to govern its unique risks.
The opacity of AI decision-making raises concerns over transparency, accountability, and explainability—key principles necessary for ensuring patient safety and public trust.
The lack of clear regulatory definitions prevents effective oversight, while AI’s reliance on large datasets challenges fundamental principles of data protection law, such as data minimisation and informed consent.
The greatest risk presented by AI, however, is its potential to influence clinical decision-making.
AI-enabled decision support effectively delegates aspects of the decision-making process in ways that are poorly understood and not adequately captured by existing regulations. This presents clear risks to patient safety and ethical practice, with implications for human rights protections and pressing legal consequences.
The AI Accountability Gap has particular legal implications in healthcare, creating uncertainty over clinician liability—with clear consequences for medical negligence law.
AI in healthcare presents novel risks with direct legal consequences, particularly in the application of human rights law, medical device regulation (MDR), and data protection law (GDPR).
Legal certainty is essential to enable safe adoption. This requires systematic consideration of AI’s impact on existing healthcare laws.
The Current Legal Landscape
There is no dedicated ‘Law of AI’ in the UK. Governance of AI in healthcare must therefore look to existing legal frameworks:
- Human Rights and Equalities Law: these are the main instruments that ensure patient safety and give legal effect to principles of medical ethics.
- Medical Device Regulations (MDR): these classify AI-based medical tools as Software as a Medical Device (SaMD).
- UK GDPR: the dependency of AI on large datasets places its regulation firmly within the remit of data protection law.
However, these frameworks are not fully equipped to address AI’s unique challenges.
Legal Challenges of AI in Healthcare
Impact on Decision-Making and Human Rights
The greatest risk of AI in a healthcare setting is its potential to influence clinical decision-making.
While decision support is a source of value, the potential to influence decisions in ways that are poorly understood is the basis of fundamental ethical and legal concerns.
The clinician-patient relationship is governed by regulatory frameworks that ensure medical ethics is embedded in all interactions and decision-making processes. AI’s unique capabilities can disrupt these dynamics, with legal implications for the application of the Human Rights Act 1998 and the Equality Act 2010.
Issues include:
- Lack of transparency over how AI outputs are generated.
- Potential for bias leading to discriminatory decisions. In a modern healthcare system this can result in inaccurate diagnoses or sub-standard care for individuals. Deployed at scale, these harms are amplified: biased or inaccurate AI systems could exclude patients from care entirely, and discriminatory policies could affect very large groups of people. Without adequate procedures to monitor and manage these processes, such harms may go undetected.
- Legal accountability for AI-driven decisions: Opacity and autonomy inherent in AI systems raise clear concerns over accountability.
Regulatory Gaps in Medical Device Law
MDRs serve as the gateway that determines which medical devices are safe for clinical use. This includes AI, which is classified and regulated as a software medical device.
However, AI’s autonomous and adaptive nature creates ambiguity over core definitions that determine risk classification and subsequent regulatory requirements.
The limitations of the MDR in regulating AI have prompted concerns that large numbers of AI medical devices have been approved for use without sufficient oversight and on the basis of inappropriate risk criteria.
Furthermore, MDRs are designed for static software: under current procedures, substantial changes to software require new approvals, whereas AI requires new procedures to manage continual change. The MHRA is currently working to address this regulatory challenge.
Data Protection and AI Ambiguities
UK GDPR governs all processing of personal data, including AI. However, AI challenges data protection law by requiring massive training datasets and processing them in ways traditional safeguards weren’t designed for. This has direct implications for key data protection principles:
- Defining personal data: AI often infers sensitive insights beyond explicit data inputs.
- Data minimisation: AI models require large datasets, conflicting with GDPR’s minimisation principle.
- Automated decision-making (ADM): AI’s opaque decision-making undermines GDPR’s requirement for meaningful human oversight.
- Data subject rights: AI creates challenges in ensuring rights to explanation and gives rise to risks of ‘profiling’, which is subject to strict restrictions under GDPR.
Bottom line? The government’s reliance on existing laws to regulate AI is insufficient. Legal and ethical use of AI requires a new approach to how existing law is applied.
Key Governance Challenges
AI’s technical capabilities present distinct challenges for law and regulation:
Opacity (Black-Box AI)
Many AI models, particularly deep learning systems, lack explainability, making it difficult to determine how decisions are made. This raises legal and ethical concerns about transparency and accountability.
Autonomy
AI models operate with varying degrees of independence. When combined with opacity, this creates risk as neither users nor developers can fully assess AI-generated outputs. AI’s autonomy in training further complicates regulatory oversight.
Data Quality and Bias
AI accuracy depends on high-quality, representative training data. Inaccurate or biased data can lead to misdiagnoses, incorrect treatments, or exclusion of certain patient groups from care. For example, AI-enabled tools for detecting skin cancer may be less effective for minority ethnic patients because lighter skin tones are disproportionately represented in the training data, meaning these systems may work better for white patients.
Scale of Harm
AI’s benefits—and risks—are amplified at scale. Poorly governed AI could introduce systemic harm across entire populations, making strong regulation essential.
EU AI Risk Framework
The EU Parliament’s 2022 report on AI in healthcare identified seven key risks, including patient harm, bias, transparency issues, and accountability gaps. The UK must address similar risks to ensure AI adoption aligns with safety and ethical standards.
Data Governance Challenges: Accuracy and Bias
AI outputs depend on reliable datasets. However, weaknesses in NHS datasets are well documented:
- Historical underrepresentation of certain groups.
- Bias in clinical coding and diagnostic practices.
- Incomplete environmental and social health determinants.
Poor data governance results in discriminatory AI outcomes and systemic inequities in care. In the NHS, outdated digital infrastructure and regulatory uncertainty hinder access to high-quality datasets for AI training, complicating compliance with Data Protection laws.
Human-AI Interaction: Liability and Accountability
AI integration raises legal uncertainties over accountability for clinical decisions. Clinicians may hesitate to override AI recommendations, particularly when AI’s decision-making process is opaque, creating liability concerns under MDR and GDPR, as existing risk classifications are not suited to autonomous AI systems.
The World Health Organization (WHO) has warned that Large Language Models (LLMs) introduce additional risks by mimicking human expertise, misleading users into over-reliance on unreliable AI-generated information.
Ethical and Legal Considerations
Healthcare law is rooted in biomedical ethics, which underpins UK regulatory standards. AI challenges the four core principles:
- Autonomy: Opaque AI undermines patient consent and understanding.
- Beneficence: Biased or erroneous AI outputs compromise best-interest decisions.
- Non-maleficence: AI errors pose risks of patient harm.
- Justice: AI-driven decision-making could exacerbate inequalities in resource allocation.
Existing legal frameworks, such as the Oviedo Convention, Human Rights Act 1998, Equality Act 2010, and Data Protection Act 2018, give legal effect to ethical principles. However, these frameworks are challenged by novel features of AI.
Bridging Legal and Ethical Gaps
Managing the legal and ethical risks of AI in healthcare requires a fresh approach. The question is not ‘what are the risks?’ but rather ‘how can we begin to identify and understand the risks?’.
A good place to start is the framework developed by Mittelstadt and Morley that classifies three sources of risk:
- Epistemic risks: Flawed AI outputs from biased or opaque processes.
- Normative risks: Unfair outcomes and worsened inequalities.
- Traceability risks: Lack of auditability in AI decision-making.
These risks are then analysed at levels of ‘abstraction’ corresponding to where decision-making takes place:
- Patient-level (individual autonomy and informed consent)
- Doctor-patient relationship (trust and professional accountability)
- Cohort-level (population-wide equity)
- Institutional governance (NHS policies and oversight)
- Public trust (societal impact of AI in healthcare)
- Regulatory sector (alignment with wider health-tech frameworks)
AI’s integration will reshape healthcare relationships and decision-making. Legal certainty and robust ethical oversight are essential to maintaining public trust.
UK-Specific Challenges: Post-Brexit Regulatory Divergence
Brexit adds further uncertainty to AI regulation in the UK. Key issues include:
- Divergence from EU Medical Device Regulation, affecting AI product approval.
- Potential UK GDPR changes, as the government explores “pro-innovation” regulatory frameworks.
- Lack of a unified AI governance framework, forcing NHS bodies to navigate overlapping regulations from the Medicines and Healthcare products Regulatory Agency (MHRA), the Information Commissioner’s Office (ICO) and the (English) Care Quality Commission (CQC).
Legal uncertainty over how existing law applies to AI creates clear risks to safety and ethical practice, preventing development of clear regulatory frameworks. This will persist as a crucial barrier to AI adoption.
The growing governance gap must be resolved to allow the NHS to realise the benefits AI can bring. Aside from safety and ethical risks, a lack of clear governance will eventually deter investment and undermine the achievements of the NHS innovation ecosystem.
Closing the Governance Gap: What Comes Next
AI has the potential to transform the NHS—but transformation without trust is impossible.
As this post has outlined, current legal frameworks—data protection, medical device regulation, human rights law—are not designed for autonomous, adaptive, opaque systems. And without clear rules, clinicians, developers, and regulators are left in a grey zone of risk.
To move forward, the UK urgently needs a health-specific AI governance framework that:
- Defines accountability and liability.
- Embeds ethical principles into real-world regulatory processes.
- Ensures transparency and auditability of AI decisions.
- Guards against bias and protects patient rights.
Without legal certainty, AI will remain a promising but underused tool in UK healthcare. The challenge now is not just to identify risks, but to act on them, while the opportunity to shape safe and effective AI is still within reach.