Artificial intelligence is making waves in health care, offering the potential to improve patient outcomes, streamline administrative tasks, and enhance decision-making. But as AI embeds itself into hospitals and health systems, legal and ethical questions remain murky. While health care is a heavily regulated industry, AI sits in an evolving gray area where the lines between acceptable and unacceptable use are still being drawn.
Dr. Danny Tobey, a physician-lawyer and global co-chair of DLA Piper’s AI and data analytics practice, warns that while AI presents risks, ignoring it could be even more dangerous. “There is a risk in not adopting,” Tobey told Newsweek. “To throw out the baby with the bathwater is a big mistake.”
The Patchwork of AI Regulations in Health Care
Hospitals and health systems aren’t just dealing with one set of AI rules—they’re facing an overlapping web of federal, state, and international regulations.
- The U.S. Food and Drug Administration (FDA) is setting guidelines for AI-powered medical devices.
- The Department of Health and Human Services (HHS) is focusing on patient data privacy in AI applications.
- State attorneys general are taking their own stance, often adding another layer of scrutiny.
- Global bodies like the European Union are crafting AI laws that could impact multinational health care organizations.
This patchwork approach leaves health leaders in a difficult spot. Too many regulations make compliance a headache, yet too little guidance creates uncertainty. AI in health care is advancing faster than lawmakers can keep pace, leaving hospitals wondering whether their AI tools will land them in legal trouble.
AI in Health Care: A Lawsuit Waiting to Happen?
AI’s ability to generate human-like responses and solve complex problems is both its greatest strength and its biggest liability. The technology is probabilistic—it makes educated guesses rather than definitive decisions. That’s a nightmare for a risk-averse industry like health care, where a single misdiagnosis or flawed recommendation could lead to lawsuits.
Some of the most common legal issues include:
- Algorithmic discrimination: AI models trained on biased data may lead to unfair patient treatment, triggering discrimination lawsuits.
- “Hallucinations”: AI systems sometimes generate false information with confidence, creating risks for medical decision-making.
- Lack of transparency: Many AI models operate as “black boxes,” making it hard for hospitals to explain why an AI-driven decision was made.
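The first of these risks, algorithmic discrimination, is one that hospitals can at least begin to measure. As a purely illustrative sketch (the group labels, decision data, and the 0.8 review threshold below are assumptions for the example, not a legal standard), an audit might compare a model's approval rates across patient groups:

```python
# Hypothetical bias audit: compare a model's favorable-decision rates
# across patient groups. Groups "A"/"B" and the data are illustrative.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group's approval rate to the highest's."""
    return min(rates.values()) / max(rates.values())

decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]

rates = approval_rates(decisions)   # {"A": 0.75, "B": 0.25}
ratio = disparate_impact_ratio(rates)
if ratio < 0.8:  # illustrative review threshold, not a legal test
    print(f"Disparity flagged for review: ratio = {ratio:.2f}")
```

A real governance program would run checks like this on far richer data and pair the numbers with clinical and legal review; the point of the sketch is simply that disparities can be surfaced before a plaintiff's lawyer surfaces them.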
DLA Piper has already defended major lawsuits over AI “hallucinations” and algorithmic bias. The firm is also working with health systems to create governance structures that can help prevent legal issues before they start.
Build or Buy? The AI Dilemma for Health Systems
Hospitals have two main choices when implementing AI: build their own models or buy solutions from vendors. Neither option is risk-free.
- In-house development: Allows customization but requires strong internal expertise and oversight.
- Vendor solutions: Faster to deploy but may not be tailored for a specific hospital’s needs, leading to unintended consequences.
Tobey emphasizes that the real issue isn’t whether to build or buy—it’s whether hospitals have strong AI governance in place. Without proper oversight, either approach can lead to compliance failures and litigation risks.
The Unseen Costs of AI: Why Governance Matters
Many hospital executives are surprised to learn that AI’s biggest cost isn’t development—it’s governance. AI tools are often cheap to implement, but ensuring their safety, accuracy, and compliance requires serious investment.
AI governance requires:
- Commitment from leadership: Boards and executives must make AI oversight a priority.
- Dedicated budget: Compliance, safety testing, and security investments are critical.
- A multi-disciplinary approach: AI governance teams should include doctors, data scientists, lawyers, and ethicists.
- Frequent testing: AI models can drift over time, requiring ongoing validation to prevent errors.
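The last point, drift, lends itself to a concrete illustration. One simple approach (sketched below with an assumed window size and accuracy threshold; real clinical thresholds would be set by the governance team) is to track a model's accuracy over a rolling window of labeled outcomes and flag it for revalidation when performance sags:

```python
# Hypothetical drift monitor: rolling accuracy over recent labeled outcomes.
# Window size and minimum accuracy are illustrative, not clinical standards.
from collections import deque

class DriftMonitor:
    def __init__(self, window=100, min_accuracy=0.90):
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect
        self.min_accuracy = min_accuracy

    def record(self, prediction, actual):
        self.outcomes.append(1 if prediction == actual else 0)

    def accuracy(self):
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else None

    def drifted(self):
        acc = self.accuracy()
        return acc is not None and acc < self.min_accuracy

monitor = DriftMonitor(window=5, min_accuracy=0.8)
for pred, actual in [(1, 1), (0, 0), (1, 0), (1, 1), (0, 1)]:
    monitor.record(pred, actual)

print(monitor.accuracy())  # 0.6 over the last five outcomes
print(monitor.drifted())   # True -> trigger revalidation
```

The mechanism matters more than the specifics: because the window is rolling, a model that passed validation at deployment can still trip the alarm months later as patient populations or documentation practices shift.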
AI isn’t just another hospital technology—it’s a fundamental shift in how health care operates. Without strong governance, hospitals risk exposing themselves to lawsuits, reputational damage, and regulatory penalties.
Hospitals Must Strike a Balance
Tobey warns against over-indexing on risk, reminding health care leaders that the industry is already filled with human errors, information silos, and inconsistencies. AI has the potential to improve access to care, reduce physician workload, and enhance diagnostic accuracy—but only if it’s implemented responsibly.
Health care leaders can’t afford to ignore AI’s risks, but they also can’t afford to avoid AI altogether. The legal gray zone surrounding AI won’t be resolved overnight, but hospitals that take proactive steps toward governance and compliance will be in the best position to navigate whatever regulations come next.