
AI risk and health care third parties: Addressing exposure across operations

How health systems can keep pace as risk complexity increases

April 14, 2026

Key takeaways

AI has become a consistent component of health care technology.


Health care providers must govern AI risk across the full third-party lifecycle.

Ongoing monitoring and proper oversight are critical as AI capabilities change and vendor risk evolves.


Artificial intelligence is no longer an emerging capability in health care. It is a standard feature embedded across the technology ecosystem that supports clinical, operational and administrative functions. As use cases accelerate, health care organizations are increasingly exposed to AI-related risks that originate from third-party vendors that design, operate and continuously update AI-enabled products.

This shift raises a critical question for health care leaders: How can organizations govern AI risks, particularly those driven by third-party relationships, before those risks are introduced into the environment?

AI is already embedded across the health system

AI has become a consistent component of health care technology, particularly within third-party solutions that organizations already rely on. AI capabilities are often introduced through vendor upgrades or feature enhancements, sometimes without clear visibility into how models operate, what data they ingest or how outputs are generated. These capabilities are not limited to proprietary tools; they are increasingly embedded across the health system, and include:

  • Medical devices and clinical systems, such as diagnostic and treatment tools that leverage AI to support decision making
  • Ambient listening solutions used in clinical settings to capture and summarize patient interactions
  • Revenue cycle management platforms that apply AI to coding, billing and claims processes
  • Patient scheduling and access tools designed to optimize workflows and provide a seamless patient experience
  • Chatbots and virtual assistants embedded within electronic health records (EHRs), supporting patient engagement and administrative tasks

Because many of these solutions process protected health information (PHI) or personally identifiable information (PII), the potential impact of AI failures, data misuse or model errors is significantly greater in health care than in many other industries.

Why third-party AI risk matters now

Third-party AI risk demands attention now because the speed, scale and opacity of AI adoption have fundamentally changed the risk landscape. Unlike traditional software, AI systems evolve rapidly as data inputs change, models retrain and updates are deployed more frequently, often with broader downstream impact.

As a result, AI risk can no longer be managed as a set of special use cases or pilot programs overseen by a center of excellence or committee. It must be treated as standard operating practice: embedded into core governance, risk management and third-party oversight processes, and considered part of each vendor's aggregated risk exposure.

The business case for proactive AI governance

Proactive AI governance is not simply a compliance exercise; it is a strategic imperative. Organizations that fail to account for AI-driven third-party risk may face increased exposure to privacy incidents, patient safety concerns, regulatory scrutiny and operational disruption. Conversely, organizations that integrate AI risk into their governance and third-party risk management (TPRM) frameworks are better positioned to adopt innovation responsibly while maintaining trust with patients, regulators and business partners.

Evaluating AI risk across the TPRM lifecycle

Managing AI risk requires a lifecycle-based approach that extends beyond initial vendor onboarding. Leading organizations are embedding AI considerations across each stage of the TPRM lifecycle.

Planning
Effective AI risk management begins before a solution is introduced. Organizations should clearly define the intended use case and assess inherent risk, including whether the AI solution processes PHI or PII and whether it aligns with the organization’s risk appetite. This stage is especially important when vendors introduce AI capabilities through product enhancements rather than new contracts.
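The inherent-risk screening described above can be sketched as a simple tiering rule. The factors, weights and thresholds below are illustrative assumptions, not a prescribed methodology:

```python
# Illustrative inherent-risk tiering for a third-party AI solution.
# The factors and tier thresholds are hypothetical examples used to
# scope due diligence, not a standard scoring model.

def inherent_risk_tier(processes_phi: bool,
                       supports_clinical_decisions: bool,
                       patient_facing: bool) -> str:
    """Return a coarse risk tier used to scope due diligence."""
    score = sum([
        2 if processes_phi else 0,                # PHI/PII exposure weighs heaviest
        2 if supports_clinical_decisions else 0,  # patient-safety impact
        1 if patient_facing else 0,               # e.g., chatbots, scheduling tools
    ])
    if score >= 4:
        return "high"       # full AI-specific due diligence before onboarding
    if score >= 2:
        return "moderate"   # targeted assessment of AI components
    return "low"            # standard third-party review

# An ambient listening tool that captures patient interactions:
print(inherent_risk_tier(processes_phi=True,
                         supports_clinical_decisions=False,
                         patient_facing=True))  # → "moderate"
```

A rule like this is most useful when vendors introduce AI through product enhancements: rerunning the screen on an existing vendor can trigger a fresh review without a new contract.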

Due diligence
Traditional third-party due diligence is no longer sufficient when AI is involved. In addition to assessing core risk domains, organizations should perform AI-specific assessments that address how models are trained and validated, data ownership, data quality assurance practices, bias identification and mitigation strategies, and PHI/PII redaction or de-identification methods.

Health care organizations should also seek transparency through an AI bill of materials that identifies model dependencies, open-source components, APIs and downstream services, including nth-party relationships. Security controls specific to AI components—such as audit logging, encryption and testing—should be evaluated alongside broader cybersecurity controls. Finally, organizations should assess AI operational readiness, including system maintenance, update processes, and vendor-provided training and support.
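There is no single standard format for an AI bill of materials; the sketch below shows the kind of inventory an organization might request from a vendor, with hypothetical field names and placeholder values:

```python
# Hypothetical AI bill of materials for a vendor-supplied solution.
# Field names and values are illustrative; no standard schema is implied.

ai_bom = {
    "solution": "Example ambient listening service",    # placeholder product
    "models": [
        {
            "name": "speech-to-text-model",             # placeholder identifier
            "provider": "fourth-party-vendor",          # nth-party relationship
            "training_data_sources": ["vendor-licensed corpora"],
            "phi_used_in_training": False,
        },
    ],
    "open_source_components": ["example-nlp-library"],  # license review inputs
    "apis_and_downstream_services": ["transcription-api"],
    "security_controls": {
        "audit_logging": True,
        "encryption_at_rest": True,
        "encryption_in_transit": True,
    },
}

# A simple completeness check before accepting a vendor's due diligence response:
required = {"models", "open_source_components",
            "apis_and_downstream_services", "security_controls"}
missing = required - ai_bom.keys()
print("complete" if not missing else f"missing: {sorted(missing)}")  # complete
```

Even a minimal inventory like this makes nth-party dependencies and training-data practices reviewable items rather than unknowns.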

Contracting: a governance control, not a legal afterthought
Contracts play a critical role in governing third-party AI risk and should extend well beyond executing a business associate agreement (BAA). Strong agreements clearly define shared responsibilities, enforceable obligations and measurable service level agreements related to AI performance, security and compliance.

Key contractual considerations include data ownership and permitted uses, protections for data used to train models, security requirements such as access controls and audit rights, termination protections and end-of-life support. In health care, BAAs may require enhancement to explicitly address AI, such as prohibiting the use of PHI to train public models, clarifying breach notification timelines specific to AI systems and flowing requirements down to subcontractors.

Ongoing monitoring
AI systems do not remain static. Models can drift as data inputs change, and vendors may push updates more frequently than with traditional software. As a result, organizations should move beyond point-in-time assessments toward continuous monitoring. This includes retrofitting appropriate due diligence for existing vendors with newly enabled AI features, monitoring performance and service-level metrics, tracking security control effectiveness and conducting periodic reassessments. Given the pace of adoption and the frequency of software updates, AI systems warrant more frequent monitoring than standard technology, keeping the health system proactive in identifying and mitigating emerging risks.

Termination
AI risk does not end when a vendor relationship ends. Organizations should establish clear end-of-life management plans, including transition strategies, data destruction requirements and vendor cleanup activities. Without defined termination controls, organizations may retain residual AI risk long after a contract expires.

Strengthening oversight

As AI becomes embedded across the health care technology ecosystem, leaders can take several practical steps to strengthen oversight and reduce third-party risk:

  • Gain visibility into AI use across the environment. Start by identifying where AI is already embedded within third-party solutions, particularly those processing PHI or supporting clinical workflows.
  • Integrate AI into existing governance and TPRM processes. Treat AI as business as usual by embedding AI-specific considerations into planning, due diligence, contracting and ongoing monitoring.
  • Enhance due diligence and monitoring for AI-enabled vendors. Go beyond traditional assessments to address model training, data use, bias, security controls and model updates, especially for existing vendors that have introduced AI through product enhancements.
  • Use contracts as an active governance tool. Ensure agreements clearly define data ownership, permitted AI uses, security obligations and end-of-life requirements, rather than relying solely on standard BAAs.

As AI becomes embedded across the health care ecosystem, managing third-party AI risk can no longer be treated as an exception. Organizations that integrate AI governance into their TPRM programs, and make it part of everyday operations, will be better equipped to balance innovation with resilience. The path forward is not about slowing adoption, but about ensuring AI risk management evolves at the same pace as technology itself. 

RSM contributors

  • Amy Feldman, Director, Risk Consulting
  • Lenny Levy, Managing Director
