A New Intermediary: Artificial Intelligence and the Learned Intermediary Doctrine
February 24, 2025
Artificial intelligence (AI) is an emerging tool in healthcare settings, altering the relationships between drug manufacturers, physicians, and patients.[1] Its use in medicine will likely reshape the medical-legal framework, particularly the learned intermediary doctrine. That doctrine, a crucial defense in many products liability cases, establishes that a pharmaceutical manufacturer's duty to warn extends only to physicians, not directly to consumers.[2] It rests on the premises that physicians possess specialized knowledge to evaluate drug risks and benefits, maintain direct relationships with patients, and are best positioned to convey personalized medical advice, including warnings.[3] But AI's increasing participation in medical decision-making raises complex questions about the application of the doctrine.
As AI systems become more sophisticated in diagnosing conditions and recommending treatments,[4] they may arguably operate as de facto intermediaries between drug manufacturers and patients. Already, AI tools can analyze patient data, suggest medication options, and predict potential adverse reactions.[5] In some cases, AI is more accurate than human physicians, and some medical AI tools allow patients to avoid speaking directly to a physician altogether.[6]
Legally, drug manufacturers can and should continue to rely on the learned intermediary doctrine in failure-to-warn cases. But where AI replaces the physician's role, should the AI itself be considered a learned intermediary?
When AI systems provide direct-to-patient recommendations, the chain of communication envisioned by the learned intermediary doctrine becomes less clear. With today's black-box AI systems, neither physicians nor manufacturers can fully explain how the AI reaches its conclusions. Given AI's notorious difficulty citing its sources reliably,[7] who bears the responsibility if it fails to account for drug interactions or misses crucial patient-specific contraindications?
The traditional learned intermediary doctrine must expand to accommodate scenarios where AI systems function alongside human physicians in the decision-making process. This could conceivably strengthen the doctrine. After all, the doctrine is predicated on the assumption that doctors are more knowledgeable about prescription medications than consumers; if so, the same rationale would apply to doctors assisted by AI, and potentially to AI acting on its own. If an AI system and a human physician reach the same recommendation or offer similar guidance, for instance, the rationale behind the defense is only reinforced.
But in a hybrid decision-making model, the responsibilities of physicians, AI systems, and drug manufacturers must be explicitly delineated. For example, if drug manufacturers expect AI systems to process and interpret drug information directly from their databases, they may need to provide warnings in formats and locations optimized for machine consumption, and AI systems, in turn, must clearly disclose their capabilities and limitations. One possibility is sketched below.
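To make the idea concrete, consider what a warning "optimized for AI consumption" might look like. The following is a minimal, hypothetical sketch in Python; the schema, field names (drug_name, interactions, and so on), and helper function are invented for illustration and do not reflect any actual manufacturer database, product label, or regulatory standard.

```python
# A hypothetical sketch of a machine-readable drug warning.
# The structure and field names are invented for illustration; they do not
# correspond to any existing manufacturer schema or regulatory format.
from dataclasses import dataclass, field


@dataclass
class StructuredWarning:
    """A drug warning formatted for programmatic consumption by an AI system."""
    drug_name: str
    severity: str                                              # e.g., "boxed", "serious", "moderate"
    contraindications: list[str] = field(default_factory=list)
    interactions: list[str] = field(default_factory=list)
    patient_factors: list[str] = field(default_factory=list)   # e.g., pregnancy, renal impairment


def applicable_warnings(warning: StructuredWarning, patient_meds: list[str]) -> list[str]:
    """Return the labeled interactions that match a patient's current medications."""
    current = {m.lower() for m in patient_meds}
    return [drug for drug in warning.interactions if drug.lower() in current]


if __name__ == "__main__":
    warning = StructuredWarning(
        drug_name="ExampleDrug",        # placeholder, not a real product
        severity="serious",
        contraindications=["hypersensitivity to ExampleDrug"],
        interactions=["warfarin", "ketoconazole"],
        patient_factors=["severe hepatic impairment"],
    )
    # An AI intermediary could flag overlaps between labeled interactions
    # and a patient's medication list before recommending the drug.
    print(applicable_warnings(warning, ["Warfarin", "metformin"]))  # -> ['warfarin']
```

The point of a structured format like this is not the particular fields but the auditability: when a warning is machine-readable, it becomes possible to show after the fact whether the AI system received, parsed, and acted on the manufacturer's warning, which is exactly the allocation-of-responsibility question the hybrid model raises.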
In conclusion, challenges and opportunities abound for the learned intermediary doctrine in the face of medical artificial intelligence. But with a thoughtful approach, the doctrine can continue to prioritize patient safety without stifling medical innovation.
[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC10517477/
[2] Reyes v. Wyeth Lab'ys, 498 F.2d 1264, 1276 (5th Cir. 1974).
[3] Id.
[4] https://www.mgma.com/articles/artificial-intelligence-in-diagnosing-medical-conditions-and-impact-on-healthcare
[5] https://www.alation.com/blog/ai-healthcare-top-use-cases/
[6] https://www.news-medical.net/news/20241106/AI-outperforms-doctors-in-diagnostics-but-falls-short-as-a-clinical-assistant.aspx
[7] https://sites.usc.edu/graduate-writing-coach/ai-writing-and-attribution-ai-cannot-cite-anything/
This entry has been created for information and planning purposes. It is not intended to be, nor should it be substituted for, legal advice, which turns on specific facts.