Article
Key Corporate Governance Strategies for Mitigating AI-related Legal Risks in the Pharmaceutical Industry
September 4, 2024
This article was originally published in Bloomberg Law. Reprinted with permission. Any opinions in this article are not those of Winston & Strawn or its clients. The opinions in this article are the authors’ opinions only.
Generative artificial intelligence (AI) is a salient topic for industry leaders, with nearly 75% citing it as a top C-suite and board priority in a recent Bain & Company study. This finding comes as AI is poised to transform conventional pharmaceutical drug development. Although pharmaceutical companies have historically used early forms of machine learning in drug development, companies are increasingly adopting AI because it is a versatile, advanced tool that can yield significant monetary benefits and time savings. Nevertheless, AI-enabled drug development presents substantial, unprecedented legal risks because AI is a novel, rapidly evolving technology that remains largely unregulated. As of the date of this publication, there are no U.S. federal regulations governing the use of AI in the private sector, including FDA regulations, and the European Union’s AI Act will only become effective in 2026.
Although AI can be deployed across the drug development lifecycle, it has primarily been used in the drug discovery, preclinical testing, and clinical trial phases (collectively, clinical development). Since 2016, the FDA has received over 300 submissions that incorporate AI, with the vast majority utilizing AI in clinical development. This article provides an overview of AI-enabled clinical development, including key benefits and legal risks, and offers practical recommendations to mitigate these risks, including a novel, three-tiered, bottom-up corporate governance framework.
Overview
Pharmaceutical Drug Development is Risky Business
Beset by inefficiencies and failures, drug development is a costly and protracted undertaking. According to McKinsey & Company, it takes on average 10-15 years for a drug candidate to progress from the laboratory bench to a patient’s bedside, and success is rare, with the Congressional Budget Office estimating that only 12% of drugs entering clinical trials receive regulatory approval. EY Parthenon estimates that the cost of developing one new drug may be as high as $2 billion. Accordingly, it is no surprise that companies spend roughly one-fifth of their revenues on drug development alone, as reported by McKinsey & Company.
AI Can Aid Clinical Development
In the drug discovery phase, companies deploy AI for target identification, target validation, hit generation, and lead optimization. AI can streamline the search for suitable compounds for new drug candidates by quickly and efficiently screening chemical databases. AI can also be used to rapidly discover new purposes for existing drugs (drug repurposing), a process that typically takes years. For example, Genentech used AI to identify the potential repurposing capability of an experimental drug in just nine months. With respect to target validation, AI can aid in the discovery of new molecules for a specific disease by searching the chemical landscape—a universe too expansive to be explored by humans alone. Regarding hit generation, AI can expedite the process by rapidly identifying proteins that could produce side effects. Finally, with respect to lead optimization, AI can identify compounds with desirable therapeutic attributes.
AI can also facilitate preclinical testing and in particular, safety evaluations of drug compounds. For example, by analyzing a compound’s structure, AI can assess its toxicity.
AI can also streamline clinical trials. By scanning troves of patient data, it can accelerate the recruitment of trial participants, including participants from historically underrepresented groups. Finally, AI can improve trial design by efficiently evaluating key parameters, such as dosage.
AI-enabled Clinical Development Can Mitigate Future Revenue Losses
AI’s expected revenue generation and cost savings can mitigate future revenue losses resulting from the Inflation Reduction Act (the IRA) and the wave of blockbuster drugs losing patent exclusivity over the next few years (the patent cliff). The IRA, which requires Medicare to negotiate the prices of certain prescription drugs directly with manufacturers, contains several drug pricing provisions that are expected to substantially reduce the price of drugs over the next few years. The White House recently announced that the prices of 10 drugs had been reduced by 38-79% following the first round of IRA price negotiations. Additionally, industry leaders are bracing for steep revenue declines from the patent cliff, with billions in drug sales at risk through 2028.
AI-enabled Clinical Development Can Produce Monetary Benefits
Given the foregoing applications, AI can generate new revenue and cost savings. Although AI investments have soared in recent years, as discussed below, the long-term benefits of such investments are expected to outweigh the upfront costs.
While no drug resulting from AI-enabled clinical development has been successfully marketed yet, AI presents a $50 billion new revenue opportunity by 2032, as moderate enhancements in drug discovery could result in 50 new drug candidates over a 10-year period.
In addition to generating new revenue, AI can also yield current and future cost savings. While clinical testing is the most expensive phase of drug development, the drug discovery and preclinical testing phases also remain costly, with leaders estimating that drug discovery alone costs approximately $400 million, according to EY Parthenon. Accordingly, over a third of pharmaceutical executives surveyed by Bain & Company report that they are incorporating AI cost savings into their budgets. In the long term, when peak adoption of AI is achieved, industry leaders expect cost savings of 67%, 66%, 63%, 56%, and 44% for target identification, target validation, lead optimization, hit generation, and preclinical testing, respectively, per EY Parthenon. AI’s potential cost savings also extend to the exceedingly costly clinical trial phase, which is estimated to cost, on average, $117.4 million per drug candidate. By streamlining the conventional clinical trial process, AI can deliver significant savings. For example, according to NEJM AI, the per-patient screening cost of an AI tool used to identify heart failure clinical trial patients was about $0.11, a small fraction of the cost of human-driven screening methods. AI can also deliver considerable savings from clinical study design optimization: when peak adoption of AI is achieved, industry leaders expect cost savings of 62% in this area, as reported by EY Parthenon.
AI-enabled Clinical Development’s Advantages Have Fueled AI-Pharmaceutical Alliances
Recognizing these advantages, leading pharmaceutical companies are increasingly betting on AI, with investments in AI-enabled drug discovery totaling nearly $25 billion in 2022 alone. One example of this is Insmed’s recent collaboration with Google Cloud, a subsidiary of Alphabet.
In addition to striking partnerships with leading technology firms, companies are also pursuing collaborations with creators of pure-play AI platforms. For example, Moderna recently announced its continued collaboration with OpenAI, the creator of leading AI tool ChatGPT, citing the success of mChat, an internal version of ChatGPT, which achieved greater than 80% adoption and led to the integration of more than 750 GPTs across Moderna’s business.
Key Risks
Despite the foregoing advantages, AI-enabled clinical development is not immune from risk. Set forth below is a non-exhaustive discussion of key legal risks, along with related risk mitigation strategies.
AI-enabled Clinical Development May Endanger Intellectual Property Rights
While AI-enabled clinical development implicates various intellectual property (IP) risks, no risk is as thorny as the potential unpatentability of drugs resulting from AI-enabled clinical development. In February, the U.S. Patent and Trademark Office (the USPTO) issued guidance making clear that wholly AI-generated inventions are ineligible for patents. However, the USPTO clarified that AI-assisted inventions are not entirely unpatentable. Rather, an AI-assisted invention may be patentable if a natural person provided a significant contribution to the invention.
Simply put, drugs are unpatentable if AI entirely replaces human ingenuity. It is unlikely that human innovation will completely disappear from clinical development. Similar to a laboratory device or computer, AI is an optimization tool rather than a tool to replace human ingenuity. For instance, while AI can identify suitable chemical compounds for new drug candidates, scientists still have to analyze AI-generated results.
Nonetheless, this area is still unsettled as courts and legislatures have yet to extensively opine on IP rights for AI-enabled clinical development. Therefore, companies should heed the three Rs to stay ahead of the curve—Retain, Record, and Review.
Retain. First, companies should retain an external IP counsel to advise on how to best structure a clinical development program to ensure that human ingenuity is not replaced by AI.
Record. Second, companies should diligently record how AI is utilized throughout the drug development process, as they may need to disclose certain information to the USPTO when applying for a patent.
Review. Third, if companies choose to pursue strategic alliances with AI firms, they should carefully review the terms of such collaborations to ensure that they do not lose their IP rights in future drugs. As a best practice, companies should insist that AI firms contractually assign all potential future IP rights in any drug to the company.
AI-enabled Clinical Development Magnifies Data Privacy Risks
While clinical development is subject to a myriad of data privacy risks because of the sensitive nature of clinical trial data and other commercially sensitive information, AI amplifies these risks by its very design: it must ingest and scan large volumes of data to generate new content. By storing and managing such large datasets, companies face a higher probability of sensitive information being leaked or stolen. A data breach could have dire consequences for clinical development, including increased costs and clinical trial delays. Additionally, companies could face, among other things, reputational loss, litigation, and/or regulatory penalties.
Therefore, companies should follow the three Cs to guard against this heightened threat—Chart, Coordinate, and Control.
Chart. First, companies should chart the various domestic and foreign data privacy regulatory regimes implicated by their AI-enabled clinical development programs.
Coordinate. Second, companies should coordinate health data management, privacy and cybersecurity training programs for their employees.
Control. Third, companies should implement strict control measures to manage access to and use of sensitive clinical trial data and other commercially sensitive information.
AI-enabled Clinical Development May Trigger Product Liability Claims
AI-enabled clinical development also poses product liability risks. Although companies undertake rigorous, comprehensive measures to ensure drug safety, no safeguard is 100% effective, and it is possible that drugs born out of AI-enabled clinical development may cause harm, whether due to bias, error, or defect. As in other product liability cases, questions of attribution will arise. Is the pharmaceutical company responsible for an AI-enabled model that incorrectly evaluates the protein interactions of a drug compound, or is the creator of the AI at fault? Is the pharmaceutical company responsible for an AI-enabled model that introduces bias into clinical trial enrollment, or is the creator of the AI to blame? Answering these questions has proven difficult, as regulators and lawmakers have yet to establish clear and consistent guidelines and legal precedent is sparse. In the absence of clear guidance and legal precedent, many have turned to traditional product liability doctrine. However, this is an imperfect solution because AI is distinctly different from traditional products—it is constantly changing as it is fed new data. AI’s malleability may therefore make it challenging for courts to pinpoint causation.
As case law continues to develop, companies should consider the three Ss to manage product liability risk—Survey, Supervise, and Study.
Survey. First, companies should survey case law to stay abreast of court decisions applying product liability doctrine to AI.
Supervise. Second, companies should closely supervise their AI to clearly understand how it is used. In the event of a lawsuit, companies must be able to explain their decisions and practices to a court because willful ignorance is not a legal defense.
Study. Third, if companies choose to partner with AI firms, they should study their contract to ensure that it clearly delineates responsibility for potential AI-induced product liability claims.
Corporate Governance
In addition to the risk management strategies discussed above, companies should ensure that appropriate mechanisms are in place to enable the board and management to exercise proper oversight of AI-enabled clinical development. Although the adoption of an AI governance framework is not a one-size-fits-all endeavor and depends upon factors such as company type (public vs. private), size, and resources, a three-tiered, bottom-up approach could prove useful.
The AI Standing Committee is the First Line of Defense
Given the novel, esoteric nature of AI-enabled clinical development, it may not be well understood by a company’s board and management. Therefore, companies should establish a permanent AI standing committee (the AI Standing Committee) to serve as the eyes and ears of key decision makers. Composed of highly skilled personnel from all levels of seniority across the drug development lifecycle, as well as other functions such as corporate development/strategy, finance, legal/compliance, information technology, government affairs, and investor relations, the AI Standing Committee is the first line of defense in the board and management’s oversight of AI-enabled clinical development. The AI Standing Committee shall report to the C-Suite AI Committee, as discussed below. To ensure continuity while still fostering new perspectives, members of the AI Standing Committee shall have staggered terms. The AI Standing Committee’s mandate is captured by the four Rs—Record, Review, Recommend, and Report.
Record. In recent proxy seasons, shareholder proposals have increasingly focused on AI transparency reports. Such shareholder proposals may gain greater support among shareholders as AI-enabled clinical development gains traction across the industry. Accordingly, the AI Standing Committee should maintain clear, comprehensive records of the company’s current use of AI in clinical development, including the inputs provided to the AI. The AI Standing Committee should update these records on a quarterly basis in connection with the preparation of its quarterly report, as discussed below.
Review. Additionally, the AI Standing Committee is charged with reviewing, on a quarterly basis, the following: the company’s current use of AI-enabled clinical development, including related metrics, guardrails, policies, and best practices; relevant domestic and foreign regulatory regimes, including legislative, judicial, and regulatory developments; and internal and external proposals regarding the company’s current and future uses of AI-enabled clinical development, including potential collaborations with AI companies.
Recommend. Upon completing the review discussed above, the AI Standing Committee should prepare a set of recommendations regarding the company’s current and future uses of AI, including recommendations pertaining to metrics, guardrails, policies, and best practices. Each recommendation should be accompanied by a risk profile clearly articulating legal and commercial risks.
Report. The AI Standing Committee is charged with preparing a report containing the key findings of its review and related recommendations. The report is delivered to the C-Suite AI Committee on a quarterly basis.
The C-Suite is the Second Line of Defense
Given the increased salience of AI among institutional investors and the general shareholder population, companies should consider appointing a Chief AI Officer who will be primarily responsible for developing and executing the company’s AI strategy at all stages of the drug development lifecycle, including clinical development. Additionally, to ensure adequate and balanced representation of AI expertise, scientific/clinical expertise, strategic expertise, and risk mitigation expertise in AI decision making, companies should establish a C-Suite AI Committee comprised of the Chief AI Officer, the Chief Scientific Officer, the Chief Medical Officer, the Chief Compliance, Quality, and Risk Officer, the Head of Corporate Development/Strategy, and the Chief Legal Officer. The mandate of the C-Suite AI Committee is encompassed by the four As—Authorize, Analyze, Act, and Advise.
Authorize. The C-Suite AI Committee is authorized to oversee the AI Standing Committee.
Analyze. The C-Suite AI Committee analyzes the findings and recommendations produced by the AI Standing Committee in its quarterly reports.
Act. Drawing upon the AI Standing Committee’s quarterly reports and ad hoc discussions with the AI Standing Committee, the C-Suite AI Committee should produce quarterly AI Executive Summaries for review by the company’s CEO. Each AI Executive Summary should discuss metrics, guardrails, best practices, policies, procedures, risk management strategies, and other considerations for the use of AI in clinical development.
Advise. The C-Suite AI Committee should advise the CEO on all AI-related decisions, including metrics, guardrails, best practices, policies, procedures, and risk management strategies.
The Board is the Final Line of Defense
The board is the final line of defense, with primary oversight of risks related to AI-enabled clinical development. The board’s function can be summarized by the four Cs—Care, Competency, Consult, and Control.
Care. The customary fiduciary duty of care applies to board members when exercising oversight of risks arising from AI-enabled clinical development. Accordingly, using the quarterly reports and executive summaries produced by the AI Standing Committee and the C-Suite AI Committee, respectively, as well as guidance from external advisors, the board should establish, evaluate, and update, as necessary, metrics and guardrails for the proper use of AI in clinical development.
Competency. Not only is AI competency required to exercise proper care, but AI competency is increasingly becoming a priority among investors. While some boards may choose to form a separate committee or subcommittee to demonstrate the board’s AI expertise, this is not strictly necessary. Rather, the focus should be on cultivating competency at the individual director level. Accordingly, directors should develop their competencies in AI by, among other things, regularly reviewing the quarterly reports and executive summaries produced by the AI Standing Committee and the C-Suite AI Committee, respectively, and participating in relevant training opportunities.
Consult. Given the nascent and patchwork nature of the AI regulatory landscape, the board should seek advice from external legal and other advisors when confronted with AI-related decisions.
Control. The board’s oversight of AI-related risks also extends to disclosure control risk. The board, together with company management, legal, accounting, and investor relations personnel, must ensure proper disclosure controls over the company’s AI-related SEC disclosures to prevent AI washing—the embellishment of a company’s AI use. The SEC has recently prioritized this area, noting that public companies must ensure that AI disclosures are clear, current, customized, and based in reasonable belief, and it has pursued enforcement actions with respect to AI washing. Accordingly, the board should take appropriate measures to ensure robust controls over AI disclosures.
Conclusion
While AI-enabled clinical development provides compelling benefits, there are substantial risks associated with its use. To date, there is little guidance available on how best to manage these risks. Accordingly, industry leaders should consider implementing the three-tiered, bottom-up corporate governance framework and other risk mitigation strategies discussed herein.