Webinar
Emerging Antitrust and IP Challenges in the Age of Artificial Intelligence and Algorithms
December 5, 2023, 11:30 AM - 12:30 PM
On December 5, 2023, Winston partners Susannah Torpey and Kelly Hunsaker presented a webinar hosted by Mondaq on the key antitrust and IP issues stemming from the use of artificial intelligence (AI) and algorithms.
The program covered algorithmic collusion and information exchange, exclusionary practices involving AI services, AI’s impact on patent law, copyright concerns with AI-generated content, and more. The speakers discussed emerging trends, reviewed case studies, and addressed the challenges and opportunities these cutting-edge technologies present.
Key Takeaways
- As the capabilities of AI—and particularly generative AI—continue to evolve, we’re seeing a global race to establish ground rules for this emerging technology. Summits have been convened and international agreements drafted to stress global cooperation for safe AI development. Jurisdictions around the world are scrambling to figure out how best to protect against the risks that AI poses without stifling innovation. In the United States, President Biden recently issued an “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” directing agencies to create rules, task forces, and guidance on the risks of AI across various sectors. Congress and several state legislatures are debating various bills targeting AI but are also concerned about overregulation.
- The FTC, DOJ, and private plaintiffs are focused on competitive risks posed by the use of AI, with particular scrutiny on AI-driven information exchanges and the use of pricing algorithms. Tacit collusion facilitated by AI presents challenges in establishing liability, and it remains to be seen whether liability will attach solely based on AI “agreements” without human involvement. The evolving landscape of legal interpretations and precedents around AI-driven collusion and information exchanges poses uncertainties for companies using pricing optimization software. The particular analytical frameworks that courts adopt in analyzing these cases—whether they apply the strict per se rule or the more lenient rule of reason—will significantly affect the outcomes and the implications for companies’ use of algorithmic pricing tools.
- Regulators are sounding alarms about the potential for dominant firms to use AI to solidify market positions. President Biden’s AI executive order and recent comments by FTC officials highlight the government’s concern that companies will use “key assets” to exclude rivals and reinforce dominance. AI also brings the potential to enable unfair practices like predatory pricing, self-preferencing, tying and bundling, and exclusive dealing, which may support claims of monopolization or “unfair methods of competition” in violation of Section 5 of the FTC Act. The FTC is closely scrutinizing AI-related competition risks and has streamlined the process of issuing civil investigative demands in non-public AI-related investigations.
- Legal battles surrounding the intersection of AI and copyright have established two general types of copyright suits against owners of AI models, though the two types are not mutually exclusive in their application: (1) cases involving the use of copyrighted material to train AI models, and (2) cases involving the output of AI models constituting unauthorized derivative works. Nearly all of the cases grappling with these issues are still in the pleading stages, with the most significant takeaway thus far being that courts are requiring that complaints contain significant amounts of detail or risk being dismissed for lack of specificity.
- While virtually all such lawsuits have been brought against corporations, many providers of AI models and services—including Adobe, Microsoft, OpenAI, and Google—have begun introducing user-indemnity clauses for certain AI-enabled products to ease users’ concerns about the legal risks. These clauses commonly include carve-outs for various reasons, such as disabling safety and/or filtering features or intentionally infringing acts. The specific language of such user-indemnity clauses varies from company to company and continues to evolve as companies navigate the risks associated with offering AI products. Accordingly, it is vital to review the Terms & Conditions of AI platforms to ensure compliance and mitigate risk.