Author: Alex Morgan
-
Scraped Data and Copyright Law: Emerging Litigation Against AI Developers
Artificial intelligence developers increasingly rely on large-scale data scraping to train foundation models. As lawsuits multiply, courts are now being asked to decide whether scraping copyrighted material for model training constitutes infringement, fair use, or something entirely new under intellectual property law. This issue is rapidly becoming one of the most consequential legal battlegrounds in…
-
AI Training Data Liability: Who Is Responsible for Biased or Illegally Sourced Data?
Artificial intelligence systems are only as reliable as the data used to train them. When models produce biased results, infringe intellectual property rights, or rely on unlawfully obtained personal data, the legal question becomes immediate and consequential: who is responsible for the underlying training data? As regulatory scrutiny intensifies and litigation increases, training data governance…
-
Limitation of Liability Clauses in AI Contracts: Allocating Risk in Artificial Intelligence Agreements
As artificial intelligence systems become embedded in enterprise operations, contractual risk allocation has become a central legal concern. Limitation of liability clauses in AI contracts define how financial exposure is distributed between vendors, developers, and deploying organizations when artificial intelligence systems malfunction, generate harmful outputs, or trigger regulatory scrutiny. These provisions often operate alongside AI…
-
AI Documentation and Recordkeeping: How Governance Files Reduce Legal Risk
Artificial intelligence governance does not end with model design or policy adoption. In regulatory investigations and litigation, what often matters most is documentation. Organizations deploying AI systems must maintain structured records demonstrating oversight, monitoring, and risk evaluation. Without documentation, even well-intentioned governance practices can become difficult to defend. AI documentation refers to the organized recordkeeping…
-
What Is an AI Audit? Legal and Regulatory Perspectives on Model Oversight
As artificial intelligence systems become embedded in hiring, lending, healthcare, insurance underwriting, and law enforcement, the concept of an “AI audit” has shifted from a technical review to a legal necessity. Organizations are increasingly expected to demonstrate that their AI systems are tested, monitored, and governed in a way that satisfies regulatory and liability expectations.…
-
AI Vendor Indemnification Clauses: Who Pays When Artificial Intelligence Fails?
As organizations deploy artificial intelligence systems sourced from third-party vendors, contractual indemnification provisions play a critical role in allocating liability. When AI systems malfunction, generate biased outcomes, or trigger copyright disputes, the central legal question often becomes: which party bears financial responsibility under the governing contract?…
-
Does Fair Use Protect AI Training Data? Legal Analysis of Generative Model Defenses
As litigation involving artificial intelligence training data expands, the fair use doctrine has emerged as a central defense strategy. AI developers frequently argue that model training constitutes transformative use rather than unlawful copying. Courts evaluating these claims must determine whether machine learning processes qualify for protection under established fair use principles. For a broader overview…
-
Can AI Companies Be Sued for Copyright Infringement Based on Training Data?
Artificial intelligence systems are trained on vast datasets that may include copyrighted works. As litigation surrounding generative AI expands, courts are increasingly asked whether the use of copyrighted material in model training creates actionable infringement liability. This issue sits at the intersection of intellectual property law, regulatory scrutiny, and emerging theories of artificial intelligence responsibility.…
-
Emerging Legal Theories of Liability in Artificial Intelligence Litigation
Artificial intelligence litigation in the United States is developing through adaptation of existing legal doctrines rather than through entirely new statutory frameworks. Courts are applying traditional negligence, product liability, discrimination, fraud, and contract principles to AI-driven systems. As regulatory scrutiny intensifies and insurers reassess exposure, litigation risk continues to evolve alongside enforcement activity. For a…
-
How Insurers Evaluate Artificial Intelligence Risk Exposure
As artificial intelligence systems become integrated into core business operations, insurers are reassessing how traditional policies respond to AI-driven exposure. Unlike conventional operational risks, AI introduces layered regulatory, litigation, contractual, and reputational dimensions. Understanding how insurers evaluate AI risk exposure is essential for organizations seeking adequate coverage and defensible underwriting outcomes.…