Author: Alex Morgan
-
Model Risk & Data Retention in AI
Model risk and data retention in artificial intelligence raise a difficult legal problem: even after data is deleted, AI models may continue to reflect patterns learned from that data. This persistence challenges traditional assumptions about consent withdrawal, data minimization, and remediation. Courts and regulators increasingly examine whether organizations understand and manage the long-term risks created…
-
Can AI Models Leak Personal Data?
Yes, AI models can leak personal data. Even when models do not store raw personal information in traditional databases, they may memorize, infer, or reproduce sensitive data through their outputs. This risk raises significant legal and regulatory concerns, particularly under privacy and data protection laws that focus on control, consent, and individual rights. Understanding how…
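To make the leakage point concrete, here is a minimal sketch, with entirely hypothetical names and data, of one basic safeguard: scanning a model's generated output for known sensitive strings, such as records a data subject asked to have erased, before the output leaves the system. Real memorization audits are far more involved; this only illustrates that deletion from a database does not by itself prevent reproduction in outputs.

```python
# Toy illustration (hypothetical records): a model may reproduce
# personal data verbatim even after it was deleted from source systems.
# A post-generation scan can flag known sensitive strings in outputs.

DELETED_RECORDS = {"jane.doe@example.com", "555-0147"}  # erased per a deletion request

def leaked_strings(model_output: str) -> set[str]:
    """Return any known-deleted records that reappear in the output."""
    return {record for record in DELETED_RECORDS if record in model_output}

output = "Contact the customer at jane.doe@example.com for follow-up."
print(leaked_strings(output))  # {'jane.doe@example.com'}
```

A substring check like this only catches verbatim reproduction; paraphrased or inferred disclosures require more sophisticated review.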
-
AI Incident Reporting & Disclosure
When AI incidents occur, organizations may face obligations to report or disclose those events to regulators, customers, partners, or the public. AI incident reporting and disclosure focus on when notification is required, what must be disclosed, and how transparency affects legal exposure. Failure to report or disclose AI incidents appropriately can compound liability, trigger regulatory…
-
How to Respond to AI Failures
When artificial intelligence systems fail, the response often matters more than the failure itself. Courts, regulators, and insurers evaluate whether organizations acted promptly, responsibly, and transparently once issues were identified. Effective response to AI failures reduces harm, limits legal exposure, and demonstrates diligence. Poor response can compound liability even when the original error was unintentional.…
-
What Is an AI Incident?
An AI incident is any event in which an artificial intelligence system causes, contributes to, or creates a meaningful risk of harm. Incidents may involve incorrect outputs, biased decisions, system drift, misuse, security failures, or outcomes that fall outside approved use cases. From a legal and regulatory perspective, an AI incident is not limited to…
-
Why AI Documentation Matters Legally
When artificial intelligence systems are challenged, documentation often determines legal outcomes. From a legal perspective, AI documentation provides evidence of how systems were approved, monitored, and corrected over time. Courts, regulators, and insurers rarely rely on verbal assurances or policy statements alone. They look for records that demonstrate what decisions were made, when they were…
-
How to Monitor AI Systems
Monitoring AI systems is the process of continuously observing how artificial intelligence behaves after deployment. From a legal and risk perspective, monitoring ensures that AI systems continue to operate within approved parameters and do not produce harmful, biased, or unexpected outcomes over time. Unlike pre-deployment testing, monitoring addresses real-world performance. It allows organizations to detect…
-
Common AI Contract Clauses That Create Risk
AI contracts are often drafted using standard software templates that were not designed to address the unique risks created by artificial intelligence. As a result, certain contract clauses can unintentionally increase legal exposure rather than reduce it. Understanding which AI contract clauses create risk helps organizations avoid agreements that undermine governance, oversight, and legal defensibility.…
-
Can Contracts Shift AI Liability?
Contracts can shift some aspects of AI liability between parties, but they cannot eliminate liability entirely. While contractual provisions may allocate risk between vendors and customers, courts and regulators often look beyond contract language to assess who actually controlled and benefited from AI systems. Organizations that rely solely on contractual disclaimers to manage AI risk…
-
When Are AI Vendors Liable?
AI vendors can be liable when the systems they provide cause harm, but liability does not arise automatically. Courts and regulators evaluate vendor responsibility based on control, representations, foreseeability, and the role the vendor played in the AI system’s design and deployment. While many AI contracts attempt to limit vendor liability, those limitations are not…