# Insurers Exclude AI-Related Liabilities, Citing Systemic Risk and High-Profile Incidents
## Executive Summary
The global insurance industry is actively distancing itself from risks associated with Artificial Intelligence, with major carriers seeking regulatory approval to exclude AI-driven errors and misconduct from corporate liability policies. This strategic retreat, led by firms such as **AIG** and **W.R. Berkley**, is a reaction to the unpredictable and potentially systemic nature of AI-related liabilities. Recent high-profile cases involving significant financial loss and reputational damage have underscored the immense risk, forcing a re-evaluation of insurability and creating a significant coverage gap for enterprises increasingly reliant on AI technologies.
## The Event in Detail
Leading insurers are formally moving to isolate AI-related perils. **W.R. Berkley**, for instance, has introduced an "Absolute" AI exclusion in its specialty lines, including Directors and Officers (D&O), Errors and Omissions (E&O), and Fiduciary Liability coverage. This endorsement broadly seeks to deny coverage for any damages arising from the "actual or alleged use, deployment, or development of Artificial Intelligence."
This shift is not theoretical; it is a direct response to tangible and costly AI failures:
- **Defamation and Libel:** **Google** is facing a **$110 million** lawsuit stemming from false legal accusations generated by its AI, establishing a precedent for AI-driven defamation.
- **Operational Failures:** **Air Canada** was legally compelled to honor discounts incorrectly offered by its customer service chatbot, demonstrating direct financial liability from AI errors.
- **Sophisticated Fraud:** Engineering firm **Arup** suffered a **$25 million** loss after fraudsters used AI voice-cloning technology to impersonate a company director and authorize illicit transactions.
## Market Implications
The insurance industry's withdrawal poses a significant obstacle to corporate AI adoption. Without a financial backstop to absorb losses, companies may become more conservative in deploying AI solutions, potentially slowing innovation and competitive development. This creates an uninsured risk category that Chief Risk Officers must now address through other means, such as self-insurance or contractual risk transfer.
However, this void is also fostering a new market for specialized risk management. Startups like **Armilla** and the **Artificial Intelligence Underwriting Company (AIUC)** are stepping in to fill the gap. These firms are developing new underwriting standards and tools specifically designed to evaluate AI models for vulnerabilities, aiming to create a viable insurance market for these emerging risks.
## Expert Commentary
According to Rajiv Dattani, co-founder of the **AIUC**, the move by traditional insurers is largely driven by a "fear of the unknown." He suggests that insurance can serve as a "neat middle-ground solution" for governing AI, providing a layer of third-party oversight that is more agile than government regulation. Dattani argues that voluntary corporate commitments are insufficient to manage AI risks, positioning specialized underwriting as a critical component of the ecosystem.
## Broader Context
This trend is unfolding as legal challenges against AI providers mount. Major technology companies, including **Google**, **Meta**, and **OpenAI**, face lawsuits over defamation and misinformation generated by their large language models. The insurance industry's response formalizes this risk in financial terms: by explicitly excluding AI, insurers are treating it as a distinct, high-hazard category, much like cyber-attacks or terrorism, that requires specialized policies. This industry-wide recalibration signals a new chapter in corporate risk management, one in which understanding and mitigating AI-specific liabilities is becoming a non-negotiable aspect of business strategy.