
Some of the world’s biggest insurers want to exclude artificial intelligence (AI) risks from corporate policies.
Companies such as AIG, Great American and WR Berkley have recently asked U.S. regulators for permission to offer policies that exclude liabilities tied to companies' use of AI tools such as agents and chatbots, the Financial Times (FT) reported Sunday (Nov. 23).
The requests come amid a rush by businesses to adopt AI, which has led to costly errors stemming from the technology's "hallucinations."
According to the report, WR Berkley wants to block claims involving "any actual or alleged use" of AI, including products or services sold by companies "incorporating" the technology.
And in a filing with the Illinois insurance regulator, AIG said generative AI was a "wide-ranging technology" and that the possibility of events triggering future claims will "likely increase over time."
The company told the FT that, although it had filed generative AI exclusions, it “has no plans to implement them at this time.”
Getting approval for the exclusions would offer AIG the option to implement them later, the report added.
Dennis Bertram, head of cyber insurance for Europe at insurer Mosaic, told the FT that insurers increasingly view AI outputs as too uncertain to insure.
“It’s too much of a black box,” he said, with the report noting that Mosaic covers some AI-enhanced software, but has declined to underwrite risks from large language models (LLMs) like OpenAI’s ChatGPT.
“Nobody knows who’s liable if things go wrong,” said Rajiv Dattani, co-founder of the Artificial Intelligence Underwriting Company, an AI insurance and auditing startup.
As PYMNTS has written, the consequences of a company acting on hallucinated information can be severe, leading to flawed decisions, financial losses and reputational harm. Tough questions about accountability also arise when AI systems are involved.
“If you remove a human from a process or if the human places its responsibility on the AI, who is going to be accountable or liable for the mistakes?” Kelwin Fernandes, CEO of NILG.AI, a company specializing in AI solutions, asked in an interview with PYMNTS earlier this year.
In many instances, it’s the business using the chatbot that takes the blame. For example, Virgin Money had to issue an apology earlier this year when its chatbot chastised a customer for using the word “virgin.” And Air Canada found itself in court last year when its chatbot fabricated a discount in a conversation with a prospective passenger.
Source: https://www.pymnts.com/
