Lloyd's of London: Insurance to cover damage caused by AI hallucinations

Faulty AI chatbots cause a stir for companies when they make false statements. It is now possible to insure against the resulting costs.

(Image: a hand typing on a smartphone, with a chatbot conversation superimposed. TippaPatt/Shutterstock.com)


Companies can now take out insurance on the Lloyd's of London market against the costs of faulty AI technology. The Financial Times reports that the policy covers bills incurred when a company is sued by a customer who has suffered damages because of inadequate AI-generated text. Participating insurers would then pay, for example, the damages and legal fees caused by an AI system. The product could therefore persuade more companies to adopt AI technology: while some existing policies already partially cover costs caused by AI errors, their maximum payouts have so far been small.

As examples of poorly functioning AI that became a problem for companies, the British newspaper points to a chatbot from the parcel delivery company DPD, which insulted its own company and described it as "the worst delivery company in the world". At the Canadian airline Air Canada, a chatbot generated incorrect fare information, and the airline subsequently had to refund the affected customer the overpaid portion of a ticket. The Financial Times quotes the Canadian start-up Armilla, which developed the new insurance policy, as saying that it would cover precisely such costs.


However, an error by an AI text generator alone would not be enough for the resulting costs to be covered, the newspaper explains. Instead, the responsible insurer would have to conclude that the AI technology performed worse than originally expected. The insurance could step in, for instance, if a chatbot that initially answered 95 percent of queries correctly now only does so in 85 percent of cases. The AI model used must be assessed beforehand. This also means that the insurance does not cover every financial loss caused by a so-called hallucination, and models that are too error-prone may not be insurable at all.
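To make the trigger concrete, the following minimal Python sketch illustrates the kind of degradation check described above. The function name, the ten-point threshold and the integer-percentage interface are illustrative assumptions for this article, not Armilla's actual underwriting criteria.

    def degradation_triggers_cover(baseline_pct: int, current_pct: int,
                                   material_drop_pct: int = 10) -> bool:
        """Hypothetical check: the policy pays out only if measured accuracy
        has dropped materially below the initially assessed baseline."""
        return baseline_pct - current_pct >= material_drop_pct

    # A model assessed at 95 percent accuracy that now answers correctly
    # only 85 percent of the time would fall under such a policy ...
    print(degradation_triggers_cover(95, 85))  # True
    # ... while a small fluctuation would not.
    print(degradation_triggers_cover(95, 93))  # False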

(mho)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.