AI bias debate reignites following insurer’s terrible Twitter thread


Artificial intelligence has already bled into our lives. Whether it's determining the best route to get you to work or answering questions through your bank's chatbot, AI is everywhere these days.

Unfortunately, this means that AI is also present in places where it may not be necessary and where it could cause real harm. One of those places is the insurance sector.

On Monday, insurer Lemonade, which operates in the US and Europe, decided it was a good idea to tweet about how its AI is able to pick up "non-verbal cues" from videos submitted by customers filing a claim.

Unlike other insurers, Lemonade uses tech to ease the submission process for insurance claims. A bot will ask a customer questions and then the customer needs to turn on their camera so the bot can analyse their face to detect signs of fraud.

Almost immediately, Twitter users jumped on the now-deleted thread to question how ethical Lemonade's AI is. Does the AI discriminate against minorities, people of colour and others who may not have speech patterns the AI considers non-fraudulent?

And what about the inherent biases of the developers? We're not saying that the developers are racist misogynists, but if a firm like Twitter can't get something as simple as image cropping right with AI, should we really be trusting the tech when it comes to insurance? We don't think so.

As it turns out, Lemonade both is and isn't using AI to reject claims.

“The term non-verbal cues was a bad choice of words to describe the facial recognition technology we’re using to flag claims submitted by the same person under different identities. These flagged claims then get reviewed by our human investigators,” the insurer wrote in a blog post on its website.

Lemonade goes on to say that its AI doesn't use phrenology or physiognomy and that it will never auto-reject claims based on what an AI determines.

Except it told the Securities and Exchange Commission that its AI chatbot, AI Jim, "handles the entire claim through resolution in approximately a third of cases, paying the claimant or declining the claim without human intervention".

This dumpster fire is not cooling down, however, with Twitter users taking to Lemonade's apology thread to tear it apart. Many have pointed to the SEC filing above and how it contradicts what the firm said in its blog post.

The use of AI in insurance is becoming ever more common. Locally, some firms are using the tech to speed up applications and claims, and we'd be curious to dive into how the tech is used and how local firms are addressing the issue of AI bias, if at all.

While the use of AI here may not be as widespread as it is in the US, South Africa can't ignore its effects, and we'd do well to learn from incidents such as this one.

[Via – Gizmodo][Image – CC 0 Pixabay]

Brendyn Lotz

Brendyn Lotz writes news, reviews, and opinion pieces for Hypertext. His interests include SMEs, innovation on the African continent, cybersecurity, blockchain, games, geek culture and YouTube.
