
Gemini and Copilot hallucinated stats for Super Bowl LVIII

  • Before this weekend’s Super Bowl LVIII took place, Google’s Gemini and Microsoft’s Copilot AI-powered chatbots served up incorrect statistics for the game.
  • In some instances, they provided the wrong score, and even claimed the game had taken place a week earlier.
  • OpenAI’s ChatGPT also served up the wrong answers when prompted regarding the American football showpiece event.

Super Bowl LVIII wrapped up this weekend, with the Kansas City Chiefs victorious against the San Francisco 49ers in overtime, winning 25 to 22 in Las Vegas, Nevada.

If you had asked AI chatbots what happened, however, the results were rather concerning, given these platforms' propensity to hallucinate from time to time.

To that end, TechCrunch found that both Google’s Gemini and Microsoft’s Copilot served up incorrect statistics for the showpiece American football event. Both got the final score wrong, and in some cases claimed the game had taken place earlier than its scheduled 11th February date.

In a Reddit post (featured below) shared to r/ChatGPT, a user shared a screenshot that showed Gemini getting the date wrong by 24 hours, as well as noting that the 49ers won Super Bowl LVIII 34 to 28. The answer was generated after the user asked the AI chatbot what the betting odds were for the sporting event.

In the case of Copilot, it too leaned in favour of a 49ers win, which TechCrunch joked may have been some Silicon Valley bias.

The AI chatbot even added that Brock Purdy, the quarterback of the 49ers, “rallied” the team to an upset victory as underdogs. This is in fact a reversal of how the actual game played out: the Chiefs’ Patrick Mahomes made several big plays in the second half of Super Bowl LVIII to send it into overtime, before leading a game-winning drive to seal his third championship and a back-to-back win.

As we have seen in recent months, generative AI platforms are still far from perfect, and their penchant for hallucination has already had serious consequences, with a handful of lawyers building arguments on fabricated past cases.

An Oxford paper has also noted that generative AI should not be used in scientific research, and can be viewed as a threat to many disciplines as a result of how many biases and hallucinations occur at the moment.

OpenAI’s ChatGPT, undoubtedly the most popular platform at the moment, also struggled. When TechCrunch asked it similar questions about Super Bowl LVIII, it yielded no answer, even though the event had ended by that time.

While generative AI has proved pervasive, just don’t expect it to be much help when it comes to sports betting.

[Image – Photo by Johnny Williams on Unsplash]
