Google and Microsoft chatbots misreported the results of the Super Bowl, exposing the limitations and risks of AI technology

Mondo Technology Updated on 2024-02-13

Recently, a stir rippled through the tech world. Google's Gemini chatbot and Microsoft's Copilot chatbot made a glaring error when handling information about the Super Bowl: the chatbots launched by the two tech giants "announced" statistics and results for a game that had not yet been played, drawing widespread attention and discussion.

According to posts on Reddit, the Gemini chatbot acted as if the game were already over when answering questions about Super Bowl LVIII. It gave detailed player statistics, including Kansas City Chiefs quarterback Patrick Mahomes' 286 rushing yards, two touchdowns, and one interception, and Brock Purdy's 253 rushing yards and one touchdown pass. However, these statistics were completely fictitious, as Super Bowl LVIII had not yet taken place.

Coincidentally, Microsoft's Copilot chatbot also insisted that the game was over, offering fabricated citations to back up its claims. Unlike Gemini, though, Copilot reported that the 49ers had won with a final score of 24-21. The erroneous result sparked heated discussion and questions among users online.

It is worth noting that both chatbots are built on generative AI (GenAI) models. These models are trained on vast amounts of public web data to learn the patterns and probability distributions of text. However, it is precisely this probability-based approach that causes chatbots to err when handling real-time information: they cannot truly understand the actual progress or outcome of a game, and instead simply generate plausible-sounding responses based on their training data.
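To make the mechanism concrete, here is a minimal sketch of probability-based text generation, using a toy two-word context and hand-picked probabilities (the `next_word_probs` table and `sample_next` helper are illustrative inventions, not any vendor's actual model). The generator only knows which words tended to follow which in past text; nothing in it can check whether the game has actually been played.

```python
import random

# Toy illustration of probability-based text generation (an invented
# example, not any vendor's actual model): the "model" only knows which
# words tended to follow which in its training text. It has no notion
# of whether the game it describes ever took place.
next_word_probs = {
    ("the", "chiefs"): {"won": 0.6, "lost": 0.3, "played": 0.1},
    ("chiefs", "won"): {"24-21": 0.5, "decisively": 0.3, "the": 0.2},
}

def sample_next(context):
    """Sample the next word from the learned distribution, or stop."""
    dist = next_word_probs.get(context)
    if not dist:
        return None  # no continuation learned for this context
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights)[0]

# Generate a fluent, confident "result" for a game that may not exist.
sentence = ["the", "chiefs"]
while (word := sample_next(tuple(sentence[-2:]))) is not None:
    sentence.append(word)
print(" ".join(sentence))  # e.g. "the chiefs won 24-21"
```

Everything in the output, down to the score, is sampled from frequencies in past text; that is exactly why a real model can produce a convincing result for a game that has not happened.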

This incident has raised concerns about the limitations and potential risks of AI technology. First, the intelligence of AI models is still limited: while they demonstrate impressive capabilities in some areas, they often cannot guarantee accuracy and reliability when dealing with complex, real-time information. Second, placing too much trust in AI models can lead to misinformation and poor decisions. Because chatbot-generated content tends to look realistic and convincing, users can easily be misled into making wrong judgments.

In response, experts remind users to stay vigilant when using AI technology. They suggest treating AI models as auxiliary tools and combining their output with information from other sources to reach a well-rounded judgment. In addition, technology companies should strengthen the supervision and review of AI models to ensure that the information they provide is accurate and reliable.

The incident of Google's and Microsoft's chatbots misreporting the Super Bowl results reveals the limitations and potential risks of AI technology. We should remain cautious and critical, and verify AI-generated information before relying on it. Only in this way can AI technology bring genuine convenience and progress to our lives and work.
