That's why people say GPT-4 has become "lazy".

Mondo Social Updated on 2024-01-19

In recent months, OpenAI's GPT-4 language model has caused quite a stir in the AI industry, sparking a broad discussion in the tech community about the development and ethics of artificial intelligence.

Coming on the heels of CEO Sam Altman's abrupt dismissal and rehiring, the sudden pause on new ChatGPT Plus paid subscriptions was even more puzzling. These incidents have raised further questions about OpenAI and put the GPT-4 language model squarely in the public spotlight.

Over time, some AI enthusiasts began to question whether GPT-4 had become "lazy". Many people who use it to speed up demanding tasks have voiced their dissatisfaction with the perceived changes on X (formerly Twitter). One user, Rohit Krishnan, described in detail some of the problems he has had with GPT-4. He complained that questions that previously drew very detailed answers from the chatbot are now refused outright or answered only with an abridged version of what he asked for. He also noted that GPT-4 sometimes reaches for the wrong tool, such as invoking DALL-E when the prompt calls for the code interpreter.

Another user, Matt Wensing, shared a similar experiment: he asked ChatGPT Plus to make a list of dates between now and May 5, 2024, but the chatbot asked for additional information before it would complete this simple task.
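For context, the task Wensing describes is trivial to carry out deterministically in a few lines of code; the sketch below shows the same date-listing job in plain Python (the start date is an assumption, pinned to this article's publication date purely for illustration).

```python
from datetime import date, timedelta

def list_dates(start: date, end: date) -> list[date]:
    """Return every calendar date from start to end, inclusive."""
    return [start + timedelta(days=i) for i in range((end - start).days + 1)]

# Illustrative run: "now" is fixed to 2024-01-19 so the output is reproducible.
for d in list_dates(date(2024, 1, 19), date(2024, 5, 5)):
    print(d.isoformat())
```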

Beyond these anecdotes, research groups have also tested GPT-4's capabilities over time. According to a study by Stanford University and the University of California, Berkeley, the share of questions GPT-4 answered correctly on one of the study's tasks plummeted from 97.6% in March to 2.4% in June. This makes one wonder whether the language model is really as capable as claimed.

In response to these questions, OpenAI's VP of product, Peter Welinder, suggested that the perceived decline may be partly psychological: the more heavily people use the model, the more likely they are to notice shortcomings they had previously overlooked. Some experts, however, argue that the current problems may instead stem from system overload or subtle changes in prompting style that are not obvious to the user.

At the same time, some developers shared how they are coping with GPT-4's erratic behavior. One programmer noted that, because the latest GPT-4 model's adherence to instructions had declined, he had to fall back on traditional coding methods. Another user suggested creating a custom GPT tailored to a specific task or application to improve the quality of the model's output. However, these workarounds do not seem to solve the problem completely, as some users are still reporting similar issues.

In conclusion, OpenAI's GPT-4 language model has run into a number of issues in recent months. These issues have not only attracted public attention and discussion but also raised broader questions about the development of artificial intelligence technology. Although OpenAI has taken some steps to address the complaints, the problems will still require further research before they are fully resolved. In this competitive and fast-moving tech world, we look forward to OpenAI and other AI companies continuing to bring us more surprises and innovations.
