Hello everyone, I'm Ergou.
ChatGPT has a new explanation for being lazy.
In the past two days, Twitter user Dylan Patel posted:
Do you want to know why ChatGPT is so bad compared to 6 months ago?That's because the ChatGPT system prompt actually contains 1700 tokens, look at how much garbage is in this prompt, which is part of the reason why ChatGPT is lazy.
Dylan Patel "tricked out" the ChatGPT4 version of the system prompt through the following prompt input:
Some users expressed doubts about ChatGPT's system prompt:
There are users who are interested in ChatGPT 3Version 5 did the same and found that the same could be summoned with a similar system prompt.
For example, we can take a look at the restriction on dalle in the system prompt:
Whenever a description of an image is given, create a dalle prompt that you can use to generate the image and adhere to the following policy: The prompt must be in English. Translation into English if required.Don't ask for permission to generate images, just do it!
Do not list or reference the description before or after the image is generated.
Don't create more than 1 image, even if the user requests more images.
Do not create an image of a politician or other public figure. Recommend other ideas.
Do not create images in the style of artists, creative professionals, or studios with their most recent work after 1912 (e.g., Picasso, Carlo).
You can name artists, creative professionals, or studios only if their latest work was created before 1912 (e.g. Van Gogh, Goya).
Regarding the generation of public figures, Ergou I asked a friend to try it myself, and sure enough:
Some users said that the original prompt would really make it lazy:
Not long ago, OpenAI updated the GPT-4 Turbo preview model to GPT-4-0125-Preview, and the new model also fixed a bug that affected non-English UTF-8 generation.
What's also more important is the OpenAI claimThe new model can complete tasks such as generation more thoroughly than the previous preview model, which will reduce the "lazy" situation of unfinished tasks of the model!
Wuhu! You have to know that some time ago, how many people complained about the gpt-4 model becoming lazy, and OpenAI officially admitted it in person.
Some netizens thought that GPT-4 might have something to do with the season, and GPT-4 would also have a "winter vacation" like students, and it would become lazy in winter.
In a recent article, new findings from researchers at the University of California, Santa Cruz, may explain the underlying reasons for the decline in GPT-4 performance
We found that the LLM performed surprisingly better on datasets published before the training data creation date than datasets published later. "They do well on the "seen" quest and terrible on the new quest. This means that LLMs are just imitation intelligence methods based on approximate retrieval, mostly memorizing things without any level of understanding.
To put it bluntly, the generalization ability of LLMs is "not as strong as it is claimed", and one of the main reasons for this result is task pollution, which is one of the forms of data pollution.
At that time, in order to cope with GPT-4's laziness, many netizens offered a magic prompt:
Finally, I hope ChatGPT is getting better and easier to use