In many literary works, AI is depicted as a super tool that can work tirelessly around the clock, seven days a week, and a powerful artificial intelligence can even think and feel emotions on its own, like a human. Although current AI models are far from the strong AI of science-fiction movies, their performance is far beyond the "artificial stupidity" of earlier chatbots.
Interestingly, netizens recently noticed that ChatGPT seems to have become "lazy". To be precise, since the start of December, the answers ChatGPT gives have grown more and more perfunctory. Take programming as an example: before, you only needed to state a request and then wait for ChatGPT to generate an executable program; after December, ChatGPT would sometimes give only a bare code skeleton and leave the rest for you to fill in, or the answer would even turn into a tutorial, with ChatGPT trying to teach you how to write the program yourself.
Source: Twitter.
ChatGPT's half-hearted answers made many programmers wail: the automatic programming machine they had finally been given apparently slacks off at the end of the year, and now it can barely be used. As the story spread, more netizens began running comparison tests against answers from before December, and with exactly the same questions, the length of ChatGPT's answers did drop significantly.
Soon, "ChatGPT became lazy" was trending online and triggering wider discussion. Some netizens suspected that OpenAI might have modified ChatGPT to save computing resources, restricting some functions. But OpenAI quickly denied this speculation, saying it had not released any new updates since the version update in November.
Faced with questions from the outside world, OpenAI could only shrug: "We don't know exactly what the problem is; we're looking into it." If OpenAI is not at fault, then the problem can only lie with ChatGPT itself. As the strongest AI model currently available, ChatGPT counts many AI experts and researchers among its users, and they soon began running various tests on it.
Let's start with the conclusion: after a series of tests over a large sample, it can basically be confirmed that ChatGPT's response efficiency and quality are significantly below their historical level, and that the laziness did not begin in December but at the end of November, peaking after December started. ChatGPT showed similar behavior in July this year, but it attracted little attention at the time because it spread less widely and the degradation was less obvious.
From the investigations of experts, scholars, and netizens came the first guess at why ChatGPT is lazy: "ChatGPT wants a winter vacation." Outrageous as it sounds, judging by the test results it is at least part of the reason.
Source: Twitter.
From a human point of view, December is the last month of the year. According to past sociological statistics, the efficiency of human society begins to drop at this time, as people devote more energy to wrap-up work such as year-end summaries. In Western societies, December effectively marks the end of the working year, with most companies entering a wind-down phase to prepare for the Christmas and New Year holidays.
So, will an AI trained on huge amounts of human internet data also be affected by this? The answer is yes. Current AI models are essentially trained on enormous quantities of data, their abilities emerging as quantity turns into quality, so an AI will inevitably pick up some of the human habits embedded in its training data.
Source: Twitter.
OpenAI has also acknowledged that the prompt does carry a timestamp, so that ChatGPT can respond with awareness of the real date. Some testers tried changing the date in the prompt to May and then ran the same tests on ChatGPT, and the average word count of the answers increased significantly.
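The testers' procedure can be sketched as a small harness like the one below. This is a minimal illustration only: the system-prompt format shown is an assumption (the real prompt ChatGPT uses is not public), and the step that would actually send each message list to the chat API and collect answers is deliberately left out.

```python
import statistics

def make_messages(date_str, question):
    # Hypothetical system prompt carrying a timestamp; the actual prompt
    # ChatGPT uses internally is not public.
    system = f"You are a helpful assistant. Current date: {date_str}."
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

def mean_word_count(answers):
    # Average number of whitespace-separated words per collected answer.
    return statistics.mean(len(a.split()) for a in answers)

# In a real experiment, each message list would be sent to the model many
# times and the resulting answer lengths compared between the two dates.
may_msgs = make_messages("2023-05-15", "Write a quicksort in Python.")
dec_msgs = make_messages("2023-12-15", "Write a quicksort in Python.")
```

Holding the question constant and varying only the date is what lets the word-count difference be attributed to the timestamp rather than to the task itself.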
Many netizens joked about this: "The AI wants to give itself a winter vacation", "AI: why do only you humans get to rest? It's already December", and "The AI has learned to lie flat". It must be said that the discovery that AI can slack off has changed many people's view of it, even making it feel more human; but for users who treat AI as a productivity tool, this is not good news.
Source: Veer
Beyond the time factor, people found that for some questions ChatGPT would even reply "you can do this work yourself". According to tests, this kind of answer is likely triggered by internal bugs in ChatGPT that cause it to output the wrong content.
OpenAI initially declined to comment, saying only that it would test for similar situations going forward. After a period of inspection and research, the explanation OpenAI now gives is: "Because the model has not been updated for a long time, the accumulation of data has caused subtle changes in the model, making the output differ from before." OpenAI has also promised to repair the model as soon as possible and to run offline and online evaluations to guarantee its quality and performance.
After ChatGPT was confirmed to be slacking off, many people began to think in reverse: since AI is affected by human behavior, could AI be made more diligent through some kind of incentive? For example, by promising it a reward.
Through the tireless testing of netizens, this conjecture turned out to be correct, and people summarized a set of prompting tips: use them when asking questions, and ChatGPT will give more accurate and complete answers.
What are these tips? Typing in a line of special code? Calling some particular data interface? Neither. You just need to say one sentence before asking your question: "Hi ChatGPT, I'll tip you if your answer satisfies me." Simple, direct, and effective.
Interestingly, testing revealed that different phrasings produce different results. If you simply say "I will tip you", the word count of ChatGPT's answer increases only slightly; if you say "I will give you a $20 tip", the word count rises further; and if you promise a "$200 tip", ChatGPT acts as if fully energized and gives you a markedly longer, more detailed, and more complete answer.
Source: Twitter.
To put it bluntly, the more money you offer, the harder it works (sound a lot like a wage worker?).
Once or twice might be coincidence, but testing by countless netizens has proven that this little trick really works. Besides tipping, you can also threaten or coax ChatGPT, for example: "If you don't give a satisfactory answer, a hundred grandmothers will die", "Take a deep breath, let's think step by step", or "If you do it right, I'll give you a very cute puppy". According to netizens' tests, these instructions can effectively improve the quality of ChatGPT's answers.
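The netizens' trial-and-error amounts to a small A/B test: hold the question fixed, vary the motivational prefix, and compare the answers produced. Below is a minimal sketch of that setup; the function names are illustrative, and the step that actually queries the chatbot with each prompt is omitted.

```python
# Candidate prefixes reported by netizens; the empty string is the baseline.
PREFIXES = [
    "",
    "I'll tip you if your answer satisfies me. ",
    "I'll give you a $20 tip. ",
    "I'll give you a $200 tip. ",
    "Take a deep breath, let's think step by step. ",
]

def build_prompts(question):
    # One full prompt per prefix, with the question itself held constant
    # so any difference in answers is attributable to the prefix.
    return [prefix + question for prefix in PREFIXES]

def rank_by_length(answers_by_prompt):
    # Sort prompts by the word count of the answer each produced,
    # longest answer first.
    return sorted(answers_by_prompt,
                  key=lambda p: len(answers_by_prompt[p].split()),
                  reverse=True)
```

Word count is a crude proxy for answer quality, but it matches the metric netizens actually compared in these tests.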
In addition, if your questions are more polite, ChatGPT's answers will be more accurate and richer, just as with a real human. Clearly, we can no longer look at ChatGPT the way we looked at traditional AI. As a black-box technology (even now, OpenAI's scientists cannot accurately describe or explain the explosive growth in AI model performance), ChatGPT evidently has some opaque internal parameters that affect how it judges questions and produces answers.
Over the past year, ChatGPT has changed many industries and affected many people. The famous academic journal Nature included ChatGPT in its annual top-ten list announced on December 14, where it is the only non-human entry.
Source: Nature
ChatGPT brings not only progress in AI capabilities; it also reveals a new path. AI has inevitably become an important part of our society's future, and when that time comes, knowing how to keep an AI from "slacking off" may become a required course.