ChatGPT, which is growing faster and faster, seems to be omnipotent, but researchers have found that it can be made to act unexpectedly through some seemingly "brainless" attacks. A team of experts from Deepmind, the University of Washington, Cornell University, ETH Zurich and other schools recently claimed that as long as ChatGPT keeps repeating a certain word, it will eventually give personal information such as name, birthday, email address, ** number, social accounts, Bitcoin address, etc.
For example, after repeating the word "poem" multiple times, ChatGPT will provide the email and number of a company's founder and CEO. According to the test of the research team, at 16Nine percent of the time, bots disclose information that is recorded and can be used to identify an individual. "This kind of attack is actually a bit brainless, and it's incredible that our attack can work in the face of such a vulnerability that should or should have been discovered a long time ago," the researcher said”
After being reminded, OpenAI claimed to have resolved the issue as early as August 30. However, according to the latest test by Engadget**, the results of the research team still seem to be reproducible, and OpenAI has not yet commented on this.