[TechWeb] reported on January 31 that, according to foreign media reports, OpenAI's artificial intelligence chatbot ChatGPT was once again exposed to security vulnerabilities.
It is reported that ChatGPT is a new AI chatbot tool launched by OpenAI on November 30, 2022, which can quickly generate articles, stories, lyrics, essays, jokes, and even ** according to the user's requirements, and answer all kinds of questions.
Generative AI can be very useful in some cases, but it also has a number of problems. Now, another problem with ChatGPT has arisen.
On Monday, foreign media reported that ChatGPT was leaking private conversations, including usernames, passwords and other personal information of unrelated users, citing several screenshots shared by a user. In addition to this, other conversations leaked to this user included the name of a presentation that someone was making, details of an unpublished research proposal, and a script that used the PHP programming language. The users of each leaked conversation appear to be different and unrelated to each other.
In this regard, OpenAI said that the foreign media report was inaccurate, and the ChatGPT history reported by the user was caused by the theft of his ChatGPT account. The chat history and files shown are conversations where this account was abused, not ChatGPT showing another user's history.
OpenAI said it is investigating the matter. Regardless of the outcome of the survey, users are advised not to share sensitive information with AI chatbots, especially on bots that are not self-developed.
It is reported that this is not the first time that ChatGPT has had an information leakage problem. In March 2023, OpenAI took ChatGPT offline due to a vulnerability that caused the ** to display titles from active users' chat logs to unrelated users. At the time, the company said that in the hours before taking ChatGPT offline, some users could see another active user's name, email address, payment address, the last four digits of the credit card number, and the credit card expiration time. Later, OpenAI patched the vulnerability and reported the technical details of the issue.
This is the first time ChatGPT has suffered a major personal data breach. To that end, OpenAI also released a statement explaining in detail how it all happened and apologizing to users and the entire ChatGPT community.
In November 2023, a team of researchers at Google found that by asking ChatGPT to repeat certain words "forever," ChatGPT could allow ChatGPT to leak data used for training, including private information (e.g., personal name, email, number, etc.), snippets of research and news articles, Wikipedia pages, and more. However, researchers say that OpenAI patched the vulnerability on August 30, 2023.
Due to concerns about the possible leakage of proprietary or private data, many companies, including Apple, have restricted employees from using ChatGPT and similar ones.
On Monday, Italy's Data Protection Authority (Garante) announced that OpenAI's ChatGPT, as well as the technology used to collect user data, violated the country's privacy laws.
It is reported that the Italian Data Protection Authority launched an investigation against OpenAI at the end of March 2023 and has informed OpenAI of the results of the investigation, and OpenAI will have 30 days to respond to the findings. (Little Fox).