Maliciously concealing the use of AI may bring severe punishment from the platform! Meta to roll out AI-content labels

Mondo Technology Updated on 2024-02-07

Finance Associated Press, February 6 (edited by Shi Zhengcheng). By incomplete counts, more than 50 countries, home to roughly half the world's population, will hold elections in 2024. At the same time, with the latest wave of AIGC technology, the threat AI poses to the internet ecosystem has reached an unprecedented height. In recent days, incidents such as the "fake Biden robocall" and deepfake images of a well-known actress have aroused strong international concern.

Before the situation spirals further out of control, the social platforms Facebook, Instagram, and Threads have decided to try to separate AI-generated content from reality by identifying and tagging AI content on their platforms.

Identify, detect, and tag

Meta said that in the coming months, Facebook, Instagram, and Threads will try to detect AI-generated images uploaded to their platforms and tag them with an "AI" label.

The effort splits into two tiers: AIGC images that carry verifiable provenance marks, and AI video and audio content for which no such standard yet exists.

Images come first, because the standards there are relatively clear. Take Meta's own AI text-to-image feature as an example: images generated this way carry not only a visible watermark in the lower-left corner, but also an "invisible watermark" embedded in the file's metadata.

Meta says it is developing tools that can recognize these watermarks at scale, in particular those following the C2PA and IPTC technical standards. This means that once Google, OpenAI, Microsoft, Adobe, Midjourney, Shutterstock, and other companies add such metadata to their AIGC tools as planned, Meta will be able to identify and label AI images from these companies in bulk across its platforms.
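To make the metadata approach concrete, here is a minimal sketch of what detection could look like. The IPTC standard marks AI-generated media with the digital-source-type value "trainedAlgorithmicMedia" inside the image's embedded XMP metadata; the function name and the naive byte-scan below are illustrative assumptions, not Meta's actual tooling.

```python
# The IPTC NewsCode that compliant AIGC tools write into XMP metadata
# to declare an image as AI-generated.
AI_SOURCE_TYPE = (
    b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def looks_ai_generated(image_bytes: bytes) -> bool:
    """Return True if the file's embedded metadata declares the image
    as produced by a trained algorithm (a naive substring scan;
    real tools would parse the XMP packet properly)."""
    return AI_SOURCE_TYPE in image_bytes

# A toy XMP fragment of the kind an AIGC exporter might embed:
xmp = (
    b'<x:xmpmeta xmlns:x="adobe:ns:meta/">'
    b"<Iptc4xmpExt:DigitalSourceType>"
    b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
    b"</Iptc4xmpExt:DigitalSourceType></x:xmpmeta>"
)
print(looks_ai_generated(xmp))                      # True
print(looks_ai_generated(b"plain camera JPEG..."))  # False
```

A production scanner would also verify C2PA content credentials, which are cryptographically signed rather than plain metadata tags, so stripping or forging them is harder to do undetected.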

Of course, cooperation among a few giants alone cannot solve every problem, and malicious actors also have ways to strip the watermark metadata from AI images.

Nick Clegg, the former UK Deputy Prime Minister who now serves as Meta's President of Global Affairs, added that Meta is also developing classifiers to automatically detect content that is AI-generated but carries no metadata watermark. Meanwhile, Meta's AI lab recently shared a digital watermarking technique called "Stable Signature", which integrates the watermarking mechanism directly into the image-generation step, something that could be very valuable for the many open-source models.
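The point of Stable Signature is that the watermark lives in the pixels themselves rather than in strippable metadata. The actual technique fine-tunes the decoder of a latent diffusion model so every generated image carries a recoverable signature; the toy below only illustrates the general idea of an invisible pixel-domain mark, here by hiding bits in the least-significant bit of grayscale pixel values.

```python
def embed(pixels, bits):
    """Overwrite the least-significant bit of the first len(bits)
    pixels with the watermark bits (imperceptible: each pixel value
    changes by at most 1)."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def extract(pixels, n):
    """Read an n-bit watermark back from the pixel LSBs."""
    return [p & 1 for p in pixels[:n]]

marked = embed([200, 135, 90, 61, 14], [1, 0, 1, 1, 0])
print(extract(marked, 5))  # [1, 0, 1, 1, 0]
```

Unlike this fragile LSB toy, which a simple re-encode destroys, Stable Signature is trained to survive common transformations such as cropping and compression, which is what makes it useful against deliberate watermark removal.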

What about video and audio?

Meta revealed that, unlike images with their emerging watermark standards, AI-generated video and audio lack industry consensus on provenance marking, so the platform cannot yet detect and label them automatically.

In response, Meta decided to introduce a self-declaration and penalty mechanism. In addition to letting users self-declare AI-generated video and audio, Meta may penalize users who knowingly post deepfake content and deliberately fail to disclose it.

Clegg further stated that when Meta determines that certain AI-created or AI-modified images, video, or audio pose a particularly high risk of materially deceiving the public on important issues, it may add a more prominent label as appropriate.

Even amid concerns about a "year of AI havoc", Clegg still believes it is unlikely that such content will sweep Meta's platforms this year. "I don't think we're going to see completely synthesized, politically significant video or audio anytime soon; I just don't think that's going to happen," Clegg said.

Clegg also mentioned that Meta is already testing large language models trained on its community guidelines, saying the technology provides an efficient "triage mechanism" that ensures the posts reaching human reviewers are genuinely the borderline cases requiring human judgment.
