When AI becomes the creator, who does the copyright belong to?

Mondo Technology Updated on 2024-02-24

In December 2023, an incident involving artificial intelligence and journalistic ethics shook the media world. The Arena Group, the company that owns iconic magazines such as Sports Illustrated and Men's Journal, abruptly announced the dismissal of its chief executive, Ross Levinsohn, along with two other executives.

Behind the decision was a series of commerce-focused articles published by Sports Illustrated that were not only written by artificial intelligence but also attributed to fictitious authors with AI-generated headshots. In addition to the three executives, the group also dismissed its general counsel, Julie Fenster, and Sports Illustrated quickly removed all articles credited to the fictional authors.

The incident sparked heated discussion on social media and prompted deeper reflection on the role and ethical boundaries of AI in news production. Some netizens compared it to the case of a German magazine that used artificial intelligence to fabricate an "exclusive interview" with racing legend Michael Schumacher, lamenting that "this scene looks familiar."

There is no doubt that generative artificial intelligence (AI), represented by ChatGPT, Wenxin Yiyan, iFLYTEK Xinghuo, and the like, is bringing opportunities for efficiency gains and transformation in journalism. For example, BuzzFeed, the digital media company known for its quizzes, announced that it would use AI-generated content (AIGC) to write quiz articles and reduce its reliance on human editors. Similarly, the Associated Press has created a dedicated artificial intelligence and news automation unit; The Washington Post has established a cross-departmental AI collaboration mechanism, including a strategic decision-making team (AI Task Force) and an executive team (AI Hub); and the Financial Times has even created a new position, appointing an AI editor.

These moves not only provide journalists with efficient tools but also bring additional resources and innovative possibilities to news organizations. However, they also raise professional questions about the quality and authenticity of the news. Critics point out that the practice could undermine the authenticity and reliability of information provided to the public, blurring the line between human-written and machine-generated news. For financial outlets such as The Wall Street Journal, which are known for in-depth analysis and professional insight, the use of AI-generated content can raise concerns about the quality and credibility of financial reporting.

First, there is widespread concern about whether AI-created content can constitute a work in the legal sense and be protected by copyright.

Article 2 of the Regulations for the Implementation of the Copyright Law of the People's Republic of China clarifies that the term "work" as used in the Copyright Law refers to intellectual achievements in the fields of literature, art, and science that are original and can be reproduced in some tangible form. Under the Regulations, the requirements for determining whether written content constitutes a work are: 1) whether it is reproducible; 2) whether it is expressed in written form; and 3) whether it is original. At the moment, the biggest point of contention about AI-generated content is whether it is original.

In November 2023, the Beijing Internet Court issued a first-instance judgment in a copyright infringement dispute over an AI-generated image. This was the first case in China involving the copyright of AI text-to-image works. The trial was broadcast live on CCTV and multiple online platforms, attracting about 170,000 viewers and sparking extensive public discussion on the relationship between AI-generated content and copyright.

The case itself was not complicated. The plaintiff, surnamed Li, stated that on February 24, 2023, he had used the open-source software Stable Diffusion to generate an image of a young woman by entering prompt words, named it "Spring Breeze Sends Tenderness," and published it on the Xiaohongshu app. On March 2, 2023, the defendant's Baijiahao account "I am Yunkai Sunrise" published an article titled "Love in March, in the Peach Blossoms" that used the image as an illustration, with the plaintiff's signature watermark cropped out. The plaintiff therefore filed a lawsuit, demanding an apology from the defendant and compensation for damages.

In hearing the case, the court held that the plaintiff had made a certain degree of intellectual investment in the process of generating the image: designing how the character would be presented, selecting the input prompts, arranging the order of the prompts, setting the relevant parameters, and finally choosing the desired output. The court therefore found that the image reflected the plaintiff's intellectual investment and met the requirement of being an "intellectual achievement."

Importantly, the court noted that generative AI models at this stage do not have free will and cannot be legal subjects. When people use AI models to generate images, they are essentially creating with a tool; throughout the creative process, it is the person, not the AI model, who makes the intellectual contribution. The judgment provides an important legal interpretation of the copyright status of AI-generated content and clarifies the role and responsibility of humans in AI-assisted creation at this stage. It also has significant guiding value for the future interpretation and application of the relevant laws.

Secondly, the ownership of rights in content produced by generative AI poses a new challenge in the field of intellectual property.

This challenge came into focus in August 2023, when U.S. District Judge Beryl A. Howell dismissed AI entrepreneur Stephen Thaler's lawsuit against the U.S. Copyright Office, ruling that AI-generated works of art are not protected by copyright and emphasizing that "human authorship is an essential part of a valid copyright claim."

Thaler's earlier copyright application concerned "A Recent Entrance to Paradise," an image created by his AI system, the Creativity Machine. He argued that AI should qualify as a creator and that "if AI meets the criteria for authorship, then the owner of the AI system should be considered the true owner of the copyright." The Copyright Office rejected his application.

Judge Howell disagreed. In her ruling, she noted that even when human creativity is expressed through new tools or new media, human authorship remains a fundamental requirement for copyright protection and lies at the heart of copyrightability. She stressed that copyright law has never granted protection to works created "without any human guidance."

Judge Howell also cited the well-known "monkey selfie" copyright case to support her decision. In 2011, British wildlife photographer David Slater was visiting a national park in North Sulawesi, Indonesia, when a crested black macaque snatched his camera and took pictures of itself. The macaque's selfie quickly went viral and was reproduced by media around the world.

However, many organizations, including Wikipedia, refused to pay royalties, arguing that because the picture was taken by an animal, its copyright did not belong to Slater at all. Meanwhile, an American animal rights organization sued Slater for infringing the monkey's copyright. U.S. District Judge William Orrick held that although Congress has extended legal protections to animals in other areas, there is no indication that animals can hold copyright. The San Francisco court accordingly ruled that copyright protection does not apply to monkeys, affirming the "human creator" requirement in copyright law.

This case shows that the current legal framework still insists on attributing copyright to human beings. Even as AI-generated content grows in popularity, the question of who owns it remains complex and unresolved.

Turning to the broader relationship between generative AI and intellectual property, the second half of 2023 saw a significant increase in lawsuits over AI training data. These cases generally allege that AI companies illegally used copyrighted works as training data, thereby infringing the rights of the original authors.

At the end of June 2023, writers Mona Awad and Paul Tremblay filed a lawsuit in federal court in San Francisco, accusing OpenAI of illegally using their books as training data for ChatGPT's large language models. Then, on July 10, comedian and author Sarah Silverman and two other authors filed copyright infringement lawsuits over OpenAI's ChatGPT and Meta's Llama, accusing the companies of training their large language models on the authors' works without authorization. By September 8, a number of American writers, including Pulitzer Prize winner Michael Chabon and playwright David Henry Hwang, had filed similar lawsuits against OpenAI.

The lawsuits caught the attention of the Authors Guild. In July 2023, the group issued an open letter to AI companies including Alphabet, OpenAI, Meta, and Microsoft, demanding that authors' consent be obtained and appropriate compensation paid when copyrighted works are used to train AI. The letter was signed by more than 10,000 writers.

Image-generation AI faces a similar dilemma. Getty Images has filed a lawsuit against Stability AI in the High Court in London, alleging copyright and trademark infringement. Getty contends that Stability AI illegally copied and processed millions of copyrighted images to train its Stable Diffusion model. Many artists have filed similar lawsuits as well, arguing that AI infringes their intellectual property rights by using their works as training material.

These cases illustrate that, with the development and application of generative AI, the field of intellectual property is facing new challenges and changes. The existing legal framework is inadequate for dealing with the copyright questions raised by AI-generated content, and further adjustments and improvements are urgently needed.

Despite the growing intellectual property controversy over the use of generative AI in journalism and writing, there are opposing views and theories that defend the legitimacy and innovative value of AI creation.

Some legal experts and lawyers argue that when an AI uses an original image during training, the behavior can be regarded as secondary creation. The software's process of denoising and visual reconstruction is not simple copying but the creation of new means of expression, and the results should be treated as derivative works. The new forms of expression brought about by generative AI give new meaning to the original work. Therefore, when the nature of the original copyrighted work has been transformed, the resulting work may fall within the scope of fair use and no longer constitute infringement.

Internationally, some countries, such as Israel, Japan, and the United Kingdom, have begun to enact more permissive laws to accommodate text and data mining (TDM). This divergence in national laws worries Professor Pam Samuelson, a copyright scholar at the University of California, Berkeley. She has pointed out that differences in AI training rules between countries could give rise to "innovation arbitrage," in which AI companies and practitioners choose to operate in countries with less stringent restrictions on AI training.

The debate over artificial intelligence and intellectual property is not only a legal question but also a deeper reflection on human creativity and technological innovation. On the one hand, the growing number of lawsuits alleging copyright infringement by AI-generated content reflects the concerns of creators and copyright holders. On the other hand, arguments and legal interpretations in favor of the legality of AI works are also multiplying, compounded by the diversity of legal environments across countries. In this world of art woven from algorithms and human intelligence, it is difficult to find a clear answer. What we face may not be a single black-and-white answer, but a complex, multifaceted picture made up of many shades of gray.
