Anthropic responds to music publishers' AI copyright lawsuit, accusing them of "volitional conduct"

Anthropic, a major generative AI startup, laid out in detail in a new court filing Wednesday why it believes the group of music publishers' claims are invalid.

In the fall of 2023, music publishers including Concord, Universal, and ABKCO filed a lawsuit against Anthropic, alleging copyright infringement through its Claude chatbot (since superseded by Claude 2). The complaint, filed in federal court in Tennessee (one of the United States' "Music Cities" and home to many record companies and musicians), claimed that Anthropic "illegally" scraped lyrics from the internet to train its AI models, and then reproduced those copyrighted lyrics for users through the chatbot.

Responding to the publishers' request for a preliminary injunction, which, if granted by the court, would force Anthropic to stop offering its Claude AI models, Anthropic made familiar arguments that have emerged in numerous other copyright disputes over AI training data. Generative AI companies like OpenAI and Anthropic rely heavily on scraping large amounts of publicly available data, including copyrighted works, to train their models, and they insist that this use constitutes fair use under the law.

In its response, Anthropic argues that its "use of Plaintiffs' lyrics to train Claude was a transformative use" that added "a further purpose or different character" to the original works. To support this, the filing directly quotes Anthropic's research director, Jared Kaplan, who states that the purpose was to "create a dataset to teach a neural network how human language works."

Anthropic claimed that its conduct had "no 'material adverse effect' on the lawful market for plaintiffs' copyrighted works," noting that lyrics comprise only a "tiny percentage" of its training data and that licensing on the scale required would be unworkable.

Like OpenAI, Anthropic claims that licensing the vast amounts of text needed to train neural networks like Claude is technically and financially infeasible. Striking deals covering the tens of thousands of songs across genres represented in the training data may be a licensing scale unattainable for either party.

Perhaps the filing's most novel argument is that the plaintiffs themselves, rather than Anthropic, engaged in the "volitional conduct" required for infringement liability.

In copyright law, "volitional conduct" means that the party accused of direct infringement must be shown to have controlled the production of the infringing content. In this case, Anthropic is essentially saying that the publisher plaintiffs themselves caused its AI model Claude to generate the infringing output, and were therefore in control of and responsible for the infringement they reported, rather than Anthropic or its Claude product, which merely responded autonomously to user inputs.

The document states that the output was generated through "attacks" conducted by the plaintiffs on Claude that were designed to elicit lyrics.

In addition to fighting copyright liability, Anthropic insisted that the plaintiffs could not prove irreparable harm.

Anthropic points out that there is no evidence that song licensing revenues have decreased since Claude launched, or that any qualitative harms are "certain and immediate." It also notes that the publishers themselves argue that monetary damages would compensate them for their losses, which contradicts their claims of "irreparable harm" (since, by definition, accepting monetary damages implies the harm is quantifiable and payable).

Anthropic argued that an injunction against it and its AI models is unwarranted given the plaintiffs' weak showing of irreparable harm.

It also claims the music publishers' demands are overly broad, seeking to restrict the use not only of the 500 representative works at issue in the case but also of the millions of other works the publishers claim to control.

Additionally, the AI startup argued that the lawsuit was filed in the wrong jurisdiction, maintaining that it has no relevant business ties to Tennessee and that its headquarters and primary operations are in California.

The copyright wars in the generative AI industry continue to intensify. More and more artists have joined lawsuits against art generators such as Midjourney and OpenAI, bolstering their infringement claims with evidence that diffusion models can reconstruct copyrighted works. Recently, The New York Times filed a copyright infringement lawsuit against OpenAI and Microsoft, claiming that they violated its copyright by using scraped Times content to train models for ChatGPT and other AI systems. The lawsuit seeks billions of dollars in damages and demands the destruction of any models or data trained on Times content.

Amid these debates, a nonprofit called Fairly Trained launched this week to advocate a "Licensed Model" certification for data used to train AI models. Meanwhile, major platforms including Anthropic, Google, and OpenAI, as well as content companies such as Shutterstock and Adobe, have pledged to provide legal defenses to enterprise users of AI-generated content.

Still, creators remain undeterred, protesting efforts by companies like OpenAI to have authors' claims dismissed. Judges will need to weigh technological advancement against legal rights in these complex disputes.

Regulators, too, are raising concerns about the scope of data mining. Litigation and congressional hearings may determine whether fair use shields this kind of appropriation of proprietary content, an outcome that will frustrate some and benefit others. Overall, negotiation seems inevitable if all stakeholders are to be satisfied as generative AI continues to develop.

Where this will go is unclear, but this week’s filings suggest that generative AI companies are forming a consensus around a core set of fair use and harm-based defenses that force courts to weigh technological advancement against the control of rights holders. As VentureBeat previously reported, there is no precedent to date for a copyright plaintiff winning a preliminary injunction in this type of AI dispute. Anthropic’s arguments are intended to ensure that precedent persists, at least at this stage of the many ongoing legal battles.
