Stanford team apologizes for plagiarizing Tsinghua's AI model: Llama3-V model will be removed

Recently, the Llama3-V open-source model from a Stanford University AI research team was accused of plagiarizing MiniCPM-Llama3-V 2.5, the open-source model nicknamed "Little Steel Cannon" developed by Mianbi Intelligence (ModelBest), a star AI startup incubated at Tsinghua University. The accusation has sparked heated discussion online.


Image source: Pexels

On May 29, the Stanford AI team announced online that it had trained a SOTA multimodal large model surpassing GPT-4V for only $500. Netizens soon discovered, however, that the project's model architecture and code were highly similar to those of the "Little Steel Cannon", with only some variable names changed.

Late at night on June 2, the Mianbi Intelligence team confirmed that the Stanford model could not only recognize the ancient Warring States-period characters from the Tsinghua Bamboo Slips, but also produced exactly the same incorrect recognition results as the MiniCPM model. The Mianbi Intelligence team had spent several months scanning and manually annotating these characters, and the data had never been made public, which confirmed the plagiarism.

At 1:27 am Beijing time this morning, Siddharth Sharma and Aksh Garg, two authors of the Stanford Llama3-V team, formally apologized to the MiniCPM team on the social platform X for the academic misconduct and promised to remove all Llama3-V models. IT Home noticed that they had published an apology letter with similar content a few hours earlier, but it was quickly deleted.
