Recently, the Llama3-V open-source model from a Stanford University AI research team was accused of plagiarizing MiniCPM-Llama3-V 2.5, the open-source model nicknamed "Little Steel Cannon" developed by Mianbi Intelligence (ModelBest), a star startup with Tsinghua roots. The accusation has sparked heated discussion online.
Image source: Pexels
On May 29, a Stanford AI team announced online that it had trained a SOTA multimodal large model surpassing GPT-4V for only $500, but netizens soon discovered that the project's model structure and code were highly similar to those of the "Little Steel Cannon", with only some variable names changed.
Late at night on June 2, the Mianbi Intelligence team confirmed that the Stanford model could recognize the Warring States-period characters from the "Tsinghua Bamboo Slips", and that even its incorrect recognition results were identical to those of the MiniCPM model. The Mianbi Intelligence team had spent several months scanning and manually annotating these characters, and the data had never been made public, which the team said confirmed the plagiarism.
At 1:27 a.m. Beijing time this morning, two authors of the Stanford Llama3-V team, Siddharth Sharma and Aksh Garg, formally apologized to the MiniCPM team on the social platform X for the academic misconduct and promised to take down all Llama3-V models. IT Home noted that they had posted an apology letter with similar content a few hours earlier, but it was quickly deleted.