January 12, 2025 - According to TechCrunch, Microsoft has filed a legal action against a group it alleges knowingly developed and used tools to bypass the safeguards of its cloud AI services. The lawsuit was filed by Microsoft in December 2024 in the U.S. District Court for the Eastern District of Virginia and involves 10 unnamed defendants.
According to the lawsuit documents, Microsoft alleges that the defendants unlawfully breached the Azure OpenAI service using stolen customer credentials and custom-built software. Azure OpenAI is a fully managed service developed by Microsoft on top of OpenAI technology, offering a wide range of AI models including ChatGPT. Microsoft alleges that the defendants violated the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and federal racketeering laws by using its software and servers to generate "offensive, harmful and illegal" content. Microsoft did not provide specific details about that content.
Microsoft said in the lawsuit that it discovered in July 2024 that customer credentials (specifically API keys, the unique strings used to authenticate an application or user) for some Azure OpenAI services were being used to generate content that violated the service's usage policies. Its investigation confirmed that these API keys had been stolen from paying customers. Microsoft noted that the defendants obtained keys from multiple customers through systematic theft and used them to carry out unlawful activities.
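To illustrate why a stolen API key is sufficient to impersonate a paying customer, here is a minimal sketch of how an Azure OpenAI request is authenticated. The endpoint, deployment name, and key below are placeholders, and the exact URL shape is an assumption based on Azure OpenAI's REST conventions; the sketch builds the authenticated request rather than sending it.

```python
import json
import urllib.request

# Placeholder values -- real endpoints and keys belong to the Azure customer.
ENDPOINT = "https://example-resource.openai.azure.com"  # hypothetical resource
DEPLOYMENT = "dall-e-3"          # hypothetical deployment name
API_KEY = "<customer-api-key>"   # whoever holds this string is authenticated

def build_image_request(prompt: str) -> urllib.request.Request:
    """Build (but do not send) an Azure OpenAI image-generation request.

    Azure OpenAI accepts a single 'api-key' header as the credential,
    which is why a stolen key alone grants access billed to its owner.
    """
    url = (f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}"
           "/images/generations?api-version=2024-02-01")
    body = json.dumps({"prompt": prompt, "n": 1}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"api-key": API_KEY, "Content-Type": "application/json"},
        method="POST",
    )

req = build_image_request("a watercolor of a lighthouse")
print(req.get_header("Api-key"))  # the key is the only credential attached
```

The request carries no other proof of identity, so rotating or revoking a leaked key is the customer's only remedy once it has been exfiltrated.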
According to Microsoft, the defendants used the stolen API keys to run a "hacking-as-a-service" operation and developed a client tool called "de3u." The tool allowed users to generate images with the DALL-E model via the stolen API keys without writing any code, while attempting to bypass the Azure OpenAI service's content filtering mechanisms. For example, when a user entered a text prompt containing words that would trigger content filtering, de3u attempted to stop the service from revising the prompt, allowing a potentially harmful image to be generated.
Microsoft also said that the defendants, through unlawful programmatic access to the Azure OpenAI service, reverse-engineered ways to circumvent Microsoft's content and abuse safeguards, causing it damage and loss. The de3u project codebase, which was hosted on GitHub (a Microsoft-owned company), is currently inaccessible.
In a blog post published Friday, Microsoft said the court had authorized it to seize a website "critical" to the defendants' operations in order to gather evidence, analyze their profit model, and disrupt their technical infrastructure, 1AI noted. Microsoft also said it had taken "countermeasures" in response to the observed illegal activity and added further security safeguards to its Azure OpenAI service, without disclosing specifics.