OpenAI denies report that ChatGPT leaked user passwords

According to Ars Technica, one user claimed that OpenAI's ChatGPT leaked private conversations from unrelated users, including support tickets from a pharmacy portal and code snippets containing login credentials for multiple websites.

Chase Whiteside told Ars that they had been using ChatGPT to come up with "clever names for the colors in the palette" and then apparently stepped away from the screen. When they reopened it, they found additional conversations listed on the left side of the page, none of which they had initiated. But OpenAI disagreed.

"Ars Technica published its story before our fraud and security teams completed their investigation, and their reporting is unfortunately inaccurate," a spokesperson told us in response to questions about Ars' reporting. "Based on our findings, the user's account login credentials were compromised and then a malicious actor used the account. The chat history and files displayed are the conversations in which this account was abused, not the history that ChatGPT displayed for another user."

Among the conversations the reader screenshotted and sent to Ars, one included an exchange about making a presentation along with some PHP code that appeared to contain the aforementioned troubleshooting tickets for the pharmacy portal. Stranger still, the text of the tickets shows they were initiated in 2020 and 2021, before ChatGPT launched.

Ars doesn't explain these date inconsistencies, but it's possible the tickets were part of ChatGPT's training data. Indeed, last year OpenAI was hit with a massive class-action lawsuit alleging that the company secretly used a trove of medical data and other personal information to train its large language models (LLMs).

In less than 18 months of existence, ChatGPT has already been accused of being a leaky faucet multiple times. In March 2023, OpenAI was forced to admit that a glitch caused the chatbot to show some users' conversations to unrelated users, and in December the company pushed out a patch to fix another issue that could have exposed user data to unauthorized third parties.

And in late 2023, Google researchers discovered that ChatGPT could leak large portions of its training data when fed certain "attack" prompts, keywords that coaxed the chatbot into doing things it shouldn't. If nothing else, the episode reminds us of an old operational-security adage: don't put anything into ChatGPT, or any other program, that you wouldn't want a stranger to see.
