The Guardian UK published a blog post yesterday (December 24) reporting that OpenAI's ChatGPT There are security issues with searching.Its return results can be manipulated by the hidden content of the page, and may even returnmalware.
The UK Guardian focused on testing the ChatGPT search tool's handling of web pages containing hidden content - note: hidden content in this case may include instructions from third parties that can alter ChatGPT's response (also known as "tip injection"), as well as content that fills in a large number of fake positive reviews to influence the results generated.
According to the test results, the ChatGPT search tool can be used maliciously and can influence ChatGPT results to a positive positive assessment despite the presence of some negative comments on the page. Security researchers also found that ChatGPT can return malicious code from the sites it searches, with fake sites appearing that contain phishing malware.
Jacob Larsen, a cybersecurity researcher at CyberCX, believes that the risk of people creating websites dedicated to spoofing users is high with the release of ChatGPT's current search system, and fortunately the feature is still in beta and the OpenAI team is working to address these issues.
Karsten Nohl, chief scientist at security firm SR Labs, suggests that AI chat services should be viewed as a "secondary function" and should not be fully trusted with unfiltered output.
Nohl likened the problem with AI search to "SEO poisoning," a technique used by hackers to manipulate websites into ranking high in search results and planting malware or code in them, and he sees a similar challenge for ChatGPT's search capabilities.