Google is rolling out a major new online safety initiative to more effectively remove explicit deepfake content from search at scale and prevent it from being featured prominently in search results.
When users successfully request the removal of non-consensual, explicit fake content depicting themselves from search, Google's system will not only process that specific content but also work to filter out all similar explicit results and remove duplicate images.
Google product manager Emma Higham said these protections have proven effective against other types of non-consensual imagery, and the same safeguards now cover fake explicit images, giving users greater peace of mind.
Google's search ranking system is also being adjusted. Searches for deepfake images, such as the AI-generated pornographic images of Taylor Swift that circulated earlier this year, will now surface high-quality, non-explicit content, such as relevant news stories. Sites that receive a high volume of removals for fake explicit imagery will be demoted in Google's search rankings.
Google said that earlier updates this year have reduced the exposure of explicit image results by more than 70% in queries that specifically seek such deepfake content. The company is also working on ways to distinguish real explicit content (such as an actor's consensual nude scenes) from explicit fakes, so that legitimate images can still be displayed while deepfakes are demoted. These updates continue a series of measures by Google to combat dangerous and explicit content online.
As early as May this year, Google banned advertisers from promoting deepfake porn services. In 2022, it expanded the types of doxxing-related personal information that can be removed from search, and in August 2023 it began blurring explicit images in search results by default.