DeepMind finds political deepfakes are top problem for malicious use of AI

Google DeepMind has, for the first time, surveyed the most common malicious uses of AI. The study, a collaboration between Google's AI division DeepMind and Jigsaw, a Google-owned research unit, aims to quantify the risks of the generative AI tools that the world's largest technology companies have marketed in pursuit of huge profits.
Technology-related motivations for bad actors

The study found that creating realistic but fake images, videos, and audio of people was the most common abuse of generative AI tools, nearly twice as common as the next most frequent misuse: falsifying information with text tools such as chatbots. The most common goal of abusing generative AI was to influence public opinion, which accounted for 27% of uses, raising concerns about how deepfakes could sway elections around the world this year.

Deepfakes of British Prime Minister Rishi Sunak and other global leaders have appeared on TikTok, Facebook, and Instagram in recent months. British voters go to the polls in next week's general election. Despite efforts by social media platforms to label or remove such content, viewers may not recognize it as fake, and its spread could sway votes. DeepMind researchers analyzed about 200 instances of abuse drawn from the social media platforms Facebook and Reddit, as well as online blogs and media reports of misuse.

The study found that the second-largest motivation for abusing generative AI products, such as OpenAI's ChatGPT and Google's Gemini, is to make money, whether by offering services to create deepfakes or by using generative AI to mass-produce content, such as fake news articles. Most abuses relied on easily available tools and "require minimal technical expertise," meaning more bad actors can misuse generative AI.

DeepMind says the research will shape how it evaluates the safety of its models, and it hopes the findings will also influence how its competitors and other stakeholders view these "manifestations of harm."