AI scam escalates: hyper-realistic voice attack on Gmail users

A new type of scam that uses artificial intelligence has recently begun targeting Gmail users with an alarming degree of realism. As one of the world's largest email services, with over 2.5 billion users, Gmail has become a prime target for cybercriminals.

Sam Mitrovic, a Microsoft Solutions Consultant, recently came close to falling victim to this sophisticated scam. He received a notification of a Gmail account recovery request, followed by a phone call claiming to be from Google. The caller said that Mitrovic's account had been compromised within the past week and that the attacker had downloaded his account data, creating a sense of urgency designed to win his trust.


What makes this scam so effective is its realism. Scammers can spoof Google's official business phone numbers and use AI to generate near-perfect synthetic voices. They even send legitimate-looking documents created with tools such as Google Forms to add credibility.

Garry Tan, another technology professional, was targeted by a similar scam. The scammers claimed that Google had received a death certificate from a family member and was using it to process an account recovery request. This tactic exploits people's vulnerability around sensitive subjects such as the death of a loved one.

In response to this new type of threat, Google is partnering with other organizations to launch the Global Signal Exchange, a program for sharing fraud-related intelligence so that fraudulent activity can be identified and blocked more quickly. Google's goal is a user-friendly solution that helps more people recognize and resist fraud.

To protect themselves from these types of advanced Gmail scams, users should:

Stay calm and don't let the sense of urgency created by scammers cloud your judgment.

Keep in mind that real Google support does not call users proactively.

Verify the situation independently, for example by searching for the caller's phone number and reviewing the security activity in your own Gmail account.

Consider enrolling in Google's Advanced Protection Program, which provides extra security for high-risk users.

Use a combination of security keys and biometrics to enhance account security.

As AI technology continues to advance, fraudulent tactics are also escalating. Users need to remain highly vigilant and raise their security awareness in order to effectively guard against these increasingly sophisticated cyber threats.
