Mikko Hyppönen, a 54-year-old cybersecurity expert who has been fighting on the front lines for decades, recently revealed to TNW in a video call his top five concerns about AI (artificial intelligence) cyber threats for 2024. These are in no particular order, although one is causing him the most sleepless nights.
- Deepfakes
Researchers have long described deepfakes as the most worrying criminal use of AI, but the synthetic media has yet to live up to their predictions. In recent months, however, their fears have begun to become reality. According to research from London-based ID verification unicorn Onfido, deepfake fraud attempts increased by 3,000% in 2023.
- Deep Scams
Despite the similarity in name to deepfakes, deep scams do not necessarily involve manipulated media. In this case, the “deep” refers to the sheer scale of the scam. Through automation, scammers can expand their targeting from a handful of victims to an essentially unlimited number.
- LLM-enabled Malware
AI is already writing malware. Hyppönen’s team has found three worms that invoke an LLM to rewrite their code every time they replicate. Although these haven’t been seen on live networks yet, they’ve been published on GitHub, and they work.
- Discovery of Zero-Days
Another emerging concern involves zero-day vulnerabilities, which attackers discover and exploit before developers have a chance to release a fix. AI can be used to find these flaws, but it can also be used to create them.
- Automated Malware
WithSecure has integrated automation into its defenses, which gives the company an edge over attackers, who still rely primarily on manual operations. For criminals, the obvious way to close that gap is to fully automate their own malware campaigns.
Hyppönen ranks fully automated malware as the top security threat for 2024. However, there’s an even bigger threat lurking around the corner — the perilous path toward AGI (artificial general intelligence). Hyppönen expects we’ll see the effects of this in his lifetime. “I think we’re going to be the second most intelligent being on the planet in my lifetime,” he said. “I don’t think it’s going to happen in 2024. But I think it’s going to happen in my lifetime.”
To maintain human control over AGI, Hyppönen argues that it must be strongly aligned with human goals and needs.
“What we’re building has to understand human nature and share its long-term benefits with humanity… The upside is huge — bigger than anything — but the downside is also bigger than anything.”