Artificial intelligence is no longer just a helpful tool. It is starting to act like a professional hacker that can find weaknesses in code, steal money and make it easier for criminals to run scams. At the same time, big companies and security agencies are warning that AI is being used in fake kidnapping schemes and in powerful spying tools that target people all over the world.
AI that hacks smart contracts
Recent tests with advanced AI models such as Claude Opus 4.5 and GPT‑5‑style systems showed that they can scan blockchain smart contracts and find serious security holes without human help. In simulations, the AI was able to spot bugs and work out exploits that could have been used to steal the equivalent of millions of dollars from DeFi platforms.
Researchers then tried the AI on new, live contracts where the problems were not known in advance. The systems still discovered fresh “zero‑day” vulnerabilities, and the attacks would have remained profitable even after subtracting the cost of running the AI, meaning AI‑driven hacking can already be cheaper and faster than traditional methods.
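To make the profitability claim concrete, the short sketch below walks through the break‑even arithmetic: an attack only pays off when the value that can be drained from a vulnerable contract exceeds the cost of running the model plus the transaction fees. The figures in the example are hypothetical placeholders, not numbers from the research.

```python
# Break-even arithmetic for an AI-driven exploit (illustrative only).
# All figures below are hypothetical placeholders, not data from the cited tests.

def exploit_is_profitable(extractable_value_usd: float,
                          ai_cost_usd: float,
                          gas_and_fees_usd: float) -> bool:
    """Return True if the attacker's expected profit is positive."""
    return extractable_value_usd > ai_cost_usd + gas_and_fees_usd


if __name__ == "__main__":
    profitable = exploit_is_profitable(
        extractable_value_usd=50_000.0,  # assumed funds reachable through the bug
        ai_cost_usd=300.0,               # assumed cost of the model's scan
        gas_and_fees_usd=50.0,           # assumed on-chain transaction fees
    )
    print(profitable)  # True under these made-up numbers: the attack nets a profit
```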
What this means for hackers and DeFi
In the past, a human hacker had to spend many nights reading code, testing and failing before finding a single mistake. Now an AI can scan huge amounts of code in a short time, highlight likely errors and even suggest how to exploit them, turning “automated hacking” into a real business model.
Studies suggest that profits from exploiting these weaknesses are growing quickly, while the cost of attacking is going down. This puts DeFi and crypto platforms at high risk unless developers start using AI in the same way, but for defence, to find and fix bugs before the “evil robot” does. A minimal sketch of that defensive idea follows below.
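For developers who want to try the defensive side, the sketch below shows one possible way to ask an LLM to review a contract before it goes live. It is illustrative only, not a method from the article: it assumes the Anthropic Python SDK is installed, an API key is set in the environment, and the model id and file name are placeholders to swap for whatever you actually use.

```python
# Minimal sketch of AI-assisted defensive code review (illustrative only).
# Assumes: `pip install anthropic` and ANTHROPIC_API_KEY set in the environment.
# The model id and file name below are placeholders, not taken from the article.
import pathlib

import anthropic


def review_contract(path: str, model: str = "claude-opus-4-5") -> str:
    """Send a contract's source to an LLM and return its vulnerability findings."""
    source = pathlib.Path(path).read_text()
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model=model,
        max_tokens=2000,
        messages=[{
            "role": "user",
            "content": (
                "Review this smart contract for security vulnerabilities "
                "(reentrancy, access control, arithmetic issues). For each finding, "
                "name the affected function and suggest a fix.\n\n" + source
            ),
        }],
    )
    return response.content[0].text


if __name__ == "__main__":
    print(review_contract("MyVault.sol"))  # hypothetical contract file
```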
AI‑powered fake kidnapping scams
Law‑enforcement agencies such as the FBI have also warned about new fraud schemes where criminals use AI to edit images or clone voices to fake a kidnapping. Scammers scrape photos from social media, modify them with AI so they look more shocking and send them to family members claiming that a loved one has been taken, then demand a ransom.
Experts say these photos and videos often contain small mistakes, such as wrong tattoos, strange fingers or slightly off facial features, which can reveal that they are fake. People are advised to call the person directly, inspect the images carefully and agree on a private “family password” that only real relatives know, so identity can be verified even if a voice has been cloned almost perfectly by AI.
Apple, Google and state‑level spyware
At the same time, companies like Apple and Google have sent alerts to users in many countries warning about sophisticated digital attacks. Some of these attacks involve commercial spyware made by firms such as Intellexa, whose Predator tool has been linked to surveillance of journalists, politicians and activists, even while the company has been under sanctions and investigation.
These waves of warnings have pushed governments and organisations, including the European Union, to open new inquiries into how such tools are used against high‑profile targets. The message is clear: advanced hacking and spying are no longer limited to secret agencies; they are becoming products that can be bought, sold and supercharged with AI.
How to stay safer in an AI‑driven threat world
Security professionals now argue that AI will be used on both sides, attack and defence, and that ignoring it will leave systems wide open. For ordinary people, simple steps such as limiting what is shared online, being sceptical of emotional ransom calls, checking details in suspicious images and agreeing on a family password can greatly reduce the risk of falling for AI‑powered scams.