Ready or not, here it comes. AI is a potent tool with the ability to learn on its own, get smarter as it goes, and approximate, to an uncanny degree, what a human would do. That is great news for cybersecurity defenders, and terrible news when the same power is turned to cybercrime. AI-powered ransomware is here, and while it is still (thankfully) in its infancy, like all artificial intelligence it learns quickly and won't stay there for long. Here's what we need to know to stay two steps ahead of it. The saving grace for the security community is that we can fight fire with fire by leveraging AI-driven solutions of our own; one day soon, we're going to have to. AI-powered ransomware could be "terrifying," and it could manifest in several different ways, all of them bad.


1. Laying low and blending in | One common approach is to use machine learning algorithms to improve the efficiency and effectiveness of the ransomware attack. “Unlike older hacks, AI-powered ransomware can mimic normal system behaviors and blend in without drawing suspicion,” Cybertalk notes. “The embedded virus then trains itself to strike when someone accesses a device, taking control of any software the user opens.” 

2. Bot-based negotiations | AI-driven ransomware could also automate the ransom negotiation process. In the past, ransomware attacks often involved a human operator who would communicate with the victim and negotiate the ransom payment. With AI-powered ransomware, the negotiation process can be automated, allowing the attackers to scale their operations and potentially make more money.

3. An extremely large kill radius | Using AI and machine learning to automate the ransomware process would vastly expand the scope of what a ransomware attack could do. Not only could it infect more of a given network, and larger networks, but it could also allow hackers to go after an even wider range of targets. When AI makes each attack so easy, why not? “It’s not worth their effort if it takes them hours and hours to do it manually,” noted Mark Driver, a research vice president at Gartner. “But if they can automate it, absolutely.” Ultimately, he said, “it’s terrifying.” 

4. Super fast, super customized phishing attacks | AI is already being used to scrape social media sites for precious identifying information and funnel it back to malware gangs for use in customized phishing attacks. Reconnaissance was once the most time-consuming part of such a campaign; AI-driven technology now makes it easier than ever to glean loads of sensitive information that feeds the social engineering ploys leading to ransomware attacks.

5. Automating time-consuming tasks | Hackers are smart. For years, we’ve been preaching about the capability of AI to ingest petabytes of data and let teams sift through more traffic than they ever thought possible, and the hackers have heard us. As prominent cybersecurity expert Mikko Hyppönen puts it: “How do you get it on 10,000 computers? How do you find a way inside corporate networks? How do you bypass the different safeguards? How do you keep changing the operation dynamically to actually make sure you’re successful? All of that is manual.” But, he argues, everything from changing malware code to recompiling exploits could be done automatically with the help of AI, truly changing the game: “All of this is done in an instant by machines.”


The AI Arms Race

The ironic good news is that threat actors are just as strapped for qualified cybersecurity professionals as we are. But that won’t last long. “We have already seen [ransomware groups] hire pen testers to break into networks to figure out how to deploy ransomware. The next step will be that they will start hiring ML and AI experts to automate their malware campaigns,” Hyppönen said. “It’s not a far reach to see that they will have the capability to offer double or triple salaries to AI/ML experts in exchange for them to go to the dark side…I do think it’s going to happen in the near future — if I would have to guess, in the next 12 to 24 months.” And while it is hard to hire seasoned professionals in a field as new as AI, it is easier to become an expert for that very reason: in a small pool, it’s not hard to be a big fish. None of this is encouraging, but it does give us time to prepare.

Preparing now for AI cyberwarfare

While it’s easy to buy into the hype, companies looking to future-proof against AI-powered ransomware don’t need to panic. The way to defend against it is the way to defend against a high volume of cyberattacks in general, and that is to tighten up the basics while building towards a more mature, integrated security posture. InfoSecurity Magazine argues that since the first AI-driven malware models will likely just exploit common security flaws and misconfigurations, organizations can prepare by “simply plugging holes in their infrastructure.” To do that, you need to know where the holes are. 
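To make "knowing where the holes are" concrete, here is a minimal sketch, using only the Python standard library, of checking a host for service ports that are commonly left exposed by misconfiguration. The port list and the localhost target are illustrative assumptions, not a substitute for a real vulnerability scan:

```python
import socket

# Ports frequently left exposed by misconfiguration (illustrative, not exhaustive).
COMMON_PORTS = {21: "FTP", 23: "Telnet", 445: "SMB", 3389: "RDP", 5900: "VNC"}

def find_open_ports(host: str, ports: dict[int, str] = COMMON_PORTS,
                    timeout: float = 0.5) -> list[str]:
    """Return the commonly abused services accepting TCP connections on `host`."""
    exposed = []
    for port, service in ports.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                exposed.append(f"{service} ({port})")
    return exposed

if __name__ == "__main__":
    # Only scan hosts you own or are explicitly authorized to test.
    print(find_open_ports("127.0.0.1"))
```

A real assessment would use a dedicated scanner and authenticated configuration checks; the point is simply that the first wave of AI-driven attacks will probe for exactly these kinds of easy openings, so you should find them first.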

First, get a handle on your disparate assets, particularly those running in silos. Moving toward a consolidated, integrated XDR architecture is the first step toward being able to leverage orchestrated, AI-driven protection. 

Next, double down on phishing and remote work threats by implementing secure authentication practices and training your workforce. Implement detection at the DNS layer, and hunt out those vulnerabilities by penetration testing your environment. Know where you stand, so you know where you need to improve. 

Lastly, consider partnering with a cybersecurity consultancy like Port53 to help you plan, organize, and execute your XDR and risk-managed approach. We audit against NIST standards to help you reach full cybersecurity maturity and offer our 24/7/365 SOC-as-a-Service to support your team as it reaches AI readiness.

AI-powered ransomware is coming, but with the right support, organizations of any size can be one step ahead of the game.