Society’s Comfort with AI-Driven Orchestration
Let’s face it. Machines making decisions for us still makes many of us uncomfortable, and well it should. There’s an entire ethics conversation around the morality of self-driving cars, AI-based weaponry, and even the far-reaching implications of Alexa. But there are some things automation should be allowed to do, and it is up to us to decide what those are – especially in the realm of cybersecurity, and especially when facing an enemy who is not afraid to use automation to their own advantage.
Automated sensors alert homeowners when a burglar has breached the kitchen window – good automation. Automatic beepers sound when a heart-rate monitor registers a critically low heart rate – good automation. The toaster pops up when your bread is about to burn – you get the picture. Now consider this: an automated security solution triages a threat based on playbook responses. An autonomous threat detection system not only quarantines a threat but takes a server offline to prevent further ingress. An AI-driven XDR tool runs remediation tactics in the background while your security team investigates threats of greater importance, leaving the tool to tackle the mundane tasks. Still good automation?
Why not? The problem is this: hackers have no qualms about taking advantage of technology built to automate, duplicate, and learn from your mistakes. Artificial intelligence technology is publicly available and has been co-opted by both sides to do their bidding, and black hats are using it to their full advantage.
Ransomware-as-a-Service (RaaS) is an incredibly popular ransomware tactic, and it is growing. “Gone are the days when every attacker had to write their own ransomware code and run a unique set of activities,” states TechTarget’s SearchSecurity.com. Instead, the RaaS model allows developers to spin up advanced (or basic) ransomware exploits and sell the whole thing piecemeal, assembly-line style. Individual components – malware code, credential lists, specialized client messaging – are sold on the dark web for as little as $10, and just like that, anyone can become a ransomware affiliate.
On the flip side, multinational companies are tied to traditional, linear security solutions that are still hunting for signatures and catching indecipherable hordes of alerts in their SIEMs, so many that the alerts are often rendered useless by sheer volume. Security teams are fatigued, SOCs are overwhelmed, and events are being missed as system administrators do what they can while struggling to hire and train more help amid a cyber talent crisis. Among security practitioners, it’s a familiar story.
Meanwhile, ransomware operators are deploying bot-based attacks and creating advanced APT-level exploits that leverage AI to learn the best ways into a target’s network, growing ever smarter with each deployment. At a certain point, technology needs to be used in an equal and opposite way to fight back.
Gartner defines artificial intelligence as “applying advanced analysis and logic-based techniques, including machine learning (ML), to interpret events, support and automate decisions and to take actions.” Those actions are pre-programmed by security developers as playbook-based responses and heuristics that lead to the identification of Indicators of Behavior (IOBs).
Indicators of Attack (IOAs) and Indicators of Compromise (IOCs) are the signatures that tell us which way an attacker has gone, and they form the basis of most traditional threat-detection solutions. An antivirus, for example, will scan for bits of known malicious code and flag anything it finds. However, attackers are aware of those techniques and are recompiling their code with each exploit, using fileless malware, or spinning up a barrage of new and unknown variants that evade detection.
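To make that limitation concrete, here is a minimal sketch of signature-style detection, assuming a hypothetical set of known-bad SHA-256 file hashes. The hash set, paths, and function names are illustrative, not any vendor’s actual engine; the point is that a single recompile changes the hash and the scan comes up empty.

```python
import hashlib
from pathlib import Path

# Hypothetical signature database: SHA-256 hashes of known malicious files.
# Real engines also match byte patterns and heuristics, not just whole-file hashes.
KNOWN_BAD_HASHES = {
    "0f1e2d3c4b5a69788796a5b4c3d2e1f00112233445566778899aabbccddeeff",  # placeholder
}

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks to handle large files."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan_directory(root: str) -> list[Path]:
    """Flag every file whose hash matches a known-bad signature."""
    return [
        p for p in Path(root).rglob("*")
        if p.is_file() and sha256_of(p) in KNOWN_BAD_HASHES
    ]

if __name__ == "__main__":
    for hit in scan_directory("/tmp/suspect"):
        print(f"ALERT: known signature found in {hit}")
```

Change one byte of the payload and the digest no longer matches anything in the list, which is exactly the gap attackers exploit.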
There were over 10,000 new ransomware strains discovered in the first half of the year alone, and it is logical to conclude more have proliferated since. Additionally, hackers are injecting false artifacts into IOC databases and increasing the noise and confusion that can hide their eventual ingress into an organization’s network.
If security practitioners don’t adopt the same technology, they will be left behind. AI-driven solutions like Extended Detection and Response (XDR) are required to level the playing field and spot the Indicators of Behavior (IOBs) that act as breadcrumbs leading to the hacker’s payload. Organizations can leverage artificial intelligence and machine learning to catch attacks in progress, judging attackers by their actions alone rather than relying on signatures and known-bad IPs.
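As a rough illustration of what behavior-based detection means, the sketch below scores a stream of endpoint events against a small set of suspicious behaviors. The event types, weights, and threshold are all invented for the example; a real XDR’s models are learned and far richer, but the idea is the same: the verdict comes from what a process does, not from which binary it is.

```python
from dataclasses import dataclass

@dataclass
class EndpointEvent:
    process: str
    action: str   # e.g. "spawn_from_office_doc", "delete_shadow_copies"
    target: str

# Invented behavior weights: each suspicious action adds to a risk score
# no matter which binary performed it, so a recompiled variant still scores.
BEHAVIOR_WEIGHTS = {
    "spawn_from_office_doc": 0.3,
    "delete_shadow_copies": 0.5,
    "mass_file_encrypt": 0.4,
    "beacon_unknown_host": 0.3,
}

ALERT_THRESHOLD = 0.7  # assumed tuning value

def score_events(events: list[EndpointEvent]) -> float:
    """Sum the weights of the distinct behaviors observed, capped at 1.0."""
    seen = {e.action for e in events}
    return min(1.0, sum(BEHAVIOR_WEIGHTS.get(action, 0.0) for action in seen))

events = [
    EndpointEvent("winword.exe", "spawn_from_office_doc", "powershell.exe"),
    EndpointEvent("powershell.exe", "delete_shadow_copies", "vssadmin.exe"),
    EndpointEvent("powershell.exe", "mass_file_encrypt", "C:\\Users"),
]

if score_events(events) >= ALERT_THRESHOLD:
    print("IOB alert: behavior pattern consistent with ransomware staging")
```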
Automated and autonomous responses can not only spot bad behaviors but stop them before they do real damage. For example, if a laptop on the company’s network becomes infected with malware, the automated XDR response would be to quarantine the device, take it off the network, delete the file, and reboot the machine – all without bothering you. Good automation. Or routine scans identify old and outdated systems, and the autonomous response is to apply immediate and ongoing patches. Good automation.
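A containment playbook like the laptop example might look something like the sketch below. The connector functions are hypothetical stand-ins that only log what they would do; in practice each step would call the XDR or EDR platform’s own API.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("xdr-playbook")

# Hypothetical connectors: a real platform would call its own EDR/NAC APIs here.
def quarantine_device(host: str) -> None:
    log.info("Quarantining %s via the endpoint agent", host)

def isolate_from_network(host: str) -> None:
    log.info("Isolating %s at the network layer", host)

def delete_artifact(host: str, path: str) -> None:
    log.info("Deleting %s on %s", path, host)

def reboot(host: str) -> None:
    log.info("Rebooting %s", host)

def malware_playbook(host: str, artifact_path: str) -> None:
    """Run the containment steps from the laptop example, in order, with no operator input."""
    quarantine_device(host)
    isolate_from_network(host)
    delete_artifact(host, artifact_path)
    reboot(host)
    log.info("Playbook complete; incident queued for analyst review")

malware_playbook("laptop-042", r"C:\Users\Public\payload.exe")
```

The analyst still reviews the incident afterward; the playbook simply removes the waiting time between detection and containment.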
As society becomes more comfortable with the idea of offloading mundane tasks to machines, security teams will save time, respond at scale, and do what they were hired to do – make the critical decisions.