Just as defenders are leveraging AI techniques such as machine learning and automation, adversaries are incorporating those same advances into their malicious activities. Hence the emergence of AI-powered attacks. Dubbed “one of the biggest fears within the security community” by Max Heinemeyer, Director of Threat Hunting at Darktrace, AI-powered attacks rely more on offensive AI and less on human input in an attempt to prey upon unsuspecting users.

This change in tactics makes it possible for attackers to launch more campaigns, as they can automate and orchestrate much of the work that their operations require. Not only that, but offensive AI has the potential to make attacks quicker and easier to create in the first place, thus attracting more individuals into the world of digital crime. Finally, because they can adapt to their environment by learning from contextual information, AI-powered attacks are better equipped than traditional attacks to maximize the damage they cause.

Let’s look at some examples of what these attacks might look like. Take the idea of an AI-powered spear-phishing campaign. Like a traditional phishing attempt, the operation would begin with an attack email. The difference is that the email would originate from an AI-powered toolkit and not a human attacker. As pointed out by Wired, spear phishers could use such a toolkit to scan their target’s social media feeds and emails, build a profile of the target’s routine correspondence, and replicate the target’s persona to increase the success rate of their attack. They could do all of this while saving the hours of work that manual research would otherwise require.

There’s also the potential threat of AI-powered malware. Back in 2018, IBM Research developed “DeepLocker” to observe how AI models could work together with cutting-edge malware techniques. The team designed DeepLocker to disguise itself as a legitimate application such as video conferencing software. The threat used facial recognition, geolocation, and voice recognition, fed into a deep neural network (DNN) AI model, to identify when it had reached its target. Only when it came across trigger conditions identifying specific victims did it reveal its malicious intent. This behavior complicated not only the task of reverse-engineering the malware’s functionality but also that of determining the exact circumstances under which the malware would activate.
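
To make that concealment concrete, here is a minimal, benign sketch of the “trigger condition as key” idea DeepLocker demonstrated: the payload ships encrypted, and the decryption key is derived from attributes of the intended target, so static analysis reveals neither the payload nor the trigger. All names, attributes, and the harmless “payload” below are invented for illustration.

```python
import hashlib

def derive_key(attributes: dict) -> bytes:
    """Derive a key from target attributes (stand-ins for the outputs of
    face/voice/geolocation models). Only the derivation routine ships with
    the sample, so the trigger cannot simply be read back out of it."""
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).digest()

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher, purely for demonstration.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Build time: the author encrypts a (here: harmless) payload against the
# target's attributes and discards the plaintext key material.
target = {"face_id": "cluster_17", "city": "Rotterdam", "voice_id": "speaker_04"}
payload = xor_bytes(b"print('payload unlocked')", derive_key(target))

# Run time: the sample re-derives the key from whatever it observes locally.
# A wrong environment yields a wrong key and unintelligible bytes, so an
# analyst sees only ciphertext plus a key-derivation routine.
observed = {"face_id": "cluster_17", "city": "Rotterdam", "voice_id": "speaker_04"}
candidate = xor_bytes(payload, derive_key(observed))
if candidate.startswith(b"print("):
    print("Trigger conditions met; payload would execute here.")
else:
    print("Conditions not met; payload stays opaque.")
```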

How to Defend Against AI-Powered Attacks

Organizations need to further increase their detection capabilities if they are to protect themselves against AI-powered malware, AI-created spear-phishing campaigns, and similarly advanced attacks. In doing so, however, they need to act strategically. Specifically, they need to stop short of fully relying on AI to make their security decisions for them. It’s never a good idea to take the human analyst out of the equation. Without proper oversight from their security teams, AI-powered solutions could make a wrong decision and leave organizations open to a breach.

Organizations therefore need an AI-enabled approach to security that keeps human analysts at the center. That’s why hybrid intelligence is the best defense. As I wrote back in February on LinkedIn, hybrid intelligence is where human intelligence and machine intelligence come together. Each has its own strengths and weaknesses. By working as one, the two can help one another overcome their flaws and augment their strengths, becoming greater than the sum of their parts.

Source: SANS Data Science Lightning Summit, March 19, 2021

Under a hybrid intelligence program, machines could use their intelligence to collect, analyze, and process data received from an organization’s security tools. They could then orchestrate those tools to coordinate the organization’s ability to identify, detect, and respond to potential security issues.
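
As a rough illustration of that division of labor, the sketch below shows a machine-side triage loop that normalizes alerts from several tools, auto-contains high-confidence detections, and queues ambiguous ones for an analyst. The tool names, fields, and thresholds are all hypothetical, not any particular product’s API.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str   # which security tool raised it (e.g., EDR, SIEM, email gateway)
    entity: str   # host or user involved
    score: float  # model-assigned likelihood that the alert is a true positive

def triage(alerts: list[Alert], auto_threshold: float = 0.95,
           review_threshold: float = 0.5) -> tuple[list[Alert], list[Alert]]:
    """Route alerts: auto-respond to high-confidence ones, send uncertain
    ones to the human queue, and suppress likely noise."""
    auto_respond, human_queue = [], []
    for alert in alerts:
        if alert.score >= auto_threshold:
            auto_respond.append(alert)   # e.g., isolate the host, revoke a token
        elif alert.score >= review_threshold:
            human_queue.append(alert)    # analyst investigates with full context
        # Below review_threshold: logged for later tuning, not surfaced.
    return auto_respond, human_queue

alerts = [
    Alert("edr", "host-112", 0.97),
    Alert("email-gateway", "j.doe", 0.63),
    Alert("siem", "host-044", 0.12),
]
contained, queued = triage(alerts)
print(f"{len(contained)} auto-contained, {len(queued)} queued for an analyst")
```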

But the human analyst still has a role to play. Analysts are the ones who are ultimately responsible for investigating and responding to security incidents, and for taking a proactive stance through threat hunting. After all, they’re the ones who need relevant context from the machines. They’re also the ones who need to keep complexity in check. Toward that end, they can provide feedback to help reduce the incidence of false positives generated by the machines, saving time and money while keeping their organization focused on investigating legitimate security concerns. Similarly, they can continue to fine-tune their machines to automate mundane tasks so that they can focus on meaningfully contributing to their employer’s security posture.
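
One simple way that analyst feedback can flow back into the machines is threshold tuning: analysts label a sample of surfaced alerts as true or false positives, and the system raises or lowers its alerting threshold to keep false positives at an acceptable rate. Here is a minimal sketch of that loop; the scores, verdicts, and the 10% target are invented for illustration.

```python
def tune_threshold(labeled: list[tuple[float, bool]], max_fp_rate: float = 0.1) -> float:
    """Pick the lowest alert threshold whose false-positive rate among surfaced
    alerts stays at or below max_fp_rate. `labeled` pairs a model score with an
    analyst verdict (True = real incident)."""
    for threshold in sorted({score for score, _ in labeled}):
        surfaced = [(s, ok) for s, ok in labeled if s >= threshold]
        false_pos = sum(1 for _, ok in surfaced if not ok)
        if surfaced and false_pos / len(surfaced) <= max_fp_rate:
            return threshold
    return 1.0  # Nothing meets the target; surface only near-certain alerts.

# Analyst verdicts from last week's queue: (score, was_real_incident).
feedback = [(0.92, True), (0.88, True), (0.71, False), (0.69, True),
            (0.55, False), (0.51, False), (0.40, False)]
print(f"new alert threshold: {tune_threshold(feedback):.2f}")
```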

For more information about how hybrid intelligence can help security teams protect their organizations against AI-powered attacks, check out this recording of a SANS Data Science Lightning Summit I participated in.

Learn how ReliaQuest can force multiply your security team with technology + services >