For a team of intelligence analysts, threat hunters, and cyber-threat experts, it’s important to keep an ear to the ground and stay on top of what’s happening around the globe. To inform our research and sharpen our skills, we follow the news cycle across multiple sources, mostly covering the world of cyber threats. Some of the more interesting pieces we’ve read this month are highlighted below.
This is part of our ongoing monthly series, Top SOC Reads. Read last month’s article here.
The Fog of War: How the Russia–Ukraine War Changed the Cyber-Threat Landscape
Probably the most interesting article I read this month was related to the ongoing Russia-Ukraine war, which, as I’m sure you’re aware, reached its one-year anniversary on February 24, 2023. The report provided a comprehensive overview of the cyber implications of the war to date, notably identifying that Russian state-aligned threat groups have made significant efforts to destabilize Ukraine, albeit with fairly mixed results. This, in many ways, contradicts previous sentiments from the earlier stages of the war, which suggested that Russia’s offensive cyber-activity levels were much lower than originally anticipated. Google’s Threat Analysis Group (TAG) broke these efforts down into five distinct phases, demonstrating Russia’s shifting objectives as the conflict has developed.
Russia has also conducted influence operations (i.e., operations aimed at shaping the public’s perception of the war) throughout the conflict. These include attempts to destabilize the morale of the Ukrainian population, fracture international support for Ukraine, and influence the Russian population’s perception of the conflict. It’s fair to suggest that these efforts have seen mixed results. The war has also had a demonstrable effect on the cybercriminal ecosystem; as you’ll likely remember, many of the cybercrime trends we’ve observed have been shaped by the conflict.
Overall, the article is an interesting read and a good summation of what’s happened so far, as well as what to expect going forward. While we’re at it, check out our own observations on changes to the threat landscape during the conflict.
Breaking Into a Bank Account with AI
Since the launch of OpenAI’s ChatGPT in November 2022, artificial intelligence has raced to the forefront of tech, pop culture, and now security conversations. The machine-learning-based natural language processor is making headlines by passing college exams, drafting political speeches, and writing code—both friendly and, of course, malicious. As they say, no good deed goes unpunished.
While ChatGPT is writing malware for threat actors, other AI tools are also posing new security challenges. These risks are at the forefront of Joseph Cox’s recent article: How I Broke Into a Bank Account with an AI Generated Voice.
Cox tells his story of using an artificial voice generation service to hoodwink a bank’s voice verification service, Voice ID. Voice verification is used by several banks to confirm customers’ identities, much like fingerprint and facial recognition scans. A customer is required to state certain phrases or answer specific questions, while the program analyzes markers that make a voice unique, such as accent, speed, and pronunciation.
To test these protections, Cox used AI speech software designed by ElevenLabs, the brainchild of Google and Palantir alumni. Using only five minutes of his own audio recordings, he was able to generate an AI recording capable of deceiving Voice ID. Without even having to pay a dime to ElevenLabs, he was in.
The consequences of such attempts by threat actors are foreseeable: financial loss, identity theft, and password compromise are all likely outcomes. Beyond account access, using voice generation to conduct CEO impersonation and other deception over phone calls is also plausible, although the technology may not yet be ready for full-blown conversations.
Voice impersonation fraud is, at the moment, reportedly rare, perhaps due to the preparation required. But the opportunities for cybercriminal exploitation are rife. YouTube videos, podcasts, and even TikToks are all potential mines for audio recordings of victims. Voice generation tools are commonplace, and voice phishing (aka vishing) is still a tried-and-true technique used by threat actors to steal personal data and socially engineer targets.
The rapid proliferation and evolution of AI tools calls for speed and imagination in security responses. Security providers and users need to be prepared for both known and unknown threats. Perhaps, someday, the best person to ask for help in forecasting AI threats won’t be researchers at all, but rather tools like ChatGPT. After all, it takes one to know one.
Threat Actors Continuing to Weaponize Chrome Extensions
I thought I’d take this opportunity to highlight an internally sourced observation for my contribution to “Top SOC Reads” this month. The bottom line up front: Threat actors are continuing to use Chrome extensions in sophisticated ways, and this is an often overlooked and unmonitored attack method. Vetting these extensions through user requests or through administrator consent will help mitigate this risk to your environment.
Chrome extensions are intended to make our lives easier when we’re using the browser. You probably use some type of extension to help complete tasks, such as Grammarly or Dark Reader—the list continues to grow by the day. Attackers are all too aware of this and disguise malicious extensions in clever ways to install them on unsuspecting end-users’ browsers.
Recently, the ReliaQuest Photon Research team observed an uptick in Chrome extensions being discussed by threat actors on high-profile cybercriminal forums. These malicious extensions are often used in large botnets to collect information on users by intercepting their activities; a botnet is a network of private computers that are infected with malicious software and controlled as a group.
Many of these extensions have capabilities beyond data collection, such as injecting scripts, enabling or disabling other extensions, or acting as a proxy to add a layer of abstraction to the attacker’s activities. As extensions continue to gain features, it’s important to treat them as a serious threat to your environment.
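Those capabilities map directly to permissions declared in an extension’s manifest, which is a useful first thing to check when vetting. The hypothetical Manifest V3 snippet below (the name and file names are illustrative, not from any real extension) combines the red flags described above: a content script injected into every page, the `management` permission (which allows enabling or disabling other extensions), and the `proxy` permission (which allows rerouting the browser’s traffic).

```json
{
  "manifest_version": 3,
  "name": "hypothetical-extension-example",
  "version": "1.0",
  "permissions": ["management", "proxy", "tabs", "storage"],
  "host_permissions": ["<all_urls>"],
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["inject.js"]
    }
  ]
}
```

No single permission here is malicious on its own—legitimate extensions use all of them—but the combination of `<all_urls>` script injection with `management` and `proxy` access in one extension is exactly the profile that should trigger heavy scrutiny during review.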
To defend against this type of attack, we suggest implementing policies that allow only preauthorized extensions and placing any requested extensions under heavy scrutiny before allowing users to install them. Vetting extensions has proved to be a tedious task for defenders and system administrators alike. To assist in this process and help combat malicious extensions, Duo Security released a tool called CRXcavator. This tool scans an extension and presents a risk score that can then be used to determine whether it’s an acceptable extension to allow in your environment.
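As a sketch of what an allowlist-only policy can look like in practice: Chrome supports enterprise policies named `ExtensionInstallBlocklist` and `ExtensionInstallAllowlist` (delivered via Group Policy on Windows, configuration profiles on macOS, or JSON files under `/etc/opt/chrome/policies/managed/` on Linux). The fragment below blocks all extensions by default and then permits only explicitly approved IDs; the 32-character ID shown is a placeholder, not a real extension.

```json
{
  "ExtensionInstallBlocklist": ["*"],
  "ExtensionInstallAllowlist": ["aaaabbbbccccddddeeeeffffgggghhhh"]
}
```

With this in place, any extension a user requests must first pass your vetting process and be added to the allowlist before Chrome will let it install, which turns the heavy-scrutiny step into an enforced gate rather than a recommendation.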
Malicious Chrome extensions aren’t going anywhere anytime soon, and they present a risk to your environment that often goes overlooked. By proactively applying a security policy, though, we can mitigate much of the risk. As we say, an ounce of prevention is worth a pound of cure.