If your team is suffering from security alert fatigue, too many false positives, and an overall reactive posture, you’re not alone. Organizations continue to invest in a growing suite of cybersecurity tools, complicating security operations, overwhelming teams, and degrading threat detection. According to a 451 Research report, 43% of enterprises are unable to act on at least 25% of the alerts generated by their security products. This can leave organizations unsure of their security posture, or even with a false sense of confidence.
To understand how to resolve these security operations challenges, it’s important to first understand which common threat detection problems are the culprits:
Problem 1: Trying to fit generic threat detection content into a unique security environment
It’s quite common for companies to run with the default rule sets that come prepackaged with a tool such as a SIEM or EDR. And why not? The rules are already prebuilt and designed to be plug and play: a security operations team just has to update a few definitions, and within minutes the rule starts firing.
Perfect! Or is it?
Building a rule generic enough to plug and play into any environment, with just a few tweaks, comes with trade-offs. Because the rule is so broadly built, it often produces a higher volume of alerts, more false positives, and lower fidelity, and it can sometimes unexpectedly fail to fire due to coverage gaps or edge cases, such as when a particular log is categorized in an unusual way that falls outside the scope of the rule logic.
Of course, the intention behind these out-of-the-box rules is that the security operations team will tune out the excess noise, and after some time hammering away they can get the rule to a place where it’s working reasonably well. Some companies have great success stories with this approach. It’s not necessarily a bad way to do it; it’s just not the most efficient or effective method, particularly when teams are already overwhelmed with responding to alerts and managing tools.
Instead of slowly tuning out excess noise over time, security teams will have better luck by first assessing their unique environment and customizing the rule to fit.
Solution: Build environment-specific detection content
Let’s take a look at how to develop one of the most common rules across the board: the humble port scan rule.
To demonstrate the process, we’ll walk through the base logic we use at ReliaQuest:
- 5 events
- Where the log source type is Firewall or Flow Data
- Where the direction is Internal
- Where the destination port is in ReliaQuest Common Ports List
- Where the destination port is not in ReliaQuest Common Port Whitelist List
- Where the protocol is not ICMP
- Where the source IP is not on a Vulnerability Scanner List
- With the same source IP and destination IP
- With unique destination ports
- Within 10 minutes
ReliaQuest Common Ports List: 1-1024,1433,1434,3306,3389,4567,5900,31337
ReliaQuest Common Port Whitelist List: 53,88,123,389,464,137,161,80,443
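To make the rule logic concrete, here is a minimal Python sketch of the base logic above. The event dictionary shape, field names, and the example scanner IP are assumptions for illustration, not a real SIEM schema; an actual implementation would run inside your SIEM’s correlation engine.

```python
from collections import defaultdict
from datetime import timedelta

# Hypothetical normalized event shape (field names are assumptions):
# {"time": datetime, "log_type": str, "direction": str,
#  "src_ip": str, "dst_ip": str, "dst_port": int, "protocol": str}

COMMON_PORTS = set(range(1, 1025)) | {1433, 1434, 3306, 3389, 4567, 5900, 31337}
PORT_WHITELIST = {53, 88, 123, 389, 464, 137, 161, 80, 443}
VULN_SCANNERS = {"10.0.0.5"}  # placeholder; substitute your scanner list

WINDOW = timedelta(minutes=10)
THRESHOLD = 5  # unique destination ports required to fire

def detect_port_scans(events):
    """Return (src_ip, dst_ip) pairs that touch >= THRESHOLD unique
    common ports within WINDOW, mirroring the base rule logic."""
    # Apply the rule's filter conditions first.
    candidates = [
        e for e in events
        if e["log_type"] in ("Firewall", "Flow Data")
        and e["direction"] == "Internal"
        and e["dst_port"] in COMMON_PORTS
        and e["dst_port"] not in PORT_WHITELIST
        and e["protocol"] != "ICMP"
        and e["src_ip"] not in VULN_SCANNERS
    ]
    candidates.sort(key=lambda e: e["time"])
    by_pair = defaultdict(list)  # (src, dst) -> [(time, port), ...]
    alerts = set()
    for e in candidates:
        bucket = by_pair[(e["src_ip"], e["dst_ip"])]
        bucket.append((e["time"], e["dst_port"]))
        # Slide the window: drop events older than WINDOW.
        while bucket and e["time"] - bucket[0][0] > WINDOW:
            bucket.pop(0)
        if len({port for _, port in bucket}) >= THRESHOLD:
            alerts.add((e["src_ip"], e["dst_ip"]))
    return alerts
```

The whitelist exclusion is what keeps noisy-but-normal services (DNS, Kerberos, NTP, LDAP, HTTP/S) from inflating the unique-port count, which is exactly the kind of environment-specific tuning a generic vendor rule can’t do for you.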
What does the infrastructure landscape look like?
To build a custom rule like the one above, you must first determine how the rule will fit into your environment and which log sources are needed. Start by asking yourself and your team questions like:
- What firewalls are in place?
- How many vendors do we need to account for? Perhaps there is a mixture of Cisco ASAs and Palo Altos? Maybe there are Cisco ASAs on the edge with Palo Altos deployed internally? If the ASA is an edge device, is it expected to capture local-to-local traffic? Are there VPN considerations? If not, is there any need to have the SIEM evaluate the extra noise?
Problem 2: Unknown security logging gaps and blind spots
Once you’ve identified the log source types where a detection could be expected to take place, you’ll need to determine any gaps in log source coverage. This is perhaps the most significant issue that can affect your security alerting. To determine whether you may have coverage gaps that could affect your rule, ask these questions:
- Is all flow data logging? Are all firewalls logging?
- How does data typically flow through the network?
- If two computers connected to the same switch communicate with each other, will that traffic ever reach a router, much less a firewall? Are those switches logging?
It’s important to keep in mind that no matter how well built or tuned a detection rule is, you will never detect an attack that occurs in a logging blind spot. Asking these questions often reveals that environments have East-West blind spots: following the example above, a port scan that happens inside a coverage gap will never trigger the rule.
Without an understanding of the visibility and logging gaps in their environment, companies can develop a false sense of security that is only shattered when a pen test (or worse, an actual breach!) takes place and the rule fails to fire.
That’s why for every rule built, it’s important to know your logging gaps ahead of time so you can answer:
- When is the rule expected to fire?
- When is it NOT expected to fire?
If you know there is a specific situation where the rule won’t fire due to a visibility gap, you must consider the impact of that gap, whether the risk is acceptable, and whether you need to take steps to mitigate it.
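One way to keep those answers explicit is to record, per network segment, which log source types actually reach the SIEM and compare that against what each rule requires. The segment names and inventory below are hypothetical placeholders; the point is that the fire/not-fire question becomes a simple lookup instead of a guess.

```python
# Hypothetical inventory: which log source types each segment forwards.
SEGMENT_LOG_SOURCES = {
    "dmz": {"Firewall", "Flow Data"},
    "corp-lan": {"Flow Data"},
    "lab": set(),  # switches here don't log -- an East-West blind spot
}

# The port scan rule fires on either of these source types.
RULE_REQUIRES = {"Firewall", "Flow Data"}

def rule_can_fire(segment):
    """The rule can only fire if at least one required log source
    type covers the segment where the traffic occurs."""
    return bool(SEGMENT_LOG_SOURCES.get(segment, set()) & RULE_REQUIRES)

def coverage_report():
    """Answer, per segment: is the rule expected to fire here?"""
    return {seg: ("covered" if rule_can_fire(seg) else "BLIND SPOT")
            for seg in SEGMENT_LOG_SOURCES}
```

A report like this makes the risk conversation concrete: every `BLIND SPOT` entry is a segment where the rule is known not to fire, and each one needs an explicit accept-or-mitigate decision.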
Do you know where your visibility gaps are? Learn how to measure and track your visibility gaps with the Guide to Metrics that Matter.
Solution: Understand and address the risk of logging gaps
Once you understand where your gaps are, determine what gaps you need to mitigate based on risk levels and impact to the business. For example, if you know that Host A could run a port scan against Host B and you don’t have the means to detect that, what does this mean for your organization? If Host B is a mission critical host, perhaps there is a need to move it to another network location with better visibility. If Host B is not mission critical and there is already existing visibility on that host through Anti-virus or EDR (endpoint detection and response), perhaps missing a port scan is an acceptable risk.
You may also need to consider solutions to increase your visibility, such as an Open XDR solution that integrates your existing investments for unified visibility.
Custom-built and validated detection with ReliaQuest GreyMatter
ReliaQuest GreyMatter reduces security complexity and force-multiplies stretched security teams, helping leaders realize gains in efficacy, efficiency, resiliency, and confidence so they can proactively advise on and manage risk for the business.
On top of this platform sits diverse, use case-based detection content that works across technologies for high-fidelity, unified alerting. Each alert delivers an investigation package from which analysts can make fast, informed decisions, automate response across technologies, or dive deeper and pull additional insights without needing to know tool-specific syntax or pivot into other tools.
For more information on reducing false positives and improving your detection capabilities, get The Comprehensive Guide to Optimizing Your Security Operations.