Ransomware droppers, data exfiltration, and phishing, oh my! Web traffic is an essential part of operations, but it also represents one of the most dynamic attack surfaces every company has to secure. The first step to securing this attack surface is deploying a web proxy, which introduces an overwhelming volume of logging that is difficult to regularly audit and comb through. Enter threat hunting.
Threat hunting is an active form of cyber defense that allows your team to proactively identify abnormal behavior or vulnerabilities and mitigate them before any harm is done. But how do you know what to hunt for? Knowing where to begin and what to look for can be the greatest challenge. That's why we're sharing a series of threat hunting use cases, developed and refined by our Research and Development teams over years and across many environments, to help you get started.
Hunting for Enlightenment
One goal of threat hunting is to constantly learn about, understand, and improve your environment so your team can identify "abnormal" with higher confidence. For instance, while conducting a web proxy threat hunt you may notice anomalous user agents like Python or cURL sending requests to SharePoint sites within your organization. After documenting this finding, you escalate and learn that the activity is authorized and originates from DevOps automation processes. With this enhanced understanding of your environment, you can then enact application control policies that restrict Python and cURL usage to only authorized hosts.
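A minimal sketch of that first hunting step, flagging script-based user agents (Python, cURL, and similar) hitting internal SharePoint URLs. The record fields (`user_agent`, `url`, `src_host`) and the marker list are illustrative assumptions, not a real proxy log schema:

```python
# Hypothetical sketch: flag script-like user agents requesting SharePoint
# URLs in parsed proxy log records. Field names are assumptions.

SCRIPT_AGENT_MARKERS = ("python", "curl", "wget", "powershell")

def flag_script_agents(records, target_substring="sharepoint"):
    """Return records where a script-like user agent hit a matching URL."""
    hits = []
    for rec in records:
        agent = rec.get("user_agent", "").lower()
        if any(m in agent for m in SCRIPT_AGENT_MARKERS) and \
           target_substring in rec.get("url", "").lower():
            hits.append(rec)
    return hits

logs = [
    {"src_host": "dev-ops-01", "user_agent": "python-requests/2.31.0",
     "url": "https://sharepoint.example.com/sites/builds"},
    {"src_host": "hr-laptop-7", "user_agent": "Mozilla/5.0 (Windows NT 10.0)",
     "url": "https://sharepoint.example.com/sites/hr"},
]

for rec in flag_script_agents(logs):
    print(rec["src_host"], rec["user_agent"])
```

Each hit then becomes a finding to document and escalate: here the DevOps host would be confirmed as authorized automation, while an unexpected host would warrant investigation.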
The practice of hunting for ‘enlightenment’ is essential for effective outcomes. In our Threat Hunting Use Case Blog Series, we’ll walk through some of the most common and critical threat hunt campaign objectives, covering log source requirements, expected outcomes, and sample analysis per use case. For this entry, we’ll cover the threat hunting use case of “Web Proxy”.
Use Case: Web Proxy
Objective: Execute this threat hunt to baseline web traffic by devices in the environment in order to identify abnormalities related to malicious activity or policy violations.
Log Source & Requirements: Forward Proxy Logs
Duration: 7 days
Related MITRE Techniques: T1102, T1567, T1566, and T1071
Outcomes:
Baseline/Hygiene
| What to look for | Why? |
|---|---|
| Review all web categories seen in allowed traffic to ensure no risky categories are allowed. | Uncategorized web traffic should be denied by default, following the security best practice of "deny by default, allow by exception." If it is allowed, the risk of successful malicious web requests increases. |
| Baseline the most common user agents generating web traffic and identify older browser versions in use in the environment. | This helps identify abnormal user agents and outdated browser versions that may carry unpatched vulnerabilities. Both should be addressed and updated to improve overall network hygiene. |
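The user-agent baseline above can be sketched with a simple frequency count over the hunt window, surfacing rare agents for review. In practice this would run over the full 7 days of forward proxy logs; the threshold and sample data here are assumptions for illustration:

```python
# Hypothetical sketch: baseline user-agent frequency and surface the rare
# ones (potential threats) alongside outdated browsers (hygiene findings).

from collections import Counter

def rare_user_agents(user_agents, max_count=2):
    """Return user agents seen at or below max_count times, rarest first."""
    counts = Counter(user_agents)
    return sorted(
        (agent for agent, n in counts.items() if n <= max_count),
        key=lambda a: counts[a],
    )

observed = (
    ["Mozilla/5.0 (Windows NT 10.0; rv:120.0)"] * 500      # common: baseline
    + ["curl/8.4.0"] * 40                                  # common automation
    + ["Mozilla/4.0 (compatible; MSIE 6.0)"] * 2           # ancient browser
    + [""]                                                 # blank agent
)

print(rare_user_agents(observed))
```

The blank agent and the obsolete MSIE string both fall below the threshold; the former is a threat lead, the latter a patching/hygiene item.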
Threat Analysis
| What to look for | Why? |
|---|---|
| Review uncommon user agent strings to identify suspicious web traffic. | Rare, blank, or non-standard user agents can be an indicator of malicious activity, and user agents associated with known malware can expose potentially compromised machines. |
| Review blocked traffic to risky categories (malnets, phishing, anonymizers, etc.) to identify potentially infected hosts that have gone unnoticed. | Denied traffic is often overlooked because the activity was prevented. However, blocked traffic to risky categories can be a strong indicator when threat hunting: something caused the host to attempt communication to the blocked destination, and it likely attempted additional destinations that went unblocked. |
| Filter on URLs containing direct IP requests to identify hosts possibly infected with malware that does not make use of DNS. | Threat actors leverage hardcoded IP addresses to avoid DNS security controls and to quickly host malicious assets. |
| Look for evidence of exfiltration by reviewing POST/PUT traffic to cloud storage sites, raw paste sites, or hardcoded IPs. | Cloud storage and raw paste sites can be used by insiders and external threats to quickly exfiltrate stolen data, especially if those sites are allowed and commonly used by hosts in the network. |
| Filter on URLs containing risky file extensions (e.g., .pdf, .exe, .doc) and examine the file names and domains to identify potentially malicious files. | File types such as portable executables, macro-enabled Office documents, PDFs, and script files are commonly used to introduce malware into an environment and can be downloaded from web resources via phishing links or initial malicious payloads. |
| Hunt for potential phishing links by reviewing events in which the referrer URL is a common business site and the request URL is an uncommon domain, or vice versa. | Threat actors often use legitimate business sites to reflect phishing links in order to avoid detection by email security solutions. This hunt is designed to reveal phishing incidents that email security may have missed. |
| Review threat intel lists for any hosts connecting to an IP or URL on a list. | Traffic to known Indicators of Compromise (IoCs) can reveal compromised hosts, C2 attempts, or data exfiltration. |
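Three of the hunts above (direct-IP requests, risky file extensions, and POST/PUT traffic to paste sites) can be combined into a single record classifier. This is a hedged sketch: the field names, extension list, and paste-site list are illustrative assumptions, not a vendor schema or a complete detection:

```python
# Hypothetical sketch: label a single proxy log record with any matching
# hunt categories. Lists and field names are illustrative assumptions.

import ipaddress
from urllib.parse import urlparse

RISKY_EXTENSIONS = (".exe", ".doc", ".docm", ".pdf", ".js", ".ps1")
PASTE_SITES = ("pastebin.com", "paste.ee")

def classify(record):
    """Return a list of hunt labels matching one proxy log record."""
    labels = []
    parsed = urlparse(record["url"])
    host = parsed.hostname or ""

    try:
        ipaddress.ip_address(host)   # raises ValueError for domain names
        labels.append("direct-ip")
    except ValueError:
        pass

    if parsed.path.lower().endswith(RISKY_EXTENSIONS):
        labels.append("risky-extension")

    if record.get("method") in ("POST", "PUT") and any(
        host == s or host.endswith("." + s) for s in PASTE_SITES
    ):
        labels.append("possible-exfil")

    return labels

print(classify({"method": "GET", "url": "http://203.0.113.9/payload.exe"}))
print(classify({"method": "POST", "url": "https://pastebin.com/api/api_post.php"}))
```

Records that match multiple labels (for example, a risky file extension fetched directly from a hardcoded IP) are natural candidates to triage first during the hunt.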
Make threat hunting a reality at your organization with ReliaQuest GreyMatter.
By aggregating and normalizing your data from disparate tools, such as SIEM, EDR, multi-cloud, and third-party applications, ReliaQuest GreyMatter allows your team to run focused hunt campaigns, both packaged and freeform, that are strategic and iterative. Use ReliaQuest GreyMatter to analyze indicators of compromise retrospectively or perform behavior assessments to distinguish abnormal from normal activity.