Container orchestration tools like Kubernetes have risen in popularity within the past few years and have enabled organizations to more efficiently deploy and manage applications. However, these tools also come with their own security risks. All tools are susceptible to misconfigurations and insider abuse, in addition to more serious vulnerabilities that exploit the tools’ code itself. Since Kubernetes has control over the systems running critical business applications, the impact of a compromise is potentially massive.
In short, Kubernetes manages containers to ensure that resources scale with the application and that the overall state of the environment is healthy.
At the lowest level of the architecture is the Kubernetes node, the physical or virtual machine running all the services. A group of nodes forms the Kubernetes cluster. Each node runs one or more Kubernetes Pods, which house the actual containers, such as Docker containers. Each node also runs an agent called the kubelet, which manages the node and monitors the health of the containers in its Pods, automatically starting and stopping them to remediate issues.
The Kubernetes Control Plane manages the nodes and Pods across the cluster by sending commands through the kube-apiserver. It also contains etcd, a key-value store that holds all of the cluster data.
Just like any other application or tool introduced to the environment, Kubernetes should not only be patched and hardened to prevent attacks, but also configured with logging to detect future ones.
Kubernetes provides logging for many of its different components, but the most useful is from the API server. Logging can be configured by passing a configuration file to the API server as described in the Kubernetes Auditing documentation.
Logs are formatted in JSON; each audit entry records fields such as the user, the user agent, the verb, the target resource, and the request URI of the API call.
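As an illustration, an abridged audit entry for a user running the command “ls” on a Pod might look like the following. The field names follow the Kubernetes audit event schema, but the specific values (Pod name, IP address, user, version strings) are hypothetical:

```json
{
  "kind": "Event",
  "apiVersion": "audit.k8s.io/v1",
  "verb": "create",
  "user": {
    "username": "kubernetes-admin",
    "groups": ["system:masters", "system:authenticated"]
  },
  "sourceIPs": ["10.0.0.15"],
  "userAgent": "kubectl/v1.17.0 (linux/amd64) kubernetes/70132b0",
  "objectRef": {
    "resource": "pods",
    "namespace": "default",
    "name": "test-pod",
    "subresource": "exec"
  },
  "requestURI": "/api/v1/namespaces/default/pods/test-pod/exec?command=ls&container=test-container&stdout=true&stderr=true"
}
```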
Common Threats to Kubernetes
While there have not been many attacks against Kubernetes observed in the wild, there is plenty of published research on various techniques to abuse and exploit the API. Most of these techniques rely on the assumption that the attacker already has access to the cluster API, usually by exploiting a public-facing service running in a Pod and getting a shell. Therefore, the risk to the Kubernetes cluster is highly dependent on the security of the applications it is hosting.
The following sections describe common attack vectors used to compromise a Kubernetes cluster and how to detect them.
1. Anonymous Access
In version 1.6 and later, Kubernetes will accept API calls from anonymous users by default. In a normal authenticated scenario, a user would need to authenticate to the API server with a token or password. However, if the user does not provide any token or password in the request, the API server will still accept it and assign the user a username of system:anonymous and a group of system:unauthenticated. This also applies to any API requests sent directly to the kubelet service.
It is strongly recommended to disable anonymous access by passing the flag --anonymous-auth=false to the API server and kubelet services. Another mitigation method is to enable Role-Based Access Control (RBAC), which requires explicit authorization of anonymous requests.
This is the most severe type of misconfiguration, as it allows attackers, both internal and external, to send commands through the API directly to Kubernetes components and compromise the cluster.
Detection of this misconfiguration can be accomplished by checking the user object in the logs for the username of system:anonymous and the group of system:unauthenticated.
2. Service Accounts Compromise
In Kubernetes, there are only two types of accounts: service accounts and normal users. Normal users must be managed outside of Kubernetes, while service accounts are managed by Kubernetes itself. Kubernetes automatically creates service accounts as needed, but they can also be created manually. Service accounts are assigned not only to the main Kubernetes services but also to every Pod.
For authentication, service accounts are assigned a “bearer” token that is passed to the API server when making requests. These tokens are stored securely in the etcd datastore; however, for Pod service accounts, the token is also mounted into the Pod’s file system at /var/run/secrets/kubernetes.io/serviceaccount/token.
Once an attacker compromises a Pod, it is trivial for them to also compromise the Pod’s service account by reading the token file. The attacker can then authenticate to the API server as the service account and inherit the account’s permissions. Therefore, it is important to implement the principle of least privilege for Pod service accounts so they cannot be used by an attacker to further compromise the cluster.
While there is no universal detection signature for service account misuse, there are anomaly detection techniques that can be used to find potential abuse.
Since service accounts are used by automated services for a specific purpose, their activity should be defined and consistent. If an attacker were to compromise an account, it is expected that their abusive actions would be outside the normal scope for the service. By trending and observing changes in the logs from service accounts, it is possible to identify a compromise.
Every API log includes the user agent of the source that generated the request. For normal Kubernetes activity, the user agent starts with the name of the service, such as “kube-controller-manager” or “aws-k8s-agent”.
When a human user interacts with the API, they will be using a tool that sends the command to the API server. These can include native networking tools, such as curl, or the Kubernetes command line interface tool, kubectl. These tools use distinctive user agents which can be observed in the logs. For example, a request sent with kubectl produces a user agent field containing “kubectl”, which indicates the kubectl tool was used to send the command.
By alerting on deviations in observed user agents per service account, it is possible to detect when a Pod service account that historically uses the kubelet user agent is suddenly seen using curl instead.
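As a sketch of that trending logic, the following builds a per-account baseline of user-agent families from historical audit events and flags requests that deviate. The event structure and account names are simplified, hypothetical stand-ins for real audit log records:

```python
from collections import defaultdict

def build_baseline(events):
    """Map each account to the set of user-agent families it has historically used."""
    baseline = defaultdict(set)
    for event in events:
        # "kubelet/v1.17.0" -> "kubelet"; keep only the tool name, not the version
        baseline[event["user"]].add(event["userAgent"].split("/")[0])
    return baseline

def is_anomalous(event, baseline):
    """Flag a request whose user-agent family is new for that account."""
    agent = event["userAgent"].split("/")[0]
    return agent not in baseline.get(event["user"], set())

# Historical activity: this Pod service account has only ever used the kubelet agent
history = [{"user": "system:serviceaccount:default:web", "userAgent": "kubelet/v1.17.0"}]
baseline = build_baseline(history)

# The same account suddenly sending requests with curl is a deviation worth alerting on
suspect = {"user": "system:serviceaccount:default:web", "userAgent": "curl/7.68.0"}
print(is_anomalous(suspect, baseline))  # True
```

In practice this baseline would be computed over a trailing window in the SIEM rather than in ad-hoc code, but the comparison is the same.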
Resources and Verbs
The API logs also include the target resource and the action (verb) of the request. Since service accounts have a limited scope, they should need access to only a subset of resources and actions in order to function. If an attacker compromises a service account, it is likely they will execute commands outside the scope of the service account.
Trending the resources and verbs used by a service account can reveal when unexpected API calls are made and indicate a potential compromise. A Pod service account should probably not be requesting to delete, create, or execute commands on other Pods. If the service account is configured with least privilege, these requests may fail; however, the failed attempts are still logged and still indicate a compromise.
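The same baselining idea applies to verb/resource pairs. A minimal sketch, again using hypothetical, simplified event records:

```python
from collections import defaultdict

def build_scope(events):
    """Record the (verb, resource) pairs each account normally uses."""
    scope = defaultdict(set)
    for event in events:
        scope[event["user"]].add((event["verb"], event["resource"]))
    return scope

def out_of_scope(event, scope):
    """True when the request uses a verb/resource pair never seen for this account."""
    return (event["verb"], event["resource"]) not in scope.get(event["user"], set())

history = [
    {"user": "system:serviceaccount:default:web", "verb": "get", "resource": "configmaps"},
    {"user": "system:serviceaccount:default:web", "verb": "list", "resource": "endpoints"},
]
scope = build_scope(history)

# The web account suddenly trying to create Pods falls outside its learned scope
attempt = {"user": "system:serviceaccount:default:web", "verb": "create", "resource": "pods"}
print(out_of_scope(attempt, scope))  # True
```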
3. Remote Command Execution
Once an attacker compromises a Pod and service account, where do they go from there? While the answer is highly dependent on the environment and the attacker’s goal, it is likely they will try to pivot from their initial point of compromise and escalate privileges.
Kubernetes provides an easy method for remotely executing commands on Pods; there’s even a support article called “Get a Shell to a Running Container”. For an attacker, as long as their account has create rights to the pods/exec resource, they can execute commands. A common command is /bin/bash, which opens an interactive shell to the Pod. From there, an attacker can access the Pod’s file system, potentially installing a backdoor or looking for the Pod’s service account token, which may have more privileges than their current account.
An attacker can reuse this method, pivoting through the cluster and establishing persistence and escalating privileges along the way.
Pod command execution is logged by the API server and shows the API call targeting the resource pods with the subresource exec. Kubernetes treats the execution as “creating” the execution subresource, so the verb is create. The full command that was executed is logged in the requestURI object in the log. Note that the command is URL encoded.
To open a remote shell, the kubectl program can be used:
kubectl exec -it test-pod -- /bin/bash
To detect a user opening an interactive shell on a Pod, the following logic can be used:
objectRef.resource="pods" AND objectRef.subresource="exec" AND verb="create" AND (requestURI contains "%2Fbin%2Fbash" OR requestURI contains "%2Fbin%2Fsh")
All commands sent using exec will be logged by the API server. However, once the interactive shell is opened, any further commands sent inside the shell will not be logged by the API server.
If interactive shells are commonly used by employees, additional criteria can be applied to make the detection higher confidence, such as alerting on shells from service accounts or when the shell sources from other Pod IP addresses.
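The detection logic above can be sketched in code as follows. The audit field names (verb, objectRef, requestURI) are real, the event values are hypothetical, and the key step is URL-decoding the request URI before matching:

```python
from urllib.parse import unquote

SHELLS = ("/bin/bash", "/bin/sh")

def is_interactive_shell(event):
    """Detect a pods/exec create call whose exec command is a shell."""
    obj = event.get("objectRef", {})
    if (obj.get("resource") != "pods"
            or obj.get("subresource") != "exec"
            or event.get("verb") != "create"):
        return False
    # The command is URL-encoded in the request URI, e.g. command=%2Fbin%2Fbash
    uri = unquote(event.get("requestURI", ""))
    return any(shell in uri for shell in SHELLS)

event = {
    "verb": "create",
    "objectRef": {"resource": "pods", "subresource": "exec", "name": "test-pod"},
    "requestURI": "/api/v1/namespaces/default/pods/test-pod/exec?command=%2Fbin%2Fbash&stdin=true&tty=true",
}
print(is_interactive_shell(event))  # True
```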
4. Node Compromise
Pods are constantly being created and deleted as the cluster operates and scales, so they are not a stable persistence method for attackers. The underlying node that hosts the Pods, however, is a much more reliable option.
When creating a Pod, there is an option to mount a file system volume to the new Pod. Kubernetes supports a wide variety of volumes, such as AWS EBS, Microsoft Azure Data Disks, and more. One type is hostPath, which mounts a part of the node’s file system to the Pod. If an attacker has the create rights for the pod resource, they can create a new Pod that is configured to mount the node’s root directory.
The attacker can then install a backdoor on the node or scrape the file system for authentication tokens or SSH keys. As even the Kubernetes documentation notes, the hostPath option “offers a powerful escape hatch” from the containerized environment.
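A minimal Pod manifest illustrating the technique might look like the following. The names are placeholders; the essential part is the hostPath volume mapping the node's root directory into the container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: attacker-pod        # placeholder name
spec:
  containers:
  - name: shell
    image: busybox          # any image with a shell
    command: ["sleep", "3600"]
    volumeMounts:
    - name: host-root
      mountPath: /host      # the node's file system appears here inside the Pod
  volumes:
  - name: host-root
    hostPath:
      path: /               # mount the node's root directory
```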
While Pod creations are logged by the API server, the details of the Pod’s configuration are not. If an attacker mounts the node’s file system, there is no indication in the API logs.
Any detection must instead rely on the circumstances surrounding the Pod’s creation.
If the attacker is running the command from a compromised Pod, the source IP in the log would show as the Pod’s IP. A Pod creating another Pod would be highly suspicious, as most Pod creations should come from a Kubernetes Controller.
The API call could come from a user account that does not normally create Pods, such as a compromised service account, in which case anomaly detection techniques could also apply.
5. Secret Access
Secrets in Kubernetes are secure objects designed to store sensitive information, such as authentication tokens or passwords. Instead of storing credentials in files scattered across the cluster where they might be accidentally exposed, all secrets are stored in the etcd datastore and can only be accessed over secure communications. Secrets are created automatically by Kubernetes for service accounts but can also be created manually by users. When a Pod or application needs to access a secret, the secret can be mounted as a volume, such as for the Pod service account tokens, or accessed through an API call.
By default, secrets are not encrypted at rest. They are base64 encoded, so it is trivial to decode a secret once it is accessed. There is even a Kubernetes support article that explains how to decode the secrets.
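Because base64 is an encoding, not encryption, decoding a stored value takes a single call; the secret value here is illustrative:

```python
import base64

# A secret value as it appears in etcd or in `kubectl get secret -o yaml` output
encoded = "cGFzc3dvcmQ="

# Anyone who can read the stored value can recover the plaintext
print(base64.b64decode(encoded).decode())  # password
```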
Thankfully, access to secrets is restricted by the account’s permissions. First, the account must have the get or list permissions for the secrets resource to even view the secrets.
To read an individual secret, a user can run the kubectl command:
kubectl get secret test-secret -o yaml
Second, the account can only access secrets that are in the current namespace. Namespaces help organize resources into separate groups, similar to Active Directory domains. Instead of get secret, a user can run the get secrets command to read all the secrets in the namespace:
kubectl get secrets -o yaml
However, if the account has enough permissions, the --all-namespaces flag can be used to read all secrets in all namespaces.
kubectl get secrets --all-namespaces -o yaml
Attackers can abuse these permissions to access the secret credentials in the cluster, which will only provide them more opportunities for privilege escalation and persistence.
Secrets accessed through the API are logged in the API server logs. The verb is get if a single secret is accessed and list if there are multiple with the get secrets command. The log describes which secret was accessed and who accessed it.
While there are no clear indicators in the secrets log that an attacker is abusing an account, anomaly detection can be used here again to determine if this access is normal for the account. Accessing secrets is a common occurrence in the cluster and required for its operation. Just like with service account activity, most of these accesses should be routine and expected.
By trending which secrets are normally accessed per account, it is possible to identify unexpected secret access that may be an attacker abusing the account to view out-of-scope secrets.
Hardening Best Practices
While detection may be difficult, there are several steps that can be taken to harden the cluster and prevent attackers from easily gaining persistence and pivoting.
- Disable anonymous access. Pass the flag --anonymous-auth=false to the API server and kubelet services when starting them. It is important to do this for both the API server and kubelet, as they can be equally leveraged by attackers to run malicious API calls.
- Implement Role-based Access Control (RBAC). Assign the least privileges required for service accounts. Pay close attention to privileges for creation (pods, executing commands) and listing resources (secrets) to ensure they are only assigned when needed and properly limited in scope by account or namespace.
- Encrypt secrets at rest. While secrets are still decrypted when accessed through the API, it is still helpful to encrypt them when at rest in the etcd datastore. Attackers could use the utility etcdctl to access the datastore directly and read the plaintext secrets.
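As a sketch of the last item, the API server can be started with --encryption-provider-config pointing at a file like the following; the key shown is a placeholder and must be generated fresh per cluster:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <base64-encoded 32-byte key>  # placeholder; e.g. head -c 32 /dev/urandom | base64
  - identity: {}  # fallback so not-yet-encrypted secrets remain readable
```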
What enables all of these attack vectors to succeed comes down to one thing: privileges. There have been very few exploitable vulnerabilities found in Kubernetes’ code itself, so most attackers will have to rely on compromised credentials. And all the techniques above use built-in Kubernetes features, not bugs, so it is unlikely that these abuse vectors will be closed anytime soon. This puts the burden on Kubernetes administrators to be the first line of defense.
If or when a container is compromised, the privileges of the Pod’s service account and the other accounts in the cluster will determine how easily an attacker can take it over, and may be the difference between a simple Pod reimage and a cluster-wide compromise.
Detecting Threats with ReliaQuest GreyMatter
Detect these common threats faster with ReliaQuest GreyMatter’s custom content, continuously tuned to integrate into your environment. As threats continue to evolve and even specialize to target specific industries, vulnerabilities, and individuals, your threat detection cannot rely on generic threat content. Increase alert fidelity and environment visibility across SIEM, EDR, cloud services, and third-party apps with personalized content tuned to your environment.
Want to receive threat intel and use cases directly in your inbox? Sign up for our Rapid Response Resources Series!