Azure Kubernetes Cluster Security - azure-aks

Azure Security Center is reporting the following findings for my Azure Kubernetes cluster in its recommendations:
Immutable (read-only) root filesystem should be enforced for containers.
Services should listen on allowed ports only.
Containers should listen on allowed ports only.
Running containers as root user should be avoided.
Container with privilege escalation should be avoided.
Container CPU and memory limits should be enforced.
If anybody has an idea how to remediate these issues, let me know.

These are all due to limitations in how Azure Security Center policies identify the vulnerabilities. For example, ASC only checks the security context of the pod template to decide whether a container runs as root. Even if your image already sets a non-root user, the pod will still show up in the affected list when the property is not set in the pod spec itself. The same limitation applies to the other alerts you mention.
So you have two options: disable the alerts that are not relevant to your cluster, or follow the recommended remediation steps and add the properties to the pod spec explicitly, even though that doesn't add much beyond satisfying the check.
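If you go the remediation route, this is a minimal sketch of the pod-spec properties the checks look for; the name, image, port, and limit values are placeholder assumptions to adjust for your workload:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-example                  # placeholder name
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # placeholder image
    ports:
    - containerPort: 8080                 # declare only the allowed listening port
    securityContext:
      runAsNonRoot: true                  # "running as root should be avoided"
      readOnlyRootFilesystem: true        # "immutable root filesystem"
      allowPrivilegeEscalation: false     # "privilege escalation should be avoided"
    resources:
      limits:
        cpu: 500m                         # "CPU and memory limits should be enforced"
        memory: 256Mi
```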

Related

How do you ensure a Kubernetes Deployment file does not override secure settings in the Dockerfile?

Let's assume you want to run a container under a rootless user context using Kubernetes and the Docker runtime. Hence, you specify the USER directive in the Dockerfile as a non-root user (e.g. UID 1000). However, this setting can be overridden by the Deployment file using the runAsUser field, as shown below.
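For illustration, a minimal Deployment sketch of that override (the name and image are made up; only the securityContext matters here):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: registry.example.com/app:1.0  # image's Dockerfile sets "USER 1000"
        securityContext:
          runAsUser: 0   # runs the container as root, silently overriding the image's USER
```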
If the above scenario is possible (correct me if I am wrong), the security team could scan the Dockerfile and container image for vulnerabilities and find them safe, only to be exposed to risk at deploy time when a K8s Deployment file they are not aware of specifies runAsUser: 0.
What do you think is the best way to mitigate this risk? Obviously, we can add a gate that scans Deployment files as a final check, scan both artefacts, or deploy a PodSecurityPolicy that checks for this, but I was keen to hear whether there are more efficient ways, especially in an Agile development space.
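For reference, the PodSecurityPolicy route mentioned above would look roughly like this (note that PSP was removed in Kubernetes 1.25 in favour of Pod Security admission, so treat this as a sketch for older clusters; the policy name is made up):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: must-run-as-non-root   # hypothetical name
spec:
  privileged: false
  runAsUser:
    rule: MustRunAsNonRoot     # rejects runAsUser: 0 and images that default to root
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
```

With this admitted to the cluster (and bound via RBAC), a Deployment that sets runAsUser: 0 is rejected at admission time, regardless of what the image or the scanners saw earlier.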

AWS CodeBuild - Security Implications of Enabling Docker Layer Cache

When creating a CodeBuild project, it's possible to configure a cache in the Artifacts section to speed up subsequent builds.
Docker layer cache is one of the options there. AWS documentation says:
LOCAL_DOCKER_LAYER_CACHE mode caches existing Docker layers. This mode is a good choice for projects that build or pull large Docker images. It can prevent the performance issues caused by pulling large Docker images down from the network.
Note
You can use a Docker layer cache in the Linux environment only.
The privileged flag must be set so that your project has the required Docker permissions.
You should consider the security implications before you use a Docker layer cache.
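For context, enabling this cache looks roughly like the following CloudFormation sketch; the role ARN, repository URL, and build image are placeholders:

```yaml
Resources:
  BuildProject:
    Type: AWS::CodeBuild::Project
    Properties:
      ServiceRole: arn:aws:iam::123456789012:role/codebuild-role  # placeholder
      Source:
        Type: GITHUB
        Location: https://github.com/example/repo.git             # placeholder
      Artifacts:
        Type: NO_ARTIFACTS
      Cache:
        Type: LOCAL
        Modes:
          - LOCAL_DOCKER_LAYER_CACHE
      Environment:
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_SMALL
        Image: aws/codebuild/standard:7.0
        PrivilegedMode: true   # required for the Docker layer cache
```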
The question is: What are those security implications?
I believe the AWS docs have been improved since the question was raised, but maybe this is still useful.
A container in privileged mode is no different from any other process running with all capabilities on the host machine. It undermines the whole idea of container isolation.
Privileged mode opens the possibility for a container to escape its namespaces, gain read/write access to the root partition, and/or access network devices (any sort of direct interaction with the host system).
If such a container is exploited, the security implications could include:
encryption or deletion of disk partitions
modification of .ssh/authorized_keys
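As a concrete, hypothetical illustration of that escape path (the device name /dev/xvda1 is an assumption; it varies per host, as do the image and paths):

```yaml
# docker-compose.yml: a privileged container mounting the host's disk directly.
services:
  escape-demo:
    image: alpine
    privileged: true   # grants all capabilities and exposes host devices
    command: sh -c "mkdir -p /host && mount /dev/xvda1 /host && ls /host/root"
```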

Is it possible to run Kubernetes nodes on hosts that can be physically compromised?

Currently I am working on a project where we have a single trusted master server and multiple untrusted hosts (physically in unsecured locations), which are all replicas of each other in different physical locations.
We are using Ansible to automate the setup and configuration management; however, I am very unimpressed by how big a gap we have between our development and testing environments and our production environment, as well as by the general complexity of configuring the network and the containers themselves.
I'm curious whether Kubernetes would be a good option for orchestrating this. Basically, multiple unique copies of the same pod(s) must be kept running on all untrusted hosts, communication between the hosts should be restricted, and traffic should only be allowed between specific containers on the same host and between specific containers on the hosts and the main server.
There's a bit of information missing here, so I'm going to make the following assumptions:
K8s nodes are untrusted
K8s masters are trusted
K8s nodes cannot communicate with each other
Containers on the same host can communicate with each other
Kubernetes operates on the model that:
all containers can communicate with all other containers without NAT
all nodes can communicate with all containers (and vice-versa) without NAT
the IP that a container sees itself as is the same IP that others see it as
Bearing this in mind, you're going to have some difficulty here doing what you want.
If you can change your physical network requirements, and ensure that all nodes can communicate with each other, you might be able to use Calico's Network Policy to segregate access at the pod level, but that depends entirely on your flexibility.
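For example, a minimal NetworkPolicy sketch (Calico enforces the standard Kubernetes NetworkPolicy API; the namespace and all labels here are hypothetical) that only admits traffic to the replica pods from pods labelled as the master-facing gateway:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-master-gateway   # hypothetical name
  namespace: workloads              # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: replica                  # hypothetical label on the untrusted-host pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: master-gateway       # hypothetical label on the trusted-side pods
```

Note that the standard API selects pods by label, not by node, so host-level segregation would need Calico's own policy resources or network-level controls on top.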

Multi-host Docker network with Swarm-mode and without swarm

I am migrating a legacy application deployed on two physical servers: web-app (node1) and DB (node2).
The following blog post fulfilled my requirement, but I still have some questions:
https://codeblog.dotsandbrackets.com/multi-host-docker-network-without-swarm/#comment-2833
1. For the mentioned scenario, web-app (node1) and DB (node2), we can use the expose-port option and the web app will use that port, so why create an overlay network?
2. By using swarm mode with replicas=1 we can achieve the same, so what advantage do we get from creating an overlay network without swarm mode?
3. If the node on which Consul is installed goes down, our whole application stops working. (Correct me if my understanding is wrong.)
4. In swarm mode, if the manager node (which also hosts the web app) goes down, my understanding is that Swarm will launch both containers on an available host? Please correct me if my understanding is not correct.
That article describes an outdated mode of operation for Swarm. What it covers is 'Classic Swarm', which needed an external KV store (like Consul); Docker now primarily uses 'Swarm mode', an orchestration capability built into the engine itself. To answer what I think your questions are:
1. I think you're asking: if we can expose a port for a service on a host, why do we need an overlay network? If so, consider what happens when the host goes down and the container gets re-scheduled onto another node. The overlay network takes care of that by keeping track of where containers are and routing traffic appropriately.
2. Not sure what you mean by this.
3. If Consul was a key piece of your discovery infrastructure then yes, it would be a single point of failure, so you'd want to run it highly available. This is one of the reasons the dependency on an external KV store was removed in Swarm mode.
4. Not sure what you mean by this either, but maybe you're asking about rescheduling? If so then yes: if a host with containers goes down, those containers will be re-scheduled onto another node. A sketch of what your two services could look like as a swarm-mode stack is below.
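A minimal swarm-mode stack sketch for the web-app + DB scenario (service names and images are placeholders):

```yaml
version: "3.8"
services:
  webapp:
    image: example/webapp:latest   # placeholder image
    ports:
      - "80:80"        # published on every node via the swarm routing mesh
    networks:
      - backend
    deploy:
      replicas: 1
  db:
    image: postgres:15             # placeholder image
    networks:
      - backend
    deploy:
      replicas: 1
networks:
  backend:
    driver: overlay    # lets webapp reach db by name wherever it is scheduled
```

Deployed with `docker stack deploy -c stack.yml app`; no external KV store is needed.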

Is it useful to run publicly-reachable applications as Docker containers just for the sake of security?

There are many use cases for Docker, and they all have something to do with portability, testing, availability, and so on, which are especially useful for large enterprise applications.
Consider a single Linux server on the internet that acts as mail, web, and application server, mostly for private use. No cluster, no need to migrate services, no similar services that could be created from the same image.
Is it useful to consider wrapping each of the provided services in a Docker container, instead of just running them directly on the server (in a chroot environment), when considering the security of the whole server? Or would that be using a sledgehammer to crack a nut?
As far as I understand, security really would be increased, as the services would be properly isolated, and even gaining root privileges wouldn't allow an escape from the container; but the maintenance requirements would increase, as I would need to maintain several independent operating systems (security updates, log analysis, ...).
What would you propose, and what experiences have you made with Docker in small environments?
From my point of view, security is, or will be, one of the strengths of Linux containers and Docker. But there is a long way to go before the environment inside a container is secure and completely isolated. Docker and some other big collaborators like Red Hat have put a lot of effort into securing containers, and every publicly flagged (isolation-related) security issue in Docker has been fixed. Today Docker is not a replacement for hardware virtualization in terms of isolation, but there are projects working on hypervisors that run containers, which will help in this area. This issue matters most to companies offering IaaS or PaaS, which use virtualization to isolate each client.
In my opinion, for a case such as you propose, running each service inside a Docker container provides one more layer in your security scheme. If one of the services is compromised, there is one extra lock between the attacker and the rest of the server and services. Maybe the maintenance of the services increases a little, but if you organize your Dockerfiles to use a common base image, and you (or somebody else) update that base image regularly, you don't need to update all the containers one by one. Also, if you use a base image that is updated regularly (e.g. Ubuntu, CentOS), security issues affecting that image will be fixed rapidly and you only have to rebuild and relaunch your containers to pick up the fixes. Maybe it's extra work, but if security is a priority, Docker may be an added value.
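A hypothetical layout of that approach as a compose file (service names, paths, and the base image tag are all made up):

```yaml
# Each service's Dockerfile starts with "FROM local/base:latest", so rebuilding
# the base image and running `docker compose build && docker compose up -d`
# refreshes every service with the patched base.
services:
  mail:
    build: ./mail     # Dockerfile: FROM local/base:latest
    restart: unless-stopped
  web:
    build: ./web      # Dockerfile: FROM local/base:latest
    ports:
      - "80:80"
    restart: unless-stopped
```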
