How do you ensure a Kubernetes Deployment file does not override secure settings in a Dockerfile?

Let's assume you want to run a container under a rootless user context using Kubernetes and the Docker runtime, so you set the USER directive in the Dockerfile to a non-root user (e.g. uid 1000). However, this setting can be overridden by the Deployment file using the runAsUser field.
If the above scenario is possible (correct me if I am wrong), the security team would potentially scan the Dockerfile and container image for vulnerabilities and find them to be safe, only to be exposed to risk at deploy time when a Kubernetes Deployment file specifies runAsUser: 0, which they are not aware of.
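To make the override concrete, here is a minimal sketch (the names and image are hypothetical). The runAsUser value in the pod spec takes precedence over the USER directive baked into the image:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: app
        image: example-app:latest   # Dockerfile declares USER 1000
        securityContext:
          runAsUser: 0              # overrides USER and runs the container as root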
What do you think is the best way to mitigate this risk? Obviously, we can place a gate that scans Deployment files as a final check, check both artefacts, or deploy a PodSecurityPolicy that enforces this, but I was keen to hear whether there are more efficient ways, especially in an Agile development space.
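As one cluster-side guard, a PodSecurityPolicy along these lines rejects any pod that attempts to run as root, regardless of what the Deployment file specifies (a minimal sketch; note that PodSecurityPolicy is deprecated in newer Kubernetes releases in favour of Pod Security admission):

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: require-non-root
spec:
  privileged: false
  runAsUser:
    rule: MustRunAsNonRoot   # rejects runAsUser: 0 and images that default to root
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny

Setting runAsNonRoot: true in a workload's own securityContext gives a similar per-pod check: the kubelet refuses to start a container whose effective user is root.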

Related

Is it possible to manage Dockerfile for a project externally
So instead of ProjectA/Dockerfile and ProjectB/Dockerfile,
can we do something like project-deploy/Dockerfile.ProjectA and project-deploy/Dockerfile.ProjectB, which somehow will know how to build the ProjectA and ProjectB Docker images?
We would like to allow separation of the developer and DevOps roles.
Yes, this is possible, though not recommended (I'll explain why in a second). First, here is how you would accomplish what you asked:
Docker Build
The command to build an image in its simplest form is docker build . which performs a build with a build context pulled from the current directory. That means the entire current directory is sent to the docker service, and the service will use it to build an image. This build context should contain all of the local resources docker needs to build your image. In this case, docker will also assume the existence of a file called Dockerfile inside of this context, and use it for the actual build.
However, we can override the default behavior by specifying the -f flag in our build command, e.g. docker build -f /path/to/some.dockerfile . This command uses your current directory as the build context, but uses its own Dockerfile that can be located elsewhere.
So in your case, let's assume the code for ProjectA is housed in the directory project-a and the deployment files in project-deploy. You can build and tag your Docker image as project-a:latest like so:
docker build -f project-deploy/Dockerfile.ProjectA -t project-a:latest project-a/
Why this is a bad idea
There are many benefits to using containers over traditional application packaging strategies. These benefits stem from the extra layer of abstraction that a container provides. It enables operators to use a simple and consistent interface for deploying applications, and it empowers developers with greater control and ownership of the environment their application runs in.
This aligns well with the DevOps philosophy, increases your team's agility, and greatly alleviates operational complexity.
However, to enjoy the advantages containers bring, you must make the organizational changes to reflect them, or all you're doing is making things more complex and further separating operations and development:
If your operators are writing your Dockerfiles instead of your developers, then you're just adding more complexity to their job with few tangible benefits;
If your developers are not in charge of their application environments, there will continue to be conflict between operations and development, accomplishing basically nothing for them either.
In short, Docker is a tool, not a solution. The real solution is to make organizational changes that empower and accelerate the individual with logically consistent abstractions, and docker is a great tool designed to complement that organizational change.
So yes, while you could separate the application's environment (the Dockerfile) from its code, it would be in direct opposition to the DevOps philosophy. The best solution is to treat the Docker image as an application resource, keep it in the application project, and handle operational configuration (like environment variables and secrets) through Docker's support for volumes and variables.
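For example, operational configuration can be injected at run time while the image itself stays with the application project (a sketch; the image name, variable, and paths are hypothetical):

# Operators supply environment-specific settings without touching the image.
docker run \
  -e API_URL=https://api.example.internal \
  -v /etc/project-a/config:/app/config:ro \
  project-a:latest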

How can I present environmental information (like external service URLs or passwords) to an OpenShift / Kubernetes deployment?

I have a front-end (React) application. I want to build it and deploy it to three environments: dev, test and production. Like every front-end app, it needs to call some APIs, and the API addresses will vary between the environments, so they should be stored as environment variables.
I use the S2I OpenShift build strategy to create the image. The image should be built and then effectively sealed against changes; before deployment to each particular environment, the variables should be injected.
So I believe the proper solution is a chained, two-stage build: the first an S2I build which compiles the sources and puts the result into an Nginx/Apache/other container, and a second which takes the result of the first, adds the environment variables, and produces the final images to be deployed to dev, test and production.
Is this the correct approach, or does a simpler solution exist?
I would not bake your environmental information into your runtime container image. One of the major benefits of containerization is using the same runtime image in all of your environments. Generating a different image for each environment would increase the chance that your production deployments behave differently than those you tested in your lower environments.
For non-sensitive information, the typical approach to parameterizing your runtime image is to use one or more of the following (a combined example follows after the second list below):
ConfigMaps
Secrets
Pod Environment Variables
For sensitive information the typical approach is to use:
Secrets (which are not very secret, as anyone with root access to the hosts or cluster-admin in the cluster RBAC can read them)
A vault solution like Hashicorp Vault or Cyberark
A custom solution you develop in-house that meets your security needs
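As a minimal sketch of both approaches, assuming a ConfigMap named frontend-config and a Secret named frontend-secrets (all names here are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: app
    image: frontend:latest
    env:
    - name: API_URL               # non-sensitive, injected from a ConfigMap
      valueFrom:
        configMapKeyRef:
          name: frontend-config
          key: api.url
    - name: API_TOKEN             # sensitive, injected from a Secret
      valueFrom:
        secretKeyRef:
          name: frontend-secrets
          key: api.token

The same image can then be promoted unchanged from dev to test to production, with only the ConfigMap and Secret differing per environment.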

AWS CodeBuild - Security Implications of Enabling Docker Layer Cache

When creating a CodeBuild project, it's possible to configure a cache in the Artifacts section to speed up subsequent builds.
Docker layer cache is one of the options there. AWS documentation says:
LOCAL_DOCKER_LAYER_CACHE mode caches existing Docker layers. This mode is a good choice for projects that build or pull large Docker images. It can prevent the performance issues caused by pulling large Docker images down from the network.
Note
You can use a Docker layer cache in the Linux environment only.
The privileged flag must be set so that your project has the required Docker permissions.
You should consider the security implications before you use a Docker layer cache.
The question is: What are those security implications?
I believe the AWS docs have been improved since this question was raised, but maybe this will also be useful.
A container in privileged mode does not differ from any other process running with full capabilities on the host machine, which undermines the whole idea of container isolation.
Privileged mode opens the possibility for a container to escape its namespaces and gain read/write access to the root partition, and/or access network devices (any sort of direct interaction with the system).
If a container is compromised through an exploit, the security implications could include
encryption or deletion of disk partitions
modification of .ssh/authorized_keys
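As a harmless illustration of that exposure (a sketch; device names vary by host), a privileged container sees the host's raw block devices, which an unprivileged container does not:

# With --privileged, the host's disks (e.g. sda, nvme0n1) appear in /dev,
# so a compromised build step could mount or overwrite them.
docker run --rm --privileged alpine ls /dev

# Without --privileged, the container sees only its own minimal /dev.
docker run --rm alpine ls /dev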

Is it possible to use a kernel module built from within Docker?

I have a custom kernel module I need to build for a specific piece of hardware. I want to automate setting up my system so I have been containerizing several applications. One of the things I need is this kernel module. Assuming the kernel headers et al in the Docker container and the kernel on the host are for the exact same version, is it possible to have my whole build process containerized and allow the host to use that module?
Many tasks that involve controlling the host system are best run directly on the host, and I would avoid Docker here.
At an insmod(8) level, Docker containers generally run with a restricted set of capabilities and can't make extremely invasive changes like this to the host. There's probably a docker run --cap-add option that would theoretically make it possible, but a significant design statement of Docker is that container processes aren't supposed to be able to impact other containers or the host like this.
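For completeness, a hypothetical sketch of what that would involve; the image name and module path are made up, and this is exactly the kind of host-invasive pattern Docker's design discourages:

# Grant the module-loading capability and share the host's module tree.
# Everything here is illustrative only, not a recommended setup.
docker run --rm \
  --cap-add SYS_MODULE \
  -v /lib/modules:/lib/modules:ro \
  module-builder:latest \
  insmod /build/mymodule.ko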
At an even broader Linux level, custom kernel modules must be built against the host's exact kernel version. This means that if you update the host kernel (for a routine security update, for example), you also have to rebuild and reinstall any custom modules. Mainstream Linux distributions have support for this, but if you've boxed management of this away into a container, you have to remember how to rebuild the container with the newer kernel headers and make sure it doesn't get restarted until you reboot the host. That can be tricky.
At a Docker level, you’re in effect building an image that can only be used on one very specific system. Usually the concept is to build an image that can be reused in multiple contexts; you want to be able to push the image to a registry and run it on another system with minimal configuration. It’s hard to do this if an image is tied to an extremely specific kernel version or other host-level dependency.

Does Kubernetes support log retention?

How can one define log retention for Kubernetes pods?
For now it seems like the log file size is not limited, and it uses up the host machine's resources.
According to Logging Architecture from kubernetes.io, there are some options:
First option
Kubernetes currently is not responsible for rotating logs, but rather a deployment tool should set up a solution to address that. For example, in Kubernetes clusters deployed by the kube-up.sh script, there is a logrotate tool configured to run each hour. You can also set up a container runtime to rotate applications' logs automatically, e.g. by using Docker's log-opt. In the kube-up.sh script, the latter approach is used for the COS image on GCP, and the former approach is used in any other environment. In both cases, by default, rotation is configured to take place when a log file exceeds 10MB.
Second option
Sidecar containers can also be used to rotate log files that cannot be rotated by the application itself. An example of this approach is a small container running logrotate periodically. However, it’s recommended to use stdout and stderr directly and leave rotation and retention policies to the kubelet.
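A sketch of that sidecar pattern, assuming hypothetical image names and paths: the application and a logrotate container share an emptyDir volume holding the log files.

apiVersion: v1
kind: Pod
metadata:
  name: app-with-logrotate
spec:
  volumes:
  - name: app-logs
    emptyDir: {}
  containers:
  - name: app
    image: my-app:latest              # writes log files to /var/log/app
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  - name: logrotate
    image: logrotate-sidecar:latest   # runs logrotate on a schedule
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app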
You can always set the log retention policy on your Docker nodes.
See: https://docs.docker.com/config/containers/logging/json-file/#examples
I've just got this working by changing the ExecStart line in /etc/default/docker and adding --log-opt max-size=10m.
Please note that this will affect all containers running on a node, which makes it ideal for a Kubernetes setup (because my real-time logs are uploaded to an external ELK stack anyway).
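Alternatively, the json-file driver options documented at the link above can be set daemon-wide in /etc/docker/daemon.json (a minimal sketch; the daemon must be restarted afterwards). max-size rotates a container's log once it reaches 10MB, and max-file keeps at most three rotated files per container:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}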
