How can I manage the SELinux context of a docker container?

tl;dr: Is there a way to run a docker container in the caller's SELinux context?
I am running a bunch of different docker images on different (mostly RH-based) servers for building/testing purposes.
Imagine an automated flow like this:
checkout sources
run on CentOS 7
run on Ubuntu 18.04
run on Fedora 30
One particular feature of this setup is that all docker containers work on the same (versioned) source files, bind-mounted into /src. Earlier on, I discovered I had to supply the SELinux mount options :Z or :z for the container to get access to the files checked out on the host.
Due to a design decision, the containers also have access to the other containers' build artifacts through /src. And apparently the relabeling of /src can take minutes on a host system with a rather large build history.
I could of course try to restructure things or use --security-opt label:disable on the containers. But I wondered why the relabeling is necessary in the first place. Could I not simply run the containers in my context? They basically work like a sophisticated chroot in that setup and do not expose any public services.
Bonus question: What, exactly, does --security-opt label:disable do?
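For context, here is a rough sketch of the three labeling choices being discussed; the paths, image, and build command are placeholders rather than my actual setup:
# :Z relabels /src with a label private to this one container
docker run --rm -v /src:/src:Z centos:7 /src/build.sh
# :z relabels /src with a shared label so several containers can use it
docker run --rm -v /src:/src:z centos:7 /src/build.sh
# disable SELinux label separation for this container only, so no relabeling happens;
# label:disable is the older spelling of the same option
docker run --rm --security-opt label=disable -v /src:/src centos:7 /src/build.sh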

Related

docker: /opt/docker folder not created

I am trying to dockerize my project. I can test it locally in my WSL environment, and it works fine: inside docker, the /opt/docker folder is created, and I can access my application from the host machine.
But on dev server, I observe that /opt/docker is not even created.
I am not able to diagnose the root cause. Shouldn't docker behave similarly on all machines?
Not necessarily, no. You shouldn't care about 'docker', how it's implemented, or what directories it uses. You should only care that it works.
For example, on my WSL installation I have /opt/containerd, not /opt/docker. I think this is because I installed Docker directly inside WSL (because I refuse to use Docker Desktop). It's different again when I deploy to my k8s cluster, which doesn't use docker at all.
You should care about your images and containers. As long as your container runs the same, then the rest is an implementation detail that should be transparent to you.

Should I create a docker container or docker start a stopped container?

From the Docker philosophy's point of view, which is more advisable:
create a container every time we need a certain environment and remove it after use (docker run <image> every time); or
create a container for a specific environment (docker run <image>), stop it when it is not needed, and start it again whenever it is needed (docker start <container>)?
If you docker rm the old container and docker run a new one, you will always get a clean filesystem that starts from exactly what's in the original image (plus any volume mounts). You will also fairly routinely need to delete and recreate a container to change basic options: if you need to change a port mapping or an environment variable, or if you need to update the image to have a newer version of the software, you'll be forced to delete the container.
This is enough reason for me to make my standard process be to always delete and recreate the container.
# docker build -t the-image . # can be done first if needed
docker stop the-container # so it can cleanly shut down and be removed
docker rm the-container
docker run --name the-container ... the-image
Other orchestrators like Docker Compose and Kubernetes are also set up to automatically delete and recreate the container (or Kubernetes pod) if there's a change; their standard workflows do not generally involve restarting containers in-place.
I almost never use docker start. In a Compose-based workflow I generally use only docker-compose up -d, letting it recreate things if needed, and docker-compose down only when I need to reclaim the CPU/memory resources the container stack was using; that is not part of routine work.
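For illustration, that Compose workflow amounts to roughly these commands (nothing here is project-specific):
docker-compose up -d          # create containers, or delete and recreate any whose image/config changed
docker-compose up -d --build  # same, but rebuild local images first
docker-compose down           # stop and remove the whole stack (named volumes survive unless you add -v)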
I'm talking with regards to my experience in the industry so take my answer with a grain of salt, because there might be no hard evidence or reference to the theory.
Here's the answer:
TL;DR:
In short, you should not rely on docker stop and docker start: that approach is unreliable, and you might lose the container and all the data inside it if no proper precautions are taken beforehand.
Long answer:
You should work with images rather than containers. Whenever you need some specific data, or you need the image to keep some customization, you are better off using docker save so you have an image for future use.
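For example, a rough sketch of preserving a customized container as an image for later reuse (all names here are made up):
docker commit my-container my-snapshot:v1     # freeze the container's filesystem into an image
docker save -o my-snapshot.tar my-snapshot:v1 # export the image to a tarball
docker load -i my-snapshot.tar                # re-import it later, possibly on another host
docker run --rm -it my-snapshot:v1            # run a fresh container from the saved image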
If you're just testing out on your local machine, or in your dev virtual machine on a remote host, you're free to use either one you like. I personally take each of the approaches on different scenarios.
But if you're talking about a production environment, you'd better use some orchestration tool; it could be as simple and easy to work with as docker-compose or docker swarm, or Kubernetes for more complex environments.
You had better not take the second approach (docker run, docker stop & docker start) in those environments, because at any moment you might lose that container, and if you are solely dependent on that specific container or its data, you're going to have a bad weekend.

Docker and jenkins

I am working with docker and jenkins, and I'm trying to do two main tasks :
Control and manage docker images and containers (run/start/stop) with jenkins.
Set up a development environment in a docker image then build and test my application which is in the container using jenkins.
While I was surfing the net I found many solutions :
Run jenkins as container and link it with other containers.
Run jenkins as service and use the jenkins plugins provided to support docker.
Run jenkins inside the container which contain the development environment.
So my question is: what is the best solution, or can you suggest another approach?
One more question: I heard about running a container inside a container. Is that good practice, or is it better to avoid it?
Running Jenkins as a containerized service is not a difficult task. There are many images out there that allow you to do just that. It took me just a couple of minutes to make Jenkins 2.0-beta-1 run in a container, compiling from source (image can be found here). I particularly like this approach; you just have to make sure to use a data volume or a data container as jenkins_home to make your data persist.
Things become a little bit trickier when you want to use this Jenkins - in a container - to build and manage containers itself. To achieve that, you need to implement something called docker-in-docker, because you'll need a docker daemon and client available inside the Jenkins container.
There is a very good tutorial explaining how to do it: Docker in Docker with Jenkins and Supervisord.
Basically, you will need to make the two processes (Jenkins and Docker) run in the container, using something like supervisord. It's doable and claims to have good isolation, etc... but it can be really tricky, because the docker daemon itself has dependencies that need to be present inside the container as well. So only using supervisord and running both processes is not enough; you'll need to make use of the DIND project itself to make it work... AND you'll need to run the container in privileged mode... AND you'll need to deal with some strange DNS problems...
For my personal taste, it sounded like too many workarounds to make something simple work, and having two services running inside one container seems to break Docker good practices and the principle of separation of concerns, something I'd like to avoid.
My opinion got even stronger when I read this: Using Docker-in-Docker for your CI or testing environment? Think twice. It's worth mentioning that this last post is from the DIND author himself, so he deserves some attention.
My final solution is: run Jenkins as a containerized service, yes, but consider the docker daemon as part of the provisioning of the underlying server, even because your docker cache and images are data that you'll probably want to persist and they are fully owned and controlled by the daemon.
With this setup, all you need to do is mount the docker daemon socket in your Jenkins image (which also needs the docker client, but not the service):
$ docker run -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock -v /local/folder/with/jenkins_home:/var/jenkins_home namespace/my-jenkins-image
Or with a docker-compose volumes directive:
---
version: '2'
services:
  jenkins:
    image: namespace/my-jenkins-image
    ports:
      - '8080:8080'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./local/folder/with/jenkins_home:/var/jenkins_home
  # other services ...
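For completeness, a hypothetical Dockerfile for an image like namespace/my-jenkins-image: the official Jenkins image plus only the static Docker client binary, with the daemon staying on the host. The pinned version and the assumption that curl is available in the base image are mine, not from the original answer:
FROM jenkins/jenkins:lts
USER root
# Install only the standalone docker client; no daemon inside the image.
RUN curl -fsSL https://download.docker.com/linux/static/stable/x86_64/docker-24.0.7.tgz \
    | tar -xz -f - -C /usr/local/bin --strip-components=1 docker/docker
# Note: the jenkins user still needs read/write access to the mounted
# /var/run/docker.sock; aligning group membership or socket permissions
# is left to the host setup.
USER jenkins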

Jenkins-node as docker container

The jenkins-node is a docker container on which the jobs run. A Jenkins job running in the dockerized jenkins-node checks the project out of svn/git and runs the build and tests in other docker containers launched by the job. In doing so, the job mounts files/directories from the checked-out project into the build container via "docker run -v <host-path>:<container-path> ...". This sounds like docker-in-docker, but according to http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/ docker-in-docker is not good in CI. With the recommended approach (mounting the host's docker socket into the jenkins-node container) I'm facing the problem that the mounted files appear as empty directories in the build container. I think that's because these files are not known to the host (they are checked out inside the jenkins-node container). Providing the --privileged flag doesn't help either.
However, the 'evil' docker-in-docker approach works fine in this scenario. Am I doing something wrong, or is docker-in-docker the way to go here?
With "expose the docker socket" approach all volume paths are going to be relative to the host. So if you need to access something in the jenkins-node container you have two options:
make sure the checkout directory is a volume, and use --volumes-from jenkins-node as an argument to all the other docker containers. From your question it sounds like the containers created by the test suite would be configured from the app repos, so this is probably not a good option.
make the checkout directory a host mounted volume -v /git/checkouts:/path/in/jenkins-node/container when you start jenkins-node. That way the files will actually end up on the host (not in the jenkins-node container), and you'll be able to access them a the host path.
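A minimal sketch of that second option, with made-up paths and image names: the same host directory is mounted into jenkins-node and, by its host path, into every build container, so the files are real files on the host no matter which container wrote them:
# start the Jenkins node, mounting the docker socket and a host directory for checkouts
docker run -d --name jenkins-node \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /git/checkouts:/home/jenkins/workspace \
  my-jenkins-node-image

# inside a job, always reference the *host* path when mounting into a build container
docker run --rm -v /git/checkouts/my-project:/src my-build-image make test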
I would also say that the article you're referencing is more of a caution. dind is still done quite a bit; sometimes it's even necessary. It's not the worst thing ever, just be aware that it's not a silver bullet and comes with its own set of issues/problems.

Is there a "multi-user" Docker mode, e.g. for scientific clusters?

I want to use Docker to isolate scientific applications for use in an HPC Unix cluster. Scientific software often has exotic dependencies, so isolating them with Docker appears to be a good idea. The programs are to be run as jobs and not as services.
I want to have multiple users use Docker and the users should be isolated from each other. Is this possible?
I performed a local Docker installation and had two users in the docker group. The call to docker images showed the same results for both users.
Further, the jobs should be run under the calling user's UID and not as root.
Is such a setup feasible? Has it been done before? Is this documented anywhere?
Yes, there is! It's called Singularity, and it was designed with scientific applications and multi-user HPC systems in mind. More at http://singularity.lbl.gov/
OK, I think there will be more and more solutions popping up for this. I'll try to keep the following list updated:
udocker for executing Docker containers as users
Singularity (Kudos to Filo) is another Linux container based solution
Don't forget about DinD (Docker in Docker): jpetazzo/dind
You could dedicate one Docker per user, and within one of those docker containers, the user could launch a job in a docker container.
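As a small illustration of the udocker route (image and command are arbitrary examples), everything below runs as an ordinary unprivileged user, with no daemon and no docker group membership required:
udocker pull ubuntu:18.04
udocker create --name=myjob ubuntu:18.04
udocker run myjob cat /etc/os-release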
I'm also interested in this possibility with Docker, for similar reasons.
There are a few problems I can think of:
The Docker daemon runs as root, giving anyone in the docker group effective host root permissions (e.g. they can leak permissions by mounting the host's / directory as root).
Multi-user isolation, as mentioned.
Not sure how well this will play with any existing load balancers?
I came across Shifter, which may be worth a look and partly solves #1:
http://www.nersc.gov/research-and-development/user-defined-images/
Also, I know there is discussion about using kernel user namespaces to provide the mapping container:root --> host:non-privileged user, but I'm not sure if this is happening or not.
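For reference, that mapping corresponds to the daemon's userns-remap setting in later Docker releases; a minimal sketch of /etc/docker/daemon.json enabling it (it applies daemon-wide rather than per calling user, and needs a daemon restart):
{
    "userns-remap": "default"
}
With "default", the daemon creates a dockremap user and maps container root onto an unprivileged UID/GID range on the host.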
There is an officially supported Docker image that allows one to run Docker in Docker (dind), available here: https://hub.docker.com/_/docker/. This way, each user can have their own Docker daemon. First, start the daemon instance:
docker run --privileged --name some-docker -d docker:stable-dind
Note that the --privileged flag is required. Next, connect to that instance from a second container:
docker run --rm --link some-docker:docker docker:edge version
