Control a container from another one - Jenkins

I have found a lot of articles which speak about communication between Docker containers (docker network, docker link), but I don't know if there is a good practice for controlling a container from another one, e.g. running and stopping a container.
If the only way is to use the REST API on the host, do you have a good article which explains that? About the REST API I have found plenty of articles, but most of them are outdated.
To clarify my intention: I have a Jenkins container which builds the project and moves the build output into another folder for a second container, which executes the built code. Basically, before the move I want to stop that container, and afterwards restart it.
Thanks for the help.

I don't know if there is a good practice for controlling a container from another one, e.g. running and stopping a container.
It's a "good enough" practice, and plenty of people do it. CoreOS's /usr/bin/toolbox is basically this, and a few others, like RancherOS, do it as well.
If the only way is to use the REST API on the host, do you have a good article which explains that?
No, it is not the only way. You can mount Docker's socket into another container and then run docker commands on the host directly from inside that container. This practice is called "docker in docker", "dind", "nested containers", etc. There is a variation of this where people run a full-fledged Docker (engine/daemon + client) within an existing container, but that is not what you want to do here.
The gist of it is usually the same: the Docker Unix socket, /var/run/docker.sock, is exposed/mounted within the "controlling container", i.e. the container you want to use to control the Docker daemon. You then install the Docker command-line client and use docker commands as normal; docker ps, docker start/stop/run should all work as expected.
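As a minimal sketch of that setup (my-controller-image and my-app are placeholder names, and the docker CLI is assumed to be installed in the controlling image):
# on the host: start the controlling container with the docker socket mounted
docker run -d --name controller -v /var/run/docker.sock:/var/run/docker.sock my-controller-image
# inside the controlling container, the usual commands now act on the host's daemon:
docker ps                # lists containers running on the host
docker stop my-app       # stops a container on the host
docker start my-app      # starts it again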
It's not trivial to set it up [1], and there are associated security concerns [2][3], but there are plenty of people doing it.
Here are your references:
[1] https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/ (see the section under "Solution"; everything before that is what you should not be doing)
[2] https://www.lvh.io/posts/dont-expose-the-docker-socket-not-even-to-a-container.html
[3] https://raesene.github.io/blog/2016/03/06/The-Dangers-Of-Docker.sock/

Related

Can I get a docker container to restart itself from an image?

Background
I have a container; it's running a lot of stuff, including a frontend that is exposed to other developers.
Users are going to be uploading some of their shell/Python scripts onto my container to be run.
To keep my container working, my plan is to send each script to a sibling container which will run it and send me back the response. The users' scripts should be able to download external packages etc.
Then I want the sibling container to be "cleaned".
Question
Can I have that sibling container restart itself from its source image once it is done running the user's script? This way users can get a consistently clean container to run their scripts on.
Note
If I am completely barking up the wrong tree with this solution, please let me know. I am trying to get some weird functionalities going and could be approaching this from the wrong angle.
EDIT 1 (other approaches and why I don't think I like them)
Two alternatives that I have thought of are having the frontend container run containers on it, or having the sibling container run Docker containers on it. But these two solutions run into the difficulty of Docker-in-Docker. Another solution may be to heighten my frontend container's permissions until it can create sibling containers on the fly for running scripts, but I am worried that this may give my frontend container unnecessarily high permissions.
EDIT 2 (all the documentation I have found on having a container restart itself)
I am aware of the documentation on autorestart, but I don't believe that this will clean the container's contents, for instance if a file was downloaded onto it.
My answer has some severe security implications.
You could control your sibling containers from your main container, if you map the docker socket from the host into your main container.
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
Now you have complete control over the docker engine from inside your main container. You can start, stop, etc your sibling containers, and spawn new (clean) siblings.
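As a rough sketch of that clean-slate cycle from inside the main container (script-runner and runner-image are placeholder names):
# run the user's script in a disposable sibling; --rm removes the container afterwards
docker run --rm --name script-runner runner-image /scripts/user-script.sh
# or, if the sibling is long-lived, recreate it from its image to get back to a clean state
docker rm -f script-runner
docker run -d --name script-runner runner-image
In many cases --rm alone is enough, since every docker run starts from the pristine image.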
But remember that this is effectively granting host root rights to your main container.

Any reasons to not use Docker Swarm (instead of Docker-Compose) on a single node?

There's Docker Swarm (now built into Docker) and Docker-Compose. People seem to use Docker-Compose when running containers on a single node only. However, Docker-Compose doesn't support any of the deploy config values (see https://docs.docker.com/compose/compose-file/#deploy), which include mem_limit and cpus, and which seem nice/important to be able to set.
So maybe I should therefore use Docker Swarm, even though I'm deploying on a single node only? The installation instructions would then also be simpler for other people to follow (they wouldn't need to install Docker-Compose).
But maybe there are reasons why I should not use Swarm on a single node?
I'm posting an answer below, but I'm not sure if it's correct.
Edit: Please note that this is not an opinion based question. If you have a look at the answer below, you'll see that there are "have-to" and "cannot-do" facts about this.
For development, use Docker-Compose, because only Docker-Compose is able to read your Dockerfiles and build images for you; Docker Stack instead needs pre-built images. Also, with Docker-Compose you can easily stop and start single containers, with docker-compose kill ... and ... start .... This is useful during development (in my experience), for example to see how the app server reacts if you kill the database. Then you don't want Swarm to auto-restart the database right away.
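For example (db is just an assumed service name from the compose file):
docker-compose kill db     # stop the database abruptly
# ...watch how the app server behaves without its database...
docker-compose start db    # bring it back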
In production, use Docker Swarm (unless: see below), so you can configure mem limits. Docker-Compose has less functionality than Docker Swarm (no mem or cpu limits, for example) and doesn't have anything that Swarm does not have (right?). So there is no reason to use Compose in production (except maybe if you already know how Compose works and don't want to spend time reading about the new Swarm commands).
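For reference, those limits go under the deploy key, which docker stack deploy honors and docker-compose ignores; a minimal sketch (service and image names are placeholders):
version: '3'
services:
  app:
    image: name:1.2.3
    deploy:
      resources:
        limits:
          cpus: '0.50'
          memory: 512M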
Docker Swarm doesn't, however, support .env files like Docker-Compose does. So you cannot have e.g. IMAGE_VERSION=1.2.3 in an .env file and then use image: name:${IMAGE_VERSION} in the docker-compose.yml file. See https://github.com/moby/moby/issues/29133; instead you'll need to set the env vars "manually": IMAGE_VERSION=SOMETHING docker stack up .... (This actually made me stick with Docker-Compose, plus the fact that I couldn't reasonably quickly find out how to view a container's logs via Swarm; Swarm seemed more complicated.)
In addition to KajMagnus's answer, I should note that Docker Swarm still doesn't support Linux capabilities the way Docker [Compose] does. You can learn about this issue and dive into the Docker community discussion here.

Docker and Jenkins

I am working with docker and jenkins, and I'm trying to do two main tasks :
Control and manage Docker images and containers (run/start/stop) with Jenkins.
Set up a development environment in a Docker image, then use Jenkins to build and test my application, which lives in the container.
While I was surfing the net I found many solutions :
Run Jenkins as a container and link it with other containers.
Run Jenkins as a service and use the Jenkins plugins provided to support Docker.
Run Jenkins inside the container which contains the development environment.
So my question is: what is the best solution, or can you suggest another approach?
One more question: I have heard about running a container inside a container. Is it a good practice, or is it better to avoid it?
Running Jenkins as a containerized service is not a difficult task. There are many images out there that allow you to do just that. It took me just a couple of minutes to make Jenkins 2.0-beta-1 run in a container, compiling it from source (the image can be found here). I particularly like this approach; you just have to make sure to use a data volume or a data container as jenkins_home to make your data persist.
Things become a little bit trickier when you want to use this Jenkins - in a container - to build and manage containers itself. To achieve that, you need to implement something called docker-in-docker, because you'll need a docker daemon and client available inside the Jenkins container.
There is a very good tutorial explaining how to do it: Docker in Docker with Jenkins and Supervisord.
Basically, you will need to make the two processes (Jenkins and Docker) run in the container, using something like supervisord. It's doable and is claimed to have good isolation, etc., but it can be really tricky, because the Docker daemon itself has some dependencies that need to be present inside the container as well. So only using supervisord and running both processes is not enough; you'll need to make use of the DIND project itself to make it work... AND you'll need to run the container in privileged mode... AND you'll need to deal with some strange DNS problems...
For my personal taste, that sounded like too many workarounds to make something simple work, and having two services running inside one container seems to break Docker good practices and the principle of separation of concerns, something I'd like to avoid.
My opinion got even stronger when I read this: Using Docker-in-Docker for your CI or testing environment? Think twice. It's worth mentioning that this last post is from the DIND author himself, so he deserves some attention.
My final solution is: run Jenkins as a containerized service, yes, but treat the Docker daemon as part of the provisioning of the underlying server, not least because your Docker cache and images are data that you'll probably want to persist, and they are fully owned and controlled by the daemon.
With this setup, all you need to do is mount the docker daemon socket in your Jenkins image (which also needs the docker client, but not the service):
$ docker run -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock -v local/folder/with/jenkins_home:/var/jenkins_home namespace/my-jenkins-image
Or with a docker-compose volumes directive:
---
version: '2'
services:
  jenkins:
    image: namespace/my-jenkins-image
    ports:
      - '8080:8080'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - local/folder/with/jenkins_home:/var/jenkins_home
  # other services ...
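A quick sanity check for either setup (assuming the compose service is called jenkins and the docker client is installed in the image, as noted above):
docker-compose exec jenkins docker version   # should report both the client and the host daemon's versions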

Jenkins-node as docker container

The jenkins-node is a Docker container on which the jobs are run. A Jenkins job running in the dockerized jenkins-node checks out the project from svn/git and runs the build and tests in other Docker containers launched by the job. In doing so, the Jenkins job mounts files/directories from the checked-out project into the build container via "docker run -v : ...". This sounds like docker-in-docker, but according to http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/ docker-in-docker is not good for CI. With the recommended approach (mount the host's docker socket into the jenkins-node container) I'm facing the problem that the mounted files appear as empty directories in the build container. I think it's because these files are not known to the host (they are checked out inside the jenkins-node container). Providing the --privileged flag doesn't help either.
However, the 'evil' docker-in-docker approach works fine in this scenario. Am I doing something wrong, or is docker-in-docker the way to go here?
With the "expose the docker socket" approach, all volume paths are going to be relative to the host. So if you need to access something in the jenkins-node container, you have two options:
make sure the checkout directory is a volume, and use --volumes-from jenkins-node as an argument to all the other docker containers. From your question it sounds like the containers created by the test suite would be configured from the app repos, so this is probably not a good option.
make the checkout directory a host-mounted volume (-v /git/checkouts:/path/in/jenkins-node/container) when you start jenkins-node. That way the files will actually end up on the host (not just in the jenkins-node container), and you'll be able to access them at the host path.
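A rough sketch of that second option (image names and paths are placeholders):
# start jenkins-node with the docker socket and a host-backed checkout directory
docker run -d --name jenkins-node \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /git/checkouts:/path/in/jenkins-node/container \
    my-jenkins-node-image
# inside a job, reference the *host* path when mounting into build containers
docker run --rm -v /git/checkouts/my-project:/workspace my-build-image ./run-tests.sh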
I would also say that the article you're referencing is more of a caution. dind is still done quite a bit; sometimes it's even necessary. It's not the worst thing ever, just be aware that it's not a silver bullet and does come with its own set of issues/problems.

Is there a "multi-user" Docker mode, e.g. for scientific clusters?

I want to use Docker to isolate scientific applications for use in an HPC Unix cluster. Scientific software often has exotic dependencies, so isolating it with Docker appears to be a good idea. The programs are to be run as jobs, not as services.
I want to have multiple users use Docker and the users should be isolated from each other. Is this possible?
I performed a local Docker installation and had two users in the docker group. The call to docker images showed the same results for both users.
Further, the jobs should be run under the calling user's UID and not as root.
Is such a setup feasible? Has it been done before? Is this documented anywhere?
Yes there is! It's called Singularity, and it was designed with scientific applications and multi-user HPC systems in mind. More at http://singularity.lbl.gov/
OK, I think there will be more and more solutions popping up for this. I'll try to update the following list in the future:
udocker for executing Docker containers as users
Singularity (Kudos to Filo) is another Linux container based solution
Don't forget about DinD (Docker in Docker): jpetazzo/dind
You could dedicate one Docker per user, and within one of those docker containers, the user could launch a job in a docker container.
I'm also interested in this possibility with Docker, for similar reasons.
There are a few problems I can think of:
The Docker daemon runs as root, providing anyone in the docker group with effective host root permissions (e.g. permissions can be leaked by mounting the host's / directory as root).
Multi-user isolation, as mentioned.
Not sure how well this will play with any existing load balancers?
I came across Shifter, which may be worth a look and partly solves #1:
http://www.nersc.gov/research-and-development/user-defined-images/
Also, I know there is discussion about using kernel user namespaces to provide a mapping container:root --> host:non-privileged user, but I'm not sure if this is happening or not.
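For what it's worth, the daemon flag for this kind of mapping is --userns-remap (also settable in /etc/docker/daemon.json); a minimal sketch, assuming a Docker version that ships it:
# map container root onto an unprivileged subordinate UID/GID range on the host
# ("default" creates and uses the dockremap user)
dockerd --userns-remap=default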
There is an officially supported Docker image that allows one to run Docker in Docker (dind), available here: https://hub.docker.com/_/docker/. This way, each user can have their own Docker daemon. First, start the daemon instance:
docker run --privileged --name some-docker -d docker:stable-dind
Note that the --privileged flag is required. Next, connect to that instance from a second container:
docker run --rm --link some-docker:docker docker:edge version
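From the second container you can then run ordinary docker commands against that per-user daemon instead of the host's; a hedged example, assuming the inner daemon listens on the default non-TLS port 2375 (as the stable-dind image above does):
docker run --rm --link some-docker:docker -e DOCKER_HOST=tcp://docker:2375 docker:edge ps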
