sharing docker.sock or docker in docker (dind) - jenkins

We're running a Mesos cluster and Jenkins for our continuous integration workflow.
Jenkins is configured with the Mesos plugin.
Previously we built our docker images in Mesos containers. Now we are switching to docker containers for building our docker images.
I've been trying to find the advantage of building our docker images inside a docker container with a dind image like "dind-jenkins-slave", found on Docker Hub.
With dind you lose the caching opportunities you would otherwise have when sharing the host's docker.sock. And with dind you also have to run the container with the --privileged flag.
What is the downside of just sharing the docker.sock of the host?

I'm using the shared docker.sock approach. The only downside I see is security: anyone who can run arbitrary docker containers can do whatever they want with the host. But if you trust the people who create the jobs, or can control which docker containers can be run from Jenkins and with which options, then giving access to the main docker daemon is an easy solution.
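For reference, a minimal sketch of what that shared-socket setup can look like; the image name and workspace path are illustrative assumptions, not taken from the answer, and the agent image is assumed to have the docker CLI installed:

# Agent container that talks to the host's Docker daemon via the mounted socket
# (my/jenkins-agent-with-docker is a hypothetical image containing the docker CLI):
docker run -d --name jenkins-agent \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my/jenkins-agent-with-docker

# Builds started inside the agent go to the host daemon, so its layer cache is reused:
docker exec jenkins-agent docker build -t my/app /workspace/app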

It depends on what you're doing, really. To get our Jenkins jobs truly isolated so that we can run as many as we want in parallel, we switched to DinD. If you share the host socket you still only have a single docker daemon: port conflicts, pulling and pushing multiple images from separate jobs, and one job relying on an image or build that is also being modified by another job are all issues.
To get around the caching issue, I create the dind container and leave it around. I run
docker start -a dindslave || docker run -v ${WORKSPACE}:/data my/dindimage jenkinscommands.sh
Then Jenkins just writes its commands to jenkinscommands.sh and restarts the container every time. When you remove the container you remove your cache as well, and caches are not shared between jobs, if that matters to you... but you don't have to think about jobs interfering with one another or about making sure they are running on the same host.
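To make that flow concrete, here is a hedged sketch of what such a jenkinscommands.sh might contain. The /data mount point and the my/dindimage container come from the command above; the build and push steps and the my/app image name are assumptions:

#!/bin/sh
# jenkinscommands.sh -- written into ${WORKSPACE} by the Jenkins job and run
# inside the persistent DinD container, which sees the workspace at /data
set -e
cd /data
docker build -t my/app:latest .   # uses the DinD daemon's own layer cache, kept across restarts
docker push my/app:latest         # push step is illustrative; my/app is a hypothetical image name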

Related


difference between docker service and docker container

I can create a docker container by command
docker run <image_name>
I can create a service by command
docker service create <image_name>
What is the difference between these two in behaviour?
When would I need to create a service over container?
The docker service command in a docker swarm replaces docker run. docker run was built for single-host solutions; its whole idea is to focus on local containers on the system it is talking to. In a cluster, the individual containers are irrelevant: we simply use swarm services to manage the multiple containers in the cluster, and Swarm orchestrates the containers of those services for us.
docker service create is mainly to be used in docker swarm mode. docker run has no concept of scaling up or down. With docker service create you can specify the number of replicas to be created using the --replicas flag; this creates and manages multiple replicas of a container across different nodes (see the example after this answer). There are several such options for managing multiple containers with docker service create and the other commands under docker service ...
One more note: docker services are for container orchestration systems (swarm). They have a built-in facility for failure recovery, i.e. a failed container is recreated, whereas docker run would never recreate a container if it fails. When the docker service commands are used we are not directly asking to perform an action like "create a single container"; rather, we are telling the orchestration system to "put this job in your queue and, when you can get to it, perform that action on the swarm". This means it has rollback facilities, failure mitigation and a lot of intelligence built in.
Consider using docker service create when in swarm mode and docker run when not in swarm mode. You can read up on docker swarm to understand docker services better.
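As a hedged illustration of the difference (nginx is just an example image, and the port and replica numbers are arbitrary):

# Single host: one container, nothing reschedules it if the node dies
docker run -d --name web-single -p 8080:80 nginx

# Swarm mode: the orchestrator keeps three replicas running across the cluster
docker swarm init
docker service create --name web --replicas 3 -p 8080:80 nginx
docker service scale web=5     # scale up later
docker service ps web          # shows which nodes the replicas landed on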
There is no real difference. In the official documentation you can read "Services are really just containers in production".
Services can be declared in "docker-compose.yml" and can be started from it. Once started, they will run as containers.
It is just a common way to name parts of your stack.
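For instance, a compose file that declares services can be deployed to a swarm as a stack, and each service then shows up as one or more running containers (the stack name myapp is illustrative):

docker stack deploy --compose-file docker-compose.yml myapp
docker service ls     # the services declared in the file
docker ps             # the containers those services run as on this node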

Access docker within Dockerfile?

I would like to run integration test while I'm building docker image. Those tests need to instantiate docker containers.
Is there a possibility to access docker inside such multi stage docker build?
No, you can't do this.
You need access to your host's Docker socket somehow. In a standalone docker run command you'd do something like docker run -v /var/run/docker.sock:/var/run/docker.sock, but there's no way to pass that option (or any other volume mount) into docker build.
For running unit-type tests (that don't have external dependencies) I'd just run them in your development or core CI build environment, outside of Docker, and re-run docker build once they pass. For integration-type tests (that do have external dependencies) you need to set up those dependencies, maybe with a Docker Compose file, which again will be easier to do outside of Docker. This also avoids building your test code and its additional dependencies into your image.
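A hedged sketch of that outside-the-build flow; the compose file name, test script, and image name are assumptions, not from the answer:

docker compose -f docker-compose.test.yml up -d   # start the test dependencies (database, queue, ...)
./run-integration-tests.sh                        # hypothetical test runner on the host / CI agent
docker compose -f docker-compose.test.yml down
docker build -t my/app .                          # build the image once the tests pass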
(Technically there are two ways around this. The easier of the two is the massive security disaster that is opening up a TCP-based Docker socket; then your Dockerfile could connect to that ["remote"] Docker daemon and launch containers, stop them, kill itself off, impersonate the host for inbound SSH connections, launch a bitcoin miner that lives beyond the container build, etc. ...actually, it allows any process on the host to do any of these things. The much harder option, as @RaynalGobel suggests in a comment, is to try to launch a separate Docker daemon inside the container; the DinD image link there points out that it requires a --privileged container, which again you can't have at build time.)

What is the purpose of creating a volume between two docker.sock files in Docker?

I see this in many docker apps that somehow rely heavily on networking, but I can't understand it.
Got it. From https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/:
Let’s take a step back here. Do you really want Docker-in-Docker? Or do you just want to be able to run Docker (specifically: build, run, sometimes push containers and images) from your CI system, while this CI system itself is in a container?

I’m going to bet that most people want the latter. All you want is a solution so that your CI system like Jenkins can start containers. And the simplest way is to just expose the Docker socket to your CI container, by bind-mounting it with the -v flag.

Simply put, when you start your CI container (Jenkins or other), instead of hacking something together with Docker-in-Docker, start it with:

docker run -v /var/run/docker.sock:/var/run/docker.sock

Now this container will have access to the Docker socket, and will therefore be able to start containers. Except that instead of starting “child” containers, it will start “sibling” containers.
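To illustrate the “sibling” point, a minimal sketch, assuming a CI image that already has the docker CLI installed (my/ci-image-with-docker-cli is a hypothetical name, nginx just an example workload):

# CI container with the host's Docker socket mounted in
docker run -d --name ci \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my/ci-image-with-docker-cli

# A container started from inside "ci" is actually created by the host daemon...
docker exec ci docker run -d --name job nginx

# ...so on the host, "ci" and "job" appear side by side as siblings:
docker ps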

Docker Container management in production environment

Maybe I missed something in the Docker documentation, but I'm curious and can't find an answer:
What mechanism is used to restart docker containers if they should error/close/etc?
Also, if many things have to be done via the docker run command, say for instance volume mounting or linking, how does one bring up an entire hive of containers which make up an application without using docker compose? (as they say it is not production ready)
What mechanism is used to restart docker containers if they should error/close/etc?
Docker restart policies, as set with the --restart option to docker run. From the docker-run(1) man page:
--restart=""
   Restart policy to apply when a container exits (no, on-failure[:max-retry], always)
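For example (a minimal sketch; nginx is just an illustrative image):

# Restart automatically on non-zero exit, up to 5 times:
docker run -d --name web --restart=on-failure:5 nginx

# Or change the policy of an already-running container:
docker update --restart=always web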
Also, if many functions have to be done via a docker run command, say for instance volume mounting or linking, how does one bring up an entire hive of containers which complete an application without using docker compose?
Well, you can of course use docker-compose if that is the best match for your requirements, even if it is not labelled as "production ready".
You can investigate larger container management solutions like Kubernetes or even OpenStack (although I would not recommend the latter unless you are already familiar with OpenStack).
You could craft individual systemd unit files for each container.
