Rancher Performance (Docker in Docker?) - jenkins

Looking at Rancher, what is the performance like? I guess my main question is: is everything deployed in Rancher Docker in Docker? After reading http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/ I'm trying to stay away from that idea. It looks like the Rancher CI pipeline with Docker/Jenkins is Docker in Docker, but what about the rest? If I set up a docker-compose or deploy something from their catalog, is it all Docker in Docker? I've read through their documentation and this simple question has still just flown over my head. Any guidance would be much appreciated.
Thank you

Rancher itself is not deployed with Docker in Docker (DinD). The main components of Rancher, rancher/server and rancher/agent, are both normal containers. The server, in a normal deployment, runs the orchestration piece and a few other key services for the catalog, Docker Machine provisioning, websocket-proxy and MySQL. All of these can be broken out if desired, but for simplicity of getting started, it's all in one. We use s6 to manage the orchestration and database processes.
The rancher/agent container is privileged and requires the user to bind-mount the host's Docker socket. We package a Docker binary in the container and use it to communicate with the host on startup. It is similar to the way a Mac talks to Boot2Docker: the binary is just a client talking to a remote Docker daemon. Once the agent is bootstrapped, it communicates back to the Rancher server container over a websocket connection. When containers and stacks are deployed, the Rancher server sends events to the agents, which then call the host's Docker daemon for deployment. The deployed containers run as normal Docker containers on the host, just as if the user had typed docker run .... In fact, a neat feature of Rancher is that if you do type docker run ... on the host, the resulting container will show up in the Rancher UI.
The Jenkins entry in the Rancher catalog, when using the Swarm plugin, also bind-mounts the host's Docker socket. We have some early experiments that used DinD to test out some concepts with Jenkins, but those were not released.
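For illustration, the socket bind-mount pattern described above boils down to something like this (a sketch only, not the exact command Rancher generates; the version tag and registration URL are placeholders):

# privileged agent container with the host's Docker socket bind-mounted
docker run -d --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  rancher/agent:<version> <registration-url>

Everything the agent subsequently starts through that socket is a sibling container managed by the host's own Docker daemon, not a nested one.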

Related

nested docker setup: child exposes parent

I have the following Docker setup:
Ubuntu server
-> running Jenkins (started with docker-compose)
-> running a pipeline which starts a node-alpine image
-> which then calls a new docker-compose up (needed for tests)
If I call docker ps from the node-alpine container, I see all the containers from the ubuntu server. I would have expected to only see the newly started containers.
Is this an indication that my setup is flawed? Or just the way docker works?
That's just the way Docker works. There's no such thing as a hierarchy of containers.
Typically with setups like this you give an orchestrator (like Jenkins) access to the host's Docker socket. That means containers launched from Jenkins are indistinguishable from containers launched directly from the host, and it means that Jenkins can do anything with Docker that you could have done from the host.
Remember that access to the Docker socket means reading and modifying arbitrary files as root on the host, along with starting, stopping, deleting, and replacing other containers. In this setup you might re-evaluate how much you really need that lowest level container to start further containers, since it is a significant security exposure.
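As a concrete sketch of that pattern (the image, ports and volume names are the common Jenkins defaults, not taken from the question), a Jenkins service that is allowed to drive the host's Docker daemon is usually wired up like this in a compose file:

# docker-compose.yml (sketch)
version: "3"
services:
  jenkins:
    image: jenkins/jenkins:lts
    ports:
      - "8080:8080"
    volumes:
      - jenkins_home:/var/jenkins_home
      # hand the host's Docker daemon to Jenkins; everything it starts
      # becomes a sibling container, visible to docker ps on the host
      - /var/run/docker.sock:/var/run/docker.sock
volumes:
  jenkins_home:

Any docker ps run against that socket, whether from Jenkins or from anything Jenkins launches with the same mount, talks to the single host daemon, which is why the node-alpine container sees every container on the Ubuntu server.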

Docker in docker and docker compose block one port for no reason

Right now I am setting up an application whose deployment is based on Docker images.
I use gitlab ci to:
Test each service
Build each service
Dockerize each service (build a Docker image)
Run integration tests (start docker compose, which starts all services on special ports, then run the integration tests)
Stop the prod containers and run the new images
I did this for each service, but I ran into an issue.
My Docker containers for the integration tests are started within a GitLab CI job. Each job uses a Docker-based runner. I also mount my host's Docker socket to be able to use Docker in Docker.
So my Gradle Docker image is started by the GitLab runner. Then Docker is installed and all images are started using docker compose.
One microservice listens on port 10004. Within the docker compose file there is an 11004:10004 port mapping.
My integration tests try to connect to port 11004. But this does not work right now.
When I attach to the container that runs docker compose while it tries to execute the integration tests, I am not able to connect manually either, by calling
wget ip:port
I just get a message that it connected, and then it waits for a response. Neither can my tests connect successfully. My service does not log any message about a new connection.
When I execute this wget command from my host shell, it works.
It's a public IP, and from within my container I can also connect to other ports using telnet and wget. Just one port of one service is broken when I try to connect from my Docker-in-Docker instance.
When I do not use docker compose, it works. Docker compose seems to set up a special default network that does something weird.
Setting network to host also works...
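For reference, the port mapping and the network-to-host workaround described here correspond to compose settings roughly like the following (a sketch; the service and image names are hypothetical, only the 11004:10004 mapping comes from the description above):

# docker-compose.yml (sketch)
version: "2"
services:
  microservice:
    image: registry.example.com/microservice:latest
    ports:
      - "11004:10004"
    # workaround mentioned above: skip the compose default bridge network
    # (note that 'ports' is ignored once this is enabled)
    # network_mode: host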
So did anyone also make such an experience when using docker compose?
The same setup works flawlessly in Docker for Mac, but my server runs on Debian 8.
My solution for now is to use a shell runner to avoid docker in docker issues. It works there as well.
So docker in docker combined with docker compose seems to have an ugly bug.
I'm writing this while sitting on the subway, but I hope my description of the issue is sufficient to talk about experiences. I don't think we need any source code to find bad configurations, because it works without Docker in Docker and on Mac.
I figured out that Docker in Docker still has some weird behaviors. I fixed my issue by adding a new GitLab CI runner that is a shell runner. Therefore docker-compose runs on my host and everything works flawlessly.
I can reuse the same runner for starting docker images in production as I do for integration testing. So the easy fix has another benefit for me.
The result is a best practice to avoid pitfalls:
Only use Docker in Docker when there is a real need,
for example to ensure fast I/O communication between your host Docker image and the Docker image of interest.
Have fun using docker (in docker (in docker)) :]
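A minimal sketch of what the shell-runner fix can look like in GitLab CI (the job name, runner tag and test command are hypothetical; the point is that docker-compose runs directly on the host instead of inside a Docker-based runner):

# .gitlab-ci.yml (sketch)
integration-tests:
  stage: test
  tags:
    - shell                       # route the job to the shell runner on the host
  script:
    - docker-compose up -d
    - ./gradlew integrationTest   # hypothetical test entry point
    - docker-compose down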

What are benefits of having jenkins master in a docker container?

I saw a couple of tutorials on continuous deployment (on docker.com, codecentric.de, and devopscube.com).
Overall I saw two approaches:
Set up two types of Jenkins servers (master and slave). The master is in a Docker container and the slave is on the host machine.
A Jenkins server in a Docker container. They set up a link to the host, and using that link Jenkins can create or recreate Docker images.
In the first approach, I do not understand why they set up an additional Jenkins server inside a Docker container. Isn't it enough just to have a Jenkins server on the host machine alongside the Docker containers?
The second approach seems a bit insecure to me, because a process from a container is accessing the host OS. Does it have any benefits?
Thanks for any useful info.
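For context, the "Jenkins master in a container" piece of both approaches is typically started along these lines (a sketch; the image, ports and volume name are the usual Jenkins defaults rather than something taken from the tutorials, and the socket mount is only relevant to the second approach):

# run the Jenkins master in a container: web UI on 8080, agent port 50000,
# job data persisted in a named volume, and (second approach only) the
# host's Docker socket handed into the container
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts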

Can I put kubernetes in a docker container?

I've used Docker Swarm - I can put the management and the agents in docker containers. Can I do the same with Kubernetes?
I don't want to pollute my machine.
All of the master components in Kubernetes run inside of containers.
Due to limitations of Docker, the kubelet agent has been difficult to get running in a container. The Kubernetes folks have been working on this for the last year (see kubernetes#4869), and with Docker 1.10 it looks like it is getting close to working.

Deploying docker swarm without using docker machine

Currently I have a bunch of RHEL7 VMs running on Rackspace and want to deploy Docker Swarm for testing purposes. The Docker docs only describe how to deploy Docker Swarm using Docker Machine.
Question:
Since VirtualBox cannot be used inside VMs, are there any other ways I can deploy Docker Swarm directly on my VMs without using Docker Machine?
In fact, the Docker documentation shows you how to set up a swarm cluster 'manually' without using docker-machine: Create a swarm for development
I think that this full step-by-step tutorial might be useful.
It details how to deploy Swarm with a multi-host network, without Docker Machine, by using Consul, and suggests two different means of Swarm agent discovery (a static file and a token).
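In outline, the "manual" standalone-Swarm setup with token discovery from that tutorial comes down to commands like these (a sketch based on the pre-swarm-mode docs; addresses and ports are placeholders, and each VM's Docker daemon must already be listening on a TCP port such as 2375):

# 1. generate a cluster token via the hosted discovery service
docker run --rm swarm create
# prints a <cluster_id>

# 2. on every VM, join the node to the cluster
docker run -d swarm join --advertise=<node_ip>:2375 token://<cluster_id>

# 3. on the VM chosen as manager, start the Swarm manager
docker run -d -p 4000:2375 swarm manage token://<cluster_id>

# 4. point a Docker client at the manager
docker -H tcp://<manager_ip>:4000 info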