Can I put kubernetes in a docker container?

I've used Docker Swarm - I can put the management and the agents in docker containers. Can I do the same with Kubernetes?
I don't want to pollute my machine.

All of the master components in Kubernetes run inside of containers.
Due to limitations of Docker, the kubelet agent has been difficult to get running in a container. The Kubernetes folks have been working on this for the last year (see kubernetes#4869), and with Docker 1.10 it looks like it is getting close to working.
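For reference, the "Running Kubernetes locally via Docker" guide from around that time launched the kubelet as a privileged container built from the hyperkube image. It looked roughly like the command below; the exact flags and the image tag changed between releases, so treat this as an illustrative sketch rather than a current recipe:
docker run -d --net=host --pid=host --privileged \
    -v /:/rootfs:ro -v /sys:/sys:ro \
    -v /var/lib/docker:/var/lib/docker:rw \
    -v /var/lib/kubelet:/var/lib/kubelet:rw \
    -v /var/run:/var/run:rw \
    gcr.io/google_containers/hyperkube:v1.2.0 \
    /hyperkube kubelet --containerized \
        --api-servers=http://localhost:8080 \
        --config=/etc/kubernetes/manifests
The containerized kubelet then reads static pod manifests from the mounted manifests directory and starts the remaining control-plane containers (apiserver, scheduler, controller-manager, etcd) on the host's Docker daemon.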

Related

Why minikube runs as a container itself?

While playing around with Docker and orchestration (Kubernetes) I had to install and use minikube to create a simple sandbox environment. At first I thought that minikube would install some kind of VM and run the "minified" Kubernetes environment inside it; however, after the installation, listing my locally running Docker containers showed minikube itself running as a container!
Why does minikube itself run as a Docker container? And how can it run other containers?
Experimental Docker support looks to have been added in minikube 1.7.0, and it started becoming the default driver in minikube 1.9.0. As I'm writing this, the current version is 1.15.1.
The minikube documentation on the "docker" driver notes that, particularly on a native-Linux host, there is no intermediate virtual machine: if you can run Kubernetes in a container, it can use the entire host system's resources without special configuration or partitioning. The previous minikube-on-VirtualBox installation required preallocating memory and disk to the VM, and it was easy to get those settings wrong. Even on non-Linux hosts, if you're running Docker Desktop, sharing its hidden Linux VM can improve resource utilization, and you don't need to decide to allocate exactly 2 GB RAM to Docker Desktop and exactly 4 GB to the minikube VM.
For a long time it's been possible, but discouraged, to run a separate Docker daemon inside a Docker container; similarly, it's possible, but usually discouraged, to run a multi-process init system in a container. If you do both of these things then you can have the core Kubernetes components (etcd, apiserver, kubelet, ...) inside a single container pretending to be a Kubernetes node. It also helps here that Kubernetes already knows how to pull Docker images, which minimizes some of the confusing issues with running Docker in Docker.
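If you want to see this behavior yourself, a reasonably recent minikube can be started with the Docker driver explicitly; the commands below are standard minikube and Docker CLI usage:
minikube start --driver=docker
docker ps        # shows a single "minikube" container acting as the cluster node
minikube kubectl -- get pods -A        # the control-plane pods run nested inside that container
The "node" container runs its own container runtime internally, which is how one Docker container ends up hosting all the other Kubernetes containers.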

Kubernetes consumes more memory, why?

I have been working with Docker to run my scripts on the chrome-node and firefox-node images and debug with the selenium-hub image, where everything runs smoothly, but when I use the same setup with Kubernetes the whole system slows down. Why is this happening? Any ideas? I am using minikube for Kubernetes, and Docker Toolbox and Docker Compose for Docker.
Thanks,
There would definitely be an additional overhead when you start Kubernetes using minikube locally, compared to just starting a Docker container on the host.
In order to provide a Kubernetes cluster, minikube creates a VM on the machine where the Kubernetes components run in addition to your Docker containers.
Anyway, minikube is not a production-grade way of running Kubernetes. It is mostly meant for local development and testing. Therefore, you shouldn't evaluate Kubernetes performance based on a minikube installation.
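If you do want to keep using minikube for this, one practical mitigation is to give its VM more resources than the defaults when you create it; the flags below are standard minikube options and the sizes are only illustrative:
minikube delete
minikube start --memory=4096 --cpus=4
minikube config set memory 4096    # persist the defaults for future clusters
minikube config set cpus 4
A Selenium grid (hub plus browser nodes) is fairly memory-hungry, so an undersized minikube VM will feel much slower than the same containers running directly on the host.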

What are benefits of having jenkins master in a docker container?

I saw couple of tutorials on continuous deployment (on docker.com, on codecentric.de, on devopscube.com).
Overall I saw two approaches:
Set up two types of Jenkins servers (master and slave). The master is in a Docker container and the slave is on the host machine.
A Jenkins server in a Docker container. They set up a link to the host, and using that link Jenkins can create or recreate Docker images (a rough sketch of what I mean is below).
In the first approach, I do not understand why they set up an additional Jenkins server residing inside a Docker container. Isn't it enough just to have a Jenkins server on the host machine alongside the Docker containers?
The second approach seems a bit insecure to me because a process from the container is accessing the host OS. Does it have any benefits?
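From what I can tell, the tutorials implement the second approach by bind mounting the host's Docker socket into the Jenkins container, roughly like this (the image and volume names are just the common defaults from the examples, and the container also needs a Docker client binary available inside it to actually use the socket):
docker run -d --name jenkins \
    -p 8080:8080 -p 50000:50000 \
    -v jenkins_home:/var/jenkins_home \
    -v /var/run/docker.sock:/var/run/docker.sock \
    jenkins/jenkins:lts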
Thanks for any useful info.

Rancher Performance (Docker in Docker?)

Looking at Rancher, what is the performance like? I guess my main question is: is everything deployed in Rancher Docker in Docker? After reading http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/ I'm trying to stay away from that idea. It looks like the Rancher CI pipeline with Docker/Jenkins is Docker in Docker, but what about the rest? If I set up a docker-compose or deploy something from their catalog, is it all Docker in Docker? I've read through their documentation and this simple question has still just flown over my head. Any guidance would be much appreciated.
Thank you
Rancher itself is not deployed with Docker in Docker (DinD). The main components of Rancher, rancher/server and rancher/agent, are both normal containers. The server, in a normal deployment, runs the orchestration piece and a few other key services for the catalog, Docker Machine provisioning, websocket-proxy and MySQL. All of these can be broken out if desired, but for simplicity of getting started, it's all in one. We use s6 to manage the orchestration and database processes.
The rancher/agent container is privileged and requires the user to bind mount the host's Docker socket. We package a Docker binary in the container and use it to communicate with the host on startup. It is similar to the way a Mac talks to boot2docker: the binary is just a client talking to a remote Docker daemon. Once the agent is bootstrapped, it communicates back to the Rancher server container over a websocket connection. When containers and stacks are deployed, the Rancher server sends events to the agents, which then call the host's Docker daemon for deployment. The deployed containers run as normal Docker containers on the host, just as if the user had typed docker run .... In fact, a neat feature of Rancher is that if you do type docker run ... on the host, the resulting container will show up in the Rancher UI.
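For illustration, the host registration command you copy out of the Rancher UI looks roughly like the one below; the server URL and registration token are placeholders, and the exact agent image tag varies by release:
sudo docker run --rm --privileged \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /var/lib/rancher:/var/lib/rancher \
    rancher/agent:v1.2.11 \
    http://<rancher-server>:8080/v1/scripts/<registration-token>
The --privileged flag and the socket mount are what let the agent manage the host's Docker daemon directly, rather than nesting a second daemon inside itself.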
The Jenkins entry in the Rancher catalog, when using the Swarm plugin, does a host bind mount of the Docker socket as well. We have some early experiments that used DinD to test out some concepts with Jenkins, but those were not released.

sharing docker.sock or docker in docker (dind)

We're running a mesos cluster and jenkins for continuous integration workflow.
Jenkins is configured with the mesos plugin.
Previously we built our docker images in mesos containers. Now we are switching to docker containers for building our docker images.
I've been searching for the advantage of building our Docker images inside a Docker container with a DinD image, like the "dind-jenkins-slave" image found on Docker Hub.
With DinD you lose the layer-caching opportunities you get when sharing the host's docker.sock, and with DinD you also have to pass the --privileged flag.
What is the downside of just sharing the docker.sock of the host?
I'm using the shared docker.sock approach. The only downside I see is security: if you can run arbitrary Docker containers, you can do whatever you want with the host. But if you trust the people who create jobs, or can control which Docker containers can be run from Jenkins and with which options, then giving access to the main Docker daemon is an easy solution.
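Concretely, the socket-sharing approach is just a bind mount of the host's /var/run/docker.sock into whatever container runs the build. Assuming an image that has the Docker CLI available (the docker:cli image and the tag/repository names here are only an example, and ${WORKSPACE}/${BUILD_NUMBER} are the usual Jenkins-provided variables), a build step looks something like:
docker run --rm \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v ${WORKSPACE}:/workspace -w /workspace \
    docker:cli docker build -t myorg/myapp:${BUILD_NUMBER} .
Because the build talks to the host's daemon, its layer cache is the host's cache and is shared across jobs, which is the caching advantage mentioned in the question.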
It depends on what you're doing, really. To get our Jenkins jobs truly isolated so that we can run as many as we want in parallel, we switched to DinD. If you share the host socket you still only have a single Docker daemon: port conflicts, pulling/pushing multiple images from separate jobs, and one job relying on an image or build that is also being messed with by another job are all issues.
To get around the caching issue, I create the dind container and leave it around. I run
docker start -a dindslave || docker run --name dindslave --privileged -v ${WORKSPACE}:/data my/dindimage jenkinscommands.sh
Then Jenkins just writes its commands to jenkinscommands.sh and restarts the container every time. When you remove the container you remove your cache as well, and you don't share caches between jobs, if that is something you want... but you don't have to think about jobs interfering with one another or making sure they are running on the same host.
