What does the docker daemon do in Kubernetes after containers are started? - docker

I see that the docker daemon uses a lot of CPU. As I understand it, the kubelet and dockerd communicate with each other to maintain the state of the cluster. But does dockerd do extra runtime work for some reason after containers are started that would spike CPU? For example, to gather information to report to the kubelet?

But does dockerd do extra runtime work for some reason after containers are started that would spike CPU?
Not really, unless you have another container or process constantly calling the Docker API or running docker commands from the CLI.
The kubelet talks to the docker daemon through the dockershim to do everything it needs to run containers, so I would check whether the kubelet is doing some extra work, for example scheduling and then evicting/stopping containers; a couple of ways to check this are sketched below.
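If you want to confirm whether something is churning containers or hammering the API, a few quick checks might help (the journalctl commands assume systemd-managed services, which is common but not universal):

$ docker events                                      # a constant stream of create/start/die events means containers are being churned
$ journalctl -u kubelet --since "10 minutes ago"     # look for scheduling, eviction or restart activity
$ journalctl -u docker --since "10 minutes ago"      # look for API errors or repeated container restarts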

Related

Gracefully stop docker container when shutting down Google Compute Engine VM

When I delete a GCE VM I need my docker container to be stopped gracefully before the VM shuts down.
I am using Compute Engine Container Optimized OS (COS) and would expect my containers to be managed properly, but this is not what I am experiencing.
I tried a shutdown-script calling docker stop $(docker ps -a -q) but it doesn't make a difference at all. I can see it runs, but it seems the container is already gone by then.
I've tried trapping SIGTERM in my application. In the VM the signal is not trapped, but on my local machine it is.
I am a bit lost and don't know what else to try. Any idea?
Take a look at Stopping Docker Containers Gracefully and also Gracefully Stopping Docker Containers.
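If the shutdown-script does get a chance to run before the containers are torn down, a sketch like this could give them a longer grace period (the 30-second value is just an example; by default docker stop waits 10 seconds before sending SIGKILL):

#!/bin/bash
# Hypothetical shutdown-script: ask every running container to stop and give
# each one up to 30 seconds to handle SIGTERM before Docker sends SIGKILL.
docker stop --time=30 $(docker ps -q)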

Why minikube runs as a container itself?

While playing around with Docker and orchestration (Kubernetes) I had to install and use minikube to create a simple sandbox environment. At the beginning I thought that minikube installed some kind of VM and ran the "minified" Kubernetes environment inside it; however, after the installation, when I listed my locally running Docker containers, I found minikube running as a container!
Why does minikube itself run as a Docker container? And how can it run other containers?
Experimental Docker support looks to have been added in minikube 1.7.0, and it started becoming the default runtime in minikube 1.9.0. As I'm writing this, the current version is 1.15.1.
The minikube documentation on the "docker" driver notes that, particularly on a native-Linux host, there is no intermediate virtual machine: if you can run Kubernetes in a container, it can use the entire host system's resources without special configuration or partitioning. The previous minikube-on-VirtualBox installation required preallocating memory and disk to the VM, and it was easy to get those settings wrong. Even on non-Linux hosts, if you're running Docker Desktop, sharing its hidden Linux VM can improve resource utilization, and you don't need to decide to allocate exactly 2 GB of RAM to Docker Desktop and exactly 4 GB to the minikube VM.
For a long time it's been possible, but discouraged, to run a separate Docker daemon inside a Docker container; similarly, it's possible, but usually discouraged, to run a multi-process init system in a container. If you do both of these things then you can have the core Kubernetes components (etcd, apiserver, kubelet, ...) inside a single container pretending to be a Kubernetes node. It also helps here that Kubernetes already knows how to pull Docker images, which minimizes some of the confusing issues with running Docker in Docker.
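Just for illustration, starting minikube with the docker driver and then listing the resulting container might look roughly like this (output omitted):

$ minikube start --driver=docker
$ docker ps --filter name=minikube     # shows a single "minikube" container acting as the node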

Cron job to kill all hanging docker containers

I am new to docker containers, but we have containers being deployed, and due to some internal application network bugs the process running in the container hangs and the docker container is not terminated. While we debug this issue I would like a way to find all those containers and set up a cron job to periodically check for and kill the relevant containers.
So how would I determine from "docker ps -a" which containers should be dropped, and how would I go about it? Any ideas? We are eventually moving to Kubernetes, which will help with these issues.
Docker already has a command to clean up the Docker environment; you can run it manually or set up a job to run the following command:
$ docker system prune
Remove all unused containers, networks, images (both dangling and unreferenced), and optionally, volumes.
Refer to the documentation for more details on advanced usage.
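As a rough sketch, a crontab entry that prunes unused objects every hour could look like the following (the schedule and the /usr/bin/docker path are just assumptions for the example; -f skips the confirmation prompt). Keep in mind that prune only removes stopped containers and unused objects, so your hanging containers still need to be stopped or killed first:

# Hypothetical crontab entry: prune unused Docker objects at the top of every hour
0 * * * * /usr/bin/docker system prune -f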

Kubernetes consumes more memory, why?

I have been working with Docker to run my scripts on chrome-node and firefox-node and debug with the selenium-hub image, where it runs smoothly, but when I use the same with k8s the whole system slows down. Why is this happening, any idea? I am using minikube for Kubernetes, and Docker Toolbox and Docker Compose for Docker.
Thanks,
There would definitely be an additional overhead when you start Kubernetes using minikube locally, compared to just starting a Docker container on the host.
In order to have a Kubernetes cluster, minikube creates a VM on the machine, and the Kubernetes components run there in addition to the Docker containers.
Anyway, minikube is not a production-grade way of running Kubernetes; it is mostly meant for local development and testing. Therefore, you shouldn't evaluate Kubernetes performance based on a minikube installation.
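If you do keep using minikube for these tests, you could try giving it more resources when you start it, for example (the values below are purely illustrative):

$ minikube start --memory=4096 --cpus=4    # allocate 4 GB of RAM and 4 CPUs to the minikube VM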

How kubelet - docker container communication happens?

I wondered how the kubelet communicates with Docker containers. Where is this configuration defined? I searched a lot but didn't find anything informative. I am using an https kube API server. I am able to create pods, but containers are not getting spawned. Does anyone know what may be the cause? Thanks in advance.
The kubelet talks to the docker daemon using the Docker API over the Docker socket. You can override this with the --docker-endpoint= argument to the kubelet.
Pods may not be being spawned for any number of reasons. Check the logs of your scheduler, controller-manager, and kubelet.
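A couple of checks that may help narrow this down (the socket path is the common default, and the journalctl command assumes the kubelet runs under systemd; adjust both if your setup differs):

$ curl --unix-socket /var/run/docker.sock http://localhost/version   # verify the docker daemon answers on its socket
$ journalctl -u kubelet | grep -i error                              # look for errors about failing to create containers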
