I was wondering how the kubelet communicates with docker containers. Where is this configuration defined? I searched a lot but didn't find anything informative. I am using an https kube API server. I am able to create pods, but the containers are not getting spawned. Does anyone know what the cause may be? Thanks in advance.
The kubelet talks to the docker daemon using the docker API over the docker socket. You can override this with the --docker-endpoint= argument to the kubelet.
Pods may fail to be spawned for any number of reasons. Check the logs of your scheduler, controller-manager, and kubelet.
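A quick sketch of both points, assuming a systemd-managed kubelet (the socket path shown is the common default, not something taken from your setup):

```shell
# Override where the kubelet finds the docker daemon (defaults to the local socket):
kubelet --docker-endpoint=unix:///var/run/docker.sock

# When pods exist but containers never appear, check each component in turn:
journalctl -u kubelet -e                                  # runtime / image-pull errors
kubectl get events --sort-by=.metadata.creationTimestamp  # scheduler & controller events
kubectl describe pod <pod-name>                           # see the Events section at the bottom
```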
Kubernetes is stopping the support of Docker. What exactly does this mean for my Docker containers in my Kubernetes cluster?
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#deprecation
No need to worry. Kubernetes will not be using the docker engine to manage containers. For more info take a look at this thread:
https://twitter.com/Dixie3Flatline/status/1334188913724850177
If you are a user, just ignore it. If you are an operator, change to CRI-O or containerd.
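If you want to see what this means for your own cluster, a quick hedged sketch (assumes kubectl access):

```shell
# The CONTAINER-RUNTIME column shows what each node actually runs
# (docker://, containerd://, cri-o://):
kubectl get nodes -o wide

# Workloads that mount the docker socket directly are the "operator" case
# that needs changes before switching runtimes:
kubectl get pods --all-namespaces -o json | grep -i 'docker.sock'
```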
I see that the docker daemon uses a lot of CPU. As I understand it, the kubelet and dockerd communicate with each other to maintain the state of the cluster. But does dockerd for some reason do extra runtime work after containers are started that would spike CPU? To get information to report to the kubelet?
But does dockerd for some reason do extra runtime work after containers are started that would spike CPU?
Not really, unless you have another container or process constantly calling the docker API or running docker commands from the CLI.
The kubelet talks to the docker daemon through the dockershim to do everything it needs to run containers, so I would check whether the kubelet is doing some extra work, e.g. scheduling and then evicting/stopping containers.
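To check whether something is hammering the docker API, a sketch along those lines:

```shell
# Stream engine events; a steady churn of create/start/die events means
# something is constantly (re)starting containers:
docker events --since 10m

# Per-container CPU, to distinguish a busy workload from a busy daemon:
docker stats --no-stream

# Watch the daemon log itself (enable "debug": true in /etc/docker/daemon.json
# to also see incoming API calls):
journalctl -u docker -f
```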
Is there any way to disable/leave docker's swarm mode when starting the daemon manually, e.g. dockerd --leave-swarm, instead of starting the daemon and leaving swarm mode afterwards, e.g. with docker swarm leave?
Many thanks in advance,
Aljoscha
I don't think the docker developers anticipated this. When a node leaves the swarm, it needs to notify the swarm managers that it will no longer be available.
Leaving the swarm is a one-time action, and passing it as a configuration option to the daemon would be odd. You could suggest it on docker's GitHub, but I don't think it would gain much support.
Perhaps a more intuitive option would be the ability to start dockerd in a way that suspends communication with the swarm manager, so your dockerd runs only locally; if you then start it without that flag (--local?), it would reconnect to the swarm it was attached to before.
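Until something like that exists, the two-step version can at least be scripted; a sketch assuming a systemd-managed daemon:

```shell
systemctl start docker      # start the daemon normally
docker swarm leave          # then leave immediately (add --force on a manager node)

# Verify swarm mode is now off:
docker info --format '{{.Swarm.LocalNodeState}}'   # should report "inactive"
```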
I have been playing around with Hyperledger to make it run on Kubernetes, and I was successful in doing so. The only thing I was not happy with was the solution/workaround for the container that is spun up when chaincode is instantiated by the peer.
Kubernetes is simply not aware of this container, as it was started not by Kubernetes but by the peer. And to make the peer and the chaincode talk to each other, I had to update the docker daemon running on the kubernetes node with the DNS server IP address of the kube-dns service.
Is it possible to instantiate chaincode in a way where kubernetes is aware of the chaincode container?
And can the chaincode container talk to the peer in a seamless fashion, rather than me updating the docker daemon of each node in the kubernetes cluster?
I have been investigating the same issue you are having. One alternative to using the docker daemon on your kubernetes node is spinning up a new container in your Pod using the DinD (Docker-in-Docker) technique. In this way you can instantiate the chaincode container in a natural way (you will be able to use kube-dns, for example), as it will be sharing the same network space as the kubernetes Pod. I couldn't find any tutorial on the internet showing an implementation of this theory, but if you find one (or do it yourself), please share it on this thread.
Thank you
Reference:
https://medium.com/kokster/simpler-setup-for-hyperledger-fabric-on-kubernetes-using-docker-in-docker-8346f70fbe80
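A minimal sketch of that DinD layout (the image tags, names, and the peer's endpoint variable are assumptions for illustration, not a tested Fabric configuration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fabric-peer            # hypothetical name
spec:
  containers:
  - name: peer
    image: hyperledger/fabric-peer
    env:
    # Point the peer at the sidecar daemon instead of the node's docker daemon,
    # so the chaincode container shares the Pod's network and can use kube-dns:
    - name: CORE_VM_ENDPOINT
      value: tcp://localhost:2375
  - name: dind
    image: docker:dind
    securityContext:
      privileged: true         # DinD requires a privileged container
    env:
    - name: DOCKER_TLS_CERTDIR # disable TLS so the plain :2375 endpoint is served
      value: ""
```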
I have a Kubernetes deployment containing a very simple Spring Boot web application. I am experiencing random timeouts trying to connect to this application externally.
Some requests return instantly whereas others hang for minutes.
I am unable to see any issues in the logs.
When connecting to the pod directly, I am able to curl the application and get a response immediately so it feels more like a networking issue.
I also have other applications with the identical configuration running in the same cluster which are experiencing no problems.
I am still quite new to Kubernetes so my question would be:
Where and how should I go about diagnosing network issues?
I can provide more information if it helps.
As you have narrowed the issue down to networking, that means the components of the cluster are healthy, such as the kubelet, kube-proxy, etc.
You can check their status using the systemctl utility. For example:
systemctl status kubelet
systemctl status kube-proxy
You can get more detail using the journalctl utility. For example:
journalctl -xeu kubelet
journalctl -f -u docker
Now, if you want to know the destiny of the packets, you need to use the iptables utility. It is what decides the forwarding, routing, and verdict of packets (incoming or outgoing).
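A sketch of what that inspection can look like on a node (the chain name below is a placeholder; kube-proxy generates the real KUBE-SVC-* hashes):

```shell
# kube-proxy (iptables mode) programs service routing into the nat table:
iptables -t nat -L KUBE-SERVICES -n | grep <service-cluster-ip>

# Follow the service chain from that output down to its endpoints (KUBE-SEP-*):
iptables -t nat -L KUBE-SVC-XXXXXXXXXXXXXXXX -n

# Packet/byte counters show whether traffic is reaching a chain at all:
iptables -t nat -L -n -v
```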
My plan of action is: do not make any assumptions. I use the following utilities to clear up doubts.
kubectl
kubectl describe pod/svc podName/svcName
systemctl
journalctl
etcdctl
curl
iptables
If I still cannot solve the issue, it means I have made an assumption somewhere.
Please let me know about any other tools; I would love to add them to my utility set.
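For the intermittent-timeout symptom in the question above, one more curl trick is worth adding to that set: its timing breakdown separates DNS, TCP connect, and server time (the URL below is a placeholder):

```shell
# Repeat the request and log where the time goes; hangs in time_connect point
# at networking (iptables/conntrack), hangs in time_total at the application:
for i in $(seq 1 20); do
  curl -o /dev/null -s -w 'dns:%{time_namelookup} connect:%{time_connect} total:%{time_total}\n' \
    http://my-service.example/actuator/health   # placeholder URL
done
```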