How to make a container shut down the host machine in Kubernetes? - docker

I have a Kubernetes setup with one master node and two worker nodes. The deployment is a DaemonSet, so it starts pods on both worker nodes. Each pod contains 2 containers, and each container runs a Python script. The script runs normally, but at a certain point it needs to send a shutdown command to the host. I can issue the command shutdown -h now directly, but this runs inside the container, not on the host, and gives the error below:
Failed to connect to bus: No such file or directory
Failed to talk to init daemon.
To work around this, I can get the IP address of the host, SSH into it, and then run the command to shut the host down safely.
But is there any other way to issue a command to the host in Kubernetes/Docker?

You can access your cluster using the Kubernetes API:
https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/
Accessing the API from a Pod: when accessing the API from a pod,
locating and authenticating to the apiserver are somewhat different.
The recommended way to locate the apiserver within the pod is with the
kubernetes.default.svc DNS name, which resolves to a Service IP which
in turn will be routed to an apiserver.
The recommended way to authenticate to the apiserver is with a service
account credential. By default, a pod is associated with a service
account, and a credential (token) for that service account is placed
into the filesystem tree of each container in that pod, at
/var/run/secrets/kubernetes.io/serviceaccount/token.
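As a rough sketch of what that looks like from inside a container (assuming curl is present in the image and the service account has RBAC permission to list nodes):
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
# Call the in-cluster API endpoint with the mounted service account credentials
curl --cacert "$CACERT" -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api/v1/nodes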
For draining the node, you can use the Eviction API:
https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/
But I'm not really sure whether a pod can drain its own node. A workaround could be to control the drain from a pod running on a different node.
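For reference, the usual way to drain from outside the node uses kubectl (the node name is a placeholder):
# Cordon the node and evict its pods, skipping daemonset-managed pods
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
# Later, allow the node to accept pods again
kubectl uncordon <node-name>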

Related

docker kubernetes duplicate pods

Why does docker kubernetes duplicate pods? On the dashboard I see some containers prefixed k8s_ and some prefixed k8s_POD_, even though my deployments.yaml has replicas=1.
Does anyone have any ideas on this?
All containers in a Kubernetes Pod share the same Pod IP address within the cluster, and for each of them 127.0.0.1 is the same as for the others. The way that magic happens is via the k8s_POD_ container, which is the one running the pause image and is the only container that is assigned a Kubernetes Pod IP via CNI. All other containers in that Pod then use its network_namespace(7) to send and receive traffic within the cluster. That's also why one can restart a container without it losing its IP address, unlike deleting a Pod, which gets a fresh one.
To the best of my knowledge, those sandbox containers can exist even without any of the other containers, in cases where the main container workloads cannot start due to pending volumes (or other resources, such as GPUs), since the CNI allocation process happens very early in the Pod lifecycle.
I could have sworn it was covered in an existing question, but I wasn't able to readily find it.
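If you want to see those sandbox containers on a node yourself, a quick check (assuming the node uses the Docker runtime) is:
# Each k8s_POD_ entry is one Pod's sandbox, running the pause image
docker ps --filter "name=k8s_POD" --format "table {{.Names}}\t{{.Image}}"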

K3d DNS issue with pod

With k3d, I am receiving a DNS error when the pod tries to access a URL over the internet.
ERROR:
getaddrinfo EAI_AGAIN DNS could not be resolved
How can I get past this error?
It depends on your context, OS, and k3d version.
For instance, you will see various proxy issues in k3d-io/k3d issue 209:
This could be related to the way k3d creates the docker network.
Indeed, k3d creates a custom docker network for each cluster, and when this happens, resolving is done through the docker daemon.
The requests are actually forwarded to the DNS servers configured in your host's resolv.conf, but through a single DNS server (the one embedded in docker).
This means that if your daemon.json is, like mine, not configured to provide extra DNS servers, it defaults to 8.8.8.8, which does not resolve any company-internal address, for example.
It would be useful to have a custom option to provide to k3d when it starts the cluster and specify the DNS servers there.
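In the meantime, a hedged daemon-level workaround is to give Docker itself extra DNS servers (the addresses below are placeholders for your company's resolvers, and note that this overwrites any existing daemon.json):
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{ "dns": ["10.0.0.2", "8.8.8.8"] }
EOF
sudo systemctl restart docker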
That wish for custom options is why there is now "v3/networking: --network flag to attach to existing networks", referring to the Networking documentation.
Before that new flag:
For those who have the problem, a simple fix is to mount your /etc/resolv.conf into the cluster:
k3d cluster create --volume /etc/resolv.conf:/etc/resolv.conf
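With the newer flag, attaching the cluster to an existing Docker network looks roughly like this (the network and cluster names are placeholders):
# Create a network whose DNS behavior you control, then attach the cluster to it
docker network create my-net
k3d cluster create mycluster --network my-net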

How to access k3d Kubernetes cluster from inside a docker container?

I have a running k3d Kubernetes cluster:
$ kubectl cluster-info
Kubernetes master is running at https://0.0.0.0:6550
CoreDNS is running at https://0.0.0.0:6550/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:6550/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
I have a python script that uses the kubernetes client api and manages namespaces, deployments, pods, etc. This works just fine in my local environment because I have all the necessary python modules installed and have direct access to my local k8s cluster. My goal is to containerize it so that my colleagues can run this same script successfully on their systems.
While running the same python script in a docker container, I receive connection errors:
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='172.17.0.1', port=6550): Max retries exceeded with url: /api/v1/namespaces (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f8b637c5d68>: Failed to establish a new connection: [Errno 113] No route to host',))
172.17.0.1 is my docker0 bridge address, so I assumed it would resolve or forward traffic to my localhost. I have tried loading the k8s configuration from my local .kube/config, which references server: https://0.0.0.0:6550, and also creating a separate config file with server: https://172.17.0.1:6550, and both give the same No route to host error (with the respective IP address in the HTTPSConnectionPool(host=...)).
One idea I was pursuing was running a socat process outside the container and tunneling traffic from inside the container across a bridge socket mounted in from the outside, but it looks like the docker image I need to use does not have socat installed. However, I get the feeling the real solution should be much simpler than all of this.
Certainly there have been other instances of a docker container needing access to a k8s cluster served outside of the docker network. How is this connection typically established?
Use the docker network command to create a predefined network.
You can pass --network to attach k3d to an existing Docker network, and also to docker run to do the same for another container, as sketched below:
https://k3d.io/internals/networking/
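A hedged sketch of that approach (the network, cluster, and client image names are placeholders):
docker network create k3d-net
k3d cluster create mycluster --network k3d-net
# Run the client on the same network so it can reach the k3d server container directly
docker run --rm -it --network k3d-net my-python-client
Inside the client container, the API server is then reachable via the k3d server container's name on the shared network (typically on port 6443), rather than via 0.0.0.0 or the docker0 bridge.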

Kubernetes pod application connectivity with Mysql database which is not on container

Can I connect a k8s Pod with a non-container application, where my Kubernetes Pod is running on a 10.200.x.x subnet and my MySQL is running on a plain Linux server rather than in a container?
How can I connect to the database?
I'm working in an organization with many network restrictions, and I have to open ports and IPs to get access.
Is it possible to connect a container application to a non-container database when the subnet masks are different too?
If you can reach MySQL from the worker node, then you should also be able to reach it from a pod running on that node.
Check your company firewall and make sure that packets from the worker nodes can reach the instance running MySQL. Also make sure that these networks are not separated in some other way.
Usually packets sent from your application pod to the MySQL instance will have their source IP set to the worker node's IP (so you want to allow traffic from the k8s nodes to the MySQL instance).
This is due to the fact that the k8s network (with most CNIs) is a sort of virtual network that only the k8s nodes are aware of, and for external traffic to be able to come back to the pod, routers in your network would need to know where to route it. This is why pod traffic going outside of the k8s network is NATed.
This is true for most CNIs that encapsulate internal traffic in k8s, but remember that there are also some CNIs that don't encapsulate traffic, which makes it possible to access pods directly from anywhere inside a private network and not only from k8s nodes (e.g. Azure CNI).
In the first case, with the NATed network, make sure that you enable access to the MySQL instance from all worker nodes, not just one, because when that specific node goes down and the pod gets rescheduled to another node, it won't be able to connect to the database.
In the second case, where you are using a CNI with direct networking (without NAT), it's more complicated, because a pod gets a different IP every time it is rescheduled, and I can't help you with that as it all depends on the specific CNI.
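A quick hedged way to verify reachability from inside the cluster (the image tag, credentials, and the 192.168.1.50 address are placeholders for your setup):
# Launch a throwaway pod with the mysql client and try to connect
kubectl run mysql-test --rm -it --image=mysql:8 --restart=Never -- \
  mysql -h 192.168.1.50 -u myuser -p
If this connects from every node the pod may be scheduled on, the firewall path is open.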

Link between docker container and Minikube

Is it possible to link a docker container with a service running in minikube? I have a mysql container which I want to access using a PMA (phpMyAdmin) pod in minikube. I have tried adding PMA_HOST in the yaml file while creating the pod, but I am getting an error on the PMA GUI page:
mysqli_real_connect(): (HY000/2002): Connection refused
If I understand you correctly, you want to access a service (mysql) running outside the kube cluster (minikube) from inside that kube cluster.
You have two ways to achieve this:
Make sure your networking is configured in a way that allows traffic to pass both ways correctly. Then you should be able to access that mysql service directly by its address, or by creating an external service inside the kube cluster (create a Service with no selector and manually configure external Endpoints, as in the sketch after this list).
Use something like telepresence.io to expose a locally developed service inside the remote kubernetes cluster.
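A minimal sketch of the first option, assuming your mysql host is reachable at 192.168.99.1 (that address and the name external-mysql are placeholders):
# Service with no selector, backed by a manually managed Endpoints object
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: external-mysql
spec:
  ports:
    - port: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-mysql
subsets:
  - addresses:
      - ip: 192.168.99.1
    ports:
      - port: 3306
EOF
PMA_HOST can then be set to external-mysql, and the cluster routes it to the external address.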
