azure cni NATing to DST node address - azure-aks

I have a two-node cluster in AKS using advanced networking and the Azure CNI network plugin. The VNET that the cluster is installed into already exists. The cluster only has two nodes with a few pods deployed. Our problem is that during pod-to-pod connectivity, the client pod's source IP address seems to get NATed to the IP address of the node that the destination pod is deployed to.
That is:
the IP address of node0 is 100.64.24.4 and the IP address of node1 is 100.64.24.35
pod A has an IP address of 100.64.24.63 and is deployed on node1
pod B runs nginx, has an IP address of 100.64.24.21, and is deployed on node0
Whenever I make a call from pod A to pod B, we see that pod B reports the source address of the call as 100.64.24.4 (node0) and not 100.64.24.63 (pod A).
Is this normal for this network plugin? Is there any way to change this behaviour?
Currently this breaks inter-pod TLS client authentication: the client certificate has a CN or SAN that resolves to the source pod's IP, but the server-side pod sees the call as coming from the node IP. This means the TLS client-auth handshake fails, because that IP doesn't resolve to the CN or any SAN in the certificate.
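For reference, one way to see whether the node itself is masquerading the traffic is to look at the cluster's ip-masq-agent configuration, if one is deployed; the ConfigMap name below is an assumption and may differ between AKS versions.

    # Look for an ip-masq-agent DaemonSet and its ConfigMap in kube-system
    kubectl get ds,configmap -n kube-system | grep -i masq
    # If present, its nonMasqueradeCIDRs should cover the pod/VNET subnet;
    # subnets missing from that list get SNATed to the node address.
    kubectl describe configmap azure-ip-masq-agent-config -n kube-system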

Related

Kubernetes pod application connectivity with a MySQL database which is not in a container

Can I connect a k8s pod with a non-container application? My Kubernetes pod is running on the 10.200.x.x subnet and my MySQL is running on a plain Linux server rather than in a container. How can I connect to the database?
I am working in an organization with many network restrictions, where I have to request ports and IPs to be opened for access.
Is it possible to connect a container application to a non-container database when the subnets are different too?
If you can reach MySQL from the worker node, then you should also be able to reach it from a pod running on that node.
Check your company firewall and make sure that packets from the worker nodes can reach the instance running MySQL. Also make sure that these networks are not separated in some other way.
Usually, packets sent from your application pod to the MySQL instance will have their source IP set to the worker node's IP (so you want to allow traffic from the k8s nodes to the MySQL instance).
This is due to the fact that the k8s network (with most CNIs) is a sort of virtual network that only the k8s nodes are aware of; for external traffic to be able to come back to the pod, the routers in your network need to know where to route it. This is why pod traffic going outside of the k8s network is NATed.
This is true for most CNIs that encapsulate internal k8s traffic, but remember that there are also CNIs that don't encapsulate traffic, which makes it possible to access pods directly from anywhere inside a private network, not only from the k8s nodes (e.g. Azure CNI).
In the first case, with the NATed network, make sure that you allow access to the MySQL instance from all worker nodes, not just one, because when that specific node goes down and the pod gets rescheduled to another node, it won't be able to connect to the database.
In the second case, where you are using a CNI with direct networking (without NAT), it's more complicated because the pod gets a different IP every time it is rescheduled, and I can't help you there as it all depends on the specific CNI.
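As a first sanity check, you could test basic reachability of the MySQL port from inside the cluster with a throwaway pod (the IP below is an example; if your busybox build lacks these nc flags, any image with a full nc or telnet will do):

    # Start a temporary pod and probe the MySQL port on the external server
    kubectl run mysql-test --rm -it --image=busybox --restart=Never -- \
      nc -zv -w 2 10.200.0.50 3306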

How to make a container shut down a host machine in Kubernetes?

I have a Kubernetes setup with one master node and two worker nodes. The deployment, which is a daemonset, starts pods on both worker nodes. These pods contain 2 containers, each running a python script. The python script runs normally, but at a certain point, after some time, it needs to send a shutdown command to the host. I can directly issue the command shutdown -h now, but this runs in the container, not on the host, and gives the error below:
Failed to connect to bus: No such file or directory
Failed to talk to init daemon.
To resolve this, I can get the IP address of the host, SSH into it and then run the command to safely shut down the host.
But is there any other way I can issue a command to the host in Kubernetes/Docker?
You can access your cluster using the Kubernetes API:
https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/
Accessing the API from a Pod: when accessing the API from a pod, locating and authenticating to the apiserver are somewhat different. The recommended way to locate the apiserver within the pod is with the kubernetes.default.svc DNS name, which resolves to a Service IP which in turn will be routed to an apiserver.
The recommended way to authenticate to the apiserver is with a service account credential. By default, a pod is associated with a service account, and a credential (token) for that service account is placed into the filesystem tree of each container in that pod, at /var/run/secrets/kubernetes.io/serviceaccount/token.
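For illustration, a pod can use that mounted token to call the API server roughly like this (assuming curl is available in the container and the pod's service account has RBAC permissions for the endpoint; the nodes endpoint is just an example):

    # Run inside the pod: authenticate to the API server with the mounted service account
    TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
    CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    curl --cacert "$CACERT" -H "Authorization: Bearer $TOKEN" \
         https://kubernetes.default.svc/api/v1/nodes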
To drain the node you can use the Eviction API:
https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/
But I'm not really sure whether a pod can drain its own node. A workaround could be to control it from another pod running on a different node.
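If the goal is just to evict the pods cleanly before the machine is powered off, that part can be done from another pod or machine with kubectl (the node name is an example):

    # Drain the target node so its pods are evicted before shutdown
    kubectl drain worker-node-1 --ignore-daemonsets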

Difference between NodePort, HostPort and Cluster IP

Rancher 2 provides 4 options in the "Ports" section when deploying a new workload:
NodePort
HostPort
Cluster IP
Layer-4 Load Balancer
What are the differences? Especially between NodePort, HostPort and Cluster IP?
HostPort (nodes running a pod): similar to Docker, this will open a port on the node on which the pod is running (this allows you to open port 80 on the host). This is pretty easy to set up and run, however:
Don’t specify a hostPort for a Pod unless it is absolutely necessary. When you bind a Pod to a hostPort, it limits the number of places the Pod can be scheduled, because each <hostIP, hostPort, protocol> combination must be unique. If you don’t specify the hostIP and protocol explicitly, Kubernetes will use 0.0.0.0 as the default hostIP and TCP as the default protocol.
kubernetes.io
NodePort (on every node): is restricted to ports between 30000 and 32767 by default. This usually only makes sense in combination with an external load balancer (in case you want to publish a web application on port 80).
If you explicitly need to expose a Pod’s port on the node, consider using a NodePort Service before resorting to hostPort.
kubernetes.io
Cluster IP (internal only): as the description says, this will open a port only available to internal applications running in the same cluster. A service using this option is accessible via the internal cluster IP.
Host Port:
When a pod is using a hostPort, a connection to the node's port is forwarded directly to the pod running on that node.
With pods using a hostPort, the node's port is only bound on nodes that run such pods.
The hostPort feature is primarily used for exposing system services, which are deployed to every node using DaemonSets.
Node Port:
With a NodePort service, a connection to the node's port is forwarded to a randomly selected pod (possibly on another node).
NodePort services bind the port on all nodes, even on those that don't run such a pod.
Cluster IP:
Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.
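To make the difference concrete, here is a minimal sketch (names, images and port numbers are just examples) of where each option is declared: hostPort goes on the container spec, while NodePort and ClusterIP are Service types.

    # hostPort is declared on the container itself
    apiVersion: v1
    kind: Pod
    metadata:
      name: web
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          hostPort: 80          # binds port 80 on whichever node runs this pod
    ---
    # NodePort / ClusterIP are Service types
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      type: NodePort            # use ClusterIP (the default) for internal-only access
      selector:
        app: web
      ports:
      - port: 80                # port on the cluster IP
        targetPort: 80          # port on the pod
        nodePort: 30080         # optional; must fall in the default 30000-32767 range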
Q: What happens when many pods are running on the same node with NodePort?
A: With NodePort it doesn't matter if you have one or multiple nodes; the port is available on every node.

Access a VM in the same network as the nodes of my cluster from a pod

I have a Kubernetes cluster with some nodes and a VM in the same network as the nodes. I need to execute a command on this VM via SSH from one of my pods. Is it even possible?
I do not control the cluster or the VM, I just have access to them.
Well, this is a network-level issue. When you have a Kubernetes cluster on the same network as your target, there is a potential issue that might or might not show up: the origin IP on the TCP connection. If your nodes MASQ/SNAT all of the outgoing traffic then you are fine, but... a VM in the same domain as the kube nodes might actually be excluded from that MASQ/SNAT. The reason is that kube nodes know how to route traffic based on pod IP because they have the overlay networking installed (flannel, calico, weave etc.).
To round this up, you need to either have the traffic to your destination node MASQed/SNATed at some point, or the target node has to be able to route traffic back to your pod, which usually means it needs the overlay networking installed as well (with the exception of setups implemented at a higher networking level than the nodes themselves, like e.g. AWS VPC routing tables).
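A quick way to see which source address actually arrives at the VM is to watch the SSH port on the VM while connecting from the pod (the interface name is an example):

    # On the VM: check whether incoming connections show the pod IP or the node IP
    sudo tcpdump -ni eth0 tcp port 22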

Is it possible to host a Kubernetes node from a network with a dynamic IP?

I would like to host a Kubernetes master node in AWS (or another cloud provider) and then add nodes from home to that cluster. However, I do not have a static IP from my internet provider, so the question is: will this work, and what happens when my IP address changes?
You can find some info about master-node communication in the Kubernetes docs.
For communication from node to master, the node uses the kube-apiserver to make its requests. So normally it should work, and when your node's IP changes, the node info stored in etcd is updated; you can check your nodes' status with the command kubectl get nodes -o wide.
But some specific Kubernetes features may be affected, such as NodePort Services.
Hope this helps!
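If you try this, you can watch whether the node's registered addresses follow the change; the columns to look at are INTERNAL-IP and EXTERNAL-IP:

    # After the home IP changes, confirm what the cluster has registered for the node
    kubectl get nodes -o wide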
