AKS: How to access pods on a node with a public IP

We have a node pool created with the --enable-node-public-ip option, so each node within this node pool has a public IP.
I can see there are pods running on these nodes. Can we expose these pods through these public IPs?
How can we access the pods running on these nodes? Is it even possible?

Related

How can pods on different Kubernetes clusters communicate?

CASE 1: Suppose there is a pod running locally (running some workload/app) on device 1, and another pod running on an EC2 instance in an AWS EKS cluster. How can the two communicate?
CASE 2: Suppose there is a pod running locally (running some workload/app) on device 1, and another pod running on device 2. How can the two communicate?
Pods can run locally using minikube or even directly using kubectl commands.
Problem: I know that pods within the same cluster can communicate with another pod by directly addressing its IP address, but how can pods on different clusters communicate, and what protocols can they use?
If you are looking for the easiest option and setup, you can expose the service publicly.
You can use an external IP (a Service of type LoadBalancer) or an ingress controller (nginx, Kong, Traefik) to expose the services.
For multi-cluster communication and service discovery you can use a service mesh like Istio or Linkerd:
Istio multi-cluster mesh automation: https://istio.io/latest/blog/2020/multi-cluster-mesh-automation/
Linkerd multicluster east-west setup: https://linkerd.io/2.11/features/multicluster/
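As a rough sketch of the Linkerd multicluster route (assuming Linkerd 2.11 with the multicluster extension already installed in both clusters, kubectl contexts named east and west, and a service named my-svc, all hypothetical names for illustration):

# Link the west cluster into east so that west's exported services get mirrored into east
linkerd --context=west multicluster link --cluster-name west | kubectl --context=east apply -f -

# Export a service from west; a mirrored copy named my-svc-west appears in east
kubectl --context=west label svc my-svc mirror.linkerd.io/exported=true

Pods in the east cluster can then call the mirrored service through its in-cluster DNS name as usual.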
The pods can communicate with each other if they are exposed publicly via:
An external IP, e.g. https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/
A Service that exposes your pod publicly, see https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/
If you need the pods to communicate over private IPs on an internal network, you can consider VPC peering (connecting all three networks: Device 1, Device 2 and the cloud/AWS VPC). This will require some networking knowledge to set up.
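A minimal sketch of the external-IP option, using a Service of type LoadBalancer (the service name, labels and ports are assumptions for illustration):

apiVersion: v1
kind: Service
metadata:
  name: hello-external        # hypothetical name
spec:
  type: LoadBalancer          # the cloud provider allocates a public IP
  selector:
    app: hello                # must match the labels on the pods to expose
  ports:
    - port: 80                # port served on the public IP
      targetPort: 8080        # port the container listens on

Pods in the other cluster (or on another device) then reach the workload through the EXTERNAL-IP shown by kubectl get service hello-external.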

Jenkins installation and configuration on GCP without exposing the public IPs of the VMs

Use case: I wish to install a Jenkins master/slave setup on GCP.
Restrictions: The VMs are not allowed to have any public IPs associated with them. They are in a private VPC that consists of one public and one private subnet.
I was able to install Jenkins on the VM; however, I am unable to view it from the browser. Any suggestions?

Kubernetes pod application connectivity with a MySQL database that is not in a container

Can I connect a Kubernetes pod with a non-container application? My Kubernetes pod is running on the 10.200.x.x subnet, and my MySQL is running on a plain Linux server rather than in a container.
How can I connect to the database?
I work in an organization with many network restrictions, and I have to request ports and IPs to be opened for access.
Is it possible to connect a containerized application with a non-container database, given that the subnets are different too?
If you can reach MySQL from the worker node, then you should also be able to reach it from a pod running on that node.
Check your company firewall and make sure that packets from the worker nodes can reach the instance running MySQL. Also make sure that these networks are not separated in some other way.
Usually packets sent from your application pod to the MySQL instance will have their source IP set to the worker node's IP (so you want to allow traffic from the Kubernetes nodes to the MySQL instance).
This is because the Kubernetes network (with most CNIs) is a sort of virtual network that only the Kubernetes nodes are aware of; for external traffic to be able to come back to the pod, routers in your network would need to know where to route it. This is why pod traffic leaving the Kubernetes network is NATed.
This is true for most CNIs that encapsulate internal traffic, but remember that there are also CNIs that don't encapsulate traffic, which makes it possible to reach pods directly from anywhere inside the private network, not only from the Kubernetes nodes (e.g. Azure CNI).
In the first case, with a NATed network, make sure you allow access to the MySQL instance from all worker nodes, not just one, because when that specific node goes down and the pod gets rescheduled to another node, it won't be able to connect to the database.
In the second case, where you use a CNI with direct networking (without NAT), it is more complicated, because the pod gets a different IP every time it is rescheduled; how to handle that depends on the specific CNI.
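A quick way to verify reachability from inside the cluster (the IP and port below are placeholders; any minimal image that ships nc will do) is to run a throwaway pod and attempt a TCP connection to the database:

kubectl run mysql-test --rm -it --restart=Never --image=busybox -- nc -zv 10.0.0.50 3306

If this connects but your application still cannot, the problem is more likely in the application's configuration than in the network path.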

Forward all service ports to a single container

I would like to run a container in Kubernetes with a static IP. I found out that only a Service can provide an IP address.
Is it possible to map a Service to one pod and forward all ports?
A Service discovers pods based on labels and selectors, so it is not necessary to use an IP address to statically reference a pod from a Service. However, if you wish, you can override this behaviour and manually configure your own ClusterIP for the Service.
Once the Pod and Service have been created, other pods in your cluster will be able to reach the pod via the name of the Service, provided they are in the same namespace. If they are not, you will need to use the FQDN of the Service.
If you are trying to access the pod from outside of Kubernetes, then you will need a Service with a type other than ClusterIP, for example a NodePort or a LoadBalancer. Alternatively, if you already have an Ingress Controller with a gateway provisioned, you could use that.
As for your desire to forward all ports, this is not possible: port declarations in Service manifests must be statically mapped. It is not currently possible to specify a port range, although there is a long-standing feature request for it.
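A minimal sketch of a Service pinned to a single pod with a manually chosen ClusterIP (the name, labels, IP and ports are assumptions; the clusterIP must lie inside your cluster's service CIDR):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  clusterIP: 10.96.100.50      # optional: pin a static ClusterIP inside the service CIDR
  selector:
    app: my-app                # labels matching exactly one pod
  ports:
    - name: http               # every port must be declared individually;
      port: 80                 # port ranges are not supported
      targetPort: 8080
    - name: metrics
      port: 9090
      targetPort: 9090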

Kubernetes: multiple pods in a node when each pod exposes a port

I was following along with the Hello, World example in Kubernetes getting started guide.
In that example, a cluster with 3 nodes/instances is created on Google Container Engine.
The container to be deployed is a basic nodejs http server, which listens on port 8080.
Now when I run
kubectl run hello-node --image <image-name> --port 8080
it creates a pod and a deployment, deploying the pod on one of the nodes.
Running the
kubectl scale deployment hello-node --replicas=4
command increases the number of pods to 4.
But since each pod exposes port 8080, will that not create a port conflict on a node where two pods are deployed?
I can see 4 pods when I run kubectl get pods, but what will the behaviour be in this case?
Got some help in the #kubernetes-users channel on Slack:
The port specified in kubectl run ... is that of a pod, and each pod has its own unique IP address, so there are no port conflicts.
The pods won't serve traffic until you expose them as a Service.
Exposing them by running kubectl expose ... (as a NodePort Service) assigns a NodePort (in the range 30000-32767 by default) on every node. This port must be unique for every Service.
If a node has multiple pods, kube-proxy balances the traffic between them.
Also, when I accessed my service from the browser, I could see logs in all 4 pods, so the traffic was served by all 4 pods.
There is a difference between the port that your pod exposes and the physical ports on your node. These need to be linked, for instance by a Kubernetes Service or a LoadBalancer, as discussed a bit further on in the hello-node documentation: http://kubernetes.io/docs/hellonode/#allow-external-traffic
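As a small sketch of that linking step (reusing the deployment name and port from the example above; the LoadBalancer type is an assumption and only provisions an address on cloud providers that support it):

# Expose the deployment; on a supported cloud provider this provisions an external IP
kubectl expose deployment hello-node --type=LoadBalancer --port=8080

# The EXTERNAL-IP column shows the public address once the load balancer is ready
kubectl get service hello-node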
