AKS Load Balancer IP Address Not Accessible - azure-aks

I created a load balancer for a service in AKS, and the load balancer was allowed through the AKS subnet's network security group. The load balancer has an external IP address corresponding to an internal service, but I'm not able to access the IP address provided by the load balancer.

Please run kubectl get service -n <namespace> on the AKS cluster.
If you see something like the following, where the EXTERNAL-IP is a public IP address:
NAME         TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)        AGE
my-service   LoadBalancer   10.0.192.89   20.69.145.115   80:31541/TCP   6s
then the Service was allocated a public IP address from the frontend IP addresses of the AKS public Load Balancer. Please ensure that all Network Security Groups associated with the AKS cluster subnet or the node virtual machines' network interfaces effectively allow inbound traffic from the Internet, or from the public IP address (range) from which you are trying to connect. Please also ensure that there are no firewalls, network virtual appliances, etc. that block inbound traffic to the AKS cluster subnet and node virtual machines.
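For example, if the Service exposes port 80 and a rule seems to be missing (custom rules or a firewall policy may have tightened the defaults), you can inspect and, if needed, add an inbound allow rule on the node NSG with the Azure CLI. This is only a sketch; the resource group and NSG names below are placeholders for whatever your AKS node resource group actually contains:
# List the effective rules on the NSG in the AKS node resource group (names are examples)
az network nsg rule list --resource-group MC_myResourceGroup_myAKSCluster_eastus --nsg-name aks-agentpool-12345678-nsg -o table
# Allow inbound TCP 80 from the Internet to the nodes (adjust the source prefix and port to your case)
az network nsg rule create \
  --resource-group MC_myResourceGroup_myAKSCluster_eastus \
  --nsg-name aks-agentpool-12345678-nsg \
  --name AllowHTTPInbound \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes Internet \
  --destination-port-ranges 80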
If you see something like the following, where the EXTERNAL-IP is a private IP address:
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
my-service   LoadBalancer   10.0.184.168   10.240.0.25   80:30225/TCP   4m
then you have created an Azure internal load balancer Service on the AKS cluster, and a private IP address from the associated virtual network was assigned to the Service. Please ensure that you are connecting to the Service from a device inside the AKS cluster's virtual network or a connected network (such as peered virtual networks, virtual networks connected over a VPN gateway, or an on-premises network connected to the Azure virtual network). The default Network Security Group rules allow connectivity inside the virtual network and connected networks; however, if custom rules have been added, please ensure that the effective rules allow traffic between the source and the AKS cluster subnet and node virtual machines.
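For reference, such an internal load balancer Service is created by adding the azure-load-balancer-internal annotation to the Service manifest; a minimal sketch, where the name, selector, and port are illustrative placeholders:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: my-app        # placeholder; must match your pod labels
  ports:
  - port: 80
    targetPort: 80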
In a third scenario, you might see the EXTERNAL-IP stuck at <pending> for a long time, as in the following:
NAME         TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
my-service   LoadBalancer   10.0.192.89   <pending>     80:31541/TCP   45m
In this case, please describe the Service using kubectl describe service <name> -n <namespace>. Under the Events section of the output, you might find errors during EnsuringLoadBalancer. Please ensure that annotations are correctly set in the Service manifest and that the correct permissions are granted to the AKS cluster's managed identity or service principal, as described in:
https://learn.microsoft.com/en-us/azure/aks/internal-lb and/or https://learn.microsoft.com/en-us/azure/aks/static-ip
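For example, when the internal load balancer or static IP lives in a subnet or resource group outside the one AKS manages, the cluster identity typically needs the Network Contributor role on that scope. A rough sketch with the Azure CLI; every name and ID below is a placeholder:
# Look up the cluster identity (resource names are examples)
az aks show --resource-group myResourceGroup --name myAKSCluster --query identity
# Grant Network Contributor on the target subnet (the scope below is a placeholder)
az role assignment create \
  --assignee <client-id-of-cluster-identity> \
  --role "Network Contributor" \
  --scope /subscriptions/<subscription-id>/resourceGroups/<vnet-rg>/providers/Microsoft.Network/virtualNetworks/<vnet-name>/subnets/<subnet-name>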

Related

MacVlan network with kubernetes

I set up a Kubernetes cluster using k3s. I have one master and two nodes. I created a Docker macvlan network on one of the nodes.
I want to achieve the scenario mentioned below:
Assign an IP to a container/pod (a user-defined IP, not a cluster IP).
Q1. Is there any alternative option to Docker macvlan?
Q2. Can we run a command on the node (not in a pod/container) while deploying the pod/service?
Q3. Can we create a Kubernetes network with a user-defined IP? (I don't think LB/NodePort/Ingress will help for a user-defined IP; correct me if I am wrong!)
Kubernetes has its own very specialized network implementation. It can't easily assign a unique externally accessible IP address to each process the way the Docker MacVLAN setup can. Kubernetes also can't reuse the Docker networking infrastructure. Generally the cluster takes responsibility for assigning IP addresses to pods and services, and you can't specify them yourself.
So, in Kubernetes:
You can't manually assign IP addresses to things;
The cluster-internal IP addresses aren't directly accessible from outside the cluster;
The Kubernetes constructs can only launch containers on arbitrarily chosen nodes (possibly with some constraints; possibly on every node), but you don't usually launch a container on a single specific node, and you can't run a non-container command on a node.
Given what you're describing, a more general-purpose cluster automation tool like Salt Stack, Ansible, or Chef might meet your needs better. This will let you launch processes directly on managed nodes, and if those are server-type processes, they'll be accessible using the host's IP address as normal.
You can look into MetalLB, specifically into its Layer 2 mode and the Local external traffic policy
(https://metallb.universe.tf/usage/).
You cannot assign IPs to the Pods, but when you create a Service of type LoadBalancer (for example, an HTTP routing service like Traefik), MetalLB will bind that Service to the IP of a node (a sample Layer 2 configuration is sketched after the listings below).
As an example, you can see the external IP of the Traefik service is reported as my node's address, 192.168.1.201:
NAME     STATUS   ROLES                  AGE   VERSION        INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
node02   Ready    <none>                 8d    v1.20.2+k3s1   192.168.1.202   <none>        Alpine Linux v3.13   5.10.10-0-virt   containerd://1.4.3-k3s1
node01   Ready    control-plane,master   8d    v1.20.2+k3s1   192.168.1.201   <none>        Alpine Linux v3.13   5.10.10-0-virt   containerd://1.4.3-k3s1
For q2:
Of course you can; k8s doesn't take over the node. You SSH into it and run whatever you like.
For q1:
No.
NAMESPACE     NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)
default       service/kubernetes       ClusterIP      10.43.0.1      <none>          443/TCP
kube-system   service/kube-dns         ClusterIP      10.43.0.10     <none>          53/UDP,53/TCP,9153/TCP
kube-system   service/metrics-server   ClusterIP      10.43.254.20   <none>          443/TCP
kube-system   service/traefik          LoadBalancer   10.43.130.1    192.168.1.201   80:31666/TCP,443:31194/TCP,8080:31199/TCP
default       service/whoami           ClusterIP      10.43.61.10    <none>          80/TCP
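For context, a minimal MetalLB Layer 2 configuration that hands out addresses from the node LAN might look like the following. This assumes a recent MetalLB release (0.13+) that is configured through CRDs; the address range is only an example taken from the 192.168.1.x network above:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.200-192.168.1.210   # example range on the node LAN
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool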

Accessing a k8s service with cluster IP in default namespace from a docker container

I have a server that is orchestrated using k8s; its service looks like below:
➜ installations ✗ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)             AGE
oxd-server   ClusterIP   10.96.124.25   <none>        8444/TCP,8443/TCP   3h32m
and its pod:
➜ helm git:(helm-rc1) ✗ kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
sam-test-oxd-server-6b8f456cb6-5gwwd   1/1     Running   0          3h2m
Now, I have a docker image with an env variable that requires the URL of this server.
I have 2 questions from here.
How can the docker image get the URL or access the URL?
How can I access the same URL in my terminal so I make some curl commands through it?
I hope I am clear on the explanation.
If your docker container is outside the Kubernetes cluster, then it's not possible to access your ClusterIP service.
As you could guess by its name, ClusterIP type services are only accessible from within the cluster.
By within the cluster I mean any resource managed by Kubernetes.
A standalone docker container running inside a VM which is part of your K8S cluster is not a resource managed by K8S.
So, in order to achieve what you want, you'll have these possibilities:
Set a hostPort inside your pod. This is not recommended and is listed as a bad practice in the docs. Keep this usage for very specific cases.
Switch your service to NodePort instead of ClusterIP. This way, you'll be able to access it using a node IP + the node port (see the sketch after this list).
Use a LoadBalancer type of service, but this solution needs some configuration and is not straightforward.
Use an Ingress along with an IngressController, but just like the load balancer, this solution needs some configuration and is not that straightforward.
Depending on what you do and if this is critical or not, you'll have to choose one of these solutions.
1 & 2 for debug/dev
3 & 4 for prod, but you'll have to work with your k8s admin
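As a sketch of option 2, switching the oxd-server service from the question to NodePort could look roughly like this; the selector and the explicit nodePort value are assumptions:
apiVersion: v1
kind: Service
metadata:
  name: oxd-server
spec:
  type: NodePort
  selector:
    app: oxd-server          # placeholder; must match the pod labels
  ports:
  - name: https
    port: 8443
    targetPort: 8443
    nodePort: 30443          # optional; omit to let Kubernetes pick a port in 30000-32767
The service would then be reachable from outside the cluster at https://<node-ip>:30443.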
You can use the name of the service, oxd-server, from any other pod in the same namespace to access it; i.e., if the service is backed by pods that are serving HTTPS, you can access the service at https://oxd-server:8443/.
If the client pod that wants to access this service is in a different namespace, then you can use the oxd-server.<namespace> name. In your case that would be oxd-server.default, since your service is in the default namespace.
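So, if the docker image actually runs as a pod in the cluster, you can simply inject the in-cluster URL through its environment. A minimal sketch; the variable name OXD_SERVER_URL and the image are made up for illustration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-client
  template:
    metadata:
      labels:
        app: my-client
    spec:
      containers:
      - name: client
        image: my-client:latest                    # placeholder image
        env:
        - name: OXD_SERVER_URL                     # hypothetical variable name
          value: "https://oxd-server.default:8443" # in-cluster DNS name of the service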
To access this service from outside the cluster (from your terminal) for local debugging, you can use port forwarding:
kubectl port-forward svc/oxd-server 8443:8443
Then you can use the URL localhost:8443 to make requests, and each request will be port-forwarded to the service.
If you want to access this service from outside the cluster for production use, you can make the service as type: NodePort or type: LoadBalancer. See service types here.
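Putting the port-forward approach together, a local debugging session might look like this; the request path and the -k flag are assumptions about the server:
# Terminal 1: forward local port 8443 to the service
kubectl port-forward svc/oxd-server 8443:8443
# Terminal 2: call the service through the forwarded port
curl -k https://localhost:8443/health   # /health is a placeholder path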

Pod to Pod communication for a NodePort type service in kubernetes

I have a StatefulSet application which has a server running on port 1000 and has 3 replicas.
Now, I want to expose the application, so I have used type: NodePort.
But I also want 2 replicas to communicate with each other on the same port.
When I do an nslookup with a NodePort-type service, it gives only one DNS name, <svc_name>.<namespace>.svc.cluster.local (individual pods don't get a DNS name), and the application is exposed.
When I use clusterIP: None I get pod-specific DNS names, <statefulset-pod>.<svc_name>.<namespace>.svc.cluster.local, but the application is not exposed. The two do not work together.
How can I achieve both: expose the same port for inter-replica communication and expose the same port externally?
LoadBalancer: Exposes the service externally using a cloud provider’s load balancer. NodePort and ClusterIP services, to which the external load balancer will route, are automatically created.
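To illustrate, nothing stops you from keeping the headless Service (clusterIP: None) for the per-pod DNS names and adding a second Service that selects the same pods for external access. A rough sketch, assuming the StatefulSet's pods carry the label app: my-app (a placeholder) and serve on port 1000:
# Headless Service: gives each replica a stable DNS name for peer-to-peer traffic
apiVersion: v1
kind: Service
metadata:
  name: my-app-headless
spec:
  clusterIP: None
  selector:
    app: my-app            # placeholder; must match the StatefulSet's pod labels
  ports:
  - port: 1000
---
# External Service: on a cloud provider, NodePort and ClusterIP are created behind it automatically
apiVersion: v1
kind: Service
metadata:
  name: my-app-external
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 1000
    targetPort: 1000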

How to allocate a static IP for an internal load balancer in Azure AKS

The document here describes how to create an AKS service with an internal load balancer associated with it. It explains how to assign an explicit IP address to this load balancer and states that the chosen IP "must not already be assigned to a resource." My question is how do I allocate this IP? The CLI command
az network public-ip create
can be used to allocate a public IP but there is no equivalent command
az network private-ip create
What is the correct procedure for allocating a private static IP in Azure?
Peter
There is no such command to create a static private IP for an internal load balancer in Azure AKS as Azure Networking has no visibility into the service IP range of the Kubernetes cluster, see here.
Actually, you could add the loadBalancerIP property to the Service's YAML manifest to specify a private IP for an internal load balancer. When you do that, the specified IP address must reside in the same subnet as the AKS cluster and must not already be assigned to a resource. Check the subnet where you deployed the AKS cluster, then select one of the available addresses from the subnet's address range, one that is not already used by connected devices.
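In practice that means a manifest along these lines; the IP below is only an example and must be a free address inside the AKS cluster's subnet:
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.240.0.25   # example static private IP from the cluster subnet
  selector:
    app: internal-app           # placeholder selector
  ports:
  - port: 80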
Hope this will help you.

Make Kubernetes Service Accessible Externally

We have a private kubernetes cluster running on a baremetal CoreOS cluster (with Flannel for network overlay) with private addresses.
On top of this cluster we run a kubernetes ReplicationController and Service for elasticsearch. To enable load-balancing, this service has a ClusterIP defined - which is also a private IP address: 10.99.44.10 (but in a different range to node IP addresses).
The issue that we face is that we wish to be able to connect to this ClusterIP from outside the cluster. As far as we can tell this private IP is not contactable from other machines in our private network...
How can we achieve this?
The IP addresses of the nodes are:
node 1 - 192.168.77.102
node 2 - 192.168.77.103
...
and this is how the Service, RC and Pod appear with kubectl:
NAME            LABELS   SELECTOR            IP(S)         PORT(S)
elasticsearch   <none>   app=elasticsearch   10.99.44.10   9200/TCP

CONTROLLER      CONTAINER(S)    IMAGE(S)        SELECTOR            REPLICAS
elasticsearch   elasticsearch   elasticsearch   app=elasticsearch   1

NAME                  READY   STATUS    RESTARTS   AGE
elasticsearch-swpy1   1/1     Running   0          26m
You need to set the type of your Service.
http://docs.k8s.io/v1.0/user-guide/services.html#external-services
If you are on bare metal, you don't have a LoadBalancer integrated. You can use NodePort to get a port on each VM, and then set up whatever you use for load-balancing to aim at that port on any node.
You can use NodePort, but you can also use hostPort for some DaemonSets and Deployments, and hostNetwork to give a pod full access to the node's network.
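As a rough illustration of the hostNetwork approach (the image and names below are placeholders), a DaemonSet whose pods share each node's network stack could look like this:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: es-frontend
spec:
  selector:
    matchLabels:
      app: es-frontend
  template:
    metadata:
      labels:
        app: es-frontend
    spec:
      hostNetwork: true          # the pod uses the node's network namespace
      containers:
      - name: frontend
        image: nginx:stable      # placeholder image
        ports:
        - containerPort: 9200    # reachable directly on the node's IP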
IIRC, if you have a recent enough Kubernetes, each node can forward traffic to the internal network, so if you create the correct routing in your clients/switch, you can reach the internal network by delivering those TCP/IP packets to one node. The node will then receive the packet and SNAT+forward it to the ClusterIP or pod IP.
Finally, bare-metal clusters can now use MetalLB as a Kubernetes load balancer, which mostly uses this last feature in a more automatic and redundant way.
