I have a Kubernetes cluster on AWS EKS. Suddenly, existing pods on my cluster became unable to resolve DNS hostnames; I verified this by running nslookup inside the pods.
Can someone please suggest:
1. How to restore DNS resolution among the pods
2. What change could cause my cluster to behave like this all of a sudden
Check your namespaces.
The kube-dns works in the following way:
to reach a service in the same namespace: curl http://servicename
to reach a service in a different namespace: curl http://servicename.namespace
Understanding namespaces and DNS
When you create a Service, it creates a corresponding DNS entry. This entry is of the form <service-name>.<namespace>.svc.cluster.local, which means that if a container just uses <service-name>, it will resolve to the service which is local to its namespace. This is useful for using the same configuration across multiple namespaces such as Development, Staging and Production. If you want to reach across namespaces, you need to use the fully qualified domain name (FQDN).
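A quick way to test resolution from inside the cluster is to run a throwaway DNS debug pod and nslookup a service from it. A minimal sketch (the pod name is illustrative; the image is the dnsutils image used in the Kubernetes DNS debugging docs):

```yaml
# Throwaway pod for DNS debugging
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
  namespace: default
spec:
  containers:
  - name: dnsutils
    image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
    command: ["sleep", "infinity"]
  restartPolicy: Never
```

After applying it, `kubectl exec -it dnsutils -- nslookup kubernetes.default` should resolve; if it doesn't, check the state and logs of the CoreDNS pods in kube-system.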
Related
Setup
I'm playing around with K8s and I set up a small, single-node, bare-metal cluster. For this cluster I pulled the NGINX Ingress Controller config from here, which comes from the official getting started guide.
Progress
Ok, so pulling this set up a bunch of things, including a LoadBalancer in front. I like that.
For my app (single pod, returns the caller IP) I created a bunch of things to play around with. I now have SSL enabled and an Ingress, which I pointed to my app's Service, which then points to the deployed pod. This all works perfectly; I can browse the page with https.
BUT...
My app is not getting the original IP from the client. All client requests end up as coming from 10.42.0.99; I checked the controller config with describe as well.
Debugging
I tried around 50 solutions that were proposed online, and none of them worked (ConfigMaps, annotations, proxy mode, etc.). I also debugged in depth: there is no X-Forwarded-For or any similar header in the request that reaches the pod. Previously I tested the same app on Apache directly, and also in a Docker setup; it works without any issues.
It's also worth mentioning that I looked into the ingress controller's pod and saw the same internal IP there. I don't know how to debug the controller's pod further.
Happy to share more information and config if it helps.
UPDATE 2021-12-15
I think I know what the issue is... I didn't mention how I installed the cluster, assuming it's irrelevant. Now I think it's the most important thing 😬
I set it up using K3S, which has its own LoadBalancer. And through debugging, I see now that all of my requests in NGINX have the IP of the load balancer's pod...
I still don't know how to make this Klipper LB give the source IP address though.
UPDATE 2021-12-17
Opened an issue with the Klipper LB.
Make sure your NGINX ingress ConfigMap has the real client IP enabled (real-ip-header: proxy_protocol); try updating your ConfigMap like this:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  compute-full-forwarded-for: "true"
  use-forwarded-headers: "false"
  real-ip-header: proxy_protocol
If that still does not work, you can inject this configuration as an annotation on your Ingress and test again:
nginx.ingress.kubernetes.io/configuration-snippet: |
  more_set_headers "X-Forwarded-For $http_x_forwarded_for";
@milosmns - one of the ways I have been trying is to not install servicelb (--no-deploy=servicelb) and to remove traefik (--no-deploy=traefik).
Instead, deploy the haproxy ingress (https://forums.rancher.com/t/how-to-access-rancher-when-k3s-is-installed-with-no-deploy-servicelb/17941/3) and enable proxy protocol. When you do this, all requests that hit the haproxy ingress will be injected with proxy protocol, and no matter how they are routed you will be able to pick the source address up from anywhere. You can also get haproxy to inject X-Real-IP headers.
The important thing is that haproxy should be running on all master nodes; since there is no servicelb, your haproxy will always get the correct IP address.
Just set externalTrafficPolicy to "Local" if you are using GCP.
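For reference, externalTrafficPolicy is a field on the Service itself; a minimal sketch (the service name, selector, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-ingress-lb          # illustrative name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local # preserves the client source IP
  selector:
    app: ingress-nginx         # illustrative selector
  ports:
  - port: 80
    targetPort: 80
```

With Local, traffic is only routed to nodes that have a ready backing pod, which is how the client source IP survives instead of being SNAT'ed.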
We have a system that has 2 endpoints based on geo-location, e.g. east_url and west_url.
One of our applications needs to load balance between those 2 URLs. In the consumer application, I created 2 Deployments with the same image but different environment variables, such as "url=east_url" and "url=west_url".
After the deployment, I have the following running pods; each of them has the labels "app=consumer-app" and either "region=east" or "region=west":
consumer-east-pod-1
consumer-east-pod-2
consumer-west-pod-1
consumer-west-pod-2
When I create a ClusterIP Service with selector app=consumer-app, somehow it only picks up one ReplicaSet. I am just curious whether it is actually possible in Kubernetes to have a Service backed by different Deployments?
Another way of doing this I can think of is to create 2 Services and have the ingress controller load balance between them; is this possible? We are using Kong as the ingress controller. I am looking for something like OpenShift's "alternateBackends" for serving a Route: https://docs.openshift.com/container-platform/4.1/applications/deployments/route-based-deployment-strategies.html
I was missing a label on the east ReplicaSets; after I added app: consumerAPP, it works fine now.
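In other words, a Service selects purely on labels, so one Service can back pods from both Deployments as long as both pod templates carry the selector's label. A minimal sketch under that assumption (names, ports, and image are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: consumer-app
spec:
  selector:
    app: consumer-app            # matches pods from BOTH deployments
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: consumer-east
spec:
  replicas: 2
  selector:
    matchLabels: {app: consumer-app, region: east}
  template:
    metadata:
      # shared label for the Service, plus a region-specific one
      labels: {app: consumer-app, region: east}
    spec:
      containers:
      - name: consumer
        image: example/consumer:latest  # illustrative image
        env:
        - name: URL
          value: east_url
```

The west Deployment would be identical except for region: west and url=west_url; the Service then round-robins across all four pods.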
Thanks
TL;DR: use ISTIO
With ISTIO you can create Virtual Services:
A VirtualService defines a set of traffic routing rules to apply when a host is addressed. Each routing rule defines matching criteria for traffic of a specific protocol. If the traffic is matched, then it is sent to a named destination service (or subset/version of it) defined in the registry.
The VirtualService will let you send traffic to different backends based on the URI.
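For example, a VirtualService that splits traffic by URI might look like this (the host and destination service names are illustrative, reusing the east/west services from the question):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: consumer-routes
spec:
  hosts:
  - myapp.example.com            # illustrative host
  http:
  - match:
    - uri:
        prefix: /east
    route:
    - destination:
        host: consumer-east      # illustrative service name
  - route:                       # default route for everything else
    - destination:
        host: consumer-west
```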
Now, if you plan to perform something like an A/B test, you can use ISTIO's Destination Rules (https://istio.io/docs/reference/config/networking/destination-rule/).
DestinationRule defines policies that apply to traffic intended for a service after routing has occurred. Version specific policies can be specified by defining a named subset and overriding the settings specified at the service level.
1. If you are using GKE, the process to install ISTIO can be found here
2. If you are using K8s running on a Virtual Machine, the installation process can be found here
I'm running Jenkins on my K8s cluster, and it's currently accessible externally via node_name:port. Some of my users are bothered by accessing the service using a port number; is there a way I could just assign the service a name? For instance: jenkins.mydomain
Thank you.
Have a look at Kubernetes Ingress.
You can define rules that point internally to the Kubernetes Service in front of Jenkins.
https://kubernetes.io/docs/concepts/services-networking/ingress/
You could use an Ingress, or a Service of type LoadBalancer that listens on port 80 and forwards to the Jenkins pods on the custom port. Then you could just create a DNS record, for example jenkins.mydomain.com, pointing to the IP address of the Service.
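A minimal Ingress sketch for the Ingress route (the hostname, Service name, and port are assumptions based on the question):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jenkins
spec:
  rules:
  - host: jenkins.mydomain.com   # assumed hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: jenkins        # assumed Service name in front of Jenkins
            port:
              number: 8080       # assumed Jenkins service port
```

With this in place, users browse to jenkins.mydomain.com and never see a port number; the DNS record points at the ingress controller's address.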
Thank you so much for your suggestions. I forgot to mention that my k8s is running on bare metal, so a solution like Ingress on its own won't work.
I ended up using metallb for this.
https://metallb.universe.tf/
Thanks again :)
This question has been asked and answered before on Stack Overflow, but because I'm new to K8s, I don't understand the answer.
Assuming I have two containers, with each container in a separate pod (because I believe this is the recommended approach), I think I need to create a single Service for my two pods to be a part of.
How does my java application code get the IP address of the service?
How does my java application code get the IP addresses of another POD/container (from the service)?
This will be a list of IP addresses because these are stateless and they might be replicated. Is this correct?
How do I select the least busy instance of the POD to communicate with?
Thanks
Siegfried
How does my java application code get the IP address of the service?
You need to create a Service to expose the Pods' port; then you just use the Service name and kube-dns will resolve it to the Service's IP address
How does my java application code get the IP addresses of another POD/container (from the service)?
Same way: by using the service's name
This will be a list of IP address because these are stateless and they might be replicated. Is this correct?
The Service will load balance between all pods that matches the selector, so it could be 0, 1 or any number of Pods
How do I select the least busy instance of the POD to communicate with?
The common way is a round-robin policy, but there are other balancing policies:
https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-ipvs
Cheers ;)
You don't need to get any IP, you use the service name (DNS). So if you called your service "java-service-1" and exposed port 80, you can access it this way from inside the cluster:
http://java-service-1
If the service is in a different namespace, you have to add that as well (see https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/)
You also don't select the least busy instance yourself; a Service can be configured as a LoadBalancer, and Kubernetes does all of this for you (see https://kubernetes.io/docs/concepts/services-networking/)
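As a sketch, the "java-service-1" Service from the example above would look something like this (the selector label and container port are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: java-service-1
spec:
  selector:
    app: my-java-app   # assumed label on the Java app's pods
  ports:
  - port: 80           # what callers use: http://java-service-1
    targetPort: 8080   # assumed port the container actually listens on
```

Any pod in the same namespace can then simply call http://java-service-1, and the Service spreads requests across all matching pods.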
I have an existing web application with a frontend and a backend which run on the same port (HTTPS/443) on different subdomains. Do I really need a load balancer in my pod to handle all incoming web traffic, or does Kubernetes have something already built in which I missed so far?
I would encourage getting familiar with the concepts of Ingress and IngressController: http://kubernetes.io/docs/user-guide/ingress/
Simplifying things a bit, you can look at Ingress as a sort of vhost/path service router/reverse proxy.
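For the frontend/backend split described above, a single Ingress can route both subdomains; a sketch, where the hostnames, Service names, and TLS secret are all assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
spec:
  tls:
  - hosts: [app.example.com, api.example.com]
    secretName: app-tls              # assumed TLS certificate secret
  rules:
  - host: app.example.com            # assumed frontend subdomain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend: {service: {name: frontend, port: {number: 443}}}
  - host: api.example.com            # assumed backend subdomain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend: {service: {name: backend, port: {number: 443}}}
```

The ingress controller terminates HTTPS on 443 and dispatches by Host header, so no extra load balancer inside the pods is needed.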