Kubernetes: Frontend pod cannot resolve DNS of backend Service (using Minikube) - docker

I am learning Kubernetes and I ran into trouble reaching an API in my local Minikube (Docker driver).
I have a pod running an Angular client which tries to reach a backend pod. The frontend pod is exposed by a NodePort Service; the backend pod is exposed to the cluster by a ClusterIP Service.
But when I try to reach the ClusterIP Service from the frontend, the DNS name transpile-svc.default.svc.cluster.local cannot be resolved.
(screenshot: error message in the client)
The DNS should work properly. I followed https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/ and deployed a dnsutils pod from which I can run nslookup:
winpty kubectl exec -i -t dnsutils -- nslookup transpile-svc.default
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: transpile-svc.default.svc.cluster.local
Address: 10.99.196.82
This is the .yaml file for the ClusterIP Service:
apiVersion: v1
kind: Service
metadata:
  name: transpile-svc
  labels:
    app: transpile
spec:
  selector:
    app: transpile
  ports:
  - port: 80
    targetPort: 80
Even if I hardcode the IP into the frontend's request, I get an empty response.
I verified that the backend pod is working correctly, and when I expose it as a NodePort I can reach the API with my browser.
What am I missing here? I have been stuck on this problem for quite some time now and cannot find a solution.

Since your frontend application is calling your backend from outside the cluster, you need to expose your backend application to the outside network too.
There are two ways: either expose it directly by changing the transpile-svc Service to the LoadBalancer type, or introduce an ingress controller (e.g. the NGINX ingress controller with an Ingress object) which will handle all redirections.
Steps to expose the service as a LoadBalancer in minikube:
1. Change your transpile-svc Service type to LoadBalancer (see the sketch below).
2. Run minikube service transpile-svc to expose the service, i.e. an IP will be allocated.
3. Run kubectl get services to get the external IP assigned. Use IP:PORT to call the backend from the frontend application.
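A minimal sketch of step 1, assuming the same selector and ports as the Service above:
apiVersion: v1
kind: Service
metadata:
  name: transpile-svc
  labels:
    app: transpile
spec:
  type: LoadBalancer      # changed from the default ClusterIP
  selector:
    app: transpile
  ports:
  - port: 80
    targetPort: 80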

DNS hostnames of the form *.*.svc.cluster.local are only resolvable from within the Kubernetes cluster. You should use http://NODEIP:NODEPORT, or the URL provided by minikube service transpile-svc --url, in the frontend JavaScript code, since it runs in a browser outside the Kubernetes cluster.
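For example, assuming transpile-svc has been switched to a NodePort (or LoadBalancer) Service so it is reachable from outside the cluster:
minikube service transpile-svc --url
# prints something like http://192.168.49.2:31234 (IP and port will differ);
# use that URL as the API base URL in the Angular code instead of the cluster-internal DNS name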
If the frontend pod is nginx then you can configure the backend service name in the nginx configuration file, as described in the docs:
upstream transpile {
    server transpile-svc;
}
server {
    listen 80;
    location / {
        proxy_pass http://transpile;
    }
}
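One way to get such a configuration into the frontend pod is a ConfigMap mounted into nginx's conf.d directory; this is only a sketch, and the ConfigMap/volume names and image are illustrative assumptions:
apiVersion: v1
kind: ConfigMap
metadata:
  name: frontend-nginx-conf          # illustrative name
data:
  default.conf: |
    upstream transpile {
        server transpile-svc;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://transpile;
        }
    }
# fragment of the frontend Deployment's pod template (illustrative):
    containers:
    - name: frontend
      image: nginx                   # or your Angular image built on nginx
      volumeMounts:
      - name: nginx-conf
        mountPath: /etc/nginx/conf.d
    volumes:
    - name: nginx-conf
      configMap:
        name: frontend-nginx-conf
This way the browser only ever talks to the frontend's NodePort, and the cluster-internal DNS name is only resolved inside the cluster by nginx.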

Related

Google Compute Kubernetes can only be accessed when nodePort on the NodePort service is 80

Somehow, I am trying to start a Kubernetes project on Google Compute (not GKE). After all the installation (read: docker-ce, kubelet, kubeadm) I create a Service and a Deployment as follows:
apiVersion: v1
kind: Service
metadata:
  name: client-node-port
spec:
  type: NodePort
  ports:
  - port: 90
    targetPort: 80
    nodePort: 31515
  selector:
    component: web
It was working until I changed the targetPort inside the Service to any port besides 80 (along with the Deployment's containerPort).
I already tried enabling the port on the instance: firewall-cmd --permanent --add-port=(any port besides 80)/tcp
Besides that, I also already enabled the firewall rule in the Google Firewall settings.
Is there anything that I missed? Why can I only access the NodePort when the nodePort setting in the Service is 80?
Thanks
PS: If it is relevant, I am using the flannel network.
May I know why you are trying to change targetPort?
targetPort is the port on the pod where the application behind the service is actually listening.
nodePort is the port on which external users can reach the service via nodeip:nodeport.
port is the same idea as nodePort, but for cluster users via clusterip:port.
Again, in your case targetPort 80 means the application in the pod is actually running on port 80.
You should only change targetPort when the application in the pod runs on a different port, as illustrated below.
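A sketch to illustrate, assuming the pod was changed to listen on port 3000 (the values are only examples):
apiVersion: v1
kind: Service
metadata:
  name: client-node-port
spec:
  type: NodePort
  ports:
  - port: 90          # clusterip:90 from inside the cluster
    targetPort: 3000  # must match the containerPort the pod actually listens on
    nodePort: 31515   # nodeip:31515 from outside the cluster
  selector:
    component: web
The usual reason a changed targetPort stops working is that the container still listens on 80, so targetPort, the pod's containerPort, and the application's listen port have to change together.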
Review this question for more details.

communication between two PODs in a single node(minikube )

I have to communicate between two pods in Minikube which are exposed on two different ports but are on a single node.
For example:
POD A uses port 8080 and is the landing page.
From POD A we access POD B via a hyperlink; POD B uses port 8761.
Now, Kubernetes assigns ports dynamically, e.g. POD A: 30069 and POD B: 30070.
The problem here is: it does not automatically map the Kubernetes port for POD B (30070) when accessing POD B from POD A (30069). Instead, POD B tries to open on port 8761.
Apologies if my description is confusing. Please feel free to recheck if you could not relate to my question.
Thanks for your help
I have to communicate between two pods in Minikube which are exposed on two different ports but are on a single node.
Based on the facts that you want inter-pod communication and that the pods reside on the same node, you could take several (rather questionable and fragile) approaches such as hostname and nodePort exposure. To be more in line with the Kubernetes approach and recommendations, I'd advise using a Service instead of exposing ports directly at the pod level.
You can read more about Services in the official documentation; an example of Service usage would look like this:
kind: Service
apiVersion: v1
metadata:
  name: my-pod-b-service
spec:
  selector:
    app: MyPodBApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8761
This specification will create a new Service object named my-pod-b-service which targets TCP port 8761 on any Pod with the app=MyPodBApp label. With that, any request coming from pod A for host my-pod-b-service and port 80 will be served by some pod B on port 8761 (note that port and targetPort can be the same; this is just an example).
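For example, from a shell inside a pod A container (assuming curl is available in that image):
curl http://my-pod-b-service/                                # port 80 is the Service port, so it can be omitted
curl http://my-pod-b-service.default.svc.cluster.local:80/   # fully qualified Service DNS name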
As a side note, for pod A you would have something like:
kind: Service
apiVersion: v1
metadata:
  name: my-pod-a-service
spec:
  selector:
    app: MyPodAApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
Since you target Services, you can map the same incoming port (80) to both Services, and Kubernetes takes care that each request reaches the appropriate pods, as long as the selector labels are properly set on the pods.
If there is a correct mapping between the deployment and the service name, then a simple curl request to name:port can be used for communication.
For example,
create a deployment
kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
expose it on port 8080
kubectl expose deployment hello-node --type=NodePort --port=8080
another deployment with the same image but a different name
kubectl create deployment hello-node2 --image=gcr.io/hello-minikube-zero-install/hello-node
expose it on port 8080
kubectl expose deployment hello-node2 --type=NodePort --port=8080
get pods and start a terminal inside hello-node2 deployment
kubectl get pods
kubectl exec -it <hello-node2-pod-name> -- /bin/bash
you will enter a shell inside the container of the hello-node2 pod
curl hello-node:8080 returns Hello World!
Also, if you have a close look, kubectl describe service hello-node
gives you an IP field (which is different from the Endpoints). This is the cluster-internal IP of the service, exposed for communication with the pods. Which means that inside the hello-node2 container, if you run
curl <IP from service>:8080
it also returns Hello World!
Hope this helps.

Access service on subdomain in Kubernetes

I have the following setup:
Private OpenStack cloud - only the Web UI (Horizon) is accessible
(the API is restricted, but maybe I could get access)
I have used CoreOS with a setup of one master and three nodes
Resources are standardized (as per the OpenStack defaults)
I followed the getting-started guide for CoreOS (i.e. I'm using the default YAMLs for cloud-config provided) on GitHub
As I read, extensions such as the Web UI (kube-ui) can be added as add-ons - which I have done (only kube-ui).
Now if I run a test such as simple-nginx, I get the following output:
creating pods:
$ kubectl run my-nginx --image=nginx --replicas=2 --port=80
creating service:
$ kubectl expose rc my-nginx --port=80 --type=LoadBalancer
NAME LABELS SELECTOR IP(S) PORT(S)
my-nginx run=my-nginx run=my-nginx 80/TCP
get service info:
$ kubectl describe service my-nginx
Name: my-nginx
Namespace: default
Labels: run=my-nginx
Selector: run=my-nginx
Type: LoadBalancer
IP: 10.100.161.90
Port: <unnamed> 80/TCP
NodePort: <unnamed> 31170/TCP
Endpoints: 10.244.19.2:80,10.244.44.3:80
Session Affinity: None
No events.
I can access my service from every(!) external IP of the nodes.
My question now is as follows:
How can I access any started service with a subdomain, and how can I set this up (for example I have domain.com), or could it be printed out on which node IP I have to access my service (although I have only two replicas(?!))?
To make my thoughts easier to follow, I mean the following:
given domain: domain.com (pointing to master)
start service simple-nginx
service can be accessed with simple-nginx.domain.com
Does your OpenStack cloud provider implementation support services of type LoadBalancer?
If so, the service controller should assign an ingress IP or hostname to the service, which should eventually show up in kubectl describe svc output. You could then set up external DNS for it.
If not, just use type=NodePort, and you'll still get a NodePort on each node. You can then follow the advice in the comment to create an Ingress resource, which can do the port and host remapping.
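As a rough sketch of such an Ingress (assuming an NGINX ingress controller is installed and using the current networking.k8s.io/v1 API; the hostname is just the example from the question):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-nginx
spec:
  rules:
  - host: simple-nginx.domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-nginx       # the Service created by kubectl expose above
            port:
              number: 80
With a wildcard DNS record (*.domain.com) pointing at the node(s) where the ingress controller listens, each additional service only needs its own host rule.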

Kubernetes, Flannel and exposing services

I have a kubernetes setup running nicely, but I can't seem to expose services externally. I'm thinking my networking is not set up correctly:
kubernetes services addresses: --service-cluster-ip-range=172.16.0.1/16
flannel network config: etcdctl get /test.lan/network/config {"Network":"172.17.0.0/16"}
docker subnet setting: --bip=10.0.0.1/24
Hostnode IP: 192.168.4.57
I've got the nginx service running and I've tried to expose it like so:
[root@kubemaster ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-px6uy 1/1 Running 0 4m
[root@kubemaster ~]# kubectl get services
NAME LABELS SELECTOR IP(S) PORT(S) AGE
kubernetes component=apiserver,provider=kubernetes <none> 172.16.0.1 443/TCP 31m
nginx run=nginx run=nginx 172.16.84.166 9000/TCP 3m
and then I exposed the service like this:
kubectl expose rc nginx --port=9000 --target-port=9000 --type=NodePort
NAME LABELS SELECTOR IP(S) PORT(S) AGE
nginx run=nginx run=nginx 9000/TCP 292y
I'm expecting now to be able to reach the nginx container on the host node's IP (192.168.4.57) - have I misunderstood the networking? If I have, an explanation would be appreciated :(
Note: This is on physical hardware with no cloud provider provided load balancer, so NodePort is the only option I have, I think?
So the issue here was that there's a missing piece of the puzzle when you use nodePort.
I was also making a mistake with the commands.
Firstly, you need to make sure you expose the right ports, in this case 80 for nginx:
kubectl expose rc nginx --port=80 --type=NodePort
Secondly, you need to use kubectl describe svc nginx and it'll show you the NodePort it's assigned on each node:
[root@kubemaster ~]# kubectl describe svc nginx
Name: nginx
Namespace: default
Labels: run=nginx
Selector: run=nginx
Type: NodePort
IP: 172.16.92.8
Port: <unnamed> 80/TCP
NodePort: <unnamed> 32033/TCP
Endpoints: 10.0.0.126:80,10.0.0.127:80,10.0.0.128:80
Session Affinity: None
No events.
You can of course assign one when you deploy, but I was missing this info when using randomly assigned ports.
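For reference, a sketch of pinning the NodePort explicitly instead of relying on a random assignment (the value is illustrative and must fall in the cluster's NodePort range, 30000-32767 by default):
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    run: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080   # illustrative; pick a free port in the NodePort range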
Yes, you would need to use NodePort.
When you hit the service, the destination port should be equal to the NodePort.
The destination IP for the service should be one the nodes consider local, e.g. you could use the host IP of one of the nodes.
A load balancer helps because it handles situations where a node goes down while other nodes can still serve the service.
If you're running a cluster on bare metal, or not at a provider that supplies a load balancer, you can also define the port to be a hostPort on your pod.
You define your container and ports:
containers:
- name: nginx
  image: nginx
  ports:
  - containerPort: 80
    hostPort: 80
    name: http
This will bind the container's port to the host network and use the port defined.
The two limitations here are obviously:
1) You can have at most one of these pods on each host.
2) The IP is the host IP of the node it binds to.
This is essentially how the cloud provider load balancers work, in a way.
Using the newer DaemonSet features, it's possible to define what node the pod will land on and so fix the IP. However, that necessarily impairs the high-availability aspect, but at some point there is not much choice, as DNS load balancing will not avoid forwarding to dead nodes.
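A rough sketch of that approach - a DaemonSet restricted to one labelled node so the hostPort IP stays fixed (the apps/v1 API and the node label are assumptions for illustration):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-edge
spec:
  selector:
    matchLabels:
      app: nginx-edge
  template:
    metadata:
      labels:
        app: nginx-edge
    spec:
      nodeSelector:
        ingress: "true"          # illustrative label; apply it with: kubectl label node <node> ingress=true
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          hostPort: 80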

How can I access the Kubernetes service through ClusterIP

I am trying to create a Kubernetes cluster using three VMs (Master - 10.x.x.4, Node1 - 10.x.x.150, Node2 - 10.x.x.160).
I was able to create the guestbook application successfully following this link: http://kubernetes.io/v1.0/examples/guestbook/. The only change I made was to frontend-service.yaml: to use NodePort. I can access the frontend service using the nodes' IPs and port number (10.x.x.150:30724 or 10.x.x.160:30724). So everything is working as expected, but I am not able to access the frontend service using the ClusterIP address (in my case 10.x.x.79).
My understanding of NodePort is that the service can be accessed through the cluster IP and also on a port on each node of the cluster. How can I access the service through the ClusterIP so that I don't have to go through each node? Am I missing something here?
service and pod details
$sudo kubectl describe service frontend
Name: frontend
Namespace: default
Labels: name=frontend
Selector: name=frontend
Type: NodePort
IP: 10.x.x.79
Port: <unnamed> 80/TCP
NodePort: <unnamed> 30724/TCP
Endpoints: 172.x.x.13:80,172.x.x.14:80,172.x.x.11:80
Session Affinity: None
No events.
$sudo kubectl describe pod frontend-2b5us
Name: frontend-2b5us
Namespace: default
Image(s): gcr.io/google_samples/gb-frontend:v3
Node: 10.x.x.150/10.x.x.150
Labels: name=frontend
Status: Running
Reason:
Message:
IP: 172.x.x.11
Replication Controllers: frontend (3/3 replicas created)
Containers:
php-redis:
Image: gcr.io/google_samples/gb-frontend:v3
State: Running
Started: Fri, 30 Oct 2015 04:00:40 -0500
Ready: True
Restart Count: 0
I tried to search but could not find any solution for my exact problem, though I did find a similar problem that looks like it is for GCE:
Why can't I access my Kubernetes service via its IP?
You do not have a ClusterIP service. You have a NodePort service. To access it, you connect to the NodePort on any of your nodes in the cluster, as you've already discovered. You do get load-balancing here: even though you connect to a cluster node, the pod you reach does not necessarily run on that particular node.
Read the relevant section in the documentation at https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types to learn about additional service types. You probably do not want NodePort on GCP.
Talking about ClusterIP. To access a ClusterIP service for debugging purposes, you can run kubectl port-forward. You will not actually access the service, but you will directly connect to one of the pods.
For example
kubectl port-forward frontend-2b5us 8080:80
Now connect to localhost:8080
A more sophisticated command, which discovers the port on its own given a namespace (-n weave) and a selector, taken from https://www.weave.works/docs/scope/latest/installing/:
kubectl port-forward -n weave \
  "$(kubectl get -n weave pod \
      --selector=weave-scope-component=app \
      -o jsonpath='{.items..metadata.name}')" \
  4040
From where are you trying to access the ClusterIP? The ClusterIP (by default) only works from within the cluster. It is a virtual IP, not routed.
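A quick way to confirm that is to curl the ClusterIP from a throwaway pod inside the cluster (the image and pod name here are just examples):
kubectl run tmp --rm -it --image=busybox --restart=Never -- wget -qO- http://10.x.x.79
# the same request from your workstation hangs, because the ClusterIP is a virtual IP that is only routable inside the cluster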
