Communication between two pods in a single node (minikube) - docker

I have to communicate between two pods in minikube which are exposed on two different ports but run on a single node.
For example:
Pod A uses port 8080 and serves the landing page.
From pod A we access pod B via a hyperlink; pod B uses port 8761.
Now, Kubernetes assigns a NodePort dynamically, e.g. pod A: 30069 and pod B: 30070.
The problem: when accessing pod B from pod A, the link is not automatically mapped to pod B's Kubernetes port (30070); instead, pod B tries to open on port 8761.
Apologies if my description is confusing; please feel free to ask if anything is unclear.
Thanks for your help

I have to communicate between two pods in minikube which are exposed on two different ports but run on a single node.
Given that you want inter-pod communication and that the pods reside on the same node, you could take several (rather questionable and fragile) approaches such as host networking or NodePort exposure. To be more in line with the Kubernetes approach and its recommendations, I'd advise using a Service instead of exposing ports directly at the Pod level.
You can read more about Services in the official documentation; an example of Service usage would look like this:
kind: Service
apiVersion: v1
metadata:
  name: my-pod-b-service
spec:
  selector:
    app: MyPodBApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8761
This specification creates a new Service object named my-pod-b-service, which targets TCP port 8761 on any Pod carrying the app=MyPodBApp label. With that, any request coming from pod A for host my-pod-b-service on port 80 is served by some pod B on port 8761 (note that port and targetPort can be the same; this is just an example).
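For example, from inside pod A you could then reach pod B like this (assuming both pods run in the default namespace):
curl http://my-pod-b-service:80
# or, fully qualified:
curl http://my-pod-b-service.default.svc.cluster.local:80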
As a side note, for pod A you would have something like:
kind: Service
apiVersion: v1
metadata:
  name: my-pod-a-service
spec:
  selector:
    app: MyPodAApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
Since you target the Services, you can map the same incoming port (80) to both of them; Kubernetes takes care of routing each request to the appropriate pods, as long as the pod selector labels are set correctly on the pods.
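For those selectors to match anything, the pods need the corresponding labels. A minimal sketch of a matching Deployment for pod B (the image name is illustrative, not from the question):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-pod-b
spec:
  replicas: 1
  selector:
    matchLabels:
      app: MyPodBApp
  template:
    metadata:
      labels:
        app: MyPodBApp   # must match the Service selector
    spec:
      containers:
        - name: pod-b
          image: my-pod-b-image   # illustrative
          ports:
            - containerPort: 8761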

If the service name is mapped correctly to the deployment, then a simple curl request to name:port can be used for communication.
For example,
create a deployment:
kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
expose it on port 8080:
kubectl expose deployment hello-node --type=NodePort --port=8080
create another deployment with the same image but a different name:
kubectl create deployment hello-node2 --image=gcr.io/hello-minikube-zero-install/hello-node
and expose it on port 8080 as well:
kubectl expose deployment hello-node2 --type=NodePort --port=8080
Get the pods and start a terminal inside the hello-node2 pod:
kubectl get pods
kubectl exec -it <hello-node2-pod-name> -- /bin/bash
You are now inside the container of the hello-node2 pod, where
curl hello-node:8080
returns Hello World!
Also, if you have a closer look, kubectl describe service hello-node
shows an IP field (which is different from the Endpoints field). This is the exposed IP for communication with the pods behind the service. That means that inside the hello-node2 container, running
curl <IP from service>:8080
also returns Hello World!
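As a side check, service discovery also works through environment variables for pods created after the service exists; inside the hello-node2 container you could verify (Kubernetes derives the variable names from the service name):
echo $HELLO_NODE_SERVICE_HOST   # ClusterIP of the hello-node service
echo $HELLO_NODE_SERVICE_PORT   # 8080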
Hope this helps.

Related

Kubernetes: Frontend-Pod cannot resolve dns of Backend-Service (using Minikube)

I am learning Kubernetes and I ran into trouble reaching an API in my local Minikube (Docker driver).
I have a pod running an Angular client which tries to reach a backend pod. The frontend pod is exposed by a NodePort service; the backend pod is exposed to the cluster by a ClusterIP service.
But when I try to reach the ClusterIP service from the frontend, the DNS name transpile-svc.default.svc.cluster.local cannot be resolved.
[screenshot: error message in the client]
The DNS should work properly. I followed https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/ and deployed a dnsutils pod from which I can run nslookup:
winpty kubectl exec -i -t dnsutils -- nslookup transpile-svc.default
Server:    10.96.0.10
Address:   10.96.0.10#53

Name:    transpile-svc.default.svc.cluster.local
Address: 10.99.196.82
This is the .yaml file for the ClusterIP service:
apiVersion: v1
kind: Service
metadata:
  name: transpile-svc
  labels:
    app: transpile
spec:
  selector:
    app: transpile
  ports:
    - port: 80
      targetPort: 80
Even if I hardcode the IP into the frontend's request, I get an empty response.
I verified that the backend pod is working correctly, and when I expose it as a NodePort I can reach the API with my browser.
What am I missing here? I've been stuck on this problem for quite some time now and can't find a solution.
Since your frontend application calls your backend from outside the cluster, you need to expose your backend application to the outside network too.
There are two ways: either expose it directly by changing the transpile-svc service to the LoadBalancer type, or introduce an ingress controller (e.g. the NGINX ingress controller with an Ingress object) which will handle all redirections.
Steps to expose the service as a LoadBalancer in minikube:
1. Change your transpile-svc service type to LoadBalancer.
2. Run minikube service transpile-svc to expose the service, i.e. an IP will be allocated.
3. Run kubectl get services to get the external IP assigned. Use IP:PORT to call it from the frontend application.
DNS hostnames of the form *.svc.cluster.local are only resolvable from within the Kubernetes cluster. You should use http://NODEIP:NODEPORT, or the URL provided by minikube service transpile-svc --url, in the frontend JavaScript code, since it runs in a browser outside the Kubernetes cluster.
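For example, from the host machine (the printed URL and the request path are illustrative):
# prints something like http://<node-ip>:<node-port>
minikube service transpile-svc --url
# quick check that the backend answers on that URL
curl "$(minikube service transpile-svc --url)/"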
If the frontend pod is nginx, you can configure the backend service name in the nginx configuration file, as described in the docs. Note that the upstream has to reference the service name:
upstream transpile {
    server transpile-svc;
}

server {
    listen 80;

    location / {
        proxy_pass http://transpile;
    }
}

Kubernetes: Microservices running on the same port?

I am building a full-stack microservice web application, so far consisting of:
ReactJS (client microservice): listens on 3000
Authentication (auth microservice): listens on 3000 // accidentally assigned the same port
Technically, what I have heard/learned so far is that we cannot have two pods running on the same port.
I am really confused: how am I able to run the application (perfectly well) like this, with the same ports on different applications/pods?
ingress-nginx config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  ## our custom routing rules
  rules:
    - host: ticketing.dev
      http:
        paths:
          - path: /api/users/?(.*)
            backend:
              serviceName: auth-srv
              servicePort: 3000
          - path: /?(.*)
            backend:
              serviceName: client-srv
              servicePort: 3000
I am really curious, am I missing something here?
Each Pod has its own network namespace and its own IP address, though the Pod-specific IP addresses aren't reachable from outside the cluster and aren't really discoverable inside the cluster. Since each Pod has its own IP address, you can have as many Pods as you want all listening to the same port.
Each Service also has its own IP address; again, not reachable from outside the cluster, though they have DNS names so applications can find them. Since each Service has its own IP address, you can have as many Services as you want all listening to the same port. The Service ports can be the same or different from the Pod ports.
The Ingress controller is reachable from outside the cluster via HTTP. The Ingress specification you show defines HTTP routing rules. If I set up a DNS service with a .dev TLD and define an A record for ticketing.dev that points at the ingress controller, then http://ticketing.dev/api/users/anything gets forwarded to http://auth-srv.default.svc.cluster.local:3000/ within the cluster, and http://ticketing.dev/otherwise goes to http://client-srv.default.svc.cluster.local:3000/. Those in turn will get forwarded to whatever Pods they're connected to.
There's no particular prohibition against multiple Pods or Services having the same port. I tend to like setting all of my HTTP Services to listen on port 80 since it's the standard HTTP port, even if the individual Pods are listening on port 3000 or 8000 or 8080 or whatever else.
You have two different Services in the backend: auth-srv and client-srv. Therefore you have two different addresses and can use any port you want in each of them. That means you can use the same port in the two different Services.
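A minimal sketch of what those two Services could look like, both listening on port 3000 (the selector labels are assumptions, not taken from the question):
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  selector:
    app: auth        # assumed label
  ports:
    - port: 3000
      targetPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: client-srv
spec:
  selector:
    app: client      # assumed label
  ports:
    - port: 3000
      targetPort: 3000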

GKE - Bypass Pod LoadBalancer (Pod's external IP) to Pod's container's IP at runtime for WebSocket purpose

I have the following situation:
I have a couple of microservices, only 2 are relevant right now.
- Web Socket Service API
- Dispatcher Service
We have 3 users that we'll call 1, 2, and 3 respectively. These users connect to the WebSocket endpoint of our backend. Our microservices run on Kubernetes, and each service can be replicated multiple times inside pods. In this situation, we have 1 running container for the dispatcher and 3 running containers for the WebSocket API. Each pod has its own load balancer, which is always the entry point.
In our situation, we will then have the following "schema":
Now that we have a representation of our system (and a legend), our 3 users want to use the app and connect.
As we can see, the load balancer of our pod forwards the WebSocket connections of our users across the different containers. Each container, once it gets a new connection, lets the Dispatcher Service know, and the dispatcher saves it in its own database.
Now 3 users are connected to 2 different containers, and the Dispatcher Service knows it.
User 1 wants to message user 2. Container A then gets a message and tells the Dispatcher Service: please send this to user 2.
As the dispatcher knows which container user 2 is connected to, I would like to send a request directly to that container instead of sending it to the pod. Sending it to the pod means sending the request to a load balancer, which dispatches it to the most available container instance...
How could I manage to get the container IP? Can it be accessed by another container from another Pod?
To me, the best approach would be that, once the app starts, it gets the current container's IP and sends it within the register request to the dispatcher, so the dispatcher knows that ContainerID=IP.
Thanks!
Edit 1:
Here is my web-socket-service-api.yaml:
apiVersion: v1
kind: Service
metadata:
  name: web-socket-service-api
spec:
  ports:
    # Port that accepts gRPC and JSON/HTTP2 requests over HTTP.
    - port: 8080
      targetPort: 8080
      protocol: TCP
      name: grpc
    # Port that accepts gRPC and JSON/HTTP2 requests over HTTP.
    - port: 8081
      targetPort: 8081
      protocol: TCP
      name: rest
    # Port that accepts WebSockets.
    - port: 8082
      targetPort: 8082
      protocol: TCP
      name: websocket
  selector:
    app: web-socket-service-api
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web-socket-service-api
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: web-socket-service-api
    spec:
      containers:
        - name: web-socket-service-api
          image: gcr.io/[PROJECT]/web-socket-service-api:latest
          ports:
            - containerPort: 8080
            - containerPort: 8081
            - containerPort: 8082
Dispatcher ≈ Message broker
As I understand your design, your Dispatcher is essentially a message broker for the pods of your WebSocket Service. Let all WebSocket pods connect to the broker and let the broker route messages. This is a stateful service, and you should use a StatefulSet for it in Kubernetes. Depending on your requirements, a possible solution could be to use an MQTT broker for this, e.g. mosquitto. Most MQTT brokers have support for WebSockets.
Scale out: Multiple replicas of pods
each service can be replicated multiple times inside pods. In this situation, we have 1 running container for the dispatcher and 3 running containers for the WebSocket API.
This is not how Kubernetes is intended to be used. Use multiple replicas of a pod instead of multiple containers in one pod. I recommend that you create a Deployment for your WebSocket Service with as many replicas as you want.
Service as Load balancer
Each pod has its own load balancer, which is always the entry point.
In Kubernetes you should create a Service that load-balances traffic across a set of pods.
Your solution
To me, the best approach would be that, once the app starts, it gets the current container's IP and sends it within the register request to the dispatcher, so the dispatcher knows that ContainerID=IP.
Yes, I mostly agree. That is similar to what I have described here, but I would let the WebSocket Service establish the connection to the broker/dispatcher.
Any pod has some information about itself, and one piece of that information is its own IP address. As an example:
apiVersion: v1
kind: Pod
metadata:
  name: envars-fieldref
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "sh", "-c" ]
      args:
        - while true; do
            echo -en '\n';
            printenv MY_POD_IP;
            sleep 10;
          done;
      env:
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
Within the container, MY_POD_IP contains the IP address of the pod. You can let the dispatcher know about it.
$ kubectl logs envars-fieldref
10.52.0.3
$ kubectl get po -owide
NAME              READY   STATUS    RESTARTS   AGE   IP          NODE                             NOMINATED NODE   READINESS GATES
envars-fieldref   1/1     Running   0          31s   10.52.0.3   gke-klusta-lemmy-3ce02acd-djhm   <none>           <none>
Note that it is not a good idea to rely on pod IP addresses, but this should do the trick.
Also, sending a request to the pod and sending it to the container are exactly the same thing.
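A hedged sketch of the registration step described above, run at container startup (the dispatcher-service hostname and the /register endpoint are hypothetical names for illustration, not part of the question's setup):
# register this pod's IP with the dispatcher
# (dispatcher-service and /register are assumed names)
curl -X POST "http://dispatcher-service/register" --data "podIp=${MY_POD_IP}"
The dispatcher could then open a connection directly to http://<podIp>:8082 (the WebSocket port from the manifest above), bypassing the Service load balancing.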

How to make request to Kubernetes service?

When I am trying to send an HTTP request from one pod to another pod within my cluster, how do I target it? By the cluster IP, the service IP, or the service name? I cannot seem to find any documentation on this, even though it seems like such a big part. Any knowledge would help. Thanks!
DNS for Services and Pods should help you here.
apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: mynamespace
spec:
  selector:
    name: myapp
  type: ClusterIP
  ports:
    - name: http
      port: 80
      targetPort: 80
Let's say you have the service defined above and you are trying to call it from the same namespace. You can simply call http://myservice:80, or the fully qualified http://myservice.mynamespace.svc.cluster.local:80. If you want to call the service from another namespace, use http://myservice.mynamespace.svc.cluster.local:80 (or the short form http://myservice.mynamespace).
As @David Maze mentioned, you can find more information about:
"Connecting Applications with Services" and "Exposing the Service",
"testing and debugging",
"Publishing services and service types".
In short: exec into your pod:
kubectl exec -it <your_pod> -- /bin/bash
and perform:
nslookup <your_service>
That way you can check whether your service is reachable via DNS (assuming the service runs in the default namespace); you should see:
<your_service>.default.svc.cluster.local
Then you can check:
curl http://<your_service>
or
curl http://<your_service>.default.svc.cluster.local

Google Compute Kubernetes can only be accessed when nodePort on the NodePort service is 80

Somehow, I am trying to start a Kubernetes project on Google Compute (not GKE). After all the installation (read: docker-ce, kubelet, kubeadm), I created a Service and a Deployment in it as follows:
apiVersion: v1
kind: Service
metadata:
  name: client-node-port
spec:
  type: NodePort
  ports:
    - port: 90
      targetPort: 80
      nodePort: 31515
  selector:
    component: web
It was working until I changed the targetPort in the service to any port besides 80 (along with the Deployment's containerPort).
I already tried enabling the port on the instance: firewall-cmd --permanent --add-port=(any port besides 80)/tcp
Besides that, I also already enabled the firewall rule in the Google Firewall Settings.
Is there anything that I missed? Why can I only access the NodePort when the nodePort setting in the service is 80?
Thanks
PS: If it is relevant, I am using the flannel network.
May I know why you are trying to change targetPort?
targetPort is the port on the pod where your application is actually running.
nodePort is the port on which external users can access the service, via nodeip:nodeport.
port is the same idea but for cluster users, via clusterip:port.
Again, in your case port 80 means the application actually runs on port 80.
You should only change targetPort if the application in the pod runs on a different port.
Review this question for more details.
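To make the mapping concrete for the Service above (values taken from the manifest in the question):
# external users:  http://<node-external-ip>:31515   -> nodePort
# cluster users:   http://client-node-port:90        -> port (ClusterIP)
# inside the pod:  the app must listen on 80         -> targetPort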
