Test a Pod's external IP on a single-node Kubernetes cluster - docker

I am working with Docker Desktop and have a single-node Kubernetes cluster.
I have a containerized web API running in a Pod.
I want to give the Pod an IP address other than localhost.
I created a Service to expose my Pod and used an external IP; here is the YAML file:
apiVersion: v1
kind: Service
metadata:
  name: pred-entrypoint
  namespace: default
spec:
  type: LoadBalancer
  selector:
    app: predictim
  ports:
    - port: 1080
      targetPort: 1080
      nodePort: 30001
  externalIPs:
    - 80.11.12.10
When I run a describe on the service, I get localhost, 80.11.12.10 as the external IPs.
When I test the Pod with PowerShell:
using Invoke-RestMethod -Method POST -Uri http://localhost:1080/predict --> it works fine
using Invoke-RestMethod -Method POST -Uri http://80.11.12.10:1080/predict --> bad address error
I don't know how to test the assigned external IP.
I'm certainly missing something; can you please help me understand?
Thank you so much in advance.

The easiest way to achieve this is to use hostPort. This basically mimics the -p 1080:1080 option of the docker run command. You should be able to use your public IP too, since there is only one node. This has to be done in the pod template.
ports:
  - containerPort: 1080
    hostPort: 1080
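For context, this is roughly where it sits in a Deployment's pod template (a sketch; the Deployment/container names and image are placeholders, only the app: predictim label is taken from your Service above):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: predictim
spec:
  replicas: 1
  selector:
    matchLabels:
      app: predictim
  template:
    metadata:
      labels:
        app: predictim
    spec:
      containers:
        - name: predictim
          image: predictim-api:latest    # placeholder image name
          ports:
            - containerPort: 1080
              hostPort: 1080             # binds port 1080 on the node, like docker run -p 1080:1080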
A more elegant way is to use MetalLB. With MetalLB installed, you can just make your Service type LoadBalancer even if you are not on a cloud provider.
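If you go the MetalLB route, a minimal configuration sketch, assuming a MetalLB version that is configured via CRDs (>= 0.13); the pool name and address range are placeholders you would adapt to addresses you actually control on your network:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example-pool              # placeholder name
  namespace: metallb-system
spec:
  addresses:
    - 80.11.12.10-80.11.12.10     # placeholder range
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - example-pool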

Related

Can't get UDP packets to my container in kubernetes [MiniKube]

I'm new to Kubernetes, but I have managed to build myself a container that runs perfectly directly under Docker and also seems to come up fine in a k8s deployment.
The container is an implementation of a couple of UDP packet replicators from GitHub, installed on Ubuntu.
When it's running directly under Docker on my machine, I can send UDP packets to the container and have them replicated back to different ports on my machine proving that the replication works. (Sending and receiving the packets with netcat).
However, I think I am missing something in the k8s networking side of things.
I am using Minikube on my machine and the following k8s manifest to create the deployment with just one container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: samplicator-deployment
  labels:
    app: samplicator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: samplicator
  template:
    metadata:
      labels:
        app: samplicator
    spec:
      containers:
        - name: samplicator-container-01
          image: dgbes/udp-fan-out-tools:latest
          command: ["samplicate"]
          args: ["-p3160","192.168.1.159/3161","192.168.1.159/3162"]
          ports:
            - name: receiver
              protocol: UDP
              containerPort: 3160
I then create the deployment with: kubectl apply -f create-samplicator-deployment.yaml
I then set up a couple of UDP listeners with netcat on my host machine with nc -ulk -p 3161 and nc -ulk -p 3162.
If I then connect to the running container with kubectl exec --stdin --tty samplicator-deployment-{randomPodName} -- /bin/bash and manually use netcat send packets to my host machine I can see those arriving no problem.
I find the container/pod IP address with kubectl get pod -o wide.
When I try to send a packet to the samplicator process in the pod, though, I see nothing coming back to my host machine.
So, I then spawned a shell in the container/pod, checked that the samplicator process was running correctly (it was), and installed netcat in the container instance.
Using netcat -u {my host machine IP} 3161 I can send packets from the container to my host machine and they are received no problem.
So, it seems that the issue is getting the packets TO the container.
I confirmed this by running nc -ulk -p 3600 in the container shell and sending a packet from my host to that port in the container - nothing is received in the container.
I am aware that the ports need to be exposed on the container and that 'services' are used for this, but I thought that was what the ports: section in the template spec of the deployment was doing.
That didn't create a service to expose the port, so I added a service definition to the end of my deployment manifest YAML as follows:
---
apiVersion: v1
kind: Service
metadata:
  name: samplicator-service
spec:
  selector:
    app: samplicator
  type: LoadBalancer
  ports:
    - name: receiver-service
      protocol: UDP
      port: 3160
      targetPort: 3160
I'm obviously missing something here, and my apologies if my k8s terminology is a bit mangled - as I say I'm completely new to k8s.
Any pointers to how to correctly make that UDP port reachable?
I figured it out...
Service type=LoadBalancer doesn't work out of the box with Minikube, as it relies on an external cloud provider to allocate the external IP.
I noticed that my service was stuck in a pending state with the external IP never being allocated, which led me to this answer.
The magic is to simply run the following command in a separate terminal window, which then allows the external IP to be allocated to the service.
minikube tunnel
Once that was run and the ExternalIP allocated, sending packets to that ExternalIP works perfectly.
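For completeness, a sketch of the verification steps, using the service name from the manifest above (the external IP you get back depends on your setup):
# in one terminal, keep the tunnel running
minikube tunnel

# in another terminal: EXTERNAL-IP should now be populated instead of <pending>
kubectl get service samplicator-service

# send a test UDP packet to the allocated external IP
echo "test" | nc -u <EXTERNAL-IP> 3160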

Kubernetes: Frontend-Pod cannot resolve dns of Backend-Service (using Minikube)

I am learning Kubernetes and I ran into trouble reaching an API in my local Minikube (Docker driver).
I have a pod running an Angular client which tries to reach a backend pod. The frontend pod is exposed by a NodePort service. The backend pod is exposed to the cluster by a ClusterIP service.
But when I try to reach the ClusterIP service from the frontend, the DNS name transpile-svc.default.svc.cluster.local cannot be resolved.
error message in the client
The DNS should work properly. I followed https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/ and deployed a dnsutils pod from which I can nslookup.
winpty kubectl exec -i -t dnsutils -- nslookup transpile-svc.default
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: transpile-svc.default.svc.cluster.local
Address: 10.99.196.82
This is the .yaml file for the clusterIP Service
apiVersion: v1
kind: Service
metadata:
name: transpile-svc
labels:
app: transpile
spec:
selector:
app: transpile
ports:
- port: 80
targetPort: 80
Even if I hardcode the IP into the frontend's request, I get an empty response.
I verified that the backend pod is working correctly, and when I expose it as a NodePort I can reach the API with my browser.
What am I missing here? I have been stuck on this problem for quite some time now and I can't find a solution.
Since your frontend application is calling your backend from outside the cluster, you need to expose your backend application to the outside network too.
There are two ways: either expose it directly by changing the transpile-svc service to the LoadBalancer type, or introduce an ingress controller (e.g. the NGINX ingress controller with an Ingress object) which will handle all redirections.
Steps to expose the service as a LoadBalancer in Minikube (the commands are sketched below):
1. Change your transpile-svc service type to LoadBalancer.
2. Run minikube service transpile-svc to expose the service, i.e. an IP will be allocated.
3. Run kubectl get services to get the assigned external IP. Use IP:PORT to call it from the frontend application.
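On the command line the steps above might look roughly like this (a sketch; patching is just one way to change the service type):
# 1. change the service type to LoadBalancer
kubectl patch service transpile-svc -p '{"spec": {"type": "LoadBalancer"}}'

# 2. expose the service via minikube (allocates an IP / opens a tunnel)
minikube service transpile-svc

# 3. check the assigned external IP, or grab the URL directly
kubectl get services transpile-svc
minikube service transpile-svc --url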
DNS hostnames like *.*.svc.cluster.local are only resolvable from within the Kubernetes cluster. You should use http://NODEIP:NODEPORT or the URL provided by minikube service transpile-svc --url in the frontend JavaScript code, which runs in a browser outside the Kubernetes cluster.
If the frontend pod runs nginx, you can configure the backend service name in the nginx configuration file as described in the docs:
upstream transpile {
  # resolves to the transpile-svc ClusterIP via cluster DNS
  server transpile-svc;
}
server {
  listen 80;
  location / {
    proxy_pass http://transpile;
  }
}

GKE - Bypass the Pod's LoadBalancer (external IP) and reach a specific container's IP at runtime for WebSockets

I have the following situation:
I have a couple of microservices, only 2 are relevant right now.
- Web Socket Service API
- Dispatcher Service
We have 3 users that we'll call 1, 2, and 3. These users connect to the WebSocket endpoint of our backend. Our microservices are running on Kubernetes and each service can be replicated multiple times inside Pods. For this situation, we have 1 running container for the dispatcher and 3 running containers for the WebSocket API. Each pod has its own load balancer, which is always the entry point.
In our situation, we will then have the following "schema":
Now that we have a representation of our system (and a legend), our 3 users will want to use the app and connect.
As we can see, the load balancer of our pod forwarded the WebSocket connections of our users across the different containers. Each container, once it gets a new connection, lets the Dispatcher Service know, and the dispatcher saves it in its own database.
Now, 3 users are connected to 2 different containers and the Dispatcher service knows it.
User 1 wants to message user 2. Container A will then get the message and tell the Dispatcher Service: please send this to user 2.
As the dispatcher knows which container user 2 is connected to, I would like to send a request directly to my container instead of sending it to the Pod. Sending it to the Pod results in sending a request to a load balancer, which dispatches the request to the most available container instance...
How could I manage to get the container IP? Can it be accessed by another container from another Pod?
To me, the best approach would be that, once the app starts, it gets the current container's IP and then sends it within the register request to the dispatcher, so the dispatcher would know that ContainerID=IP.
Thanks!
edit 1
Here is my web-socket-service-api.yaml:
apiVersion: v1
kind: Service
metadata:
  name: web-socket-service-api
spec:
  ports:
    # Port that accepts gRPC and JSON/HTTP2 requests over HTTP.
    - port: 8080
      targetPort: 8080
      protocol: TCP
      name: grpc
    # Port that accepts gRPC and JSON/HTTP2 requests over HTTP.
    - port: 8081
      targetPort: 8081
      protocol: TCP
      name: rest
    # Port that accepts WebSockets.
    - port: 8082
      targetPort: 8082
      protocol: TCP
      name: websocket
  selector:
    app: web-socket-service-api
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web-socket-service-api
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: web-socket-service-api
    spec:
      containers:
        - name: web-socket-service-api
          image: gcr.io/[PROJECT]/web-socket-service-api:latest
          ports:
            - containerPort: 8080
            - containerPort: 8081
            - containerPort: 8082
Dispatcher ≈ Message broker
As I understand your design, your Dispatcher is essentially a message broker for the pods of your WebSocket Service. Let all WebSocket pods connect to the broker and let the broker route messages. This is a stateful service, and you should use a StatefulSet for it in Kubernetes. Depending on your requirements, a possible solution could be to use an MQTT broker, e.g. Mosquitto. Most MQTT brokers have support for WebSockets.
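A rough sketch of what that could look like with Mosquitto (names here are placeholders; the eclipse-mosquitto image exposes MQTT on 1883, and WebSocket support additionally needs a listener configured in mosquitto.conf):
apiVersion: v1
kind: Service
metadata:
  name: dispatcher-broker          # placeholder name
spec:
  clusterIP: None                  # headless service for the StatefulSet
  selector:
    app: dispatcher-broker
  ports:
    - name: mqtt
      port: 1883
    - name: websocket
      port: 9001
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dispatcher-broker
spec:
  serviceName: dispatcher-broker
  replicas: 1
  selector:
    matchLabels:
      app: dispatcher-broker
  template:
    metadata:
      labels:
        app: dispatcher-broker
    spec:
      containers:
        - name: mosquitto
          image: eclipse-mosquitto:2
          ports:
            - containerPort: 1883
            - containerPort: 9001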
Scale out: Multiple replicas of pods
each services can be replicated multiple times inside Pods. For this situation, we have 1 running container for the dispatcher, and 3 running containers for the web socket api.
This is not how Kubernetes is intended to be used. Use multiple replicas of pods instead of multiple containers in the pod. I recommend that you create a Deployment for your WebSocket Service with as many replicas as you want.
Service as Load balancer
Each pod has its Load Balancer and this will be each time the entry point.
In Kubernetes you should create a Service that load balances traffic to a set of pods.
Your solution
To me, the best approach would be that, once the app start, it gets the current container's IP and then send it within the register request to the dispatcher, so the dispatcher would know that ContainerID=IP
Yes, I mostly agree. That is similar to what I have described here, but I would let the WebSocket Service establish the connection to the Broker/Dispatcher.
Any pod has some information about itself, and one piece of that information is its own IP address. As an example:
apiVersion: v1
kind: Pod
metadata:
  name: envars-fieldref
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "sh", "-c"]
      args:
        - while true; do
            echo -en '\n';
            printenv MY_POD_IP;
            sleep 10;
          done;
      env:
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
Within the container, MY_POD_IP would contain the IP address of the pod. You can let the dispatcher know about it.
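So at startup each WebSocket pod could register itself with something along these lines (a sketch only; the dispatcher-service hostname and the /register endpoint are hypothetical, just to illustrate the idea):
# run at container startup; MY_POD_IP comes from the downward API as shown above
curl -X POST "http://dispatcher-service:8080/register" \
     -H "Content-Type: application/json" \
     -d "{\"podIp\": \"${MY_POD_IP}\"}"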
$ kubectl logs envars-fieldref
10.52.0.3
$ kubectl get po -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
envars-fieldref 1/1 Running 0 31s 10.52.0.3 gke-klusta-lemmy-3ce02acd-djhm <none> <none>
Note that it is not a good idea to rely on the pod IP address, but this should do the trick.
Also, sending a request to the pod and sending a request to the container are exactly the same thing.

How to make request to Kubernetes service?

When I am trying to send an HTTP request from one pod to another pod within my cluster, how do I target it? By the cluster IP, the service IP, the service name? I cannot seem to find any documentation on this even though it seems like such a big part. Any knowledge would help. Thanks!
DNS for Services and Pods should help you here.
apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: mynamespace
spec:
  selector:
    name: myapp
  type: ClusterIP
  ports:
    - name: http
      port: 80
      targetPort: 80
Let's say you have a service defined as such and you are trying to call the service from the same namespace. You can simply call http://myservice:80, since the namespace is filled in by the DNS search path. If you want to call the service from another namespace, use the fully qualified name http://myservice.mynamespace.svc.cluster.local:80.
As #David Maze mentioned,
you can find more information about:
"Connecting Applications with Services and Exposing the Service",
"testing and debugging",
"Publishing services and service types"
In short:
Please exec into your pod:
kubectl exec -it <your_pod> -- /bin/bash
then run:
nslookup <your_service>
That way you can check whether your service is working using DNS (assuming your service is running in the default namespace); you should see:
<your_service>.default.svc.cluster.local
Then you can check:
curl http://<your_service>
or
curl http://<your_service>.default.svc.cluster.local

Access service on subdomain in Kubernetes

I have following setup:
Private OpenStack Cloud - only Web UI (Horizon) is accessible
(API is restricted but maybe I could get access)
I have used CoreOS with a setup of one master and three nodes
Resources are standardized (as default of OpenStack)
I followed the getting-started guide for CoreOS (i.e. I'm using the default YAMLs for cloud-config provided) on GitHub
As I read, extensions such as the Web UI (kube-ui) can be added as add-ons - which I have done (only kube-ui).
Now if I run a test such as simple-nginx I get the following output:
creating pods:
$ kubectl run my-nginx --image=nginx --replicas=2 --port=80
creating service:
$ kubectl expose rc my-nginx --port=80 --type=LoadBalancer
NAME       LABELS         SELECTOR       IP(S)   PORT(S)
my-nginx   run=my-nginx   run=my-nginx           80/TCP
get service info:
$ kubectl describe service my-nginx
Name: my-nginx
Namespace: default
Labels: run=my-nginx
Selector: run=my-nginx
Type: LoadBalancer
IP: 10.100.161.90
Port: <unnamed> 80/TCP
NodePort: <unnamed> 31170/TCP
Endpoints: 10.244.19.2:80,10.244.44.3:80
Session Affinity: None
No events.
I can access my service from every(!) external IP of the nodes.
My question now is as follows:
How can I access any started service via a subdomain, and how do I set up this configuration (for example with domain.com)? Or could it be printed out which node IP I have to use to access my service (although I have only two replicas(?!))?
To make my thoughts more understandable, I mean the following:
given domain: domain.com (pointing to master)
start service simple-nginx
service can be accessed with simple-nginx.domain.com
Does your OpenStack cloud provider implementation support services of type LoadBalancer?
If so, the service controller should assign an ingress IP or hostname to the service, which should eventually show up in kubectl describe svc output. You could then set up external DNS for it.
If not, just use type=NodePort, and you'll still get a NodePort on each node. You can then follow the advice in the comment to create an Ingress resource, which can do the port and host remapping.
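A sketch of such an Ingress for the my-nginx service from the example above, using the current networking.k8s.io/v1 API (the ingressClassName is an assumption and depends on which ingress controller you install):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-nginx
spec:
  ingressClassName: nginx          # assumes an NGINX ingress controller is installed
  rules:
    - host: simple-nginx.domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-nginx
                port:
                  number: 80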
