Kubernetes echo pod from different external IP - docker

I have a k8s cluster with 3 nodes, and each node runs 4 pods.
I want each pod to get a different external IP. How can I do that with K8s/Docker?

You can't assign an external IP to a pod.
In order to expose your application outside of your cluster through an external IP, you need to create a Service.
You can find an example in the official docs.
You might also want to read the documentation about Services.

To expose your application to the outside world, you need to create a Service, and the Service will provide you with an external IP address.
A Service can target one or more pods. If it targets more than one pod, the Service picks one of those pods to satisfy each request.
Your case:
You want an external IP for each pod, so create one Service per pod: 4 pods require 4 external IPs, and that is why 4 Services are needed.
After creating a Service successfully, you can display information about it:
kubectl get services my-service
The output is similar to this:
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)    AGE
my-service   LoadBalancer   10.3.245.137   104.198.205.71   8080/TCP   54s
In your case you should see 4 Services with 4 external IP addresses.
See the official Kubernetes website to learn how to create a Service; a sketch of one such Service follows.
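A minimal sketch of one such Service, assuming each pod carries a unique label such as pod: pod-1 (the names, labels, and port are placeholders, not taken from the question):

apiVersion: v1
kind: Service
metadata:
  name: my-service-1            # one Service per pod: my-service-1 ... my-service-4
spec:
  type: LoadBalancer            # the cloud provider allocates a distinct external IP
  selector:
    app: my-app
    pod: pod-1                  # unique label so this Service targets exactly one pod
  ports:
    - port: 8080
      targetPort: 8080

Repeat this with pod: pod-2, pod-3, pod-4 to get four external IPs.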

You cannot assign an external IP to the node.
What you can do:
Create 4 Deployments with 1 replica each, or deploy 4 pods with different labels.
Create 4 Services for those 4 Deployments (or for the 4 specific labels above), each with the specific external IP you want (see the Deployment sketch below).
Docs: external IP, deployment
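For illustration, a minimal single-replica Deployment carrying one of those unique labels (image, names, and labels are placeholders); a Service like the sketch above would then select it:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-1
spec:
  replicas: 1                   # one pod per Deployment
  selector:
    matchLabels:
      app: my-app
      pod: pod-1
  template:
    metadata:
      labels:
        app: my-app
        pod: pod-1              # unique label, matched by exactly one Service
    spec:
      containers:
        - name: my-app
          image: my-app:latest  # placeholder image
          ports:
            - containerPort: 8080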

Related

Kubernetes-Node monitors pods and executes related scripts

I have deployed a tool in a pod and need to enable port mapping on the node. But once the pod is rebuilt, its location may change and its IP will also change. Is there a corresponding resolution mechanism in k8s?
Services.
There are several options depending on how you want to expose them, but the key point is that with a Service you get a single, stable access endpoint/IP address.
The 4 Service types:
ClusterIP - accessible internally within the cluster.
NodePort - a port opened on all your nodes that you can point your own LB to.
LoadBalancer - ties into an infrastructure LB such as an AWS ELB, GCP LB, etc.
ExternalName - maps to something outside your cluster.
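As an illustration, a minimal ClusterIP Service (name, label, and ports are placeholders). Pods can then reach the tool at the stable DNS name my-tool.<namespace>.svc regardless of which node the pod lands on or which IP it gets:

apiVersion: v1
kind: Service
metadata:
  name: my-tool                 # stable name, resolvable via cluster DNS
spec:
  type: ClusterIP
  selector:
    app: my-tool                # matches the pod's labels, so it follows the pod across rebuilds
  ports:
    - port: 80
      targetPort: 8080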

Forward all service ports to a single container

I would like to run a container in Kubernetes with a static IP. I found out that only a Service can provide an IP address.
Is it possible to map a Service to one pod and forward all ports?
A Service discovers pods based on labels and selectors, so it is not necessary to reference a pod from a Service by IP address. However, if you wish, you can override this behaviour and manually configure your own ClusterIP for the Service.
Once the Pod and Service have been created, other pods in your cluster will be able to reach the pod via the name of the Service, provided they are in the same namespace. If they are not, you will need to use the FQDN of the Service.
If you are trying to access the pod from outside of Kubernetes, then you will need a Service with a type other than ClusterIP, for example a NodePort or a LoadBalancer. Alternatively, if you already have an Ingress controller with a gateway provisioned, you could use that.
As for forwarding all ports: this is not possible, because port declarations in a Service must be statically mapped. It is not currently possible to specify a port range, although there is a long-standing feature request for it.
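A rough sketch of such a Service with a manually chosen ClusterIP and an explicitly enumerated port list (the IP must fall inside your cluster's service CIDR; all names and numbers here are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-pod-svc
spec:
  clusterIP: 10.96.100.100      # manually chosen; must be inside the service CIDR and unused
  selector:
    app: my-pod                 # label on the single pod
  ports:                        # every port must be listed explicitly; ranges are not supported
    - name: http
      port: 8080
      targetPort: 8080
    - name: metrics
      port: 9090
      targetPort: 9090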

Accessing a k8s service with cluster IP in default namespace from a docker container

I have a server that is orchestrated using k8s. Its Service looks like this:
➜  installations ✗ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)             AGE
oxd-server   ClusterIP   10.96.124.25   <none>        8444/TCP,8443/TCP   3h32m
and its pod:
➜  helm git:(helm-rc1) ✗ kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
sam-test-oxd-server-6b8f456cb6-5gwwd   1/1     Running   0          3h2m
Now, I have a Docker image with an env variable that requires the URL of this server.
I have 2 questions from here:
How can the Docker image get or access that URL?
How can I access the same URL from my terminal so I can make some curl requests to it?
I hope the explanation is clear.
If your Docker container is outside the Kubernetes cluster, then it's not possible to access your ClusterIP Service.
As you could guess from its name, a ClusterIP type Service is only accessible from within the cluster.
By "within the cluster" I mean any resource managed by Kubernetes.
A standalone Docker container running inside a VM that is part of your K8s cluster is not a resource managed by K8s.
So, in order to achieve what you want, you have these possibilities:
1. Set a hostPort inside your pod. This is not recommended and is listed as a bad practice in the docs. Keep this for very specific cases.
2. Switch your Service to NodePort instead of ClusterIP. This way you'll be able to access it using a node IP plus the node port (a rough sketch follows after this list).
3. Use a LoadBalancer type of Service, but this solution needs some configuration and is not straightforward.
4. Use an Ingress along with an Ingress controller, but just like the load balancer, this solution needs some configuration and is not that straightforward.
Depending on what you do and how critical it is, you'll have to choose one of these solutions:
1 & 2 for debug/dev
3 & 4 for prod, but you'll have to work with your k8s admin
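A rough sketch of option 2, switching the existing Service to NodePort (the selector and nodePort values are assumptions, not taken from the question; pick nodePorts from your cluster's allowed range, 30000-32767 by default):

apiVersion: v1
kind: Service
metadata:
  name: oxd-server
spec:
  type: NodePort                # was ClusterIP
  selector:
    app: oxd-server             # assumed pod label; adjust to match your chart
  ports:
    - name: https
      port: 8443
      targetPort: 8443
      nodePort: 30443           # placeholder; reachable at <any-node-ip>:30443
    - name: admin
      port: 8444
      targetPort: 8444
      nodePort: 30444           # placeholder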
You can use the name of the Service, oxd-server, from any other pod in the same namespace to access it, i.e. if the Service is backed by pods serving HTTPS, you can reach it at https://oxd-server:8443/.
If the client pod that wants to access this Service is in a different namespace, then you can use the oxd-server.<namespace> name. In your case that would be oxd-server.default, since your Service is in the default namespace.
To access this Service from outside the cluster (from your terminal) for local debugging, you can use port forwarding:
kubectl port-forward svc/oxd-server 8443:8443
Then you can use the URL localhost:8443 to make requests, and they will be port-forwarded to the Service.
If you want to access this Service from outside the cluster for production use, you can make the Service type: NodePort or type: LoadBalancer. See Service types here.
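For example, you could test both paths roughly like this (the curl image and the -k flag are illustrative assumptions, e.g. for a self-signed certificate):

# from inside the cluster, via a temporary pod, using the Service DNS name
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -k https://oxd-server.default:8443/

# from your terminal, after starting the port-forward shown above
curl -k https://localhost:8443/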

2 ports for 1 Ingress / Services / StatefulSets / pods

Requirement:
I have two Docker containers, each exposed on a different port (for example, ports 9001 and 9002).
From this requirement I tried to design the Kubernetes objects and their relationships, but I am unsure whether A or B is correct.
A) 1 Ingress connects to 1 Service, and that Service connects to 1 StatefulSet with 1 pod of 2 containers.
B) 2 Ingresses connect to 2 Services, and those Services connect to 2 StatefulSets with 2 pods, where every pod has 1 container.
I want to ask the following questions:
Can 1 Ingress, 1 Service, 1 StatefulSet, or 1 pod serve 2 ports? If so, then A is probably correct; otherwise B is correct.
Also, based on my question, can anyone tell me whether my understanding of Kubernetes is correct or wrong?
You can run two containers in the same pod.
The Java app can run on port 8080, and the Ethereum node can run on port 3306.
Then you can use localhost:8080 from within the pod to reach the Java app, and the Java app can reach the Ethereum node on localhost:3306.
If no access from outside the cluster is required, an Ingress is not required.
Hope that answers your question.
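A minimal sketch of option A's pod with the two containers (image names are placeholders; the ports follow the answer above). Containers in the same pod share a network namespace, which is why they can reach each other on localhost:

apiVersion: v1
kind: Pod
metadata:
  name: two-container-pod
  labels:
    app: my-app
spec:
  containers:
    - name: java-app
      image: my-java-app:latest   # placeholder image
      ports:
        - containerPort: 8080
    - name: ethereum
      image: my-ethereum:latest   # placeholder image
      ports:
        - containerPort: 3306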
From what I understood, you only need 1 stateful app; the other one can be stateless and persist its data in the first app. If you are following the microservices paradigm, you should separate your apps into stateless and stateful services.
It's also important to note that unless two containers are tightly coupled they shouldn't be in the same pod. The separation allows more flexible scalability, as you don't need to have the same number of replicas of both containers.
In conclusion, I would do the following:
Create a Deployment containing only the image of your stateless app.
Create a StatefulSet for the stateful app.
Create 2 Services, one for each app.
Create 1 Ingress for your stateless app (sketched below).
The Ingress allows routing of requests coming from outside the cluster to your app's Service. Even if your stateful app doesn't communicate with the external world, creating a Service for it allows easier communication between the apps inside the cluster (you can use a fixed IP or even DNS).
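A rough sketch of that single Ingress pointing at the stateless app's Service (host, names, and port are placeholders, and it assumes an Ingress controller is already installed):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: stateless-app
spec:
  rules:
    - host: app.example.com           # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: stateless-app   # the Service created for the stateless app
                port:
                  number: 9001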
1 Ingress can serve any number of Services and routes external connections to Services based on the host and path.
1 Service can serve multiple ports, and (when exposed as NodePort) it assigns a different nodePort mapping to each port.
1 pod can also serve multiple ports; it depends on the ports you expose in your manifest.
StatefulSets are much like pod manifests; they just provide the add-on functionality of volume persistence.
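To illustrate the multi-port point, a minimal Service exposing both ports from the question (name and labels are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - name: first
      port: 9001
      targetPort: 9001
    - name: second
      port: 9002
      targetPort: 9002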

Kubernetes: multiple pods in a node when each pod exposes a port

I was following along with the Hello, World example in Kubernetes getting started guide.
In that example, a cluster with 3 nodes/instances is created on Google Container Engine.
The container to be deployed is a basic nodejs http server, which listens on port 8080.
Now when I run
kubectl run hello-node --image <image-name> --port 8080
it creates a pod and a deployment, deploying the pod on one of the nodes.
Running the
kubectl scale deployment hello-node --replicas=4
command increases the number of pods to 4.
But since each pod exposes port 8080, won't that create a port conflict on a node where two pods are deployed?
I can see 4 pods when I run kubectl get pods, but what will the behaviour be in this case?
I got some help in the #kubernetes-users channel on Slack:
The port specified in kubectl run ... is the pod's port. Each pod has its own unique IP address, so there are no port conflicts.
The pods won't serve traffic until you expose them as a Service.
Exposing a Service by running kubectl expose ... assigns a NodePort (in the range 30000-32767) on every node. This port must be unique for every Service.
If a node has multiple pods of the Service, kube-proxy balances the traffic between them.
Also, when I accessed my Service from the browser, I was able to see logs in all 4 pods, so the traffic was served by all 4 pods.
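For reference, the expose step in that tutorial looks roughly like this (the exact flags depend on the version of the guide you follow):

kubectl expose deployment hello-node --type=LoadBalancer --port=8080
kubectl get services hello-node    # shows the assigned external IP / node port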
There is a difference between the port that your pod exposes and the physical ports on your node. These need to be linked, for instance by a Kubernetes Service or a load balancer, as discussed a bit further on in the hello-world documentation: http://kubernetes.io/docs/hellonode/#allow-external-traffic
