I have deployed a tool in a pod and need to enable port mapping on the node. But once the pod is rebuilt, its location may change and its IP will change with it. Is there a corresponding resolution mechanism in Kubernetes?
Services.
There are four options depending on how you want to expose them, but the key point is that with a Service you maintain a single, stable access endpoint/IP address (a minimal manifest sketch follows the list below).
The 4 options:
ClusterIP - Accessible internally within the cluster only.
NodePort - Opens a port on all your nodes, which you can point your own LB to.
LoadBalancer - Ties to an infrastructure LB like AWS ELB, GCP LB, etc.
ExternalName - Maps the Service to a DNS name outside your cluster.
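For instance, here is a minimal sketch of such a Service (the name "my-tool" and the port numbers are illustrative placeholders, not from the question). It resolves pods by label, so the pod's IP can change freely without breaking the endpoint:

# Minimal sketch of a Service giving pods a stable endpoint.
apiVersion: v1
kind: Service
metadata:
  name: my-tool
spec:
  type: ClusterIP        # the default; swap for NodePort/LoadBalancer to expose externally
  selector:
    app: my-tool         # matches any pod labeled app=my-tool, wherever it lands
  ports:
    - port: 80           # port the Service listens on
      targetPort: 8080   # containerPort inside the pod

Inside the cluster, clients then reach the tool at my-tool.<namespace>.svc.cluster.local no matter how often the pod is rescheduled.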
Related
I have a k8s cluster with 3 nodes, and each node runs 4 pods.
I want each pod to get a different external IP. How is that possible with K8s/Docker?
You can't assign an external IP to a pod.
In order to expose your application outside of your cluster through an external IP, you need to create a service.
There is an example of this in the official docs.
You might also want to read the documentation about Services.
To expose your application to the outside world, you need to create a Service, and it will provide you with an external IP address.
This Service can apply to one or more pods. If it applies to more than one pod, the Service will pick one of them (effectively at random) to satisfy each request.
Your case:
You want an external IP for each pod, so create one Service per pod. 4 pods require 4 IPs, and that's why you need 4 Services.
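As a hedged sketch of that pattern (the names and labels here are made up): give each pod a unique label and create one LoadBalancer Service per label.

apiVersion: v1
kind: Service
metadata:
  name: my-service-1          # repeat as my-service-2 ... my-service-4
spec:
  type: LoadBalancer          # each Service is assigned its own external IP
  selector:
    pod-id: pod-1             # a label carried by exactly one pod
  ports:
    - port: 8080
      targetPort: 8080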
Once the Service is created successfully, display information about it:
kubectl get services my-service
The output is similar to this:
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)    AGE
my-service   LoadBalancer   10.3.245.137   104.198.205.71   8080/TCP   54s
In your case, you should see 4 Services with 4 external IP addresses.
See the official Kubernetes website to learn how to create a Service.
You cannot assign an external IP to the node.
What you can do:
Create 4 Deployments with 1 replica each, or deploy 4 pods with different labels.
Create 4 Services pointing at those 4 Deployments (or at the 4 specific labels above), each with the specific external IP you desire, as sketched below.
Docs: external IP, deployment
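If you want to pin specific addresses yourself, one option (a sketch; the IP and names are illustrative) is the Service's externalIPs field, which routes traffic arriving at that address on the service port to the selected pods:

apiVersion: v1
kind: Service
metadata:
  name: app-1
spec:
  selector:
    app: app-1               # the unique label from the first step
  ports:
    - port: 80
      targetPort: 8080
  externalIPs:
    - 192.168.0.201          # an address you own and route to a node yourself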
I am deploying Docker containers on a Kubernetes cluster with 2 nodes. The containers need to have port 50052 open. My understanding was that I just need to define a containerPort (50052) and have a Service that points to this.
But when I deploy this, only the first 2 pods spin up successfully. After that, I get the following message, presumably because the new pods are trying to open port 50052, which is already in use.
0/2 nodes are available: 2 node(s) didn't have free ports for the requested pod ports.
I thought that multiple pods with the same requested port could be scheduled on the same node? Or is this not right?
Thanks, I figured it out -- I had set hostNetwork to true in my Kubernetes deployment. Changing it back to false fixed my issue.
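For reference, hostNetwork lives in the pod template of the Deployment. A hedged sketch (the image and names are placeholders): with hostNetwork: false, or the field omitted, each pod gets its own network namespace, so many pods can open containerPort 50052 on the same node.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpc-server              # placeholder name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: grpc-server
  template:
    metadata:
      labels:
        app: grpc-server
    spec:
      hostNetwork: false         # true binds pods to the node's own network,
                                 # so port 50052 is schedulable only once per node
      containers:
        - name: grpc-server
          image: example/grpc-server:latest   # placeholder image
          ports:
            - containerPort: 50052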
You are right, multiple pods with the same port can exist in a cluster. The Service in front of them has to have type: ClusterIP.
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
To avoid port clashes you should not use NodePort as the port type, because if you have 2 nodes and 4 pods, more than one pod will exist on each node, causing a port clash.
Depending on how you want to reach your cluster, you have then different options...
I would like to run a container in Kubernetes with a static IP. I found out that only a Service can provide an IP address.
Is it possible to map a Service to one pod and forward all ports?
A Service discovers pods based on labels and selectors, so it is not necessary to use an IP address to statically reference a pod from a Service. However, if you wish, you can override this behavior and manually configure your own ClusterIP for the Service.
Once the Pod and Service have been created, other pods in your cluster will be able to interact with the pod via the Name of the Service provided they are in the same namespace. If they are not, you will need to pass the FQDN of the service.
If you are trying to access the pod from outside of Kubernetes, then you will need to use a Service with a different type than ClusterIP. For example, a NodePort or a LoadBalancer. Alternatively, if you have an Ingress Controller with a gateway already provisioned you could use that.
With regards to your desire to forward all ports, this is not possible, as port declarations in Service manifests must be statically mapped. It is not currently possible to specify a port range, but there is a long-standing feature request for it.
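Putting those pieces together, a sketch of a Service pinned to a single pod (the IP, labels, and ports are illustrative; every port has to be listed explicitly, since ranges are not supported):

apiVersion: v1
kind: Service
metadata:
  name: single-pod-svc
spec:
  clusterIP: 10.96.0.200      # manually chosen; must lie inside the cluster's service CIDR
  selector:
    instance: pod-a           # a label unique to the one pod
  ports:                      # each forwarded port declared individually
    - name: http
      port: 80
      targetPort: 8080
    - name: grpc
      port: 50052
      targetPort: 50052

Other pods in the same namespace can then reach it as single-pod-svc; from other namespaces, use the FQDN single-pod-svc.<namespace>.svc.cluster.local.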
Rancher 2 provides 4 options in the "Ports" section when deploying a new workload:
NodePort
HostPort
Cluster IP
Layer-4 Load Balancer
What are the differences? Especially between NodePort, HostPort and Cluster IP?
HostPort (nodes running a pod): Similar to Docker, this will open a port on the node on which the pod is running (this allows you, for example, to open port 80 on the host). This is pretty easy to set up and run, however:
Don't specify a hostPort for a Pod unless it is absolutely necessary. When you bind a Pod to a hostPort, it limits the number of places the Pod can be scheduled, because each hostIP, hostPort, protocol combination must be unique. If you don't specify the hostIP and protocol explicitly, Kubernetes will use 0.0.0.0 as the default hostIP and TCP as the default protocol.
(kubernetes.io)
NodePort (on every node): Is restricted to ports in the 30000-32767 range by default. This usually only makes sense in combination with an external load balancer (in case you want to publish a web application on port 80).
If you explicitly need to expose a Pod's port on the node, consider using a NodePort Service before resorting to hostPort.
(kubernetes.io)
Cluster IP (internal only): As the description says, this will open a port that is only available to internal applications running in the same cluster. A Service using this option is accessible via the internal cluster IP (see the sketch below for the first two options side by side).
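To make the difference concrete, here is a hedged sketch of the same port exposed both ways (the image, names, and port numbers are placeholders):

# hostPort is set per container in the pod spec and binds only on the pod's node:
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
    - name: web
      image: nginx:alpine      # example image
      ports:
        - containerPort: 80
          hostPort: 80         # opens port 80 on whichever node runs this pod
---
# NodePort is set on a Service and opens the port on every node:
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080          # must fall inside the default 30000-32767 range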
Host Port:
- When a pod is using a hostPort, a connection to the node's port is forwarded directly to the pod running on that node.
- The node's port is only bound on nodes that run such pods.
- The hostPort feature is primarily used for exposing system services, which are deployed to every node using DaemonSets.
Node Port:
- With a NodePort service, a connection to the node's port is forwarded to a randomly selected pod (possibly on another node).
- NodePort services bind the port on all nodes, even on those that don't run such a pod.
Cluster IP:
- Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.
Q: What happens when many pods are running on the same node with NodePort?
A: With NodePort it doesn't matter if you have one or multiple nodes; the port is available on every node.
We have a private Kubernetes cluster running on a bare-metal CoreOS cluster (with Flannel for the network overlay) with private addresses.
On top of this cluster we run a kubernetes ReplicationController and Service for elasticsearch. To enable load-balancing, this service has a ClusterIP defined - which is also a private IP address: 10.99.44.10 (but in a different range to node IP addresses).
The issue that we face is that we wish to be able to connect to this ClusterIP from outside the cluster. As far as we can tell this private IP is not contactable from other machines in our private network...
How can we achieve this?
The IP addresses of the nodes are:
node 1 - 192.168.77.102
node 2 - 192.168.77.103
...
and this is how the Service, RC and Pod appear with kubectl:
NAME            LABELS   SELECTOR            IP(S)         PORT(S)
elasticsearch   <none>   app=elasticsearch   10.99.44.10   9200/TCP

CONTROLLER      CONTAINER(S)    IMAGE(S)        SELECTOR            REPLICAS
elasticsearch   elasticsearch   elasticsearch   app=elasticsearch   1

NAME                  READY   STATUS    RESTARTS   AGE
elasticsearch-swpy1   1/1     Running   0          26m
You need to set the type of your Service.
http://docs.k8s.io/v1.0/user-guide/services.html#external-services
If you are on bare metal, you don't have an integrated LoadBalancer. You can use a NodePort Service to get a port on each VM, and then set up whatever you use for load balancing to aim at that port on any node.
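A sketch for the elasticsearch Service above (the nodePort value is an illustrative pick): set the type to NodePort, then aim your external load balancer at that port on any node, e.g. 192.168.77.102:30920 or 192.168.77.103:30920.

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
spec:
  type: NodePort
  selector:
    app: elasticsearch
  ports:
    - port: 9200
      targetPort: 9200
      nodePort: 30920          # exposed on every node's IP at this port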
You can use NodePort, but you can also use hostPort for some DaemonSets and Deployments, and hostNetwork to give a pod full access to the node's network.
IIRC, if you have a recent enough Kubernetes, each node can forward traffic into the internal network, so if you create the correct routing in your clients/switch, you can reach the internal network by delivering those TCP/IP packets to any one node. The node will then receive the packet and SNAT+forward it to the ClusterIP or pod IP.
Finally, bare-metal clusters can now use MetalLB as a Kubernetes load balancer, which mostly uses this last feature in a more automatic and redundant way.
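For completeness, a hedged sketch of a MetalLB layer-2 setup, assuming a recent MetalLB release with its CRD-based configuration (the address range is made up). MetalLB then hands out these addresses to Services of type LoadBalancer:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.77.240-192.168.77.250   # a free range on the nodes' subnet
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool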