Rancher 2 provides 4 options in the "Ports" section when deploying a new workload:
NodePort
HostPort
Cluster IP
Layer-4 Load Balancer
What are the differences? Especially between NodePort, HostPort and Cluster IP?
HostPort (nodes running a pod): Similar to Docker, this will open a port on the node on which the pod is running (this allows you to open port 80 on the host). This is pretty easy to set up and run; however:
Don’t specify a hostPort for a Pod unless it is absolutely necessary. When you bind a Pod to a hostPort, it limits the number of places the Pod can be scheduled, because each <hostIP, hostPort, protocol> combination must be unique. If you don’t specify the hostIP and protocol explicitly, Kubernetes will use 0.0.0.0 as the default hostIP and TCP as the default protocol.
kubernetes.io
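For illustration, a Pod spec using hostPort might look like this (the names and image are placeholders, not from the question):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web            # placeholder name
spec:
  containers:
  - name: nginx
    image: nginx:alpine
    ports:
    - containerPort: 80
      hostPort: 80     # binds port 80 directly on whichever node runs this Pod
```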
NodePort (On every node): Is restricted to the port range 30000-32767 by default. This usually only makes sense in combination with an external load balancer (in case you want to publish a web application on port 80).
If you explicitly need to expose a Pod’s port on the node, consider using a NodePort Service before resorting to hostPort.
kubernetes.io
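A minimal NodePort Service sketch (names are placeholders; if you omit nodePort, Kubernetes picks a free port from the default range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport   # placeholder name
spec:
  type: NodePort
  selector:
    app: web           # matches pods labeled app=web
  ports:
  - port: 80           # the Service's cluster-internal port
    targetPort: 80     # the Pod's containerPort
    nodePort: 30080    # opened on every node; must be within 30000-32767
```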
Cluster IP (Internal only): As the description says, this will open a port that is only available to internal applications running in the same cluster. A service using this option is accessible via the internal cluster IP.
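A corresponding ClusterIP Service sketch (ClusterIP is the default type, so the type field could also be omitted):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-internal   # placeholder name
spec:
  type: ClusterIP      # default; reachable only from inside the cluster
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
```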
| Host Port | Node Port | Cluster IP |
| --- | --- | --- |
| When a pod is using a hostPort, a connection to the node’s port is forwarded directly to the pod running on that node | With a NodePort service, a connection to the node’s port is forwarded to a randomly selected pod (possibly on another node) | Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster. |
| With pods using a hostPort, the node’s port is only bound on nodes that run such pods | NodePort services bind the port on all nodes, even on those that don’t run such a pod | N/A |
| The hostPort feature is primarily used for exposing system services, which are deployed to every node using DaemonSets | N/A | N/A |
Q: What happens when many pods are running on the same node with NodePort?
A: With NodePort it doesn't matter whether you have one or multiple nodes; the port is available on every node.
Related
I have deployed a tool in the pod and need to enable port mapping on the node. But once the pod is rebuilt, the position of the pod may change and its IP will also change. Is there a corresponding resolution mechanism in k8s?
Services.
There are several options depending on how you want to expose them, but the key point is that a Service gives you a single, stable access endpoint/IP address.
The 4 options:
ClusterIP - Accessible internally within the cluster.
NodePort - A port on all your nodes where you can point your own LB to.
LoadBalancer - Ties to an infra LB like AWS ELB, GCP LB, etc.
ExternalName - Maps the Service to a DNS name outside your cluster, as sketched below.
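A minimal ExternalName sketch (the external hostname is a placeholder):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db            # placeholder name
spec:
  type: ExternalName
  externalName: db.example.com # placeholder; clients resolving the Service get a CNAME to this host
```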
I am deploying docker containers on a kubernetes cluster with 2 nodes. The docker containers need to have port 50052 open. My understanding was that I just need to define a containerPort (50052) and have a service that points to this.
But when I deploy this, only the first 2 pods will spin up successfully. After that, I get the following message, presumably because the new pods are trying to open port 50052, which is already being used.
0/2 nodes are available: 2 node(s) didn't have free ports for the requested pod ports.
I thought that multiple pods with the same requested port could be scheduled on the same node? Or is this not right?
Thanks, I figured it out -- I had set hostNetwork to true in my Kubernetes deployment. Changing this back to false fixed my issue.
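For reference, a sketch of where that setting lives in a Deployment's pod template (names and image are placeholders; with hostNetwork: true, every replica binds port 50052 on its node, so at most one replica fits per node):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpc-server        # placeholder name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: grpc-server
  template:
    metadata:
      labels:
        app: grpc-server
    spec:
      hostNetwork: false   # was true; true makes each pod share the node's network namespace
      containers:
      - name: server
        image: my-grpc-server:latest   # placeholder image
        ports:
        - containerPort: 50052
```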
You are right, multiple pods listening on the same port can exist in a cluster, as long as they don't bind the node's network directly. Expose them through a Service of type: ClusterIP.
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
To avoid port clashes you should not use hostPort or hostNetwork for these pods: if you have 2 nodes and 4 pods, more than one pod will land on each node and they will clash over the node's port. (A NodePort Service itself does not clash per pod; kube-proxy binds the node port once per node.)
Depending on how you want to reach your cluster, you then have different options...
I have installed docker to host several containers on a server, using the host network - so ports are shared amongst all containers. If one container uses port 8000, no other ones can. Is there a tool - perhaps not so complex as k8s, though I've no idea whether that can do it - to assist me with selecting ports for each container? As the number of services on the host network grows, managing the list of available ports becomes unwieldy.
I remain confused as to why when I run docker ps, certain containers list no ports at all. It would be easier if the full list of ports were easily available, but I have two containers with a sizable list of exposed ports which show no ports at all. I suppose this is a separate question and a less important one.
Containers in a Pod are accessible via “localhost”; they use the same network namespace. Also, for containers, the observable host name is a Pod’s name. Because containers share the same IP address and port space, you should use different ports in containers for incoming connections. In other words, applications in a Pod must coordinate their usage of ports.
In the following example, we will create a multi-container Pod where nginx in one container works as a reverse proxy for a simple web application running in the second container.
Step 1. Create a ConfigMap with the nginx configuration file. Incoming HTTP requests to port 80 will be forwarded to port 5000 on localhost
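A minimal sketch of such a ConfigMap (the name mc3-nginx-conf is an assumption, not given in the original steps):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mc3-nginx-conf   # assumed name
data:
  nginx.conf: |
    events {}
    http {
      server {
        listen 80;
        location / {
          # forward incoming HTTP requests on port 80 to the web app on localhost:5000
          proxy_pass http://127.0.0.1:5000;
        }
      }
    }
```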
Step 2. Create a multi-container Pod with the simple web app and nginx in separate containers. Note that for the Pod, we define only nginx port 80. Port 5000 will not be accessible outside of the Pod.
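A sketch of that Pod (the web app image is a placeholder; note that only nginx's port 80 is declared):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mc3
  labels:
    app: mc3
spec:
  containers:
  - name: webapp
    image: my-simple-webapp:latest   # placeholder image; listens on localhost:5000
  - name: nginx
    image: nginx:alpine
    ports:
    - containerPort: 80              # the only port declared for the Pod
    volumeMounts:
    - name: nginx-conf
      mountPath: /etc/nginx/nginx.conf
      subPath: nginx.conf            # mount just the config file from the ConfigMap
  volumes:
  - name: nginx-conf
    configMap:
      name: mc3-nginx-conf           # assumed name, matching Step 1
```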
Step 3. Expose the Pod using the NodePort service:
$ kubectl expose pod mc3 --type=NodePort --port=80
service "mc3" exposed
Now you can use your browser (or curl) to navigate to your node’s port to access the web application.
It's quite common for several containers in a Pod to listen on different ports, all of which need to be exposed. To make this happen, you can either create a single service with multiple exposed ports, or create a separate service for every port you're trying to expose.
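For example, a single Service exposing two ports of the same Pod might look like this (the names and the second port are illustrative, not from the example above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: multi-port-svc  # placeholder name
spec:
  selector:
    app: mc3
  ports:
  - name: http          # each port must be named when a Service defines more than one
    port: 80
    targetPort: 80
  - name: metrics       # illustrative second port
    port: 9090
    targetPort: 9090
```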
I have a StatefulSet application which has a server running on port 1000 and has 3 replicas.
Now, I want to expose the application so I have used type: NodePort.
But I also want the replicas to communicate with each other on the same port.
When I do an nslookup against a NodePort-type Service, it gives only one DNS name, <svc_name>.<namespace>.svc.cluster.local (individual pods don't get a DNS entry), and the application is exposed.
When I set clusterIP: None I get pod-specific DNS names, <statefulset-pod>.<svc_name>.<namespace>.svc.cluster.local, but the application is not exposed. The two do not work together.
How can I achieve both, expose the same port for inter replica communication and expose same port externally?
LoadBalancer: Exposes the service externally using a cloud provider’s load balancer. NodePort and ClusterIP services, to which the external load balancer will route, are automatically created.
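One common pattern for the StatefulSet question above (a sketch, not taken from the original answer) is to create two Services over the same selector: a headless one for stable per-pod DNS, plus a NodePort one for external access. All names below are placeholders:

```yaml
# Headless Service: set it as the StatefulSet's serviceName so each replica
# gets a DNS name like <pod-name>.myapp-headless.<namespace>.svc.cluster.local
apiVersion: v1
kind: Service
metadata:
  name: myapp-headless
spec:
  clusterIP: None
  selector:
    app: myapp
  ports:
  - port: 1000
    targetPort: 1000
---
# NodePort Service: exposes the same pods externally on every node
apiVersion: v1
kind: Service
metadata:
  name: myapp-external
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
  - port: 1000
    targetPort: 1000
```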
We have a private Kubernetes cluster running on a bare-metal CoreOS cluster (with Flannel as the network overlay) with private addresses.
On top of this cluster we run a kubernetes ReplicationController and Service for elasticsearch. To enable load-balancing, this service has a ClusterIP defined - which is also a private IP address: 10.99.44.10 (but in a different range to node IP addresses).
The issue that we face is that we wish to be able to connect to this ClusterIP from outside the cluster. As far as we can tell this private IP is not contactable from other machines in our private network...
How can we achieve this?
The IP addresses of the nodes are:
node 1 - 192.168.77.102
node 2 - 192.168.77.103
.
and this is how the Service, RC and Pod appear with kubectl:
NAME            LABELS   SELECTOR            IP(S)         PORT(S)
elasticsearch   <none>   app=elasticsearch   10.99.44.10   9200/TCP

CONTROLLER      CONTAINER(S)    IMAGE(S)        SELECTOR            REPLICAS
elasticsearch   elasticsearch   elasticsearch   app=elasticsearch   1

NAME                  READY   STATUS    RESTARTS   AGE
elasticsearch-swpy1   1/1     Running   0          26m
You need to set the type of your Service.
http://docs.k8s.io/v1.0/user-guide/services.html#external-services
If you are on bare metal, you don't have a LoadBalancer integrated. You can use NodePort to get a port on each VM, and then set up whatever you use for load-balancing to aim at that port on any node.
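For the elasticsearch Service above, that could look like this (the nodePort value is an arbitrary pick from the allowed range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
spec:
  type: NodePort
  selector:
    app: elasticsearch
  ports:
  - port: 9200
    targetPort: 9200
    nodePort: 30920   # arbitrary; must fall within 30000-32767, then reachable on every node's IP
```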
You can use NodePort, but you can also use hostPort for some DaemonSets and Deployments, and hostNetwork to give a pod full access to the node's network.
IIRC, if you have a recent enough Kubernetes, each node can forward traffic to the internal network. So if you set up the correct routing in your clients/switch, you can reach the internal network by delivering those TCP/IP packets to one node; the node will then receive the packets and SNAT+forward them to the clusterIP or podIP.
Finally, bare-metal clusters can now use MetalLB as a Kubernetes load balancer, which mostly automates this last approach in a more redundant way.