I am deploying Docker containers on a Kubernetes cluster with 2 nodes. The containers need to have port 50052 open. My understanding was that I just need to define a containerPort (50052) and have a Service that points to it.
But when I deploy this, only the first 2 pods spin up successfully. After that, I get the following message, presumably because the new pods are trying to open port 50052, which is already in use:
0/2 nodes are available: 2 node(s) didn't have free ports for the requested pod ports.
I thought that multiple pods with the same requested port could be scheduled on the same node? Or is this not right?
Thanks, I figured it out -- I had set hostNetwork to true in my Kubernetes deployment. Changing it back to false fixed my issue.
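For anyone hitting the same thing, here is a minimal sketch of a Deployment without host networking; the name and image are placeholders, and only the containerPort matters:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpc-app                  # placeholder name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: grpc-app
  template:
    metadata:
      labels:
        app: grpc-app
    spec:
      hostNetwork: false          # the default; with true, every pod competes for the node's port 50052
      containers:
      - name: grpc-app
        image: example/grpc-app:latest   # placeholder image
        ports:
        - containerPort: 50052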
You are right, multiple pods listening on the same container port can exist in a cluster. The Service that exposes them should have type: ClusterIP.
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
To avoid port clashes you should not use NodePort as the port type, because with 2 nodes and 4 pods more than one pod will end up on each node, causing a port clash.
Depending on how you want to reach your cluster, you then have different options...
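As a concrete example, a ClusterIP Service for the scenario above might be sketched as follows (names are placeholders; 50052 is the containerPort from the question):

apiVersion: v1
kind: Service
metadata:
  name: grpc-service              # placeholder name
spec:
  type: ClusterIP                 # the default type; only reachable from inside the cluster
  selector:
    app: grpc-app                 # must match the pod labels
  ports:
  - port: 50052                   # port the Service exposes on its cluster IP
    targetPort: 50052             # containerPort on the pods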
Related
I have installed docker to host several containers on a server, using the host network - so ports are shared amongst all containers. If one container uses port 8000, no other ones can. Is there a tool - perhaps not so complex as k8s, though I've no idea whether that can do it - to assist me with selecting ports for each container? As the number of services on the host network grows, managing the list of available ports becomes unwieldy.
I remain confused as to why, when I run docker ps, certain containers list no ports at all. It would be easier if the full list of ports were readily visible, but I have two containers with a sizable list of exposed ports that show no ports at all. I suppose this is a separate and less important question.
Containers in a Pod are accessible via “localhost”; they use the same network namespace. Also, for containers, the observable host name is a Pod’s name. Because containers share the same IP address and port space, you should use different ports in containers for incoming connections. In other words, applications in a Pod must coordinate their usage of ports.
In the following example, we will create a multi-container Pod where nginx in one container works as a reverse proxy for a simple web application running in the second container.
Step 1. Create a ConfigMap with the nginx configuration file. Incoming HTTP requests to port 80 will be forwarded to port 5000 on localhost
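A minimal sketch of such a ConfigMap, assuming nginx's main config file is replaced wholesale (the ConfigMap name is illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: mc3-nginx-conf            # illustrative name; the Pod below mounts it by this name
data:
  nginx.conf: |
    events {}
    http {
      server {
        listen 80;
        location / {
          proxy_pass http://127.0.0.1:5000;   # forward to the web app in the other container
        }
      }
    }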
Step 2. Create a multi-container Pod with the simple web app and nginx in separate containers. Note that for the Pod, we define only nginx port 80. Port 5000 will not be accessible outside of the Pod.
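A sketch of such a Pod, assuming the ConfigMap above and an arbitrary web-app image that listens on localhost:5000:

apiVersion: v1
kind: Pod
metadata:
  name: mc3
  labels:
    app: mc3
spec:
  containers:
  - name: webapp
    image: training/webapp        # assumed image; any app listening on localhost:5000 works
  - name: nginx
    image: nginx:alpine
    ports:
    - containerPort: 80           # only the nginx port is declared for the Pod
    volumeMounts:
    - name: nginx-conf
      mountPath: /etc/nginx/nginx.conf
      subPath: nginx.conf         # mount only the config file from the ConfigMap
  volumes:
  - name: nginx-conf
    configMap:
      name: mc3-nginx-conf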
Step 3. Expose the Pod using the NodePort service:
$ kubectl expose pod mc3 --type=NodePort --port=80
service "mc3" exposed
Now you can use your browser (or curl) to navigate to your node’s port to access the web application.
It's quite common for several containers in a Pod to listen on different ports, all of which need to be exposed. To make this happen, you can either create a single service with multiple exposed ports, or you can create a single service for every port you're trying to expose; see the sketch below.
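For instance, a single Service exposing two ports might be sketched like this (name, labels and port numbers are arbitrary):

apiVersion: v1
kind: Service
metadata:
  name: multi-port-svc            # illustrative name
spec:
  selector:
    app: my-app                   # illustrative label
  ports:
  - name: http                    # ports must be named when a Service exposes more than one
    port: 80
    targetPort: 80
  - name: metrics
    port: 9090
    targetPort: 9090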
I'm trying to create a Kubernetes cluster for learning purposes. So, I created 3 virtual machines with Vagrant where the master has IP address of 172.17.8.101 and the other two are 172.17.8.102 and 172.17.8.103.
It's clear that we need Flannel so that our containers in different machines can connect to each other without port mapping. And for Flannel to work, we need Etcd, because flannel uses this Datastore to put and get its data.
I installed etcd on the master node and set the Flannel network address on it with the command etcdctl set /coreos.com/network/config '{"Network": "10.33.0.0/16"}'
To enable IP masquerading and also use the private network interface in the virtual machine, I added --ip-masq --iface=enp0s8 to FLANNEL_OPTIONS in the /etc/sysconfig/flannel file.
In order to make Docker use the Flannel network, I added --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} to the OPTIONS variable in the /etc/sysconfig/docker file. Note that the values of the FLANNEL_SUBNET and FLANNEL_MTU variables are the ones set by Flannel in the /run/flannel/subnet.env file.
After all these settings, I installed kubernetes-master and kubernetes-client on the master node and kubernetes-node on all the nodes. For the final configurations, I changed KUBE_SERVICE_ADDRESSES value in /etc/kubernetes/apiserver file to --service-cluster-ip-range=10.33.0.0/16
and KUBELET_API_SERVER value in /etc/kubernetes/kubelet file to --api-servers=http://172.17.8.101:8080.
This is the link to k8s-tutorial project repository with the complete files.
After all these efforts, all the services start successfully and work fine. It's clear that there are 3 nodes running when I use the command kubectl get nodes. I can successfully create a nginx pod with command kubectl run nginx-pod --image=nginx --port=80 --labels="app=nginx" and create a service with kubectl expose pod nginx-pod --port=8000 --target-port=80 --name="service-pod" command.
The command kubectl describe service service-pod outputs the following results:
Name: service-pod
Namespace: default
Labels: app=nginx
Selector: app=nginx
Type: ClusterIP
IP: 10.33.39.222
Port: <unset> 8000/TCP
Endpoints: 10.33.72.2:80
Session Affinity: None
No events.
The challenge is that when I try to connect to the created service with curl 10.33.39.222:8000 I get curl: (7) Failed connect to 10.33.39.222:8000; Connection refused, but if I try curl 10.33.72.2:80 I get the default nginx page. Also, I can't ping 10.33.39.222 and all the packets get lost.
Some suggested stopping and disabling firewalld, but it wasn't running on the nodes at all. Since Docker changed the FORWARD chain policy to DROP in iptables after version 1.13, I changed it back to ACCEPT, but that didn't help either. I eventually tried changing the CIDR and using different IPs/subnets, but no luck.
Does anybody know where I am going wrong, or how to figure out why I can't connect to the created service?
The only conflict I can see is between the pod CIDR and the CIDR you are using for the services.
The Flannel network is '{"Network": "10.33.0.0/16"}', and on the kube-apiserver you have --service-cluster-ip-range=10.33.0.0/16. That's the same range, and it should be different: kube-proxy is setting up services for 10.33.0.0/16 while your overlay thinks it needs to route to pods running on 10.33.0.0/16. I would start by choosing completely non-overlapping CIDRs for your pods and services.
For example, on my cluster (I'm using Calico) I have a pod CIDR of 192.168.0.0/16 and a service CIDR of 10.96.0.0/12.
Note: you wouldn't be able to ping 10.33.39.222 anyway, since ICMP is not allowed in this case.
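To make the non-overlapping-ranges point concrete, the two settings from the question could be changed to something like this (the exact values are only illustrative; anything that doesn't overlap will do):

etcdctl set /coreos.com/network/config '{"Network": "10.244.0.0/16"}'

and in /etc/kubernetes/apiserver:

KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.96.0.0/12"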
Your service is of type ClusterIP, which means it can only be reached from within the cluster. To achieve what you are trying to do, consider switching to a service of type NodePort. You can then connect to it using curl <node-IP-address>:<exposed-NodePort>
See https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/ for an example of using NodePort.
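For example, the service from the question could be recreated as a NodePort service with a manifest roughly like this (the nodePort value is illustrative; omit it and Kubernetes picks one from the default 30000-32767 range):

apiVersion: v1
kind: Service
metadata:
  name: service-pod
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 8000                    # the service port, as in the original ClusterIP service
    targetPort: 80                # the nginx containerPort
    nodePort: 30080               # illustrative; must fall in the 30000-32767 range

It should then be reachable from outside with something like curl 172.17.8.102:30080 (any node's IP works).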
Rancher 2 provides 4 options in the "Ports" section when deploying a new workload:
NodePort
HostPort
Cluster IP
Layer-4 Load Balancer
What are the differences? Especially between NodePort, HostPort and Cluster IP?
HostPort (nodes running a pod): Similar to Docker, this will open a port on the node on which the pod is running (this allows you to open port 80 on the host). This is pretty easy to set up and run, however:
Don't specify a hostPort for a Pod unless it is absolutely necessary. When you bind a Pod to a hostPort, it limits the number of places the Pod can be scheduled, because each <hostIP, hostPort, protocol> combination must be unique. If you don't specify the hostIP and protocol explicitly, Kubernetes will use 0.0.0.0 as the default hostIP and TCP as the default protocol.
kubernetes.io
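For illustration, a bare-bones hostPort spec might look like this (name and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: hostport-example          # placeholder name
spec:
  containers:
  - name: web
    image: nginx:alpine           # placeholder image
    ports:
    - containerPort: 80
      hostPort: 80                # binds port 80 on whichever node this pod lands on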
NodePort (on every node): Is restricted to ports in the 30000-32767 range by default. This usually only makes sense in combination with an external load balancer (in case you want to publish a web application on port 80).
If you explicitly need to expose a Pod’s port on the node, consider using a NodePort Service before resorting to hostPort.
kubernetes.io
Cluster IP (internal only): As the description says, this will open a port that is only available to applications running inside the same cluster. A service using this option is accessible via the internal cluster IP.
Host Port: When a pod uses a hostPort, a connection to the node's port is forwarded directly to the pod running on that node. The node's port is only bound on nodes that run such pods. The hostPort feature is primarily used for exposing system services, which are deployed to every node using DaemonSets.
Node Port: With a NodePort service, a connection to the node's port is forwarded to a randomly selected pod (possibly on another node). NodePort services bind the port on all nodes, even on those that don't run such a pod.
Cluster IP: Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.
Q: What happens when many pods are running on the same node with NodePort?
A: With NodePort it doesn't matter whether you have one node or many; the port is available on every node.
I was following along with the Hello, World example in Kubernetes getting started guide.
In that example, a cluster with 3 nodes/instances is created on Google Container Engine.
The container to be deployed is a basic nodejs http server, which listens on port 8080.
Now when I run
kubectl run hello-node --image <image-name> --port 8080
it creates a pod and a deployment, deploying the pod on one of nodes.
Running the
kubectl scale deployment hello-node --replicas=4
command increases the number of pods to 4.
But since each pod exposes port 8080, will this not create a port conflict on the node where two pods are deployed?
I can see 4 pods when I do kubectl get pods, but what will the behaviour be in this case?
Got some help in #kubernetes-users channel on slack :
The port specified in kubectl run ... is that of a pod. And each pod has its unique IP address. So, there are no port conflicts.
The pods won’t serve traffic until and unless you expose them as a service.
Exposing them as a NodePort service by running kubectl expose ... assigns a port (in the default range 30000-32767) on every node. This port must be unique for every service.
If a node has multiple pods kube-proxy balances the traffic between those pods.
Also, when I accessed my service from the browser, I was able to see logs in all the 4 pods, so the traffic was served from all the 4 pods.
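For reference, the Service created by kubectl expose could also be written out as a manifest, roughly like this (assuming the pods carry the run=hello-node label that older versions of kubectl run applied):

apiVersion: v1
kind: Service
metadata:
  name: hello-node
spec:
  type: NodePort                  # opens the same port on every node, in the 30000-32767 range
  selector:
    run: hello-node               # assumption: the label that kubectl run applied to the pods
  ports:
  - port: 8080
    targetPort: 8080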
There is a difference between the port that your pod exposes and the physical ports on your node. Those need to be linked, for instance by a Kubernetes Service or a LoadBalancer, as discussed a bit further on in the hello-world documentation: http://kubernetes.io/docs/hellonode/#allow-external-traffic
We have a private kubernetes cluster running on a baremetal CoreOS cluster (with Flannel for network overlay) with private addresses.
On top of this cluster we run a kubernetes ReplicationController and Service for elasticsearch. To enable load-balancing, this service has a ClusterIP defined - which is also a private IP address: 10.99.44.10 (but in a different range to node IP addresses).
The issue that we face is that we wish to be able to connect to this ClusterIP from outside the cluster. As far as we can tell this private IP is not contactable from other machines in our private network...
How can we achieve this?
The IP addresses of the nodes are:
node 1 - 192.168.77.102
node 2 - 192.168.77.103
.
and this is how the Service, RC and Pod appear with kubectl:
NAME LABELS SELECTOR IP(S) PORT(S)
elasticsearch <none> app=elasticsearch 10.99.44.10 9200/TCP
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
elasticsearch elasticsearch elasticsearch app=elasticsearch 1
NAME READY STATUS RESTARTS AGE
elasticsearch-swpy1 1/1 Running 0 26m
You need to set the type of your Service.
http://docs.k8s.io/v1.0/user-guide/services.html#external-services
If you are on bare metal, you don't have a LoadBalancer integrated. You can use NodePort to get a port on each VM, and then set up whatever you use for load-balancing to aim at that port on any node.
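For example, a NodePort Service for the elasticsearch setup above might be sketched like this (the nodePort value is illustrative):

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
spec:
  type: NodePort
  selector:
    app: elasticsearch
  ports:
  - port: 9200
    targetPort: 9200
    nodePort: 30920               # illustrative; then reachable on 192.168.77.102:30920 or 192.168.77.103:30920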
You can use NodePort, but you can also use hostPort for some DaemonSets and Deployments, and hostNetwork to give a pod full access to the node's network.
IIRC, if you have a recent enough Kubernetes, each node can forward traffic to the internal network, so if you set up the correct routing on your clients/switch, you can reach the internal network by delivering those TCP/IP packets to any one node. The node will then receive the packet and SNAT+forward it to the ClusterIP or pod IP.
Finally, bare-metal clusters can now use MetalLB as a Kubernetes load balancer, which mostly uses this last feature in a more automatic and redundant way.
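With MetalLB installed and an address pool configured (the pool configuration is not shown here), the same Service could simply be declared as type LoadBalancer and would get an external IP assigned:

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
spec:
  type: LoadBalancer              # MetalLB assigns an external IP from its configured address pool
  selector:
    app: elasticsearch
  ports:
  - port: 9200
    targetPort: 9200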