Make Kubernetes Service Accessible Externally - docker

We have a private Kubernetes cluster running on bare-metal CoreOS machines (with Flannel as the network overlay) using private addresses.
On top of this cluster we run a Kubernetes ReplicationController and Service for Elasticsearch. To enable load balancing, this Service has a ClusterIP defined, which is also a private IP address: 10.99.44.10 (in a different range from the node IP addresses).
The issue we face is that we wish to connect to this ClusterIP from outside the cluster. As far as we can tell, this private IP is not reachable from other machines in our private network...
How can we achieve this?
The IP addresses of the nodes are:
node 1 - 192.168.77.102
node 2 - 192.168.77.103
...
and this is how the Service, RC and Pod appear with kubectl:
NAME            LABELS   SELECTOR            IP(S)         PORT(S)
elasticsearch   <none>   app=elasticsearch   10.99.44.10   9200/TCP

CONTROLLER      CONTAINER(S)    IMAGE(S)        SELECTOR            REPLICAS
elasticsearch   elasticsearch   elasticsearch   app=elasticsearch   1

NAME                  READY   STATUS    RESTARTS   AGE
elasticsearch-swpy1   1/1     Running   0          26m

You need to set the type of your Service.
http://docs.k8s.io/v1.0/user-guide/services.html#external-services
If you are on bare metal, you don't have an integrated LoadBalancer implementation. You can use type NodePort to get the same port opened on every node, and then point whatever you use for load balancing at that port on any node.
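For illustration, a minimal sketch of such a Service, reusing the app=elasticsearch selector from the question (the explicit nodePort value is just an example and can be omitted):

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
spec:
  type: NodePort
  selector:
    app: elasticsearch
  ports:
  - port: 9200
    targetPort: 9200
    nodePort: 30920    # optional; omit it and Kubernetes picks a port from 30000-32767

The Service would then be reachable from outside the cluster on any node address, e.g. 192.168.77.102:30920 or 192.168.77.103:30920.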

You can use a NodePort Service, but you can also use hostPort for some DaemonSets and Deployments, or hostNetwork to give a pod full access to the node's network.
IIRC, if you have a recent enough Kubernetes, each node can forward traffic to the internal network, so if you create the correct routing in your clients/switch, you can reach the internal network by delivering those TCP/IP packets to any node. The node will then receive the packet and SNAT+forward it to the ClusterIP or pod IP.
Finally, bare-metal clusters can now use MetalLB as a Kubernetes load balancer; it mostly uses this last feature in a more automatic and redundant way.
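To make the hostPort/hostNetwork idea concrete, a rough sketch of a DaemonSet (the name and image are placeholders, not anything from the question):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: hostport-example        # hypothetical name
spec:
  selector:
    matchLabels:
      app: hostport-example
  template:
    metadata:
      labels:
        app: hostport-example
    spec:
      # hostNetwork: true       # alternative: share the node's entire network namespace
      containers:
      - name: app
        image: nginx            # stand-in image
        ports:
        - containerPort: 80
          hostPort: 8080        # binds port 8080 on every node that runs this pod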

Related

MacVlan network with kubernetes

I set up a Kubernetes cluster using k3s. I have one master and two nodes. I created a Docker macvlan network on one of the nodes.
I want to achieve the scenario described below:
Assign an IP to the container/pod (a user-defined IP, not a cluster IP).
Q1. Is there any alternative option to Docker macvlan?
Q2. Can we run a command on the node (not in a pod/container) while deploying the pod/service?
Q3. Can we create a Kubernetes network with a user-defined IP? (I don't think LB/NodePort/Ingress will help with a user-defined IP; correct me if I am wrong!)
Kubernetes has its own very specialized network implementation. It can't easily assign a unique externally accessible IP address to each process the way the Docker MacVLAN setup can. Kubernetes also can't reuse the Docker networking infrastructure. Generally the cluster takes responsibility for assigning IP addresses to pods and services, and you can't specify them yourself.
So, in Kubernetes:
You can't manually assign IP addresses to things;
The cluster-internal IP addresses aren't directly accessible from outside the cluster;
The Kubernetes constructs can only launch containers on arbitrarily chosen nodes (possibly with some constraints; possibly on every node), but you don't usually launch a container on a single specific node, and you can't run a non-container command on a node.
Given what you're describing, a more general-purpose cluster automation tool like Salt Stack, Ansible, or Chef might meet your needs better. This will let you launch processes directly on managed nodes, and if those are server-type processes, they'll be accessible using the host's IP address as normal.
You can look into MetalLB, specifically its layer 2 mode and the Local external traffic policy
(https://metallb.universe.tf/usage/)
You cannot assign IPs to the Pods, but when you create a Service of type LoadBalancer (for example an HTTP routing service like Traefik), MetalLB will bind that Service to the IP of a node.
As an example, you can see the external IP of the Traefik service is reported as my node's address - 192.168.1.201
NAME     STATUS   ROLES                  AGE   VERSION        INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
node02   Ready    <none>                 8d    v1.20.2+k3s1   192.168.1.202   <none>        Alpine Linux v3.13   5.10.10-0-virt   containerd://1.4.3-k3s1
node01   Ready    control-plane,master   8d    v1.20.2+k3s1   192.168.1.201   <none>        Alpine Linux v3.13   5.10.10-0-virt   containerd://1.4.3-k3s1
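For reference, a minimal sketch of a MetalLB layer 2 address pool. This uses the legacy ConfigMap format (newer MetalLB releases configure this with IPAddressPool/L2Advertisement resources instead), and the address range is an assumption about what is free on the 192.168.1.0/24 LAN:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # assumption: addresses not used by anything else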
For q2:
Of course you can; k8s doesn't take over the node. You SSH into it and run whatever you like.
For q1:
No.
NAMESPACE     NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)
default       service/kubernetes       ClusterIP      10.43.0.1      <none>          443/TCP
kube-system   service/kube-dns         ClusterIP      10.43.0.10     <none>          53/UDP,53/TCP,9153/TCP
kube-system   service/metrics-server   ClusterIP      10.43.254.20   <none>          443/TCP
kube-system   service/traefik          LoadBalancer   10.43.130.1    192.168.1.201   80:31666/TCP,443:31194/TCP,8080:31199/TCP
default       service/whoami           ClusterIP      10.43.61.10    <none>          80/TCP

Docker container ports are clashing in Kubernetes

I am deploying Docker containers on a Kubernetes cluster with 2 nodes. The containers need to have port 50052 open. My understanding was that I just need to define a containerPort (50052) and have a Service that points to this.
But when I deploy this, only the first 2 pods spin up successfully. After that, I get the following message, presumably because the new pods are trying to open port 50052, which is already in use:
0/2 nodes are available: 2 node(s) didn't have free ports for the requested pod ports.
I thought that multiple pods requesting the same port could be scheduled on the same node? Or is this not right?
Thanks, I figured it out -- I had set hostNetwork to true in my Kubernetes deployment. Changing this back to false fixed my issue.
You are right, multiple pods with the same port can exist in a cluster. The Service in front of them should be of type ClusterIP:
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
To avoid port clashes you should not use NodePort as the port type, because if you have 2 nodes and 4 pods, more than one pod will exist on each node, causing a port clash.
Depending on how you want to reach your cluster, you then have different options...
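A minimal sketch of the setup the question describes, with hostNetwork left at its default of false (the names and image are hypothetical):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpc-server               # hypothetical name
spec:
  replicas: 4                     # more replicas than nodes is fine without hostNetwork
  selector:
    matchLabels:
      app: grpc-server
  template:
    metadata:
      labels:
        app: grpc-server
    spec:
      # hostNetwork defaults to false; setting it to true makes the pods compete
      # for port 50052 on the node and produces the scheduling error above
      containers:
      - name: server
        image: my-grpc-server:latest   # hypothetical image
        ports:
        - containerPort: 50052
---
apiVersion: v1
kind: Service
metadata:
  name: grpc-server
spec:
  type: ClusterIP
  selector:
    app: grpc-server
  ports:
  - port: 50052
    targetPort: 50052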

Accessing a k8s service with cluster IP in default namespace from a docker container

I have a server that is orchestrated using k8s; its Service looks like below:
➜ installations ✗ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)             AGE
oxd-server   ClusterIP   10.96.124.25   <none>        8444/TCP,8443/TCP   3h32m
and its pod:
➜ helm git:(helm-rc1) ✗ kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
sam-test-oxd-server-6b8f456cb6-5gwwd   1/1     Running   0          3h2m
Now, I have a docker image with an env variable that requires the URL of this server.
I have 2 questions from here.
How can the Docker container get the URL or access it?
How can I access the same URL in my terminal so I make some curl commands through it?
I hope I am clear on the explanation.
If your Docker container is outside the Kubernetes cluster, then it's not possible to access your ClusterIP service.
As you could guess by its name, ClusterIP type services are only accessible from within the cluster.
By within the cluster I mean any resource managed by Kubernetes.
A standalone docker container running inside a VM which is part of your K8S cluster is not a resource managed by K8S.
So, in order to achieve what you want, you have these possibilities:
Set a hostPort inside your pod. This is not recommended and is listed as a bad practice in the docs. Keep this usage for very specific cases.
Switch your service to NodePort instead of ClusterIP. This way, you'll be able to access it using a node IP + the node port.
Use a LoadBalancer type of service, but this solution needs some configuration and is not straightforward.
Use an Ingress along with an IngressController, but just like the load balancer, this solution needs some configuration and is not that straightforward (see the sketch after this list).
Depending on what you do and if this is critical or not, you'll have to choose one of these solutions.
1 & 2 for debug/dev
3 & 4 for prod, but you'll have to work with your k8s admin
You can use the name of the service oxd-server from any other pod in the same namespace to access it i.e., if the service is backed by pods that are serving HTTPS, you can access the service at https://oxd-server:8443/.
If the client pod that wants to access this service is in a different namespace, then you can use the oxd-server.<namespace> name. In your case that would be oxd-server.default, since your service is in the default namespace.
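For example, you could test this from a throwaway pod inside the cluster (the image and the use of HTTPS with -k are assumptions about what oxd-server actually serves on 8443):

kubectl run -it --rm curl-test --image=curlimages/curl --restart=Never -- \
  curl -k https://oxd-server.default:8443/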
To access this service from outside the cluster (from your terminal) for local debugging, you can use port forwarding:
kubectl port-forward svc/oxd-server 8443:8443
Then you can use the URL localhost:8443 to make any requests, and they will be port-forwarded to the service.
If you want to access this service from outside the cluster for production use, you can make the service as type: NodePort or type: LoadBalancer. See service types here.
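As a sketch, one way to switch the existing Service over without editing the manifests:

kubectl patch svc oxd-server -p '{"spec": {"type": "NodePort"}}'
kubectl get svc oxd-server    # note the high ports (30000-32767) now mapped to 8443/8444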

Connection refused when trying to connect to services in Kubernetes

I'm trying to create a Kubernetes cluster for learning purposes. So, I created 3 virtual machines with Vagrant, where the master has the IP address 172.17.8.101 and the other two are 172.17.8.102 and 172.17.8.103.
It's clear that we need Flannel so that our containers on different machines can connect to each other without port mapping. And for Flannel to work, we need etcd, because Flannel uses this datastore to put and get its data.
I installed Etcd on master node and put Flannel network address on it with command etcdctl set /coreos.com/network/config '{"Network": "10.33.0.0/16"}'
To enable IP masquerading and also to use the private network interface in the virtual machine, I added --ip-masq --iface=enp0s8 to FLANNEL_OPTIONS in the /etc/sysconfig/flannel file.
In order to make Docker use the Flannel network, I added --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} to the OPTIONS variable in the /etc/sysconfig/docker file. Note that the values for the FLANNEL_SUBNET and FLANNEL_MTU variables are the ones set by Flannel in the /run/flannel/subnet.env file.
After all these settings, I installed kubernetes-master and kubernetes-client on the master node and kubernetes-node on all the nodes. For the final configurations, I changed KUBE_SERVICE_ADDRESSES value in /etc/kubernetes/apiserver file to --service-cluster-ip-range=10.33.0.0/16
and KUBELET_API_SERVER value in /etc/kubernetes/kubelet file to --api-servers=http://172.17.8.101:8080.
This is the link to k8s-tutorial project repository with the complete files.
After all these efforts, all the services start successfully and work fine. It's clear that there are 3 nodes running when I use the command kubectl get nodes. I can successfully create a nginx pod with command kubectl run nginx-pod --image=nginx --port=80 --labels="app=nginx" and create a service with kubectl expose pod nginx-pod --port=8000 --target-port=80 --name="service-pod" command.
The command kubectl describe service service-pod outputs the following results:
Name: service-pod
Namespace: default
Labels: app=nginx
Selector: app=nginx
Type: ClusterIP
IP: 10.33.39.222
Port: <unset> 8000/TCP
Endpoints: 10.33.72.2:80
Session Affinity: None
No events.
The challenge is that when I try to connect to the created service with curl 10.33.79.222:8000 I get curl: (7) Failed connect to 10.33.72.2:8000; Connection refused but if I try curl 10.33.72.2:80 I get the default nginx page. Also, I can't ping to 10.33.79.222 and all the packets get lost.
Some suggested stopping and disabling firewalld, but it wasn't running on the nodes at all. Since Docker changed the FORWARD chain policy to DROP in iptables after version 1.13, I changed it back to ACCEPT, but that didn't help either. I eventually tried changing the CIDR and using different IPs/subnets, but no luck.
Does anybody know where I am going wrong, or how to figure out why I can't connect to the created service?
The only conflict I can see is that your pod CIDR overlaps with the CIDR you are using for Services.
The Flannel network is '{"Network": "10.33.0.0/16"}', and on the kube-apiserver you have --service-cluster-ip-range=10.33.0.0/16. That's the same range, and it should be different: kube-proxy is setting up Services for 10.33.0.0/16 while your overlay thinks it needs to route to the pods running on 10.33.0.0/16. I would start by choosing completely non-overlapping CIDRs for your pods and services.
For example, on my cluster (I'm using Calico) I have a pod CIDR of 192.168.0.0/16 and a service CIDR of 10.96.0.0/12.
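Applied to the setup in the question, that could look something like this (the exact ranges are only an example, and you'd need to restart flanneld, Docker and the Kubernetes services so they pick up the new subnets):

# Flannel (pod) network, stored in etcd:
etcdctl set /coreos.com/network/config '{"Network": "10.244.0.0/16"}'

# Service network, in /etc/kubernetes/apiserver:
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.96.0.0/12"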
Note: you wouldn't be able to ping 10.33.79.222 anyway, since a Service IP is virtual and only handles the defined TCP/UDP ports; it doesn't answer ICMP.
Your service is of type ClusterIP, which means it can only be accessed from inside the cluster. To achieve what you are trying to do, consider switching to a Service of type NodePort. You can then connect to it with curl <node-IP-address>:<nodePort>.
See https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/ for an example of using NodePort.
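A sketch using the objects from the question (this recreates the Service rather than editing it in place):

kubectl delete service service-pod
kubectl expose pod nginx-pod --port=8000 --target-port=80 --type=NodePort --name=service-pod
kubectl get service service-pod    # note the nodePort assigned from the 30000-32767 range
curl 172.17.8.102:<assigned-nodePort>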

Difference between NodePort, HostPort and Cluster IP

Rancher 2 provides 4 options in the "Ports" section when deploying a new workload:
NodePort
HostPort
Cluster IP
Layer-4 Load Balancer
What are the differences? Especially between NodePort, HostPort and Cluster IP?
HostPort (on nodes running the pod): Similar to Docker, this will open a port on the node on which the pod is running (this allows you, for example, to open port 80 on the host). This is pretty easy to set up and run, however:
Don't specify a hostPort for a Pod unless it is absolutely necessary. When you bind a Pod to a hostPort, it limits the number of places the Pod can be scheduled, because each <hostIP, hostPort, protocol> combination must be unique. If you don't specify the hostIP and protocol explicitly, Kubernetes will use 0.0.0.0 as the default hostIP and TCP as the default protocol.
kubernetes.io
NodePort (on every node): Is restricted to ports in the range 30000-32767 by default. This usually only makes sense in combination with an external load balancer (in case you want to publish a web application on port 80).
If you explicitly need to expose a Pod's port on the node, consider using a NodePort Service before resorting to hostPort.
kubernetes.io
Cluster IP (internal only): As the description says, this will open a port that is only available to internal applications running in the same cluster. A Service using this option is accessible via the internal ClusterIP.
Host Port:
When a pod uses a hostPort, a connection to the node's port is forwarded directly to the pod running on that node.
The node's port is only bound on nodes that run such pods.
The hostPort feature is primarily used for exposing system services, which are deployed to every node using DaemonSets.
Node Port:
With a NodePort Service, a connection to the node's port is forwarded to a randomly selected pod (possibly on another node).
NodePort Services bind the port on all nodes, even on those that don't run such a pod.
Cluster IP:
Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.
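To make the comparison concrete, a minimal sketch of how each option appears in a manifest (all names and ports are placeholders):

# Pod with a hostPort (bound only on the node where this pod lands):
apiVersion: v1
kind: Pod
metadata:
  name: hostport-demo
  labels:
    app: web
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 8080
---
# NodePort Service (the same port is opened on every node):
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
---
# ClusterIP Service (reachable only from inside the cluster):
apiVersion: v1
kind: Service
metadata:
  name: web-clusterip
spec:
  type: ClusterIP
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80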
Q: What happens when many pods are running on the same node with NodePort?
A: With NodePort it doesn't matter if you have one or multiple nodes; the port is available on every node.
