Forward all service ports to a single container - docker

I would like to run a container in Kubernetes with a static IP. I found out that only a Service can provide a stable IP address.
Is it possible to map a Service to a single pod and forward all of its ports?

A Service discovers pods based on labels and selectors, so it is not necessary to reference a pod from a Service by IP address. However, if you wish, you can override this automatic behaviour and manually configure your own ClusterIP for the Service.
Once the Pod and Service have been created, other pods in your cluster will be able to reach the pod via the name of the Service, provided they are in the same namespace. If they are not, you will need to use the FQDN of the Service.
If you are trying to access the pod from outside of Kubernetes, then you will need to use a Service of a type other than ClusterIP, for example a NodePort or a LoadBalancer. Alternatively, if you have an Ingress Controller with a gateway already provisioned, you could use that.
As for your desire to forward all ports: this is not possible, as port declarations in Service manifests must be statically mapped. It is not currently possible to specify a port range, although there is a long-standing feature request for it.
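As a minimal sketch (the name, labels, and IP are illustrative, and the clusterIP must fall within your cluster's service CIDR), a Service that pins its own ClusterIP, selects a single pod by label, and statically declares each port might look like this:

apiVersion: v1
kind: Service
metadata:
  name: my-static-svc        # illustrative name
spec:
  clusterIP: 10.96.0.50      # manually chosen; must be inside the service CIDR
  selector:
    app: my-app              # label carried by the one target pod
  ports:                     # every port has to be declared individually
  - name: http
    port: 80
    targetPort: 80
  - name: metrics
    port: 9090
    targetPort: 9090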

Related

Kubernetes: a different external IP for each pod

I have a k8s cluster with 3 nodes; each node runs 4 pods.
I want each pod to get a different external IP. How can I do that with K8s/Docker?
You can't assign an external IP to a pod.
In order to expose your application outside of your cluster through an external IP, you need to create a Service.
There is an example of this in the official docs.
You may also want to read the documentation about Services.
To expose your application to the outside world, you need to create a Service, and it will provide you with an external IP address.
A Service can target one or more pods. If it targets more than one pod, the Service will pick one of them to satisfy each request.
Your case:
You want an external IP for each pod, so create one Service per pod. 4 pods require 4 IPs, and that is why 4 Services are needed.
Once the Service has been created successfully, display information about it:
kubectl get services my-service
The output is similar to this:
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)    AGE
my-service   LoadBalancer   10.3.245.137   104.198.205.71   8080/TCP   54s
In your case you should see 4 Services with 4 external IP addresses.
See the official Kubernetes documentation to learn how to create a Service.
You cannot assign an external IP to a node.
What you can do:
Create 4 Deployments with 1 replica each, or deploy 4 pods with different labels.
Create 4 Services, one per Deployment (or per set of labels above), each with the external IP you desire.
Docs: external IP, deployment
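As a hedged sketch of one such Deployment/Service pair (all names, labels, and the image are illustrative; repeat with worker-2 through worker-4):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: worker-1           # the distinct label this pod carries
  template:
    metadata:
      labels:
        app: worker-1
    spec:
      containers:
      - name: app
        image: nginx          # placeholder image
---
apiVersion: v1
kind: Service
metadata:
  name: worker-1
spec:
  type: LoadBalancer          # each Service gets its own external IP
  selector:
    app: worker-1             # targets exactly this one pod
  ports:
  - port: 8080
    targetPort: 8080

Each of the four Services then receives its own external IP from your cloud provider or load balancer integration.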

How does a Kubernetes pod get the IP rather than the container, given that CNI plugins work at the container level

How does a Kubernetes pod get the IP rather than the container, given that CNI plugins work at the container level?
How do all containers of the same pod share the same network stack?
Containers use two kernel features: network namespaces and virtual network interfaces. A virtual network interface (let's name it veth0) is created and then assigned to a namespace. When a container is created, it too is assigned to a namespace; when multiple containers are created within the same namespace, they share the single veth0 interface.
A pod is just the term used for a set of resources and features, one of which is the namespace with the containers running in it.
When we say the pod gets an IP, what actually gets the IP is the veth0 interface. Containerized apps see veth0 the same way applications outside a container see a single physical network card on a server.
CNI is just the technical specification of how this should work, so that multiple network plugins can be supported without changes to the platform. The process above is the same for all network plugins.
There is a nice explanation in this blog post
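To make the namespace and veth mechanics concrete, here is a minimal sketch using plain ip(8) commands on Linux (the names and address are illustrative; a CNI plugin performs the equivalent steps for you):

# create a network namespace, as a pod sandbox would get
ip netns add demo-pod
# create a veth pair: one end stays on the host, the other moves into the namespace
ip link add veth0 type veth peer name veth0-host
ip link set veth0 netns demo-pod
# give the in-namespace end an address and bring it up
ip netns exec demo-pod ip addr add 10.1.1.2/24 dev veth0
ip netns exec demo-pod ip link set veth0 up
# every process started in this namespace now shares the same veth0 and IP
ip netns exec demo-pod ip addr show veth0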
It's kube-proxy that makes Service traffic work: it runs on each node and forwards traffic for Service IPs to the backing pods. Within a pod, all containers share a single IP and port space. Only in specific cases do you want multiple containers in the same pod; it's not the preferred setup, but it is possible, which is why such containers are called "tightly" coupled. Please refer to: https://kubernetes.io/docs/concepts/cluster-administration/proxies/
Firstly, let's dig deeper into the CNI aspect. In production systems, workload/pod network isolation (a workload being one or more containerized applications used to fulfill a certain function) is a first-class security requirement. Moreover, depending on how the infrastructure is set up, the routing plane might also need to be an attribute of either the workload (kubectl proxy), the host-level proxy (kube-proxy), or the central routing plane (apiserver proxy) that the host-level proxy exposes a gateway for.
For both service discovery and actually sending requests from a workload/pod, you don't want individual application developers talking to the apiserver proxy, since it may incur overhead. Instead you want them to communicate with other applications via the kubectl or kube-proxy layers, which are responsible for knowing when and how to communicate with the apiserver plane.
Therefore, when spinning up a new workload, the kubelet can be passed --network-plugin=cni and a path to a configuration telling the kubelet how to set up the virtual network interface for this workload/pod.
For example, if you don't want the application containers in a pod to be able to talk to the host-level kube-proxy directly, because you want to do some infrastructure-specific monitoring, your CNI and workload configuration would be:
monitoring at the outermost container
the outermost container creates a virtual network interface for every other container in the pod
the outermost container sits on a bridge interface (also a private virtual network interface) that can talk to kube-proxy on the host
The IP that the pod gets exists so that other workloads can send bytes to this pod via its bridge interface, since fundamentally others should be talking to the pod, not to individual work units inside it.
There is a special container called the 'pause container' that holds the network namespace for the pod. It does not do anything; its container process simply sleeps.
Kubernetes creates one pause container for each pod, to acquire the pod's IP address and set up the network namespace for all the other containers that are part of that pod. All containers in a pod can then reach each other using localhost.
This means that your 'application' container can die and come back to life, and all of the network setup will still be intact.
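As an illustrative manifest (image names are placeholders), here are two containers in one pod sharing the network namespace held by the pause container, so the second can reach the first over localhost:

apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo
spec:
  containers:
  - name: web
    image: nginx              # serves on port 80 inside the shared namespace
  - name: sidecar
    image: curlimages/curl    # placeholder client image
    # reaches the web container over the shared loopback interface
    command: ["sh", "-c", "while true; do curl -s http://localhost:80 >/dev/null; sleep 5; done"]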

Kubernetes - How to send data from one pod to another pod in Kubernetes

In Docker, I had two containers, Mosquitto and userInfo.
userInfo is a container which performs some logic and then sends the result to the Mosquitto container. The Mosquitto container then uses this information to send it to an IoT hub. To start these containers in Docker, I created a network and started both containers on the same network, so I could simply use the hostname of the Mosquitto container inside the userInfo container to send data. I need to do the same in Kubernetes.
So in Kubernetes, I deployed Mosquitto so that its pod was created, then I created its Service and used it inside the userInfo pod to send data to Mosquitto. But this is not working.
I created the service by using
kubectl expose deployment mosquitto
I need to send the data from userInfo to Mosquitto.
How can I achieve this?
Do I need to create a network as I did in Docker, or is there another way?
I also tried creating a pod with two containers, i.e. mosquitto & userInfo, but that was not working either.
Thanks
A Kubernetes pod may contain multiple containers. People generally run multiple containers in a pod when the two containers are tightly coupled, and it sounds like this is what you're looking for. These containers are guaranteed to be hosted on the same machine (they can contact each other via localhost), share the same port space, and can also use the same volumes.
https://kubernetes.io/docs/concepts/workloads/pods/pod/#what-is-a-pod
Two containers, same POD
If you are interested in communication between two containers belonging to the same pod, there is a guide in the official documentation showing how to achieve this through shared volumes.
The primary reason that pods can have multiple containers is to support helper applications that assist a primary application. Typical examples of helper applications are data pullers, data pushers, and proxies. Helper and primary applications often need to communicate with each other. Typically this is done through a shared filesystem, as shown in this exercise, or through the loopback network interface, localhost.
Try to avoid placing two containers in the same pod if you do not need to. Additional information can be found here: Multi-container pods and container communication in Kubernetes.
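A minimal sketch of that shared-volume pattern (all names are illustrative): one container writes a file into an emptyDir volume and the other reads it:

apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo
spec:
  volumes:
  - name: shared-data
    emptyDir: {}              # scratch space that lives as long as the pod
  containers:
  - name: producer
    image: busybox            # placeholder image
    command: ["sh", "-c", "echo hello > /data/msg; sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: consumer
    image: busybox
    command: ["sh", "-c", "sleep 5; cat /data/msg; sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data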
Two containers, two PODs
In this case (and you can do the same for the previous case as well), the best way to proceed is to expose the listening process of the container through a Service.
In this way you can always rely on the very same IP or domain name (resolvable cluster-internally) and port.
For example, if you have a Service called "my-service" in the Kubernetes Namespace "my-ns", a DNS record for "my-service.my-ns" is created.
The network part is managed by Kubernetes, so (in basic configurations) you do not need to do anything beyond telling the Service, when you create it, which target port the container listens on and which port clients should use; the mapping is automatic.
Then, once you have exposed a port and an IP, how you implement the communication and the data transfer is no longer a Kubernetes question. You can implement it through a web server serving static content, through FTP, with a script sending SCP commands; basically there are countless ways to do it.
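Applied to the question above, a hedged sketch of such a Service (it assumes the Mosquitto pods carry the label app: mosquitto and listen on the standard MQTT port 1883; adjust both to your setup):

apiVersion: v1
kind: Service
metadata:
  name: mosquitto
spec:
  selector:
    app: mosquitto            # assumed label on the broker's pods
  ports:
  - port: 1883                # standard MQTT port; adjust if your broker differs
    targetPort: 1883

The userInfo pod can then reach the broker as mosquitto:1883 from the same namespace, or mosquitto.<namespace>.svc.cluster.local:1883 from elsewhere.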

How should dynamic Kubernetes/OpenShift DNS resolution be configured?

I'm unable to find relevant information on this, which is why I'm asking the question here.
Instead of using /etc/hosts, which is a hacky way of resolving Kubernetes container names to their service IP addresses, what would be the best method to automatically or dynamically map new Kubernetes pods to their service IPs?
I've heard using /etc/resolv.conf is one such method, but was unable to find exactly how that file should be configured for this scenario.
If you are using OpenShift, it deploys with an internal DNS. When you create a Service object, its name is automatically set up as a hostname in the internal DNS, mapping to the IP address of the Service.
Further, the label selectors on the Service are matched against labels on pods; the IP addresses of matching pods are registered as endpoints for that Service, and the internal network is set up so that a connection to the Service IP directly, or after a DNS lookup of the hostname (the service name), is routed through to one of the pods.
So all of this is done for you automatically, and you don't need to do anything. The Service object is even created for you automatically if you use oc new-app to deploy applications in OpenShift.
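As an illustrative check (the service and namespace names are placeholders), you can verify this dynamic resolution from inside any pod:

# the short name resolves from within the same namespace
nslookup my-service
# the fully qualified name resolves from any namespace
nslookup my-service.my-ns.svc.cluster.local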

On Premise - Kubernetes External Endpoint for services

We are analyzing the integration of Kubernetes into our on-premise environment. We have SaaS-based services which can be exposed publicly.
We have doubts about setting up the external endpoints for the services. Is there any way to create external endpoints for the services?
We have tried setting the externalIPs parameter on the Services to the master node's IP address. We are not sure this is the correct way, but once we set the external IP to the master node's IP address, we are able to access the services.
We have also tried ingress controllers, and there too we can only access our services via the IP address of the node where the ingress controller is running.
For example:
Public IP: XXX.XX.XX.XX
Ideally, we would map the public IP to the load balancer's virtual IP, but we cannot find such a setting in Kubernetes.
Is there any way to address this issue?
My suggestion is to use an Ingress Controller that acts as a proxy for all your services in Kubernetes.
Of course, your ingress controller has to be exposed to the outside world somehow. My suggestion is to use the hostNetwork setting on the ingress controller pod; this way, the pod will listen on your host's physical interface, like any other "traditional" service.
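As a hedged fragment of what that looks like in the controller's Deployment (the name, image, and ports are illustrative; substitute your actual ingress controller image):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-controller    # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-controller
  template:
    metadata:
      labels:
        app: ingress-controller
    spec:
      hostNetwork: true       # bind directly to the node's network interfaces
      containers:
      - name: controller
        image: nginx          # placeholder; use your ingress controller image
        ports:
        - containerPort: 80   # now reachable on the node's IP at port 80
        - containerPort: 443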
A few resources:
Here are details on how a pod can be reached from outside your k8s cluster.
Here is a nice tutorial on how to set up an ingress controller on k8s.
If you have more than one minion in your cluster, you'll end up with the problem of load balancing across them. This question can be helpful with that.
