Kubernetes configuration to link containers - docker

I am trying to see if there are any examples of creating a Kubernetes POD which starts 2-3 containers that are linked with each other, but I couldn't find any.
Has anybody tried linking containers using a Kubernetes config?

Containers in the same pod share localhost, so you don't need to link containers; just use localhost:containerPort.
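For illustration, a minimal sketch of a pod with two containers that talk over localhost (the image names and ports are placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: two-containers
    spec:
      containers:
      - name: web
        image: nginx:1.25      # placeholder image, listens on port 80
        ports:
        - containerPort: 80
      - name: helper
        image: busybox:1.36    # placeholder image
        # the helper reaches the web container simply via localhost:80
        command: ["sh", "-c", "while true; do wget -qO- http://localhost:80; sleep 5; done"]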

You have to use a Kubernetes Service (proxy): https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md#how-do-they-work.
Have a look at how they work together: https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/guestbook
To be specific, there is no concept of "linking" the way Docker does it. Every service endpoint is a fully qualified domain name that one container can simply call from another, and every label on a container that can be picked up by a service endpoint can be used to direct network traffic. So you don't have to read ENV["$FOO_BAR_BAZ"] to get the correct IP; just call the service directly (curl http://foo_bar_baz).
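As a hedged sketch of that idea, here is a Service that selects pods by label and gets a stable DNS name (the name foo-bar-baz and the ports are hypothetical; note that DNS names use hyphens rather than underscores):

    apiVersion: v1
    kind: Service
    metadata:
      name: foo-bar-baz        # becomes the DNS name of the endpoint
    spec:
      selector:
        app: foo-bar-baz       # directs traffic to pods carrying this label
      ports:
      - port: 80
        targetPort: 8080       # the port the container actually listens on

Any pod in the same namespace can then simply call curl http://foo-bar-baz.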

I think you are speaking about a single-pod, multiple-container configuration.
In Kubernetes the smallest unit is a pod. Multiple containers in a pod share the same IPC and network namespaces, so a process in one container can reach a process in another container via localhost:port.

A pod is the basic unit of deployment in Kubernetes.
You can run one or more containers in a pod. They share the same network, i.e. localhost, so you don't need to specify a link URL; just use localhost:containerPort.

Related

How to use a Kubernetes pod as a gateway to specific IPs?

I've got a database running in a private network (say IP 1.2.3.4).
In my own computer, I can do these steps in order to access the database:
Start a Docker container using something like docker run --privileged --sysctl net.ipv4.ip_forward=1 ...
Get the container IP
Add a routing rule, such as ip route add 1.2.3.4/32 via $container_ip
And then I'm able to connect to the database as usual.
I wonder if there's a way to route traffic through a specific pod in Kubernetes for certain IPs in order to achieve the same results. We use GKE, by the way, I don't know if this helps in any way.
PS: I'm aware of the sidecar pattern, but I don't think this would be ideal for our use case, as our jobs are short-lived tasks, and we are not able to run multiple "gateway" containers at the same time.
I wonder if there's a way to route traffic through a specific pod in Kubernetes for certain IPs in order to achieve the same results. We use GKE, by the way, I don't know if this helps in any way.
You can start a GKE cluster in a fully private network, then run the applications that need to be fully private in this cluster. Access to this cluster is only possible when explicitly granted, much like those commands you used in your question, except that now you will use the cloud platform's facilities (e.g. service controls, a bastion host, etc.); there is no need to "route traffic through a specific pod in Kubernetes for certain IPs". But if you have to run everything in one cluster, then a fully private cluster will likely not work for you; in that case you can use a network policy to control access to your database pod.
GKE doesn't support the use case you mentioned, @Gabriel Milan.
What's your requirement? Do you need to know which IP the pod will use to reach the database, so you can open a firewall for it?
Replying here as the comments have limited character count
Unfortunately GKE doesn't support that use case.
However, you have a couple of options:
Option #1: Create a dedicated node pool with a couple of nodes and force the pods to be scheduled on these nodes using taints and tolerations [1] (see the sketch after the reference links below). Use the IP addresses of these nodes in your firewall.
Option #2: Install a service mesh like Istio and use the egress gateway [2] to route traffic toward your on-prem system, forcing the gateways to be deployed on a specific set of nodes so you have a known IP address. This is quite a complicated solution.
[1] https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
[2] https://istio.io/latest/docs/tasks/traffic-management/egress/egress-gateway/
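A minimal sketch of Option #1, assuming a hypothetical taint dedicated=egress:NoSchedule on the dedicated node pool (all names here are placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: db-client
    spec:
      # keep the pod on the dedicated nodes; the GKE nodepool label key is real,
      # but the pool name "egress-pool" is a placeholder
      nodeSelector:
        cloud.google.com/gke-nodepool: egress-pool
      # allow the pod onto nodes carrying the hypothetical taint
      tolerations:
      - key: "dedicated"
        operator: "Equal"
        value: "egress"
        effect: "NoSchedule"
      containers:
      - name: job
        image: my-job:latest   # placeholder image

Note that the toleration only permits scheduling on the tainted nodes; the nodeSelector is what actually pins the pod there.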
I would suggest using or creating a NAT gateway instead of using a container as the gateway.
Using a container or Istio is a workable idea, but it has its own limitations: it is hard to implement and manage, and the gateway containers consume resources.
Ultimately you want a single egress IP for your K8s cluster, instead of traffic leaving with the IP of whichever node the POD happens to be scheduled on.
Here is a Terraform module for a GKE NAT gateway which you can use:
https://registry.terraform.io/modules/GoogleCloudPlatform/nat-gateway/google/latest/examples/gke-nat-gateway
The NAT gateway will forward all POD traffic through a single VM, and you can also whitelist that IP in the database.
After implementation, there will be a single egress point in your cluster.
The GitHub repo also provides a click-to-deploy option for GCP.

Can two web services be hosted in a Kubernetes pod?

Is it possible to have two web services in a single pod in Kubernetes? If yes, how will the load balancer handle it? One more question: does the load balancer talk directly to the pod, or to the container inside the pod? If it talks to the pod, doesn't the route get longer: first LB -> pod, then pod -> container, with the pod in between? I am new to Kubernetes and had these doubts.
You can run multiple containers inside a single pod, but using that to host two separate services is probably not the intended use.
An example case for running multiple containers inside the same pod is one container, a so-called sidecar, that's running some form of application to generate files (e.g. some sync tool), while the main service uses those files somehow. This could be a web server serving static files that the sync tool pulls from somewhere.
Back to your question, since a pod only has one IP, you can only use each port once. A port on a container corresponds directly to a port on the pod. So while you can theoretically run two containers with a web service, you will need to use two different ports. As such, the load balancer would need to address those two ports separately.
If you want to run multiple copies of the same service for load balancing, you should use multiple pods, ideally managed by a deployment, and use a service (cluster IP for internal or load balancer for external) to distribute traffic.
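A minimal sketch of that recommended setup, assuming placeholder names and a stock nginx image: one Deployment running three replicas, fronted by a LoadBalancer Service that spreads traffic across them.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3              # three identical pods to balance across
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx:1.25  # placeholder image
            ports:
            - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      type: LoadBalancer       # use ClusterIP instead for internal-only traffic
      selector:
        app: web               # matches all three replicas
      ports:
      - port: 80
        targetPort: 80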
Here are some answers that will help you.
- A pod is a running instance of one or more containers. You can have two containers / two web services running inside a Pod, although it's ideal to run one per POD.
- When you bring up your containers, you create ingress / LoadBalancer routes to your services.
- Hence, when you have two web services running inside your pod, each would publish its service at a different service ingress.
- Ideally there are two routes into the POD for these services, and a small service-discovery mechanism to identify them inside.
- This is one reason we prefer running one container per POD.
- I'd suggest reading the book Kubernetes in Action to get a clearer insight.
You can run multiple containers in the same pod if the services are tightly coupled. For example, if you have a web server and a SQL database.
If the web services are not tightly coupled, you would likely want to put them in different pods.
Then you need to deploy a Service and expose it to make your web service reachable from inside the cluster or from outside, depending on the Service type.
To load balance between your services you would need an ingress controller.
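For illustration, a hedged sketch of an Ingress resource (which an ingress controller would fulfil) routing two paths to two separate Services; the hostname and Service names are hypothetical:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web-routes
    spec:
      rules:
      - host: example.com      # hypothetical hostname
        http:
          paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1-svc # hypothetical Service for the first web service
                port:
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2-svc # hypothetical Service for the second web service
                port:
                  number: 80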

2 containers using the same port in Kubernetes pod

I have the same problem as the following:
Dual nginx in one Kubernetes pod
In my Kubernetes Deployment template, I have 2 containers that are using the same port 80.
I understand that containers within a Pod are actually under the same network namespace, which enables accessing another container in the Pod with localhost or 127.0.0.1.
It means containers can't use the same port.
It's very easy to achieve this with the help of docker run or docker-compose, by using 8001:80 for the first container and 8002:80 for the second container.
Is there any similar or better solution for doing this in a Kubernetes Pod, without separating these 2 containers into different Pods?
Basically I totally agree with @David's and @Patric's comments, but I decided to add a few more things, expanding them into an answer.
I have the same problem as the following: Dual nginx in one Kubernetes pod
And there is already a pretty good answer for that problem in the mentioned thread. From a technical point of view it provides a ready solution to your particular use case; however, it doesn't question the idea itself.
It's very easy to achieve this with the help of docker run or docker-compose, by using 8001:80 for the first container and 8002:80 for the second container.
It's also very easy to achieve in Kubernetes. Simply put both containers in different Pods and you will not have to manipulate the nginx config to make it listen on a port other than 80. Note that those two docker containers you mentioned don't share a single network namespace, and that's why they can both listen on port 80 while being mapped to different ports on the host system (8001 and 8002). This is not the case with Kubernetes Pods. Read more about microservices architecture, and especially how it is implemented on k8s, and you'll notice that placing a few containers in a single Pod is a really rare use case and definitely should not be applied in a case like yours. There should be a good reason to put 2 or more containers in a single Pod. Usually the second container has some complementary function to the main one.
There are 3 design patterns for multi-container Pods, commonly used in Kubernetes: sidecar, ambassador and adapter. Very often all of them are simply referred to as sidecar containers.
Note that 2 or more containers coupled together in a single Pod in all the above-mentioned use cases have totally different functions. Even if you put more than just one container in a single Pod (which is most common), in practice it is never a container of the same type (like two nginx servers listening on different ports, as in your case). They should be complementary, and there should be a good reason why they are put together, why they should start and shut down at the same time and share the same network namespace. A sidecar container with a monitoring agent running in it has a complementary function to the main container, which can be e.g. an nginx webserver. You can read more about container design patterns in general in this article.
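As an illustration of the sidecar pattern mentioned above, here is a minimal sketch pairing nginx with a log-shipping helper over a shared volume (the sidecar image is hypothetical):

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-with-sidecar
    spec:
      volumes:
      - name: logs
        emptyDir: {}           # scratch space shared by both containers
      containers:
      - name: nginx            # the main container
        image: nginx:1.25
        volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
      - name: log-shipper      # complementary sidecar, hypothetical image
        image: log-shipper:latest
        volumeMounts:
        - name: logs
          mountPath: /logs
          readOnly: true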
I don't have a very firm use case, because I'm still very new to Kubernetes and the concept of a cluster.
So definitely don't go this way if you don't have particular reason for such architecture.
My initial planning of the cluster is putting all my containers of the system into a pod. So that I can replicate this pod as many as I want.
You don't need a single Pod in order to replicate. You can have a lot of ReplicaSets in your cluster (usually managed by Deployments), each of them taking care of running the declared number of replicas of a Pod of a certain kind.
But according to all the feedback that I have now, it seems like I'm going in the wrong direction.
Yes, this is definitely the wrong direction, but that was actually already said. I'd only like to highlight why this direction is wrong. Such an approach is totally against the idea of microservices architecture, which is what Kubernetes is designed for. Putting all your infrastructure in a single huge Pod and binding all your containers tightly together makes no sense. Remember that a Pod is the smallest deployable unit in Kubernetes, and when one of its containers crashes, the whole Pod crashes. There is no way to manually restart just one container in a Pod.
I'll review my structure and try with the suggestions you all provided. Thank you, everyone! =)
This is a good idea :)
I believe what you need to do is specify a different containerPort for each container in the pod. Kubernetes allows you to specify the port each container exposes using this parameter in the pod definition file. You can then create Services pointing to the same pod but different ports, as sketched below.
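A hedged sketch of that approach, with placeholder names throughout: two containers listening on different ports in one pod, and two Services pointing at the same pod but at different target ports.

    apiVersion: v1
    kind: Pod
    metadata:
      name: dual-web
      labels:
        app: dual-web
    spec:
      containers:
      - name: web-a
        image: nginx:1.25      # placeholder; must be configured to listen on 8080
        ports:
        - containerPort: 8080
      - name: web-b
        image: nginx:1.25      # placeholder; must be configured to listen on 8081
        ports:
        - containerPort: 8081
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: web-a
    spec:
      selector:
        app: dual-web
      ports:
      - port: 80
        targetPort: 8080       # first container
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: web-b
    spec:
      selector:
        app: dual-web
      ports:
      - port: 80
        targetPort: 8081       # second container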

Why is a POD the fundamental unit of deployment instead of containers?

In Kubernetes a POD is considered a single unit of deployment which may have one or more containers, so when we scale, all the containers in the POD are scaled together irrespective of their individual needs.
If the POD has only one container it's easier to scale that particular POD, so what's the purpose of packaging one or more containers inside a POD?
From the documentation:
Pods can be used to host vertically integrated application stacks (e.g. LAMP), but their primary motivation is to support co-located, co-managed helper programs
The most common example of this is sidecar containers which contain helper applications like log shipping utilities.
A deeper dive can be found here
The reason for using a pod rather than a container directly is that Kubernetes requires more information to orchestrate the containers, such as a restart policy, a liveness probe and a readiness probe. A liveness probe checks whether the container inside the pod is alive or not; the restart policy defines what to do with the container when it fails; a readiness probe defines when the container is ready to start serving.
So, instead of adding those properties to the existing container, Kubernetes decided to write a wrapper around containers carrying all the necessary additional information.
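A minimal sketch of such pod-level wrapping, with a placeholder image and placeholder paths and ports:

    apiVersion: v1
    kind: Pod
    metadata:
      name: probed-app
    spec:
      restartPolicy: Always    # what to do when the container fails
      containers:
      - name: app
        image: my-app:latest   # placeholder image
        livenessProbe:         # is the container still alive?
          httpGet:
            path: /healthz
            port: 8080
        readinessProbe:        # is it ready to start serving?
          httpGet:
            path: /ready
            port: 8080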
Also, Kubernetes supports multi-container pods, which are mainly required for sidecar containers: log or data collectors, or proxies, for the main container. Another advantage of a multi-container pod is that the containers can be very tightly coupled, sharing the same data, the same network namespace and the same IPC namespace, which would not be possible when using containers directly without any wrapper around them.
The following is a very nice article giving a brief idea:
https://www.mirantis.com/blog/multi-container-pods-and-container-communication-in-kubernetes/

Not able to connect to a container(Created via Rest API) in Kubernetes

I am creating a docker container (using docker run) in a Kubernetes environment by invoking a REST API.
I have mounted the docker.sock of the host machine, and I am building an image and running that image from the REST API.
Now I need to connect to this container from some other container which is actually started by kubectl from a deployment.yml file.
But when I use kubectl describe pod (pod name), my container created using the REST API is not there. So where is this container running, and how can I connect to it from some other container?
Are you running the container in the same namespace as the namespace of the deployment.yml? One of the options to check that would be to run:
kubectl get pods --all-namespaces
If you are not able to find the docker container there, then I would suggest performing the steps below:
docker ps -a (verify the container is actually running)
Ensure that there are no permission errors while mounting docker.sock
If there are permission errors, escalate privileges to the appropriate level
To answer the second question, a connection between two containers should be possible by referencing the cluster DNS in the format below:
"<servicename>.<namespacename>.svc.cluster.local"
I would also ask you to detail the steps, code and errors (if there are any) so I can better answer the question.
You probably shouldn't be directly accessing the Docker API from anywhere in Kubernetes. Kubernetes will be totally unaware of anything you manually docker run (or equivalent), and as you note, normal administrative calls like kubectl get pods won't see it; the CPU and memory used by the container won't be known to the node interface, which could cause a node to become overutilized. The Kubernetes network environment is also pretty complicated, and unless you know the details of your specific CNI provider, it will be hard to make your container accessible at all, much less from a pod running on a different node.
A process running in a pod can access the Kubernetes API directly, though. That page notes that all of the official client libraries are aware of the conventions this uses. This means that you should be able to directly create a Job that launches your target pod, and a Service that connects to it, and get the normal Kubernetes features around this. (For example, servicename.namespacename.svc.cluster.local is a valid DNS name that reaches any Pod connected to the Service.)
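As a hedged sketch, this is the kind of Job manifest such a process could submit through the API instead of calling docker run (the name and image are placeholders):

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: dynamic-task
    spec:
      backoffLimit: 2          # retry a failed task at most twice
      template:
        spec:
          restartPolicy: Never # Jobs require Never or OnFailure
          containers:
          - name: task
            image: my-task:latest   # placeholder for the image you built

Submitted this way, the resulting pod shows up in kubectl get pods and can be fronted by a normal Service.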
You should also consider whether you actually need this sort of interface. For many applications, it will work just as well to deploy some sort of message-queue system (e.g., RabbitMQ) and then launch a pool of workers that connect to it. You can control the size of the worker pool using a Deployment. This is easier to develop since it avoids a hard dependency on Kubernetes, and easier to manage since it prevents a flood of dynamic jobs from overwhelming your cluster.
