2 containers using the same port in Kubernetes pod - docker

I have the same problem as the following:
Dual nginx in one Kubernetes pod
In my Kubernetes Deployment template, I have 2 containers that are using the same port 80.
I understand that containers within a Pod are actually under the same network namespace, which enables accessing another container in the Pod with localhost or 127.0.0.1.
This means two containers in the same Pod can't listen on the same port.
It's very easy to achieve this with the help of docker run or docker-compose, by using 8001:80 for the first container and 8002:80 for the second container.
Is there any similar or better way to do this in a Kubernetes Pod, without separating these 2 containers into different Pods?

Basically I totally agree with @David's and @Patric's comments, but I decided to add a few more things and expand them into an answer.
I have the same problem as the following: Dual nginx in one Kubernetes pod
And there is already a pretty good answer for that problem in the mentioned thread. From a technical point of view it provides a ready solution to your particular use case; however, it doesn't question the idea itself.
It's very easy to achieve this with the help of docker run or
docker-compose, by using 8001:80 for the first container and 8002:80
for the second container.
It's also very easy to achieve in Kubernetes. Simply put both containers in different Pods and you will not have to manipulate the nginx config to make it listen on a port other than 80. Note that the two docker containers you mentioned don't share a single network namespace, and that's why they can both listen on port 80, which is then mapped to different ports on the host system (8001 and 8002). This is not the case with Kubernetes Pods. Read more about microservices architecture, and especially how it is implemented on k8s, and you'll notice that placing a few containers in a single Pod is really a rare use case and definitely should not be applied in a case like yours. There should be a good reason to put 2 or more containers in a single Pod. Usually the second container has some complementary function to the main one.
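For illustration, a minimal sketch of that approach (Pod names and the nginx image tag are illustrative): two separate Pods, each free to listen on port 80 because they do not share a network namespace:

apiVersion: v1
kind: Pod
metadata:
  name: web-a            # illustrative name
  labels:
    app: web-a
spec:
  containers:
    - name: nginx
      image: nginx:1.25  # illustrative tag
      ports:
        - containerPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: web-b
  labels:
    app: web-b
spec:
  containers:
    - name: nginx
      image: nginx:1.25
      ports:
        - containerPort: 80

In practice you would usually wrap each of these in its own Deployment and expose it with a Service instead of creating bare Pods.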
There are 3 design patterns for multi-container Pods, commonly used in Kubernetes: sidecar, ambassador and adapter. Very often all of them are simply referred to as sidecar containers.
Note that 2 or more containers coupled together in a single Pod in all of the above-mentioned use cases have totally different functions. Even when you put more than one container in a single Pod (which is not the most common scenario), in practice it is never a container of the same type (like two nginx servers listening on different ports, as in your case). They should be complementary, and there should be a good reason why they are put together, why they should start and shut down at the same time, and why they should share the same network namespace. A sidecar container with a monitoring agent running in it has a complementary function to the main container, which can be e.g. an nginx webserver. You can read more about container design patterns in general in this article.
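As a hedged sketch of the sidecar idea (image tags, the log path and the tail command are assumptions), an nginx container paired with a simple log-tailing helper that shares an emptyDir volume might look like this:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-sidecar
spec:
  volumes:
    - name: logs               # shared scratch volume for the log files
      emptyDir: {}
  containers:
    - name: nginx              # main container
      image: nginx:1.25
      ports:
        - containerPort: 80
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-shipper        # complementary sidecar: tails the access log
      image: busybox:1.36
      command: ["sh", "-c", "touch /var/log/nginx/access.log && tail -f /var/log/nginx/access.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx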
I don't have a very firm use case, because I'm still
very new to Kubernetes and the concept of a cluster.
So definitely don't go this way if you don't have a particular reason for such an architecture.
My initial planning of the cluster is putting all my containers of the system
into a pod. So that I can replicate this pod as many as I want.
You don't need a single Pod for that. You can have a lot of ReplicaSets in your cluster (usually managed by Deployments), each of them taking care of running the declared number of replicas of a Pod of a certain kind.
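A minimal sketch of such a Deployment (name, image and replica count are illustrative); the Deployment manages a ReplicaSet, which in turn keeps three replicas of the Pod running:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # illustrative name
spec:
  replicas: 3                # the ReplicaSet keeps 3 Pods of this kind running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80

Scaling is then just a matter of changing replicas, e.g. kubectl scale deployment web --replicas=5.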
But according to all the feedback that I have now, it seems like I going
in the wrong direction.
Yes, this is definitely the wrong direction, but that has actually already been said. I'd only like to highlight why exactly this direction is wrong. Such an approach is totally against the idea of microservices architecture, which is what Kubernetes is designed for. Putting all your infrastructure in a single huge Pod and binding all your containers tightly together makes no sense. Remember that a Pod is the smallest deployable unit in Kubernetes, and when one of its containers crashes, the whole Pod crashes. There is no way you can manually restart just one container in a Pod.
I'll review my structure and try with the
suggests you all provided. Thank you, everyone! =)
This is a good idea :)

I believe what you need to do is specify a different containerPort for each container in the Pod. Kubernetes allows you to specify the port each container exposes using this parameter in the Pod definition file. You can then create Services pointing to the same Pod but to different ports.
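A hedged sketch of that idea (my-api is a hypothetical image assumed to listen on 8080): one Pod with two containers exposing different ports, plus two Services targeting the same Pod on different ports:

apiVersion: v1
kind: Pod
metadata:
  name: two-apps
  labels:
    app: two-apps
spec:
  containers:
    - name: web
      image: nginx:1.25        # listens on 80
      ports:
        - containerPort: 80
    - name: api
      image: my-api:latest     # hypothetical image, assumed to listen on 8080
      ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: two-apps
  ports:
    - port: 80
      targetPort: 80           # traffic for this Service goes to the web container
---
apiVersion: v1
kind: Service
metadata:
  name: api-svc
spec:
  selector:
    app: two-apps
  ports:
    - port: 8080
      targetPort: 8080         # traffic for this Service goes to the api container

Note that the two containers still have to listen on different ports inside the Pod, because they share its network namespace.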

Related

Why POD is the fundamental unit of deployment instead of containers?

In Kubernetes a Pod is considered a single unit of deployment which might have one or more containers, so if we scale, all of the containers in the Pod are scaled together.
If the Pod has only one container it's easier to scale that particular Pod, so what's the purpose of packaging one or more containers inside a Pod?
From the documentation:
Pods can be used to host vertically integrated application stacks (e.g. LAMP), but their primary motivation is to support co-located, co-managed helper programs
The most common example of this is sidecar containers which contain helper applications like log shipping utilities.
A deeper dive can be found here
The reason for using a Pod rather than a container directly is that Kubernetes requires more information to orchestrate containers, such as a restart policy, a liveness probe and a readiness probe. A liveness probe checks whether the container inside the Pod is still alive, the restart policy defines what to do with a container when it fails, and a readiness probe defines when the container is ready to start serving traffic.
So, instead of adding those properties to the container itself, Kubernetes decided to write a wrapper around containers carrying all the necessary additional information.
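To make that concrete, a minimal sketch of a Pod carrying this extra orchestration information (paths, ports and timings are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: probed-web
spec:
  restartPolicy: Always        # what to do when a container exits or fails
  containers:
    - name: nginx
      image: nginx:1.25
      ports:
        - containerPort: 80
      livenessProbe:           # is the container still alive? restart it if not
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
      readinessProbe:          # is the container ready to serve traffic?
        httpGet:
          path: /
          port: 80
        periodSeconds: 5

The kubelet uses the probe results to restart the container or to take it out of Service endpoints, which a bare container runtime would not do on its own.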
Also, Kubernetes supports multi-container Pods, which are mainly required for sidecar containers, typically log or data collectors or proxies for the main container. Another advantage of a multi-container Pod is that very tightly coupled application containers can run together, sharing the same data, the same network namespace and the same IPC namespace, which would not be possible if you used containers directly without any wrapper around them.
The following is a very nice article to give you a brief idea:
https://www.mirantis.com/blog/multi-container-pods-and-container-communication-in-kubernetes/

Start a container from another one in a pod in Kubernetes

I have a container that performs some actions over some data. This container is heavy in memory and CPU resources, and I want it to start only on demand.
As an example, with docker-compose, out of Kubernetes, I use it this way:
docker-compose run heavycontainer perform.sh some-action
The container performs the action and ends.
In Kubernetes I want this container to perform the actions it provides, but in response to some messages (AMQP messages, created by other containers). I have a container that listens for messages. My first thought was a pod with two containers: listener and performer. But I don't know whether it is possible to start one container from another.
Init or sidecar containers don't seem to be a solution, and I'd prefer to avoid creating a custom image to inject the listener into the performer.
Is there any way to achieve this?
I hope this helps you.
If the Pod needs to run regularly, use a CronJob.
If the Pod needs to run on demand, use a Job (see the sketch below).
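A minimal sketch of both options, reusing the heavycontainer image and perform.sh command from the question (the image name, schedule and command wrapping are assumptions):

apiVersion: batch/v1
kind: Job                      # runs the container once, on demand, to completion
metadata:
  name: heavy-task
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: performer
          image: heavycontainer:latest             # illustrative image name from the question
          command: ["perform.sh", "some-action"]   # assumes perform.sh is on PATH in the image
---
apiVersion: batch/v1
kind: CronJob                  # creates such a Job on a schedule
metadata:
  name: heavy-task-nightly
spec:
  schedule: "0 2 * * *"        # every night at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: performer
              image: heavycontainer:latest
              command: ["perform.sh", "some-action"]

The Job runs the container once to completion; the CronJob creates such a Job on the given schedule.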
Firstly, I apologize for my earlier wrong answer.
I understand what you want now, and I think it is possible to run multiple containers in the same Pod. Patterns for Application Augmentation on OpenShift may be helpful for you.
PS: OpenShift is enterprise Kubernetes, so you can think of OpenShift as being much the same as Kubernetes.
You can use Horizontal Pod Autoscaling (https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) to orchestrate your heavycontainer, based on the number of relevant messages in the queue.
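As a sketch, a CPU-based HorizontalPodAutoscaler targeting a hypothetical heavy-worker Deployment could look like this; scaling on queue length instead of CPU additionally requires an external/custom metrics adapter (e.g. Prometheus Adapter or KEDA):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: heavy-worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: heavy-worker         # hypothetical Deployment running the heavy container
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu              # simplest built-in metric; queue length needs a metrics adapter
        target:
          type: Utilization
          averageUtilization: 70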

Kubernetes: Can a single K8s POD host 2 or more K8s services

I am new to kubernetes, just wanted to know if my question is valid.
I had a question if a single POD can host 2 or more services.
And if it can host multiple services, how can it differentiate the traffic between the services?
Does it do a port mapping?
Please let me know.
You can add multiple containers to the same pod, but that's only recommended if the containers are tightly coupled. For example, if you have a web server and a helper process that has to sit right next to it (such as a log shipper or a local cache), you would likely want them in the same pod.
If the services are distinct, you would likely want to put them in different pods but deploy them to the same cluster of nodes. Then you can have a LoadBalancer Service (or an Ingress) on the cluster that routes different ports or paths to the right Pod. In this way the services can be scaled and managed separately (and without worrying about port conflicts), but they still draw from the same pool of resources.
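For the path-based variant, the routing is typically done with an Ingress in front of per-workload Services; a minimal sketch (web-svc and api-svc are illustrative Service names) might look like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: split-by-path
spec:
  rules:
    - http:
        paths:
          - path: /web
            pathType: Prefix
            backend:
              service:
                name: web-svc      # illustrative Service in front of the first workload
                port:
                  number: 80
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-svc      # illustrative Service in front of the second workload
                port:
                  number: 8080

Each backend Service then selects its own set of Pods, so the two workloads scale independently.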
a single POD can host 2 or more services?
I am assuming that by "services" you mean Docker containers. If that's not the case, please let me know.
Yes, a single Pod can host more than one container.
how can it differentiate the traffic between the services.
That's the catch. You need to deal with it using ports: expose one service on one port and another on a different port. (If you want more than one service/container to be exposed from the same Pod, rethink your design; it may be an ideal candidate for separate Pods.)
That being said, now let's look at the best practices.
When should I use 2 Docker containers in a pod?
When both of your services (Docker containers) are tightly coupled and, whenever you scale one service, you need to scale the other along with it. (Trust me, this is a very rare scenario.) Usually these are referred to as sidecars.
When should I use a different pod for each service?
When you want to scale each of them independently of the other.
Examples
Microservice -- database
Microservice -- Redis cache
Edge service -- Microservice

Kubernetes Deployments, Pod and Container concepts

I have recently started getting familiar with Kubernetes. However, while I do get the concept, I have some questions that I am unable to answer clearly from the Kubernetes concepts and documentation, and some understandings that I'd like to confirm.
A Deployment is a group of one or more container images (Docker etc.) that is deployed within a Pod, and through the Kubernetes Deployment Controller such deployments are monitored and created, updated, or deleted.
A Pod is a group of one or more containers. Are those containers from the same Deployment, or can they be from multiple Deployments?
"A pod contains one or more application containers which are relatively tightly coupled." Are there any clear criteria on when to deploy containers within the same Pod rather than in separate Pods?
"Pods are the smallest deployable units of computing that can be created and managed in Kubernetes" - Pods, Kubernetes documentation. Is that to say that the Kubernetes API is unable to monitor and manage containers (at least directly)?
Appreciate your input.
Your question is actually too broad for Stack Overflow, but I'll quickly answer before this one gets closed.
Maybe it gets clearer when you look at the API documentation, which you could read like this:
A Deployment describes a specification of the desired behavior for the contained objects.
This is done within the spec field which is of type DeploymentSpec.
A DeploymentSpec defines what the related Pods should look like with a template, through the PodTemplateSpec.
The PodTemplateSpec then holds the PodSpec with all the required parameters, and that defines what the containers within this Pod should look like, through a Container definition.
This is not a punchy one-line statement, but maybe it makes it easier to see how things relate to each other.
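A minimal annotated sketch of that nesting (name and image are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:                          # DeploymentSpec
  replicas: 2
  selector:
    matchLabels:
      app: example
  template:                    # PodTemplateSpec
    metadata:
      labels:
        app: example
    spec:                      # PodSpec
      containers:              # list of Container definitions
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80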
Regarding the criteria for what's a good size and what's too big for a Pod or a Container: this is very opinion-loaded, and the best way to figure it out is to read through the opinions on the size of microservices.
To cover your last point: Kubernetes is able to monitor and manage containers, but the "user" is not able to schedule single containers. They have to be embedded in a Pod definition. You can of course access container status and details per container (e.g. through kubectl logs <pod> -c <container>) or through the metrics API.
I hope this helps a bit and doesn't add to the confusion.
A Pod is an abstraction provided by Kubernetes, and it corresponds to a group of containers which share a subset of namespaces, most importantly the network namespace. For instance, the applications running in these containers can interact the way applications in the same VM would interact, except for the fact that they don't share the same filesystem hierarchy.
Workloads are run in the form of Pods, but a Pod is a lower-level abstraction. Workloads are typically scheduled in terms of Kubernetes Deployments / Jobs / CronJobs / DaemonSets etc., which in turn create the Pods.

Kubernetes configuration to link containers

I am trying to see if there are any examples of creating a Kubernetes Pod
which starts 2-3 containers that are linked with each other, but I couldn't find any.
Has anybody tried linking containers using a Kubernetes config?
Containers in the same Pod share localhost, so you don't need to link containers; just use localhost:containerPort.
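A minimal sketch (image tags and the polling loop are illustrative): two containers in one Pod, where the second reaches the first over localhost:

apiVersion: v1
kind: Pod
metadata:
  name: linked-by-localhost
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80
    - name: client
      image: busybox:1.36
      # both containers share the Pod's network namespace,
      # so the web container is reachable on localhost:80
      command: ["sh", "-c", "while true; do wget -qO- http://localhost:80/ > /dev/null; sleep 10; done"]

No links or service discovery are needed for traffic that stays inside the Pod.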
You have to use a Kubernetes Service (proxy): https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md#how-do-they-work.
Have a look at how they work together: https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/guestbook
To be specific, there is no concept of "linking" similar to the way Docker does it. Every Service endpoint is a fully qualified domain name, and you just call it from one container to another; every label on a container that can be picked up by a Service endpoint can be used to direct network traffic. So you don't have to do ENV["$FOO_BAR_BAZ"] to get the correct IP; just call it directly (curl http://foo_bar_baz).
I think you are speaking about single-Pod, multiple-container configurations.
In Kubernetes the smallest single unit is a Pod, so multiple containers in a Pod share the same IPC and network namespaces, and processes in different containers can reach each other via localhost:<process port>.
A Pod is the basic unit of deployment in Kubernetes.
You can run one or more containers in a Pod. They will share the same network, i.e. localhost, so you don't need to specify a link URL; just use localhost:containerPort.

Resources