Kubernetes - Create two containers in one pod - docker

I'm developing an application that consists of two containers. I want to deploy them in Kubernetes into one Pod, since I want the two services to be behind one IP address. However, I'm having a hard time trying to connect the Kubernetes Services with the containers.
How could I write a deployment.yml file, so that when the user calls a x.x.x.x:port1, the request is forwarded to the first container, and when the x.x.x.x:port2 is called, the request is forwarded to the second container. How could I specify the Services?
Here's what I have until now:
apiVersion: v1
kind: Pod
metadata:
  name: my_app
spec:
  containers:
  - name: first_container
    image: first_image
  - name: second_container
    image: second_image
---
apiVersion: v1
kind: Service
...

In your containers section you need to define a containerPort for each:
containers:
- name: first_container
  image: first_image
  ports:
  - containerPort: port1
- name: second_container
  image: second_image
  ports:
  - containerPort: port2
And then in the ports section of the Service definition you need to point the targetPort of each Service port at those container ports, as in https://stackoverflow.com/a/42272547/9705485
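For reference, a minimal sketch of what that Service could look like. This assumes port1 and port2 are real numbers (8080 and 9090 are used here purely as placeholders) and that you also add a label such as app: my_app to the Pod's metadata so the selector can match it:
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my_app        # must match a label added to the Pod
  ports:
  - name: first
    port: 8080         # what the client dials: x.x.x.x:8080
    targetPort: 8080   # the first container's containerPort
  - name: second
    port: 9090         # what the client dials: x.x.x.x:9090
    targetPort: 9090   # the second container's containerPort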

Related

Multi Container ASP.NET Core app in a Kubernetes Pod gives error address already in use

I have an ASP.NET Core multi-container Docker app which I am now trying to host on a Kubernetes cluster on my local PC. Unfortunately, one container starts but the other gives an "address already in use" error.
The Deployment file is given below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: multi-container-dep
  labels:
    app: aspnet-core-multi-container-app
spec:
  replicas: 1
  selector:
    matchLabels:
      component: multi-container
  template:
    metadata:
      labels:
        component: multi-container
    spec:
      containers:
      - name: cmultiapp
        image: multiapp
        imagePullPolicy: Never
        ports:
        - containerPort: 80
      - name: cmultiapi
        image: multiapi
        imagePullPolicy: Never
        ports:
        - containerPort: 81
The full log of the failing container is:
Unable to start Kestrel.
System.IO.IOException: Failed to bind to address http://[::]:80: address already in use.
---> Microsoft.AspNetCore.Connections.AddressInUseException: Address already in use
---> System.Net.Sockets.SocketException (98): Address already in use
at System.Net.Sockets.Socket.UpdateStatusAfterSocketErrorAndThrowException(SocketError error, String callerName)
at System.Net.Sockets.Socket.DoBind(EndPoint endPointSnapshot, SocketAddress socketAddress)
at System.Net.Sockets.Socket.Bind(EndPoint localEP)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketConnectionListener.<Bind>g__BindSocket|13_0(<>c__DisplayClass13_0& )
--- End of inner exception stack trace ---
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketConnectionListener.<Bind>g__BindSocket|13_0(<>c__DisplayClass13_0& )
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketConnectionListener.Bind()
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketTransportFactory.BindAsync(EndPoint endpoint, CancellationToken cancellationToken)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Infrastructure.TransportManager.BindAsync(EndPoint endPoint, ConnectionDelegate connectionDelegate, EndpointConfig endpointConfig)
Note that I already tried assigning a different port to that container in the YAML file:
ports:
- containerPort: 81
But it does not seem to work. How can I fix it?
To quote this answer: https://stackoverflow.com/a/62057548/12201084
containerPort as part of the pod definition is for informational purposes only.
This means that setting containerPort does not have any influence on what port the application opens. You can even skip it and not set it at all.
If you want your application to open a specific port, you need to tell the application itself. That's usually done with flags, environment variables or config files. Setting a port in the pod/container YAML definition won't change a thing.
You have to remember that the k8s network model is different from Docker's and Docker Compose's model.
So why does the containerPort field exist if it doesn't do anything? you may ask.
Well, actually that's not completely true. Its main purpose is indeed informational/documentation, but it may also be used with services. You can name a port in the pod definition and then use this name to reference the port in the service definition YAML (this only applies to the targetPort field).
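As a quick illustration of that last point (a sketch using nginx, not taken from the quoted answer):
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
  - name: app
    image: nginx
    ports:
    - name: http           # the named containerPort...
      containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - port: 8080
    targetPort: http       # ...referenced by name instead of a number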
Check whether your images expose (or try to use) the same port; see the images' Dockerfiles.
I suppose both of your images may be trying to start something on the same port, so the first container starts fine, but when the second container starts it tries to use the same port and gets the bind: address already in use error.
You can check the logs for each of your containers (with kubectl logs <pod_name> <container_name>) and then it will be clear.
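For instance, assuming the image name multiapp from your Deployment, you can see which ports an image declares (its EXPOSEd ports) with:
docker image inspect multiapp --format '{{.Config.ExposedPorts}}'
Keep in mind this only shows what the Dockerfile exposes; the port the application actually binds to is decided by the application itself.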
I tried applying your YAML with one of my Docker images (which starts a server on port 8080), and after applying the YAML below I got the same error you did.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: multi-container-dep
  labels:
    app: aspnet-core-multi-container-app
spec:
  replicas: 1
  selector:
    matchLabels:
      component: multi-container
  template:
    metadata:
      labels:
        component: multi-container
    spec:
      containers:
      - name: cmultiapp
        image: shahincsejnu/httpapiserver:v1.0.5
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
      - name: cmultiapi
        image: shahincsejnu/httpapiserver:v1.0.5
        imagePullPolicy: Always
        ports:
        - containerPort: 8081
I checked the log of the first container, which ran successfully, with kubectl logs pod/multi-container-dep-854c78cfd4-7jd6n cmultiapp and the result is:
int port : :8080
start called
Then I checked the log of the second container, which crashed, with kubectl logs pod/multi-container-dep-854c78cfd4-7jd6n cmultiapi and saw the error below:
int port : :8080
start called
2021/03/20 13:49:24 listen tcp :8080: bind: address already in use # this is the reason of the error
So, I suppose your images also do something like that.
What works
The YAMLs below ran both containers successfully:
1.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: multi-container-dep
  labels:
    app: aspnet-core-multi-container-app
spec:
  replicas: 1
  selector:
    matchLabels:
      component: multi-container
  template:
    metadata:
      labels:
        component: multi-container
    spec:
      containers:
      - name: cmultiapp
        image: shahincsejnu/httpapiserver:v1.0.5
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
      - name: cmultiapi
        image: nginx
        imagePullPolicy: Always
        ports:
        - containerPort: 80
2.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: multi-container-dep
  labels:
    app: aspnet-core-multi-container-app
spec:
  replicas: 1
  selector:
    matchLabels:
      component: multi-container
  template:
    metadata:
      labels:
        component: multi-container
    spec:
      containers:
      - name: cmultiapp
        image: shahincsejnu/httpapiserver:v1.0.5
        imagePullPolicy: Always
        ports:
        - containerPort: 80
      - name: cmultiapi
        image: nginx
        imagePullPolicy: Always
        ports:
        - containerPort: 8081
If you have a Docker Compose YAML, please use the Kompose tool to convert it into Kubernetes objects.
Below is the documentation link
https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/
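For example, assuming your compose file is named docker-compose.yml, the conversion is a single command:
kompose convert -f docker-compose.yml
This writes the generated Kubernetes manifests (Deployments, Services, etc.) into the current directory.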
Please use kubectl explain to understand every field of your deployment YAML.
As can be seen in the explanation of ports below, the ports list in a deployment YAML is primarily informational.
Since both containers in the Pod share the same network namespace, the processes running inside the containers cannot bind to the same port.
kubectl explain deployment.spec.template.spec.containers.ports
KIND: Deployment
VERSION: apps/v1
RESOURCE: ports <[]Object>
DESCRIPTION:
List of ports to expose from the container. Exposing a port here gives the
system additional information about the network connections a container
uses, but is primarily informational. Not specifying a port here DOES NOT
prevent that port from being exposed. Any port which is listening on the
default "0.0.0.0" address inside a container will be accessible from the
network. Cannot be updated.
ContainerPort represents a network port in a single container.
FIELDS:
containerPort <integer> -required-
Number of port to expose on the pod's IP address. This must be a valid port
number, 0 < x < 65536.
hostIP <string>
What host IP to bind the external port to.
hostPort <integer>
Number of port to expose on the host. If specified, this must be a valid
port number, 0 < x < 65536. If HostNetwork is specified, this must match
ContainerPort. Most containers do not need this.
name <string>
If specified, this must be an IANA_SVC_NAME and unique within the pod. Each
named port in a pod must have a unique name. Name for the port that can be
referred to by services.
protocol <string>
Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP".
Please provide the Dockerfiles for both images and the Docker Compose files, docker run commands, or docker service create commands for the existing multi-container Docker application, for further help.
I solved this by using an environment variable to assign the ASP.NET Core URL to port 81.
- name: cmultiapi
  image: multiapi
  imagePullPolicy: Never
  ports:
  - containerPort: 81
  env:
  - name: ASPNETCORE_URLS
    value: http://+:81
I would also like to mention the url where I got the necessary help. Link is here.

Need help running two OS containers in a single pod on kubernetes

I'm still new to Kubernetes. I'm trying to run an Ubuntu container and a Kali Linux container within the same pod on Kubernetes. I also need those two containers to be accessible from a browser. My approach right now is to use Ubuntu and Kali Docker images with VNC installed.
Here are the docker image that I'm trying to use:
https://hub.docker.com/r/consol/ubuntu-xfce-vnc (Ubuntu image)
https://hub.docker.com/r/jgamblin/kalibrowser-lxde (Kali image)
Here is the YAML file for creating the pod:
apiVersion: v1
kind: Pod
metadata:
  name: training
  labels:
    app: training
spec:
  containers:
  - name: kali
    image: jgamblin/kalibrowser-lxde
    ports:
    - containerPort: 6080
  - name: centos
    image: consol/centos-xfce-vnc
    ports:
    - containerPort: 5901
Here's the problem: when I run the pod with those two containers, only the Kali container has an issue running, causing it to keep restarting.
May I know how I can achieve this?
You can add a simple sleep command to be executed inside the container to keep it running, for example:
apiVersion: v1
kind: Pod
metadata:
  name: training
  labels:
    app: training
spec:
  containers:
  - name: kali
    image: jgamblin/kalibrowser-lxde
    ports:
    - containerPort: 6080
    command: ["bash", "-c"]
    args: ["sleep 500"]
  - name: centos
    image: consol/centos-xfce-vnc
    ports:
    - containerPort: 5901
This way the pod will be in a Running state:
kubectl get pod
NAME       READY   STATUS    RESTARTS   AGE
training   2/2     Running   0          81s
The jgamblin/kalibrowser-lxde image requires tty (display) allocation.
You can see an example command on its Docker Hub page.
Then you should allow it in your Pod manifest:
apiVersion: v1
kind: Pod
metadata:
  name: training
  labels:
    app: training
spec:
  containers:
  - name: kali
    image: jgamblin/kalibrowser-lxde
    ports:
    - containerPort: 6080
    tty: true
  - name: centos
    image: consol/centos-xfce-vnc
    ports:
    - containerPort: 5901
Put tty: true in the kali container declaration.

How to combine multiple images (redis + memcache + python) into a single container in a pod

How do I combine multiple images (redis + memcache + python) into a single container in a pod using a kubectl command?
Do we have any other option instead of creating a custom Docker image with all the required images?
Instead of this, you could run all three containers in a single Kubernetes pod, which is what I would recommend if they are tightly coupled.
It's a good idea to keep each container as small as it needs to be to do one thing.
Just add more containers to your pod spec...
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: app
    image: python
    ports:
    - containerPort: 80
  - name: key-value-store
    image: redis
    ports:
    - containerPort: 6379
  - name: cache
    image: memcached
    ports:
    - containerPort: 9001
      name: or-whatever-port-memcached-uses
I wouldn't use a pod directly, but the same idea applies to pods created by deployments, daemonsets, etc.
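To try this out, save the manifest to a file (example-pod.yaml is just a placeholder name) and create it with:
kubectl apply -f example-pod.yaml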

How do I set these docker-compose ports in a kubernetes yaml file?

Given the following ports defined in a docker-compose.yml file, how do I do the equivalent in a kubernetes yml file?
docker-compose.yml
seq.logging:
  image: datalust/seq
  networks:
    - backend
  container_name: seq.logging
  environment:
    - ACCEPT_EULA=Y
  ports:
    - "5300:80" # UI
    - "5301:5341" # Data ingest
kubernetes.yml
---
apiVersion: v1
kind: Pod
metadata:
  name: backend-infrastructure
  labels:
    system: backend
    app: infrastructure
spec:
  containers:
  - name: seq-logging
    image: datalust/seq
    # ports: ?????????????????????????????????????
    #   - containerPort: "5300:80" # UI
    #   - containerPort: "5301:5341" # Data ingest
    env:
    - name: ACCEPT_EULA
      value: "Y"
You do not expose a port using the Pod/Deployment YAML.
Services are the way to do it. You can either use multiple Services on top of your Pod/Deployment, but that will result in multiple IP addresses, or you can name each port and then create a multi-port Service definition.
In your case it should look somewhat like this (note this is just a quickly written example). Also:
When using multiple ports you must give all of your ports names, so
that endpoints can be disambiguated.
apiVersion: v1
kind: Pod
metadata:
  name: backend-infrastructure
  labels:
    system: backend
    app: infrastructure
spec:
  containers:
  - name: seq-logging
    image: datalust/seq
    ports:
    - containerPort: 80 # UI
      name: ui
    - containerPort: 5341 # Data ingest
      name: data-ingest
    env:
    - name: ACCEPT_EULA
      value: "Y"
---
apiVersion: v1
kind: Service
metadata:
  name: seq-logging-service
spec:
  type: # service type
  selector:
    app: infrastructure # match the Pod's labels above
  ports:
  - name: ui
    port: 5300
    targetPort: 80
  - name: data-ingest
    port: 5301
    targetPort: 5341
Some more resources:
- Docs about connecting applications with services.
- Example YAML from the above, featuring a deployment with a multi-port container and the corresponding service.
Update:
containerPort
List of ports to expose from the container. Exposing a port here gives
the system additional information about the network connections a
container uses, but is primarily informational. Not specifying a port
here DOES NOT prevent that port from being exposed. Any port which is
listening on the default "0.0.0.0" address inside a container will be
accessible from the network. Cannot be updated.
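Once both objects are applied, you can check how the Service resolved the named ports with:
kubectl describe service seq-logging-service
The output lists each port (ui and data-ingest) with its TargetPort and the Pod endpoint it maps to.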

kubernetes - Use ports of other containers on the same pod for setting environment variables

I want to communicate with containers which are in the same pod programmatically.
So, I decided to set the ports of the auxiliary containers (bar1-container and bar2-container in this example) as environment variables of the primary container (i.e. foo-container).
However, I could not figure out how the exposed ports of the auxiliary containers can be passed implicitly in the .yaml file for my deployment configuration:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: test-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: web
        tier: frontend
    spec:
      containers:
      # Only container to be exposed outside the pod
      - name: foo-container
        image: foo
        env:
        - name: BAR_1_PORT
          # HOW TO GET PORT HERE IMPLICITLY ???
          value: XXXX
        - name: BAR_2_PORT
          # HOW TO GET PORT HERE IMPLICITLY ???
          value: XXXX
        ports:
        - name: https
          containerPort: 443
        - name: http
          containerPort: 80
      # SubContainer 1
      - name: bar1-container
        image: bar1
      # SubContainer 2
      - name: bar2-container
        image: bar2
I wonder if there is a way to use the ports like ${some-variable-or-so-here} instead of 5300, 80, 9000 or whichever port is exposed from the container.
P.S.: I deliberately did not specify ports or containerPort values for the auxiliary containers in the YAML configuration above, as they will not be exposed outside the pod.
You are mixing containers, pods and services here. If you have multiple containers within the same pod, you need no service at all to communicate between them, nor do you need to point to a hostname, as they share the same network namespace. All you need to do is connect to localhost on the port that your particular service is listening on. For example, an nginx container (listening on 80) can connect to a second container's php-fpm service via localhost:9000.
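As a minimal sketch of that idea, assuming bar1 listens on 9000 and bar2 on 9001 (hypothetical values; use whatever ports those images actually bind to), the containers section of the deployment could simply hard-code them:
containers:
- name: foo-container
  image: foo
  env:
  - name: BAR_1_PORT
    value: "9000"    # bar1-container is reachable at localhost:9000
  - name: BAR_2_PORT
    value: "9001"    # bar2-container is reachable at localhost:9001
- name: bar1-container
  image: bar1
- name: bar2-container
  image: bar2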
