How to configure multiple services/containers in Kubernetes? - docker

I am new to Docker and Kubernetes.
Technologies used:
.NET Core 2.2
ASP.NET Core WebAPI 2.2
Docker for Windows (Edge) with Kubernetes support enabled
Code
I have two services hosted in two Docker containers, container1 and container2.
Below is my deploy.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapi-dockerkube
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webapi-dockerkube
    spec:
      containers:
      - name: webapi-dockerkube
        image: "webapidocker:latest"
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /api/values
            port: 80
        readinessProbe:
          httpGet:
            path: /api/values
            port: 80
      - name: webapi-dockerkube2
        image: "webapidocker2:latest"
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /api/other/values
            port: 80
        readinessProbe:
          httpGet:
            path: /api/other/values
            port: 80
When I run the command:
kubectl create -f .\deploy.yaml
I get the status CrashLoopBackOff, but the same setup runs fine when only one container is configured.
When checking the logs I get the following error:
Error from server (BadRequest): a container name must be specified for pod webapi-dockerkube-8658586998-9f8mk, choose one of: [webapi-dockerkube webapi-dockerkube2]

You are running two containers in the same pod, and both bind to port 80. That is not possible within the same pod.
Think of a pod like a 'server': you can't have two processes bind to the same port.
Solution in your situation: use different ports inside the pod, or use separate pods. Your deployment does not appear to share any resources such as a filesystem, so it would be easy to split the containers into separate pods.
Note that it will not suffice to change the pod definition if you want to have both containers running in the same pod with different ports. The application in the container must bind to a different port as well.
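A minimal sketch of the separate-pods approach, reusing the images and probe paths from the question (readiness probes and the Services that would expose each Deployment are omitted for brevity):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapi-dockerkube
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webapi-dockerkube
    spec:
      containers:
      - name: webapi-dockerkube
        image: "webapidocker:latest"
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80   # each pod gets its own IP, so port 80 is free again
        livenessProbe:
          httpGet:
            path: /api/values
            port: 80
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapi-dockerkube2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webapi-dockerkube2
    spec:
      containers:
      - name: webapi-dockerkube2
        image: "webapidocker2:latest"
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /api/other/values
            port: 80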

apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: debian-container
    image: debian
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
    command: ["/bin/sh"]
    args: ["-c", "echo Hello from the debian container > /pod-data/index.html"]
The example above shows a multi-container pod; you can use it as a template.
You can also check the container logs using kubectl logs to find the reason for the CrashLoopBackOff.
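For example, with the pod name from the error message above (kubectl logs needs -c to pick a container in a multi-container pod; the pod name will differ on every rollout):

kubectl logs webapi-dockerkube-8658586998-9f8mk -c webapi-dockerkube
kubectl logs webapi-dockerkube-8658586998-9f8mk -c webapi-dockerkube2
kubectl describe pod webapi-dockerkube-8658586998-9f8mk   # events show failed probes, port conflicts, etc.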

Related

Apache server runs with docker run but kubernetes pod fails with CrashLoopBackOff

My application uses the apache2 web server. Due to restrictions in the Kubernetes cluster, I do not have root privileges inside the pod. So I have changed the default port of apache2 from 80 to 8080 to be able to run as a non-root user.
My problem is that once I build the docker image and run it in local it runs fine, but when I deploy using kubernetes in the cluster it keeps failing with:
Action '-D FOREGROUND' failed.
resulting in CrashLoopBackOff.
So, basically, the apache2 server is not able to run in the pod as a non-root user, but it runs fine locally with docker run.
Any help is appreciated.
I am attaching my deployment and service files for reference:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: &DeploymentName app
spec:
  replicas: 1
  selector:
    matchLabels: &appName
      app: *DeploymentName
  template:
    metadata:
      name: main
      labels:
        <<: *appName
    spec:
      securityContext:
        fsGroup: 2000
        runAsUser: 1000
        runAsGroup: 3000
      volumes:
      - name: var-lock
        emptyDir: {}
      containers:
      - name: *DeploymentName
        image: image:id
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: /etc/apache2/conf-available
          name: var-lock
        - mountPath: /var/lock/apache2
          name: var-lock
        - mountPath: /var/log/apache2
          name: var-lock
        - mountPath: /mnt/log/apache2
          name: var-lock
        readinessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 180
          periodSeconds: 60
        livenessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 300
          periodSeconds: 180
        imagePullPolicy: Always
        tty: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        envFrom:
        - configMapRef:
            name: *DeploymentName
        resources:
          limits:
            cpu: 1
            memory: 2Gi
          requests:
            cpu: 1
            memory: 2Gi
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: &hpaName app
spec:
  maxReplicas: 1
  minReplicas: 1
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: *hpaName
  targetCPUUtilizationPercentage: 60
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app
  name: app
spec:
  selector:
    app: app
  ports:
  - protocol: TCP
    name: http-web-port
    port: 80
    targetPort: 8080
  - protocol: TCP
    name: https-web-port
    port: 443
    targetPort: 443
CrashLoopBackOff is a common error in Kubernetes, indicating a pod that is constantly crashing in an endless loop.
The CrashLoopBackOff error can be caused by a variety of issues, including:
- Insufficient resources: a lack of resources prevents the container from loading
- Locked file: a file was already locked by another container
- Locked database: the database is being used and locked by other pods
- Failed reference: a reference to scripts or binaries that are not present in the container
- Setup error: an issue with the init-container setup in Kubernetes
- Config loading error: the server cannot load the configuration file
- Misconfiguration: a general file system misconfiguration
- Connection issues: DNS or kube-dns is not able to connect to a third-party service
- Deploying failed services: an attempt to deploy services/applications that have already failed (e.g. due to a lack of access to other services)
To fix the Kubernetes CrashLoopBackOff error, refer to this link and also check out this Stack Overflow post for more information.
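To narrow down which of these applies, inspect the pod's events and the logs of the previous (crashed) container instance; a minimal sketch, with the pod name left as a placeholder:

kubectl describe pod <apache-pod-name>       # events list failed probes, OOM kills, permission errors
kubectl logs <apache-pod-name> --previous    # output of the last crashed container, e.g. the apache2 startup error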

Kubernetes - minikube service connection timeout

I have a Docker environment with 3 containers: nginx, PHP with Laravel and a MySQL database. It works fine, and I'm now trying to learn Kubernetes.
I was hoping to create a deployment and a service just for the nginx container to make it simple to start with:
Here is the deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: toolkit-app-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      container: toolkit-server
  template:
    metadata:
      labels:
        container: toolkit-server
    spec:
      containers:
        - name: toolkit-server
          image: my/toolkit-server:test
          ports:
            - containerPort: 8000
      imagePullSecrets:
        - name: my-cred
Here is the service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    container: toolkit-server
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 8000
  type: LoadBalancer
And just in case it's needed, here is the nginx part of the docker-compose.yaml:
version: "3.8"
services:
server:
build:
context: .
dockerfile: dockerfiles/nginx.dockerfile
ports:
- "8000:80"
volumes:
- ./src:/var/www/html
- ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf:ro
container_name: toolkit-server
The deployment is created successfully and I can see that 1/1 pods are running.
However when I run minikube service backend, the URL I get just times out.
I was expecting to see some sort of nginx page, maybe an nginx error - but with a time out I'm not sure what the next step is.
I'm brand new to Kubernetes, so there's a good chance I've messed the ports up or something basic. Any help appreciated.
Edit:
As advised by #david-maze I changed the following in the deployment.yaml:
ports:
- containerPort: 80
And the following in service.yaml:
targetPort: 80
This gave me an nginx error page when viewed in the browser, as expected, but crucially no longer times out.
This is a community wiki answer posted for better visibility. Feel free to expand it.
As discussed in the comments the issue was due to wrong port configuration.
targetPort is the port to which the service sends requests and on which your pod is listening; the application in the container must listen on this port as well.
containerPort defines the port on which the app can be reached inside the container.
So in your use case the deployment should have:
ports:
- containerPort: 80
and the service:
targetPort: 80
With this change the connection will no longer time out.
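To verify, you can ask minikube for the service URL and request it directly; the --url flag prints the address instead of opening a browser:

minikube service backend --url
curl "$(minikube service backend --url)"   # should now return the nginx (error) page instead of timing out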

Need help running two OS containers in a single pod on kubernetes

I'm still new to Kubernetes. I'm trying to run an Ubuntu container and a Kali Linux container within the same pod on Kubernetes. I also need those two containers to be accessible from a browser. My approach right now is to use Ubuntu and Kali Docker images with VNC installed.
Here are the docker image that I'm trying to use:
https://hub.docker.com/r/consol/ubuntu-xfce-vnc (Ubuntu image)
https://hub.docker.com/r/jgamblin/kalibrowser-lxde (Kali image)
Here is the YAML file for creating the pod:
apiVersion: v1
kind: Pod
metadata:
  name: training
  labels:
    app: training
spec:
  containers:
  - name: kali
    image: jgamblin/kalibrowser-lxde
    ports:
    - containerPort: 6080
  - name: centos
    image: consol/centos-xfce-vnc
    ports:
    - containerPort: 5901
Here's the problem: when I run the pod with those two containers, only the Kali container has trouble running, which causes it to keep restarting.
May I know how I can achieve this?
You can add a simple sleep command to be executed inside the container to keep it running, for example:
apiVersion: v1
kind: Pod
metadata:
  name: training
  labels:
    app: training
spec:
  containers:
  - name: kali
    image: jgamblin/kalibrowser-lxde
    ports:
    - containerPort: 6080
    command: ["bash", "-c"]
    args: ["sleep 500"]
  - name: centos
    image: consol/centos-xfce-vnc
    ports:
    - containerPort: 5901
This way the pod will be in a Running state:
kubectl get pod
NAME       READY   STATUS    RESTARTS   AGE
training   2/2     Running   0          81s
The jgamblin/kalibrowser-lxde image requires tty (display) allocation.
You can see an example command on its Docker Hub page.
Then you should allow it in your Pod manifest:
apiVersion: v1
kind: Pod
metadata:
  name: training
  labels:
    app: training
spec:
  containers:
  - name: kali
    image: jgamblin/kalibrowser-lxde
    ports:
    - containerPort: 6080
    tty: true
  - name: centos
    image: consol/centos-xfce-vnc
    ports:
    - containerPort: 5901
Put tty: true in the kali container declaration.

How do I set these docker-compose ports in a kubernetes yaml file?

Given the following ports defined in a docker-compose.yml file, how do I do the equivalent in a kubernetes yml file?
docker-compose.yml
seq.logging:
  image: datalust/seq
  networks:
    - backend
  container_name: seq.logging
  environment:
    - ACCEPT_EULA=Y
  ports:
    - "5300:80" # UI
    - "5301:5341" # Data ingest
kubernetes.yml
---
apiVersion: v1
kind: Pod
metadata:
  name: backend-infrastructure
  labels:
    system: backend
    app: infrastructure
spec:
  containers:
    - name: seq-logging
      image: datalust/seq
      # ports: ?????????????????????????????????????
      # - containerPort: "5300:80" # UI
      # - containerPort: "5301:5341" # Data ingest
      env:
        - name: ACCEPT_EULA
          value: "Y"
You do not expose a port using Pod/deployment yaml.
Services are the way to do it. You can either use multiple Services on top of your Pod/Deployment, but this will result in multiple IP addresses, or you can name each port and create a single multi-port Service definition.
In your case it should look somewhat like this (note this is just a quickly written example). Also note that when using multiple ports you must give all of your ports names, so that endpoints can be disambiguated.
apiVersion: v1
kind: Pod
metadata:
  name: backend-infrastructure
  labels:
    system: backend
    app: infrastructure
spec:
  containers:
    - name: seq-logging
      image: datalust/seq
      ports:
        - containerPort: 80 # UI
          name: ui
        - containerPort: 5341 # Data ingest
          name: data-ingest
      env:
        - name: ACCEPT_EULA
          value: "Y"
---
apiVersion: v1
kind: Service
metadata:
  name: seq-logging-service
spec:
  type: #service type
  ports:
    - name: ui
      port: 5300
      targetPort: 80
    - name: data-ingest
      port: 5301
      targetPort: 5341
Some more resources:
- Docs about connecting applications with services.
- Example YAML from the above, featuring a deployment with a multi-port container and the corresponding service.
Update:
containerPort:
List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Cannot be updated.
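A quick way to check that both ports are reachable through the Service above is to port-forward it locally (the local ports 5300 and 5301 are just an illustrative choice):

kubectl port-forward service/seq-logging-service 5300:5300 5301:5301
# the Seq UI should then answer on http://localhost:5300, data ingest on localhost:5301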

How to pass docker container flags via kubernetes pod

Hi, I am running a Kubernetes cluster where I run a MailHog container.
But I need to run it with my own docker run parameter. If I ran it in Docker directly, I would use the command:
docker run mailhog/mailhog -auth-file=./auth.file
But I need to run it via a Kubernetes pod. My pod looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mailhog
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: mailhog
    spec:
      containers:
      - name: mailhog
        image: us.gcr.io/com/mailhog:1.0.0
        ports:
        - containerPort: 8025
How can I run the Docker container with the parameter -auth-file=./auth.file via Kubernetes? Thanks.
I tried adding the following under containers:
command: ["-auth-file", "/data/mailhog/auth.file"]
but then I get
Failed to start container with docker id 7565654 with error: Error response from daemon: Container command '-auth-file' not found or does not exist.
Thanks to #lang2, here is my deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mailhog
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: mailhog
    spec:
      volumes:
      - name: secrets-volume
        secret:
          secretName: mailhog-login
      containers:
      - name: mailhog
        image: us.gcr.io/com/mailhog:1.0.0
        resources:
          limits:
            cpu: 70m
            memory: 30Mi
          requests:
            cpu: 50m
            memory: 20Mi
        volumeMounts:
        - name: secrets-volume
          mountPath: /data/mailhog
          readOnly: true
        ports:
        - containerPort: 8025
        - containerPort: 1025
        args:
        - "-auth-file=/data/mailhog/auth.file"
In Kubernetes, command is the equivalent of Docker's ENTRYPOINT, and args corresponds to CMD. In your case, args should be used.
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#container-v1-core
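For reference, a minimal sketch of how the two fields map onto the Docker instructions within the container spec of the Deployment above; the commented command line is only a placeholder, not the actual MailHog entrypoint:

containers:
- name: mailhog
  image: us.gcr.io/com/mailhog:1.0.0
  # command would override the image's ENTRYPOINT; omit it to keep the
  # entrypoint that is baked into the image.
  # command: ["<entrypoint-binary>"]
  # args overrides the image's CMD and is appended after the entrypoint,
  # which is why passing only the flag works here.
  args:
  - "-auth-file=/data/mailhog/auth.file"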
You are on the right track. It's just that you also need to include the name of the binary in the command array as the first element. You can find that out by looking in the respective Dockerfile (CMD and/or ENTRYPOINT).
In this case:
command: ["Mailhog", "-auth-file", "/data/mailhog/auth.file"]
I needed something similar (my aim was passing the application profile to the app), and what I did was the following:
I set an environment variable in the Deployment section of the Kubernetes YAML file:
env:
- name: PROFILE
  value: "dev"
Then I used this environment variable in the Dockerfile as a command-line argument:
CMD java -jar -Dspring.profiles.active=${PROFILE} /opt/app/xyz-service-*.jar
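If needed, the same variable can later be changed without rebuilding the image, for example (the deployment name is a placeholder):

kubectl set env deployment/<deployment-name> PROFILE=prod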
