Need help running two OS containers in a single pod on kubernetes - docker

I'm still new to Kubernetes. I'm trying to run an Ubuntu container and a Kali Linux container within the same pod on Kubernetes. I also need those two containers to be accessible from a browser. My approach right now is to use Ubuntu and Kali Docker images with VNC installed.
Here are the Docker images that I'm trying to use:
https://hub.docker.com/r/consol/ubuntu-xfce-vnc (Ubuntu image)
https://hub.docker.com/r/jgamblin/kalibrowser-lxde (Kali image)
Here is the YAML file for creating the pod:
apiVersion: v1
kind: Pod
metadata:
  name: training
  labels:
    app: training
spec:
  containers:
  - name: kali
    image: jgamblin/kalibrowser-lxde
    ports:
    - containerPort: 6080
  - name: centos
    image: consol/centos-xfce-vnc
    ports:
    - containerPort: 5901
Here's the problem: when I run the pod with those 2 containers, only the Kali container has an issue, causing it to keep restarting.
May I know how I can achieve this?

You can add a simple sleep command to be executed inside the container to keep it running, for example:
apiVersion: v1
kind: Pod
metadata:
  name: training
  labels:
    app: training
spec:
  containers:
  - name: kali
    image: jgamblin/kalibrowser-lxde
    ports:
    - containerPort: 6080
    command: ["bash", "-c"]
    args: ["sleep 500"]
  - name: centos
    image: consol/centos-xfce-vnc
    ports:
    - containerPort: 5901
This way the pod will be in the Running state:
kubectl get pod
NAME       READY   STATUS    RESTARTS   AGE
training   2/2     Running   0          81s
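Note that sleep 500 keeps the container alive for only 500 seconds before it exits and restarts. If you want a placeholder process that never exits (assuming the image's sleep supports it, as GNU coreutils does), you could instead use:
    command: ["bash", "-c"]
    args: ["sleep infinity"]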

The jgamblin/kalibrowser-lxde image requires a tty (display) allocation.
You can see an example command on its Docker Hub page.
You should therefore allow it in your Pod manifest:
apiVersion: v1
kind: Pod
metadata:
  name: training
  labels:
    app: training
spec:
  containers:
  - name: kali
    image: jgamblin/kalibrowser-lxde
    ports:
    - containerPort: 6080
    tty: true
  - name: centos
    image: consol/centos-xfce-vnc
    ports:
    - containerPort: 5901
Put tty: true in the kali container declaration.
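For comparison, tty: true is roughly the Pod-spec counterpart of Docker's -t flag, so the plain Docker equivalent would be something like the following (the exact command shown on the Docker Hub page may differ):
docker run -t -p 6080:6080 jgamblin/kalibrowser-lxde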

Related

how to combine multiple images (redis+memcache+python) into 1 single container in a pod

How can I combine multiple images (redis + memcache + python) into a single container in a pod using a kubectl command?
Do we have any other option instead of creating a custom Docker image with all the required images?
Instead of this, you could run all three containers in a single Kubernetes pod, which is what I would recommend if they are tightly coupled.
It's a good idea to keep each container as small as it needs to be to do one thing.
Just add more containers to your pod spec...
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: app
    image: python
    ports:
    - containerPort: 80
  - name: key-value-store
    image: redis
    ports:
    - containerPort: 6379
  - name: cache
    image: memcached
    ports:
    - containerPort: 11211 # or whatever port memcached uses; 11211 is its default
I wouldn't use a pod directly, but the same idea applies to pods created by deployments, daemonsets, etc.
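As a rough sketch, the same three containers wrapped in a Deployment could look something like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: app
        image: python
        ports:
        - containerPort: 80
      - name: key-value-store
        image: redis
        ports:
        - containerPort: 6379
      - name: cache
        image: memcached
        ports:
        - containerPort: 11211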

How to add "-v /var/run/docker.sock:/var/run/docker.sock" when running container from kubernetes deployment yaml

I'm setting up a kubernetes deployment with an image that will execute docker commands (docker ps etc.).
My yaml looks like the following:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: discovery
  namespace: kube-system
  labels:
    discovery-app: kubernetes-discovery
spec:
  selector:
    matchLabels:
      discovery-app: kubernetes-discovery
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        discovery-app: kubernetes-discovery
    spec:
      containers:
      - image: docker:dind
        name: discover
        ports:
        - containerPort: 8080
          name: my-awesome-port
      imagePullSecrets:
      - name: regcred3
      volumes:
      - name: some-volume
        emptyDir: {}
      serviceAccountName: kubernetes-discovery
Normally I would run the docker container as follows:
docker run -v /var/run/docker.sock:/var/run/docker.sock docker:dind
Now, Kubernetes YAML supports command and args but, for some reason, does not support options.
What is the right thing to do?
Perhaps I should configure a volume, but then, is it volumeMounts or just volumes?
I am new to Kubernetes, so it is important for me to do it the right way.
Thank you
You want to add the volume to the container.
spec:
  containers:
  - name: discover
    image: docker:dind
    volumeMounts:
    - name: dockersock
      mountPath: "/var/run/docker.sock"
  volumes:
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock
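To sanity-check the mount, you could exec into the running pod and talk to the node's Docker daemon through the mounted socket, roughly like this (the pod name is illustrative, and this assumes the docker CLI in the image uses the default unix socket):
kubectl exec -it <discovery-pod-name> -- docker ps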
It seems like a bad idea to interact directly with containers on any nodes in Kubernetes. The whole point of Kubernetes is to orchestrate. If you add containers outside of the Pod construct, then Kubernetes will not be aware of the processes running on the nodes. This will affect resource allocation.
It also needs to be said that directly working with containers bypasses security.

How to configure multiple services/containers in Kubernetes?

I am new to Docker and Kubernetes.
Technologies used:
Dotnet Core 2.2
Asp.NET Core WebAPI 2.2
Docker for Windows (Edge) with Kubernetes support enabled
Code
I have two services hosted in two Docker containers, container1 and container2.
Below is my deploy.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapi-dockerkube
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webapi-dockerkube
    spec:
      containers:
      - name: webapi-dockerkube
        image: "webapidocker:latest"
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /api/values
            port: 80
        readinessProbe:
          httpGet:
            path: /api/values
            port: 80
      - name: webapi-dockerkube2
        image: "webapidocker2:latest"
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /api/other/values
            port: 80
        readinessProbe:
          httpGet:
            path: /api/other/values
            port: 80
When I run the command:
kubectl create -f .\deploy.yaml
I am getting the status CrashLoopBackOff.
But the same runs fine when I have only one container configured.
When checking the logs, I am getting the following error:
Error from server (BadRequest): a container name must be specified for pod webapi-dockerkube-8658586998-9f8mk, choose one of: [webapi-dockerkube webapi-dockerkube2]
You are running two containers in the same pod which bind both to port 80. This is not possible within the same pod.
Think of a pod like a 'server' and you can't have two processes bind to the same port.
Solution in your situation: Use different ports inside the pod or use separate pods. From your deployment there seems to be no shared resources like filesystem, so it would be easy to split the containers to separate pods.
Note that it will not suffice to change the pod definition if you want to have both containers running in the same pod with different ports. The application in the container must bind to a different port as well.
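For example, since the services are ASP.NET Core, one way to make the second container listen on a different port (a sketch, assuming the app honours the standard ASPNETCORE_URLS environment variable, which Kestrel does by default) would be:
      - name: webapi-dockerkube2
        image: "webapidocker2:latest"
        imagePullPolicy: IfNotPresent
        env:
        - name: ASPNETCORE_URLS
          value: "http://+:8080"
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /api/other/values
            port: 8080
        readinessProbe:
          httpGet:
            path: /api/other/values
            port: 8080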
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: debian-container
    image: debian
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
    command: ["/bin/sh"]
    args: ["-c", "echo Hello from the debian container > /pod-data/index.html"]
Here I'm sharing an example of a multi-container pod; you can use this template.
You can also check the container logs using kubectl logs to find the reason for the CrashLoopBackOff.
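For the error in the question (a container name must be specified), pass the container name with -c, for example:
kubectl logs webapi-dockerkube-8658586998-9f8mk -c webapi-dockerkube2
kubectl describe pod webapi-dockerkube-8658586998-9f8mk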

How to pass docker container flags via kubernetes pod

Hi, I am running a Kubernetes cluster where I run a MailHog container.
But I need to run it with my own docker run parameter. If I ran it in Docker directly, I would use the command:
docker run mailhog/mailhog -auth-file=./auth.file
But I need to run it via a Kubernetes pod. My pod looks like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mailhog
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: mailhog
    spec:
      containers:
      - name: mailhog
        image: us.gcr.io/com/mailhog:1.0.0
        ports:
        - containerPort: 8025
How do I run the Docker container with the parameter -auth-file=./auth.file via Kubernetes? Thanks.
I tried adding the following under containers:
command: ["-auth-file", "/data/mailhog/auth.file"]
but then I get
Failed to start container with docker id 7565654 with error: Error response from daemon: Container command '-auth-file' not found or does not exist.
Thanks to #lang2, here is my deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mailhog
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: mailhog
    spec:
      volumes:
      - name: secrets-volume
        secret:
          secretName: mailhog-login
      containers:
      - name: mailhog
        image: us.gcr.io/com/mailhog:1.0.0
        resources:
          limits:
            cpu: 70m
            memory: 30Mi
          requests:
            cpu: 50m
            memory: 20Mi
        volumeMounts:
        - name: secrets-volume
          mountPath: /data/mailhog
          readOnly: true
        ports:
        - containerPort: 8025
        - containerPort: 1025
        args:
        - "-auth-file=/data/mailhog/auth.file"
In Kubernetes, command is the equivalent of ENTRYPOINT. In your case, args should be used.
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#container-v1-core
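In other words, the fields map roughly like this (a sketch using only args, so the image's own ENTRYPOINT is kept):
# Dockerfile ENTRYPOINT  ->  Kubernetes command
# Dockerfile CMD         ->  Kubernetes args
containers:
- name: mailhog
  image: us.gcr.io/com/mailhog:1.0.0
  args: ["-auth-file=/data/mailhog/auth.file"]  # passed as arguments to the image's ENTRYPOINT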
You are on the right track. It's just that you also need to include the name of the binary in the command array as the first element. You can find that out by looking in the respective Dockerfile (CMD and/or ENTRYPOINT).
In this case:
command: ["Mailhog", "-auth-file", "/data/mailhog/auth.file"]
I needed a similar thing (my aim was passing the application profile to the app) and what I did was the following:
Set an environment variable in the Deployment section of the Kubernetes yml file:
env:
- name: PROFILE
  value: "dev"
Then use this environment variable in the Dockerfile as a command-line argument:
CMD java -jar -Dspring.profiles.active=${PROFILE} /opt/app/xyz-service-*.jar

Is there any definitive guide on how to pass all the arguments to Docker containers while starting a container through kubernetes?

I want to start a docker container with Kubernetes with the parameter --oom-score-adj.
My kubernetes deployment script looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: xxx
spec:
  template:
    metadata:
      labels:
        app: xxx
    spec:
      volumes:
      - name: some-name
        hostPath:
          path: /some-path
      containers:
      - name: xxx-container
        image: xxx-image
        imagePullPolicy: "IfNotPresent"
        securityContext:
          privileged: true
        command:
        - /bin/sh
        - -c
        args:
        - ./rsome-command.sh
        volumeMounts:
        - name: some-name
          mountPath: /some-path
When I inspect the created container, I find --oom-score-adj is set to 1000. I want to set it to 0. Can anyone shed any light on how I can do it? Is there any definitive guide on passing such arguments?
You can't do this yet; it's one of the frustrating things still unresolved in Kubernetes.
There's a similar issue here around logging drivers. Unfortunately, you'll have to set the value on the Docker daemon.
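One note that may still help: the kubelet derives oom_score_adj from the pod's QoS class, so a container with no resource requests or limits (BestEffort) ends up at 1000, while a pod whose requests equal its limits (Guaranteed) gets a strongly negative score. Adding resources along these lines (the values are purely illustrative) should at least move the container out of the 1000 bucket:
containers:
- name: xxx-container
  image: xxx-image
  resources:
    requests:
      cpu: 500m
      memory: 512Mi
    limits:
      cpu: 500m
      memory: 512Mi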
