Add docker runtime arguments to helm chart - docker

I have a docker image that I run with specific runtime arguments. When I install a helm chart to deploy the kubernetes pod with this image, the output from the pod is different from when I use 'docker run.' I found that I should use the command and args parameters in the values.yml file and the templates/deployment directory, but I'm still not getting the desired output.
I've tried different variations from these links, but no luck:
How to pass docker run flags via kubernetes pod
How to pass dynamic arguments to a helm chart that runs a job
How to pass arguments to Docker container in Kubernetes or OpenShift through command line?
Here's the docker run command:
docker run -it --rm --network=host --ulimit rtprio=0 --cap-add=sys_nice --ipc=private --sysctl fs.msqueue.msg_max="10000" image_name:tag

Please try something like this:
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  hostNetwork: true
  containers:
  - name: main
    image: image_name:tag
    securityContext:
      capabilities:
        add: ["SYS_NICE"]
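For completeness, here is a hedged sketch of how the remaining docker run flags could be expressed. The kernel sysctl is normally named fs.mqueue.msg_max (the command above writes fs.msqueue); fs.mqueue.* sysctls are namespaced but not in Kubernetes' default safe set, so the kubelet usually has to allow them via --allowed-unsafe-sysctls. --ipc=private needs nothing, because a pod already gets its own IPC namespace unless hostIPC: true is set, and --ulimit has no pod-spec equivalent at all; ulimits are inherited from the container runtime on the node.
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  hostNetwork: true                  # --network=host
  securityContext:
    sysctls:                         # --sysctl fs.mqueue.msg_max=10000
    - name: fs.mqueue.msg_max        # typically needs --allowed-unsafe-sysctls on the kubelet
      value: "10000"
  containers:
  - name: main
    image: image_name:tag
    securityContext:
      capabilities:
        add: ["SYS_NICE"]            # --cap-add=sys_nice
  # --ipc=private: nothing to set, each pod has its own IPC namespace by default
  # --ulimit rtprio=0: no pod-spec field; ulimits come from the container runtime on the node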

Related

Docker image not able to stay alive in a Jenkins Kubernetes build pipeline

We want to deploy using ArgoCD from our Jenkinsfile (which is slightly not how this is intended to be done, but close enough), and after doing some experiments we want to try using the official container with the CLI, so we have added this snippet to our pipeline kubernetes yaml:
- name: argocdcli
  image: argoproj/argocli
  command:
  - argo
  args:
  - version
  tty: true
Unfortunately the usual way to keep these containers alive is to invoke cat in the container, which isn't there, so it fails miserably. Actually the only command in there is the "argo" command, which doesn't have a way to sleep indefinitely. (We are going to report this upstream so it will be fixed, but while we wait for that...)
My question therefore is, is there a way to indicate to Kubernetes that we know that this pod cannot keep itself up on its own, and therefore not tear it down immediately?
Unfortunately it's not possible since, as you stated, argo is the only command available in this image.
It can be confirmed here:
####################################################################################################
# argocli
####################################################################################################
FROM scratch as argocli
COPY --from=argo-build /go/src/github.com/argoproj/argo/dist/argo-linux-amd64 /bin/argo
ENTRYPOINT [ "argo" ]
As we can see from this output, running argo is all this container does:
$ kubectl run -i --tty --image argoproj/argocli argoproj1 --restart=Never
argo is the command line interface to Argo
Usage:
argo [flags]
argo [command]
...
You can optionally create your own image based on it and include sleep, so it'll be possible to keep it running, as in this example:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always

Kubernetes: supply parameters for docker

I want to run docker containers using the real-time scheduler. Is it possible to pass parameters in the pod/deployment file so that Kubernetes runs my containers as follows?
docker run -it --cpu-rt-runtime=950000 \
--ulimit rtprio=99 \
--cap-add=sys_nice \
debian:jessie
Unfortunately, not all Docker command line features have corresponding options in Kubernetes YAML.
While the sys_nice capability can be set using securityContext in the YAML, --cpu-rt-runtime=950000 cannot.
In the K8s API Pod documentation you can find all the configuration that can be passed into a container,
under PodSecurityContext v1 core.
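For the part that does map, here is a minimal sketch; the pod name is just illustrative, the image is the one from the question, and the remaining flags are noted as comments:
apiVersion: v1
kind: Pod
metadata:
  name: rt-example            # illustrative name, not from the question
spec:
  containers:
  - name: main
    image: debian:jessie
    securityContext:
      capabilities:
        add: ["SYS_NICE"]     # maps --cap-add=sys_nice
    # --cpu-rt-runtime and --ulimit rtprio have no pod-spec counterpart;
    # they would have to be configured on the node / container runtime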
Another thing is that I've tried to run a container with the specs that you provided, but I ran into an error:
docker: Error response from daemon: Your kernel does not support
cgroup cpu real-time runtime. See 'docker run --help'
This is related directly to a kernel configuration option named CONFIG_RT_GROUP_SCHED that is missing from your kernel image. Without it, --cpu-rt-runtime cannot be set for the container.
https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
You can use ConfigMaps to declare variables, expose the ConfigMap to the container as environment variables, and then pass those environment variables to the container's args.
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables
Create config map
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  SPECIAL_LEVEL: very
  SPECIAL_TYPE: charm
Create POD
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "env" ]
    envFrom:
    - configMapRef:
        name: special-config
  restartPolicy: Never
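If the goal is to pass those values as container arguments rather than just print the environment, Kubernetes expands $(VAR_NAME) references in command and args from the container's environment. A hedged sketch reusing the ConfigMap above (the pod name here is just illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod-args    # illustrative name
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/echo" ]
    # $(SPECIAL_LEVEL) and $(SPECIAL_TYPE) are expanded by the kubelet before the container starts
    args: [ "level=$(SPECIAL_LEVEL)", "type=$(SPECIAL_TYPE)" ]
    envFrom:
    - configMapRef:
        name: special-config
  restartPolicy: Never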
Not all of these options are available in K8s but you can find a workaround using Limit Ranges. This is explained here.

Kubernetes equivalent of 'docker run -it'

I have a docker image and I am using the following command to run it.
docker run -it -p 1976:1976 --name demo demo.docker.cloud.com/demo/runtime:latest
I want to run the same in Kubernetes. This is my current yaml file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
  labels:
    app: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: demo.docker.cloud.com/demo/runtime:latest
        ports:
        - containerPort: 1976
        imagePullPolicy: Never
This yaml file covers everything except the "-it" flag. I am not able to find its Kubernetes equivalent. Please help me out with this. Thanks.
I assume you are trying to connect a shell to your running container. Following the guide at https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/ - You would need the following commands. To apply your above configuration:
Create the pod: kubectl apply -f ./demo-deployment.yaml
Verify the Container is running: kubectl get pod demo-deployment
Get a shell to the running Container: kubectl exec -it demo-deployment -- /bin/bash
Looking at the Container definition in the API reference, the equivalent options are stdin: true and tty: true.
(None of the applications I work on have ever needed this; the documentation for stdin: talks about "reads from stdin in the container" and the typical sort of server-type processes you'd run in a Deployment don't read from stdin at all.)
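If you do want the container itself to behave like docker run -it (rather than exec-ing in afterwards), the containers section of the Deployment above could be extended as in the minimal sketch below, and you would then connect with kubectl attach -it <pod-name>:
      containers:
      - name: demo
        image: demo.docker.cloud.com/demo/runtime:latest
        ports:
        - containerPort: 1976
        imagePullPolicy: Never
        stdin: true   # -i: keep stdin open
        tty: true     # -t: allocate a TTY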
kubectl run is the closest match to docker run for the requested scenario.
Some examples from the Kubernetes documentation and their purpose:
kubectl run -i --tty busybox --image=busybox -- sh                    # Run pod as interactive shell
kubectl run nginx --image=nginx -n mynamespace                        # Run pod nginx in a specific namespace
kubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml   # Run pod nginx and write its spec into a file called pod.yaml
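Applied to the image in the question, a rough one-liner equivalent might be the following (note that --port only sets containerPort in the generated pod; it does not publish a host port the way docker run -p does):
kubectl run demo --image=demo.docker.cloud.com/demo/runtime:latest --port=1976 -i --tty --rm --restart=Never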

Unable to write file from docker run inside a kubernetes pod

I have a docker image that uses a volume to write files:
docker run --rm -v /home/dir:/out/ image:cli args
When I try to run this inside a pod, the container exits normally but no file is written.
I don't get it.
The container throws errors if it does not find the volume; for example, if I run it without the -v option it throws:
Unhandled Exception: System.IO.DirectoryNotFoundException: Could not find a part of the path '/out/file.txt'.
But I don't get any error from the container.
It finishes as if it wrote the files, but the files do not exist.
I'm quite new to Kubernetes, but this is driving me crazy.
Does Kubernetes prevent files from being written, or am I missing something obvious?
The whole Kubernetes context is managed by GCP composer-airflow, if it helps...
docker -v: Docker version 17.03.2-ce, build f5ec1e2
If you want to have that behavior in Kubernetes you can use a hostPath volume.
Essentially you specify it in your pod spec; the volume is then mounted from the node where your pod runs, and the file should still be there on the node after the pod exits.
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: image:cli
    name: test-container
    volumeMounts:
    - mountPath: /out
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /home/dir
      type: Directory
When I try to run this inside a pod, the container exits normally but no file is written
First of all, there is no need to run the docker run command inside the pod :). A spec file (yaml) should be written for the pod and kubernetes will run the container in the pod using docker for you. Ideally, you don't need to run docker commands when using kubernetes (unless you are debugging docker-related issues).
This link has useful kubectl commands for docker users.
If you are used to docker-compose, refer to Kompose to go from docker-compose to Kubernetes:
https://github.com/kubernetes/kompose
http://kompose.io
Some options for mounting a directory as a volume inside the container in Kubernetes:
hostPath
emptyDir
configMap

How to pass docker run parameter via kubernetes pod

Hi, I am running a Kubernetes cluster where I run a Logstash container.
But I need to run it with my own docker run parameters. If I ran it in Docker directly, I would use the command:
docker run --log-driver=gelf logstash -f /config-dir/logstash.conf
But I need to run it via Kubernetes pod. My pod looks like:
spec:
  containers:
  - name: logstash-logging
    image: "logstash:latest"
    command: ["logstash", "-f", "/config-dir/logstash.conf"]
    volumeMounts:
    - name: configs
      mountPath: /config-dir/logstash.conf
How can I run the Docker container with the parameter --log-driver=gelf via Kubernetes? Thanks.
Kubernetes does not expose docker-specific options such as --log-driver. A higher abstraction of logging behavior might be added in the future, but it is not in the current API yet. This issue was discussed in https://github.com/kubernetes/kubernetes/issues/15478, and the suggestion was to change the default logging driver for docker daemon in the per-node configuration/salt template.
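For illustration only, the per-node daemon configuration that suggestion refers to would typically be an /etc/docker/daemon.json along these lines (the gelf-address value below is a placeholder assumption, not something from the question):
{
  "log-driver": "gelf",
  "log-opts": {
    "gelf-address": "udp://graylog.example.com:12201"
  }
}
Note that the Docker daemon has to be restarted for this to take effect, and it changes the default log driver for every container on that node.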
