Is there a sneaky way to run a command before the entrypoint (in a k8s deployment manifest) without having to modify the dockerfile/image? [duplicate] - docker

This official document shows how to run a command from a YAML config file:
https://kubernetes.io/docs/tasks/configure-pod-container/
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:  # specification of the pod's contents
  restartPolicy: Never
  containers:
  - name: hello
    image: "ubuntu:14.04"
    env:
    - name: MESSAGE
      value: "hello world"
    command: ["/bin/sh","-c"]
    args: ["/bin/echo \"${MESSAGE}\""]
If I want to run more than one command, how do I do that?

command: ["/bin/sh","-c"]
args: ["command one; command two && command three"]
Explanation: The command ["/bin/sh", "-c"] says "run a shell, and execute the following instructions". The args are then passed as commands to the shell. In shell scripting, a semicolon separates commands, while && conditionally runs the following command only if the previous one succeeds. In the above example, it always runs command one followed by command two, and only runs command three if command two succeeded.
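To see the difference concretely, here is a quick sketch you can run in any POSIX shell:
/bin/sh -c 'false; echo "still runs"'       # prints "still runs": the semicolon ignores the failed exit status
/bin/sh -c 'false && echo "never runs"'     # prints nothing: && stops because false failed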
Alternative: In many cases, some of the commands you want to run are probably setting up the final command to run. In this case, building your own Dockerfile is the way to go. Look at the RUN directive in particular.
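As a rough sketch of that approach (the package name is hypothetical), the setup commands are baked into the image with RUN and only the final command remains as CMD:
FROM ubuntu:14.04
# setup steps run once, at image build time
RUN apt-get update && apt-get install -y some-package   # hypothetical dependency
# final command; a pod spec can still override this
CMD ["/bin/echo", "hello world"]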

My preference is to multiline the args; this is the simplest and easiest to read. Also, the script can be changed without affecting the image; you just need to restart the pod. For example, for a mysql dump, the container spec could be something like this:
containers:
- name: mysqldump
  image: mysql
  command: ["/bin/sh", "-c"]
  args:
  - echo starting;
    ls -la /backups;
    mysqldump --host=... -r /backups/file.sql db_name;
    ls -la /backups;
    echo done;
  volumeMounts:
  - ...
The reason this works is that YAML concatenates all the lines after the "-" into one, so sh runs one long string: "echo starting; ls... ; echo done;".
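For comparison, here is a sketch of the equivalent single-line form that sh ends up receiving (the multiline form above is easier to maintain):
args: ["echo starting; ls -la /backups; mysqldump --host=... -r /backups/file.sql db_name; ls -la /backups; echo done;"]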

If you're willing to use a Volume and a ConfigMap, you can mount ConfigMap data as a script, and then run that script:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  entrypoint.sh: |-
    #!/bin/bash
    echo "Do this"
    echo "Do that"
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: "ubuntu:14.04"
    command:
    - /bin/entrypoint.sh
    volumeMounts:
    - name: configmap-volume
      mountPath: /bin/entrypoint.sh
      readOnly: true
      subPath: entrypoint.sh
  volumes:
  - name: configmap-volume
    configMap:
      defaultMode: 0700
      name: my-configmap
This cleans up your pod spec a little and allows for more complex scripting.
$ kubectl logs my-pod
Do this
Do that

If you want to avoid concatenating all commands into a single command with ; or && you can also get true multi-line scripts using a heredoc:
command:
- sh
- "-c"
- |
  /bin/bash <<'EOF'
  # Normal script content possible here
  echo "Hello world"
  ls -l
  exit 123
  EOF
This is handy for running existing bash scripts, but has the downside of requiring both an inner and an outer shell instance for setting up the heredoc.
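If you know the image ships bash, one sketch that avoids the extra shell (assuming /bin/bash exists in the image) is to invoke bash directly with a literal block scalar:
command:
- /bin/bash
- -c
- |
  echo "Hello world"
  ls -l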

I am not sure whether the question is still active, but since I did not find the solution in the above answers, I decided to write it down.
I use the following approach:
readinessProbe:
  exec:
    command:
    - sh
    - -c
    - |
      command1
      command2 && command3
I know my example is related to readinessProbe, livenessProbe, etc., but I suspect the same applies to container commands. This provides flexibility, as it mirrors standard script writing in Bash.

IMHO the best option is to use YAML's native block scalars. Specifically in this case, the folded style block.
By invoking sh -c you can pass arguments to your container as commands, and if you want to separate them elegantly with newlines, you'd want to use the folded block style, so that YAML converts newlines to spaces, effectively concatenating the commands.
A full working example:
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  containers:
  - name: busy
    image: busybox:1.28
    command: ["/bin/sh", "-c"]
    args:
    - >
      command_1 &&
      command_2 &&
      ...
      command_n

Here is my successful run
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - command:
    - /bin/sh
    - -c
    - |
      echo "running below scripts"
      i=0;
      while true;
      do
        echo "$i: $(date)";
        i=$((i+1));
        sleep 1;
      done
    name: busybox
    image: busybox

Here is one more way to do it, with output logging.
apiVersion: v1
kind: Pod
metadata:
  labels:
    type: test
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - name: log-vol
      mountPath: /var/mylog
    command:
    - /bin/sh
    - -c
    - >
      i=0;
      while [ $i -lt 100 ];
      do
        echo "hello $i";
        echo "$i : $(date)" >> /var/mylog/1.log;
        echo "$(date)" >> /var/mylog/2.log;
        i=$((i+1));
        sleep 1;
      done
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:
  - name: log-vol
    emptyDir: {}

Here is another way to run multi-line commands.
apiVersion: batch/v1
kind: Job
metadata:
  name: multiline
spec:
  template:
    spec:
      containers:
      - command:
        - /bin/bash
        - -exc
        - |
          set +x
          echo "running below scripts"
          if [[ -f "if-condition.sh" ]]; then
            echo "Running if success"
          else
            echo "Running if failed"
          fi
        name: ubuntu
        image: ubuntu
      restartPolicy: Never
  backoffLimit: 1

Just to bring another possible option, secrets can be used as they are presented to the pod as volumes:
Secret example:
apiVersion: v1
kind: Secret
metadata:
  name: secret-script
type: Opaque
data:
  script_text: <<your script in b64>>
Yaml extract:
....
containers:
- name: container-name
  image: image-name
  command: ["/bin/bash", "/your_script.sh"]
  volumeMounts:
  - name: vsecret-script
    mountPath: /your_script.sh
    subPath: script_text
....
volumes:
- name: vsecret-script
  secret:
    secretName: secret-script
I know many will argue this is not what secrets must be used for, but it is an option.
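For completeness, one way to produce the base64 payload is to let kubectl encode a local file for you; this sketch assumes the script lives in a local file named your_script.sh:
# creates the Secret with a script_text key holding the base64-encoded file content
kubectl create secret generic secret-script --from-file=script_text=your_script.sh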

Related

Kubernetes /bin/bash with -c argument returns - : invalid option

I have this definition in my values.yaml which is supplied to job.yaml
command: ["/bin/bash"]
args: ["-c", "cd /opt/nonrtric/ric-common/ansible/; cat group_vars/all"]
However, after the pod initializes, I get this error:
/bin/bash: - : invalid option
If I try this syntax:
command: ["/bin/sh", "-c"]
args:
- >
  cd /opt/nonrtric/ric-common/ansible/ &&
  cat group_vars/all
I get this error: Error: failed to start container "ric-register-avro": Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: exec: "/bin/sh -c": stat /bin/sh -c: no such file or directory: unknown
Both sh and bash are supplied in the image, which is CentOS 7
job.yaml
---
apiVersion: batch/v1
kind: Job
metadata:
  name: ric-register-avro
spec:
  backoffLimit: 0
  template:
    spec:
      containers:
      - image: "{{ .Values.ric_register_avro_job.image }}"
        name: "{{ .Values.ric_register_avro_job.name }}"
        command: {{ .Values.ric_register_avro_job.containers.command }}
        args: {{ .Values.ric_register_avro_job.containers.args }}
        volumeMounts:
        - name: all-file
          mountPath: "/opt/nonrtric/ric-common/ansible/group_vars/"
          readOnly: true
          subPath: all
      volumes:
      - name: all-file
        configMap:
          name: ric-register-avro--configmap
      restartPolicy: Never
values.yaml
global:
  name: ric-register-avro
  namespace: foo-bar
ric_register_avro_job:
  name: ric-register-avro
  all_file:
    rest_api_url: http://10.230.227.13/foo
    auth_username: foo
    auth_password: bar
  backoffLimit: 0
  completions: 1
  image: 10.0.0.1:5000/5gc/ric-app
  containers:
    name: ric-register-avro
    command: ["/bin/bash"]
    args: ["-c cd /opt/nonrtric/ric-common/ansible/; cat group_vars/all"]
  restartPolicy: Never
In your Helm chart, you directly specify command: and args: using template syntax
command: {{ .Values.ric_register_avro_job.containers.command }}
args: {{ .Values.ric_register_avro_job.containers.args }}
However, the output of a {{ ... }} block is always a string. If the value you have inside the template is some other type, like a list, it will be converted to a string using some default Go rules, which aren't especially useful in a Kubernetes context.
Helm includes two lightly-documented conversion functions toJson and toYaml that can help here. Valid JSON is also valid YAML, so one easy approach is just to convert both parts to JSON
command: {{ toJson .Values.ric_register_avro_job.containers.command }}
args: {{ toJson .Values.ric_register_avro_job.containers.args }}
or, if you want it to look a little more like normal YAML,
command:
{{ .Values.ric_register_avro_job.containers.command | toYaml | indent 12 }}
args:
{{ .Values.ric_register_avro_job.containers.args | toYaml | indent 12 }}
or, for that matter, if you're passing a complete container description via Helm values, it could be enough to
containers:
- name: ric_register_avro_job
{{ .Values.ric_register_avro_job.containers | toYaml | indent 10 }}
In all of these cases, I've put the templating construct starting at the first column, but then used the indent function to correctly indent the YAML block. Double-check the indentation and adjust the indent parameter.
You can also double-check that what's coming out looks correct using helm template, using the same -f option(s) as when you install the chart.
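For example (the release name and chart path here are placeholders):
helm template my-release ./my-chart -f values.yaml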
(In practice, I might put many of the options you show directly into the chart template, rather than making them configurable as values. The container name, for example, doesn't need to be configurable, and I'd usually fix the command. For this very specific example you can also set the container's workingDir: rather than running cd inside a shell wrapper.)
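A sketch of that last suggestion, reusing the paths from the question:
containers:
- name: ric-register-avro
  image: "{{ .Values.ric_register_avro_job.image }}"
  workingDir: /opt/nonrtric/ric-common/ansible
  command: ["cat", "group_vars/all"]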
I use this:
command: ["/bin/sh"]
args: ["-c", "my-command"]
Trying this simple job, I had no issue:
apiVersion: batch/v1
kind: Job
metadata:
  name: foo
spec:
  template:
    spec:
      containers:
      - name: foo
        image: centos:7
        command: ["/bin/sh"]
        args: ["-c", "echo 'hello world'"]
      restartPolicy: Never

Run container in cronjob k8s

I have a Dockerfile finishing with an ENTRYPOINT:
ENTRYPOINT ["/bin/bash" , "-c", "source /app/env.sh && printenv && python3 /app/script.py"]
And a YAML k8s CronJob:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my_healthcheck
  namespace: default
  labels:
    app: my_healthcheck
spec:
  schedule: "30 8 * * 1-5"
  jobTemplate:
    spec:
      backoffLimit: 5
      template:
        spec:
          containers:
          - name: pythonscript
            image: xxx/pythonscript:latest
            imagePullPolicy: IfNotPresent
            command: [ <what do i put here> ]
          restartPolicy: OnFailure
Inside "command", what command do I need to put to run the container?
Thanks
The image ENTRYPOINT handles everything, so a command doesn't need to be supplied.
If you do provide a command, it will override the ENTRYPOINT.
command: [ '/bin/bash', '-c', 'echo "not running python"' ]
You can supply args if you want to append arguments to the command/ENTRYPOINT.
See the difference between the Kubernetes/Docker terms.
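For instance, if an image had ENTRYPOINT ["python3", "/app/script.py"] (a hypothetical exec-form entrypoint), supplying only args would append to it:
# the container then runs: python3 /app/script.py --verbose
args: ["--verbose"]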

Init container to wait for rabbit-mq readiness

I saw the example for docker healthcheck of RabbitMQ at docker-library/healthcheck.
I would like to apply a similar mechanism to my Kubernetes deployment to await on Rabbit deployment readiness. I'm doing a similar thing with MongoDB, using a container that busy-waits mongo with some ping command.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-1
  template:
    metadata:
      labels:
        app: app-1
    spec:
      initContainers:
      - name: wait-for-mongo
        image: gcr.io/app-1/tools/mongo-ping
      containers:
      - name: app-1-service
        image: gcr.io/app-1/service
        ...
However, when I tried to construct such an init container, I couldn't find any solution for how to query the health of Rabbit from outside its cluster.
The following works without any extra images/scripts, but requires you to enable the Management Plugin, e.g. by using the rabbitmq:3.8-management image instead of rabbitmq:3.8.
initContainers:
- name: check-rabbitmq-ready
  image: busybox
  command: [ 'sh', '-c',
    'until wget http://guest:guest@rabbitmq:15672/api/aliveness-test/%2F;
    do echo waiting for rabbitmq; sleep 2; done;' ]
Specifically, this is waiting until the HTTP Management API is available, and then checking that the default vhost is running healthily. The %2F refers to the default / vhost, which has to be urlencoded. If using your own vhost, enter that instead.
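For example, for a hypothetical vhost named myvhost, the check would become:
until wget http://guest:guest@rabbitmq:15672/api/aliveness-test/myvhost; do echo waiting for rabbitmq; sleep 2; done;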
Adapted from this example, as suggested by @Hanx:
Dockerfile
FROM python:3-alpine
ENV RABBIT_HOST="my-rabbit"
ENV RABBIT_VHOST="vhost"
ENV RABBIT_USERNAME="root"
RUN pip install pika
COPY check_rabbitmq_connection.py /check_rabbitmq_connection.py
RUN chmod +x /check_rabbitmq_connection.py
CMD ["sh", "-c", "python /check_rabbitmq_connection.py --host $RABBIT_HOST --username $RABBIT_USERNAME --password $RABBIT_PASSWORD --virtual_host $RABBIT_VHOST"]
check_rabbitmq_connection.py
#!/usr/bin/env python3
# Check connection to the RabbitMQ server
# Source: https://blog.sleeplessbeastie.eu/2017/07/10/how-to-check-connection-to-the-rabbitmq-message-broker/
import argparse
import time
import pika
# define and parse command-line options
parser = argparse.ArgumentParser(description='Check connection to RabbitMQ server')
parser.add_argument('--host', required=True, help='Define RabbitMQ server hostname')
parser.add_argument('--virtual_host', default='/', help='Define virtual host')
parser.add_argument('--port', type=int, default=5672, help='Define port (default: %(default)s)')
parser.add_argument('--username', default='guest', help='Define username (default: %(default)s)')
parser.add_argument('--password', default='guest', help='Define password (default: %(default)s)')
args = vars(parser.parse_args())
print(args)
# set amqp credentials
credentials = pika.PlainCredentials(args['username'], args['password'])
# set amqp connection parameters
parameters = pika.ConnectionParameters(host=args['host'], port=args['port'], virtual_host=args['virtual_host'], credentials=credentials)
# try to establish connection and check its status
while True:
    try:
        connection = pika.BlockingConnection(parameters)
        if connection.is_open:
            print('OK')
            connection.close()
            exit(0)
    except Exception as error:
        # keep retrying until the broker accepts a connection
        print('No connection yet:', error.__class__.__name__)
        time.sleep(5)
Build and run:
docker build -t rabbit-ping .
docker run --rm -it \
  --name rabbit-ping \
  --net=my-net \
  -e RABBIT_PASSWORD="<rabbit password>" \
  rabbit-ping
and in deployment.yaml:
apiVersion: apps/v1
kind: Deployment
...
spec:
  ...
  template:
    ...
    spec:
      initContainers:
      - name: wait-for-rabbit
        image: gcr.io/my-org/rabbit-ping
        env:
        - name: RABBIT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: rabbit
              key: rabbit-password
      containers:
      ...

Kubernetes Image goes into CrashLoopBackoff even if entry point is defined

I am trying to run an image using Kubernetes with the Dockerfile below:
FROM centos:6.9
COPY rpms/* /tmp/
RUN yum -y localinstall /tmp/*
ENTRYPOINT service test start && /bin/bash
Now when I try to deploy this image using pod.yml as shown below,
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: testpod
  name: testpod
spec:
  containers:
  - image: test:v0.2
    name: test
    imagePullPolicy: Always
    volumeMounts:
    - mountPath: /data
      name: testpod
  volumes:
  - name: testpod
    persistentVolumeClaim:
      claimName: testpod
Now when I try to create the pod, the image goes into a CrashLoopBackOff. How can I make the image wait in /bin/bash on Kubernetes? When I use docker run -d test:v0.2, it works fine and keeps running.
You need to attach a terminal to the running container. When starting a pod using kubectl run ... you can use -i --tty to do that. In the pod YAML file, you can add the following to the container spec to attach a tty:
stdin: true
tty: true
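A sketch of the question's container spec with those fields added:
containers:
- image: test:v0.2
  name: test
  imagePullPolicy: Always
  stdin: true
  tty: true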
You can use a command like tail -f /dev/null to keep your container running; this could be done inside your Dockerfile or in your Kubernetes YAML file.
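For example, a sketch that overrides the entrypoint from the pod spec (the service start is taken from the question's Dockerfile):
command: ["/bin/sh", "-c", "service test start && tail -f /dev/null"]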

Disable Transparent Huge Pages from Kubernetes

I deploy Redis container via Kubernetes and get the following warning:
WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled
Is it possible to disable THP via Kubernetes? Perhaps via init-containers?
Yes, with init-containers it's quite straightforward:
apiVersion: v1
kind: Pod
metadata:
  name: thp-test
spec:
  restartPolicy: Never
  terminationGracePeriodSeconds: 1
  volumes:
  - name: host-sys
    hostPath:
      path: /sys
  initContainers:
  - name: disable-thp
    image: busybox
    volumeMounts:
    - name: host-sys
      mountPath: /host-sys
    command: ["sh", "-c", "echo never >/host-sys/kernel/mm/transparent_hugepage/enabled"]
  containers:
  - name: busybox
    image: busybox
    command: ["cat", "/sys/kernel/mm/transparent_hugepage/enabled"]
Demo (notice that this is a system-wide setting):
$ ssh THATNODE cat /sys/kernel/mm/transparent_hugepage/enabled
always [madvise] never
$ kubectl create -f thp-test.yaml
pod "thp-test" created
$ kubectl logs thp-test
always madvise [never]
$ kubectl delete pod thp-test
pod "thp-test" deleted
$ ssh THATNODE cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
Ay,
I don't know if what I did is a good idea, but we needed to deactivate THP on all our K8s VMs for all our apps, so I used a DaemonSet instead of adding an init-container to all our stacks:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: thp-disable
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: thp-disable
  template:
    metadata:
      labels:
        name: thp-disable
    spec:
      restartPolicy: Always
      terminationGracePeriodSeconds: 1
      volumes:
      - name: host-sys
        hostPath:
          path: /sys
      initContainers:
      - name: disable-thp
        image: busybox
        volumeMounts:
        - name: host-sys
          mountPath: /host-sys
        command: ["sh", "-c", "echo never >/host-sys/kernel/mm/transparent_hugepage/enabled"]
      containers:
      - name: busybox
        image: busybox
        command: ["watch", "-n", "600", "cat", "/sys/kernel/mm/transparent_hugepage/enabled"]
I think it's a little dirty but it works.
