Difference between Docker ENTRYPOINT and Kubernetes container spec COMMAND? - docker

A Dockerfile has an ENTRYPOINT instruction, and a Kubernetes deployment YAML file has a command field in the container spec.
I am not able to figure out what the difference is and how each one is used.

Kubernetes provides us with multiple options for how to use these fields.
When you override the default Entrypoint and Cmd in a Kubernetes YAML file, these rules apply:
If you do not supply command or args for a Container, the defaults defined in the Docker image are used.
If you supply only args for a Container, the default Entrypoint defined in the Docker image is run with the args that you supplied.
If you supply a command for a Container, only the supplied command is used. The default Entrypoint and the default Cmd defined in the Docker image are ignored. Your command is run with the args you supplied (or with no args if none are supplied).
Here is an example:
Dockerfile:
FROM alpine:latest
COPY "executable_file" /
ENTRYPOINT [ "./executable_file" ]
Kubernetes YAML file:
spec:
  containers:
  - name: container_name
    image: image_name
    args: ["arg1", "arg2", "arg3"]
https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/

The key difference is terminology. Kubernetes thought that the terms that Docker used to define the interface to
a container were awkward, and so they used different, overlapping terms. Since the vast majority of containers Kubernetes orchestrates are Docker, confusion abounds.
Specifically, Docker entrypoints are Kubernetes commands, and Docker commands are Kubernetes args, as shown in the table below.
------------------------------------------------------------------------------------
| Description                          | Docker field name | Kubernetes field name |
------------------------------------------------------------------------------------
| The command run by the container     | Entrypoint        | command               |
| The arguments passed to the command  | Cmd               | args                  |
------------------------------------------------------------------------------------
@Berk's description of how Kubernetes uses those runtime options is correct, but it is equally correct for how docker run uses them, as long as you translate the terms. The key is to understand the interplay between the image and the run specification in either system, and to translate the terms whenever speaking of the other.
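To make the mapping concrete, here is a minimal sketch (the binary path, image name and flags are made up for illustration):
Dockerfile:
FROM alpine:latest
# Docker "Entrypoint" -> Kubernetes "command"
ENTRYPOINT ["/usr/local/bin/myapp"]
# Docker "Cmd" -> Kubernetes "args"
CMD ["--default-flag"]
Pod spec:
spec:
  containers:
  - name: myapp
    image: myapp:latest                  # hypothetical image built from the Dockerfile above
    command: ["/usr/local/bin/myapp"]    # replaces the image's ENTRYPOINT
    args: ["--custom-flag"]              # replaces the image's CMD
Either field can be omitted: dropping command keeps the image's ENTRYPOINT, and dropping args keeps the image's CMD, exactly as the rules quoted above describe.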

The command field in the YAML file overrides whatever is set as ENTRYPOINT in the Dockerfile.

Basically, the command field can override what is defined as ENTRYPOINT in the Dockerfile.
Simple example:
To override the Dockerfile ENTRYPOINT, just add these fields to your K8s template (look at command and args):
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
  labels:
    purpose: demonstrate-command
spec:
  containers:
  - name: command-demo-container
    image: debian
    command: ["/bin/sh"]
    args: ["-c", "printenv; #OR WHATEVER COMMAND YOU WANT"]
  restartPolicy: OnFailure
From the K8s docs: the command field corresponds to entrypoint in some container runtimes.
See the Kubernetes documentation on defining a command and arguments for a container (linked above) for more detail on how command and args override the image's ENTRYPOINT and CMD.

Related

How to specify run arguments for a docker container running on kubernetes

I am kind of stuck with running a docker container as part of a kubernetes job and specifying runtime arguments in the job template.
My Dockerfile specifies an entrypoint and no CMD directive:
ENTRYPOINT ["python", "script.py"]
From what I understand, this means that when running the docker image and specifying arguments, the container will run using the entrypoint specified in the Dockerfile and pass the arguments to it. I can confirm that this is actually working, because running the container using docker does the trick:
docker run --rm image -e foo -b bar
In my case this will start script.py, which is using argument parser to parse named arguments, with the intended arguments.
The problem arises when I use a Kubernetes Job to do the same:
apiVersion: batch/v1
kind: Job
metadata:
  name: pipeline
spec:
  template:
    spec:
      containers:
      - name: pipeline
        image: test
        args: ["-e", "foo", "-b", "bar"]
In the pod that gets deployed the correct entrypoint will be run, but the specified arguments vanish. I also tried specifying the arguments like this:
args: ["-e foo", "-b bar"]
But this didn't help either. I don't know why this is not working, because the documentation clearly states: "If you supply only args for a Container, the default Entrypoint defined in the Docker image is run with the args that you supplied." The default entrypoint is running, that part is correct, but the arguments get lost between Kubernetes and Docker.
Does somebody know what I am doing wrong?
I actually got it working using the following yaml syntax:
args:
- "-e"
- "foo"
- "-b"
- "bar"
The array syntax that I used before did not seem to work at all: everything was passed to the -e argument of my script, like this:
-e " foo -b bar"
That's why the -b argument was reported as missing even though the arguments did arrive in the container.
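Putting it together, a working version of the Job would look roughly like this (a sketch assembled from the snippets in this question; restartPolicy is added because a Job requires it, and was not shown in the original snippet):
apiVersion: batch/v1
kind: Job
metadata:
  name: pipeline
spec:
  template:
    spec:
      containers:
      - name: pipeline
        image: test
        args:
        - "-e"
        - "foo"
        - "-b"
        - "bar"
      restartPolicy: Never   # required for a Job; not shown in the original snippet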

Read environment variables from file before starting docker

I have this Dockerfile:
FROM rabbitmq:3.7.12-management
CMD . /files/envinfo && echo $RABBITMQ_DEFAULT_USER && rabbitmq-server
In envinfo I have this content:
export RABBITMQ_DEFAULT_USER='anothername'
When the Docker container starts up, the echo of RABBITMQ_DEFAULT_USER really does print anothername. But when the service starts it doesn't see it.
If I set the environment variable directly from the Kubernetes file instead, it works as it should.
You can see the rabbitmq image I extend here.
https://github.com/docker-library/rabbitmq/blob/35b41e318d9d9272126f681be74bcbfd9712d71b/3.8/ubuntu/Dockerfile
I have another process that fetches the file and puts it in /files/envinfo to make it available for this Docker image when it starts, so I can't use environment settings from Kubernetes.
Looking forward to hearing some suggestions =)
I agree with @code-gorilla: use Kubernetes environment variables if you can. But another way to do it is to source the environment file before the original entrypoint runs. Note that the exec form of ENTRYPOINT does not go through a shell, so the && chain has to be wrapped in an explicit shell, for example:
ENTRYPOINT ["/bin/sh", "-c", ". /files/envinfo && exec docker-entrypoint.sh rabbitmq-server"]
Overriding CMD only changes the arguments passed to ENTRYPOINT; that's probably why it doesn't work for you.
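Applied to the Dockerfile from the question, that approach would look roughly like this (a sketch; /files/envinfo is still expected to be provided by your other process, and rabbitmq-server is the base image's default CMD):
FROM rabbitmq:3.7.12-management
# Source the env file in an explicit shell, then hand off to the stock entrypoint
ENTRYPOINT ["/bin/sh", "-c", ". /files/envinfo && exec docker-entrypoint.sh rabbitmq-server"]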
You can try to debug it further:
1) Connect to the container and check whether the env variable is set within it:
docker exec -it <container-id> /bin/bash
Then in your container:
echo $RABBITMQ_DEFAULT_USER
2) Use the Kubernetes environment variable configuration instead of setting the variable in a script executed via CMD.
See the docs.
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
  - name: envar-demo-container
    image: gcr.io/google-samples/node-hello:1.0
    env:
    - name: DEMO_GREETING
      value: "Hello from the environment"
    - name: DEMO_FAREWELL
      value: "Such a sweet sorrow"
Note: I suspect that an environment variable set within the CMD command is not available in all shells of the container, e.g. when you open a new bash session inside it. This is what the Kubernetes config takes care of.

Storing docker registry path in Kubernetes yaml's

In my company we use GitLab as a source control and also use GitLab's Docker registry.
Each repository contains a Dockerfile for building an image and Kubernetes YAMLs for defining the Pod/Service/Deployment/etc. of that project.
The problem is that in the YAMLs the image references the GitLab registry by its URL.
(Example from the Redis repository)
I don't like this for two reasons:
It would make switching to another registry provider really hard, since you'd have to go over your entire code base and change all the Kubernetes YAML files.
If a developer wants to test his app via minikube, he has to make sure that the image is stored and up to date in the registry. In our company, pushing to the registry is something that is done as part of the CI pipeline.
Is there a way to avoid storing the registry URL in the repository?
Logging in to the Docker registry beforehand doesn't solve it; you still have to provide the full image name including the URL.
You can use an environment variable together with a small shell script.
Here is a working example.
1) Create an environment variable
export IMAGE_URL=node:7-alpine
2) Create a one-line shell script. Its purpose is to replace the literal $IMAGE_URL placeholder in the YAML file with the value of your environment variable.
echo sed 's/\$IMAGE_URL'"/$IMAGE_URL/g" > image_url.sh
3) Create a sample template, for example mypod.yaml.tmpl (note the quoted 'EOL', which keeps $IMAGE_URL as a literal placeholder instead of expanding it while the template is written):
cat > mypod.yaml.tmpl << 'EOL'
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: $IMAGE_URL
    # Just spin & wait forever
    command: [ "/bin/ash", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
EOL
4) Run kubectl apply
cat mypod.yaml.tmpl | sh image_url.sh | kubectl apply -f -
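For reference, with IMAGE_URL=node:7-alpine the generated image_url.sh contains roughly:
sed s/\$IMAGE_URL/node:7-alpine/g
So the pipeline replaces every literal $IMAGE_URL placeholder in the template with the registry path and pipes the rendered manifest straight to kubectl apply -f -. Switching registry providers then only means changing the IMAGE_URL variable, not every YAML file.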

How to append an argument to a container command?

I have a Node.js application built using a Dockerfile that defines:
CMD node dist/bin/index.js
I'd like to "append" a parameter to the command as it is defined in the Dockerfile, i.e. I want to execute the program as node dist/bin/index.js foo.
In docker land I am able to achieve this via:
docker build --tag test .
docker run test foo
In kubernetes I cannot use command because that will override the ENTRYPOINT. I cannot use args because that will override the cmd defined in the Dockerfile. It appears that my only option is:
args: ["node", "dist/bin/index.js", "foo"]
Is there a way to append an argument to a container command without redefining the entire Docker CMD definition?
No way to append. You can either set the command: or the args: field on the container spec. You can learn more about how to override CMD/ENTRYPOINT here: https://kubernetes.io/docs/concepts/configuration/container-command-args/
I don't think there is a way to achieve what you want, except for redefining the whole command, as you mentioned.
Let me know if I misunderstood your question, but if you just want to pass a parameter to the Pod, you can run your command inside an entrypoint script and read an environment variable that you pass to the container. This should work in any Pod spec (as part of a Deployment description, for example).
Example
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
  - name: envar-demo-container
    image: gcr.io/google-samples/node-hello:1.0
    env:
    - name: DEMO_GREETING
      value: "Hello from the environment"
For more see this doc.
You might be able to achieve the same effect using an optional parameter in your Dockerfile (ENV needs a default value, here an empty string):
ENV YOUR_OPTIONAL_PARAM=""
CMD node dist/bin/index.js $YOUR_OPTIONAL_PARAM
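With that Dockerfile, passing the extra argument from Kubernetes is just a matter of setting the variable in the Pod spec; a minimal sketch (the container name and image name are made up):
spec:
  containers:
  - name: node-app
    image: your-node-image:latest   # hypothetical image built from the Dockerfile above
    env:
    - name: YOUR_OPTIONAL_PARAM
      value: "foo"
Because the CMD is in shell form, the shell expands $YOUR_OPTIONAL_PARAM at start-up, so the container runs node dist/bin/index.js foo without redefining command or args.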

How to pass docker run parameter via kubernetes pod

Hi, I am running a Kubernetes cluster where I run a Logstash container.
But I need to run it with my own docker run parameters. If I were running it in Docker directly, I would use this command:
docker run --log-driver=gelf logstash -f /config-dir/logstash.conf
But I need to run it via a Kubernetes pod. My pod looks like this:
spec:
  containers:
  - name: logstash-logging
    image: "logstash:latest"
    command: ["logstash", "-f", "/config-dir/logstash.conf"]
    volumeMounts:
    - name: configs
      mountPath: /config-dir/logstash.conf
How can I run the Docker container with the --log-driver=gelf parameter via Kubernetes? Thanks.
Kubernetes does not expose docker-specific options such as --log-driver. A higher abstraction of logging behavior might be added in the future, but it is not in the current API yet. This issue was discussed in https://github.com/kubernetes/kubernetes/issues/15478, and the suggestion was to change the default logging driver for docker daemon in the per-node configuration/salt template.
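If you control the nodes, that per-node change is typically a Docker daemon setting; a minimal sketch of /etc/docker/daemon.json (the GELF endpoint address is made up, and the Docker daemon has to be restarted afterwards):
{
  "log-driver": "gelf",
  "log-opts": {
    "gelf-address": "udp://graylog.example.com:12201"
  }
}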
