I have this Dockerfile:
FROM rabbitmq:3.7.12-management
CMD . /files/envinfo && echo $RABBITMQ_DEFAULT_USER && rabbitmq-server
In envinfo I have this content:
export RABBITMQ_DEFAULT_USER='anothername'
When the container starts up, the echo of RABBITMQ_DEFAULT_USER really does print anothername. But when the service starts, it doesn't see it.
If I instead set the environment variable from the Kubernetes file, it works as it should.
You can see the rabbitmq image I extend here:
https://github.com/docker-library/rabbitmq/blob/35b41e318d9d9272126f681be74bcbfd9712d71b/3.8/ubuntu/Dockerfile
I have another process that fetches the file and puts it in /files/envinfo to make it available for this Docker image when it starts. So I can't use environment settings from Kubernetes.
Looking forward to hearing some suggestions =)
I agree with @code-gorilla: use Kubernetes environment variables. But another way to do it is to source the environment variables in a shell before handing off to the entry point (note that redeclaring ENTRYPOINT resets the CMD inherited from the base image, so it has to be restated):
ENTRYPOINT ["source /files/envinfo && docker-entrypoint.sh"]
Overriding CMD will only change the argument to ENTRYPOINT, that's probably why it doesn't work for you.
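If the inline version gets hard to read, the same idea fits in a small wrapper script. A minimal sketch, assuming the base image's docker-entrypoint.sh is on the PATH and /files/envinfo is in place before startup (the wrapper name is made up here):
#!/bin/bash
# custom-entrypoint.sh: load the fetched variables, then hand off to the
# image's real entrypoint so it can configure RABBITMQ_DEFAULT_USER.
set -e
source /files/envinfo
exec docker-entrypoint.sh "$@"
And in the Dockerfile:
COPY custom-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/custom-entrypoint.sh
ENTRYPOINT ["custom-entrypoint.sh"]
CMD ["rabbitmq-server"]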
You can try to debug it further:
1) Connect to the container and check whether the env variable is set within it
docker exec -it <container-id> /bin/bash
Then in your container:
echo $RABBITMQ_DEFAULT_USER
2) Use the Kubernetes environment variable configuration instead of script execution in CMD
See the docs.
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
  - name: envar-demo-container
    image: gcr.io/google-samples/node-hello:1.0
    env:
    - name: DEMO_GREETING
      value: "Hello from the environment"
    - name: DEMO_FAREWELL
      value: "Such a sweet sorrow"
Note: I suspect that an environment variable set within the CMD command is not available in all shells of the container, e.g. when you open a new bash within it. This is what the Kubernetes config takes care of.
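You can see this quickly by exec'ing a fresh shell and printing the variable; it comes back empty, because the export only happened inside the shell that CMD started:
docker exec -it <container-id> sh -c 'echo $RABBITMQ_DEFAULT_USER'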
Related
I am kind of stuck with running a docker container as part of a kubernetes job and specifying runtime arguments in the job template.
My Dockerfile specifies an entrypoint and no CMD directive:
ENTRYPOINT ["python", "script.py"]
From what I understand, this means that when running the docker image and specifying arguments, the container will run using the entrypoint specified in the Dockerfile and pass the arguments to it. I can confirm that this is actually working, because running the container using docker does the trick:
docker run --rm image -e foo -b bar
In my case this will start script.py, which uses an argument parser to parse named arguments, with the intended arguments.
The problem starts to arise when I am using a Kubernetes Job to do the same:
apiVersion: batch/v1
kind: Job
metadata:
  name: pipeline
spec:
  template:
    spec:
      containers:
      - name: pipeline
        image: test
        args: ["-e", "foo", "-b", "bar"]
In the pod that gets deployed the correct entrypoint will be run, but the specified arguments vanish. I also tried specifying the arguments like this:
args: ["-e foo", "-b bar"]
But this didn't help either. I don't know why this is not working, because the documentation clearly states that: "If you supply only args for a Container, the default Entrypoint defined in the Docker image is run with the args that you supplied.". The default entrypoint is running, that is correct, but the arguments are lost between Kubernetes and Docker.
Does somebody know what I am doing wrong?
I actually got it working using the following yaml syntax:
args:
- "-e"
- "foo"
- "-b"
- "bar"
The inline array syntax that I used beforehand seems not to have worked at all, as everything was passed to the -e argument of my script like this:
-e " foo -b bar"
That's why the -b argument was marked as missing even though the arguments were populated in the container.
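For reference, the full Job manifest with the multi-line syntax would look like this (a sketch built from the spec above; note that a Job also needs a restartPolicy of Never or OnFailure, which the original snippet omitted):
apiVersion: batch/v1
kind: Job
metadata:
  name: pipeline
spec:
  template:
    spec:
      containers:
      - name: pipeline
        image: test
        args:
        - "-e"
        - "foo"
        - "-b"
        - "bar"
      restartPolicy: Never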
We have a Dockerfile that uses both ENTRYPOINT and CMD, to execute a script and start our Java application respectively. It contains other information, but essentially looks like this:
ENTRYPOINT ["/home/config/wait-for-elastic.sh", "http://localhost:9200", "home/config/createElasticIndex.sh"]
CMD java -Xdebug -Xrunjdwp:transport=dt_socket,address=0.0.0.0:9700,server=y,suspend=n -jar -Dlog4j.configurationFile=/home/config/log4j2.yaml application.jar --spring.config.name=application,kafka --spring.config.location=/home/config
This has been working fine, until today when I went to add some encryption libraries (Jasypt) to our project to secure our passwords. Having set up the application to use Jasypt, I now have to add an argument to my CMD command to provide the application with the master password, like so: -Djasypt.encryptor.password=password. I've tested that with this, the container runs fine without issue. However this is obviously not very secure, providing the password in plaintext through the command. So I've added the password as an environment variable called JASYPT_ENCRYPTOR_PASSWORD into the container via my Kubernetes deployment.yaml file. I've also verified it exists within the container by using printenv.
CMD java -Xdebug -Xrunjdwp:transport=dt_socket,address=0.0.0.0:9700,server=y,suspend=n -jar -Djasypt.encryptor.password=$JASYPT_ENCRYPTOR_PASSWORD -Dlog4j.configurationFile=/home/config/log4j2.yaml application.jar --spring.config.name=application,kafka --spring.config.location=/home/config
But this then results in the container crashing with an EncryptionOperationNotPossibleException. It looks like the command is using the literal string $JASYPT_ENCRYPTOR_PASSWORD as opposed to the value of the environment variable itself. I do not understand why this is happening. I have tried a number of different approaches, such as using CMD with an array of arguments like
CMD ["java", "-Xdebug", "-Xrunjdwp:transport=dt_socket,address=0.0.0.0:9700,server=y,suspend=n", "-jar", "-Djasypt.encryptor.password=$JASYPT_ENCRYPTOR_PASSWORD", "-Dlog4j.configurationFile=/home/config/log4j2.yaml", "application.jar", "--spring.config.name=application,kafka", "--spring.config.location=/home/config"]
But that seems to crash with other errors about the formatting being incorrect. So I am stuck as to how to resolve this.
I was able to get it to work by switching from using CMD to ENTRYPOINT with the above command like so
ENTRYPOINT ["java", "-Xdebug", "-Xrunjdwp:transport=dt_socket,address=0.0.0.0:9700,server=y,suspend=n", "-jar", "-Djasypt.encryptor.password=${JASYPT_ENCRYPTOR_PASSWORD}", "-Dlog4j.configurationFile=/home/config/log4j2.yaml", "application.jar", "--spring.config.name=application,kafka", "--spring.config.location=/home/config"]
But this then creates a problem where I can't have 2 ENTRYPOINT commands in the same Dockerfile.
Can anyone help with why the CMD command doesn't recognise the VM environment variable?
EDIT: Also if it's important, this container is being deployed to a Minikube environment via a normal Kubernetes deployment yaml file with the kubectl apply command.
EDIT 2: How I set the env variable in the deployment.yaml file:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-image
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: my-app-logs
          mountPath: /home/logs
        env:
        - name: JASYPT_ENCRYPTOR_PASSWORD
          value: "password"
EDIT 3:
I've found something unusual. If I change my CMD command to be
CMD ["/bin/sh", "-c", "java -Xdebug -Xrunjdwp:transport=dt_socket,address=0.0.0.0:9700,server=y,suspend=n, -jar -Djasypt.encryptor.password=$JASYPT_ENCRYPTOR_PASSWORD -Dlog4j.configurationFile=/home/config/log4j2.yaml application.jar --spring.config.name=application,kafka --spring.config.location=/home/config"]
and remove the ENTRYPOINT line, everything works fine, because the environment variable is being read correctly. I've verified this by simply running CMD ["/bin/sh", "-c", "echo $JASYPT_ENCRYPTOR_PASSWORD"], which prints "password", the correct value. However if I add the ENTRYPOINT line back in, then everything crashes again, since the env var is being read in as an empty string. I have no idea why the CMD command above works differently from the one I had before.
EDIT 4: I believe I've narrowed this down to an issue with how I'm using the CMD and ENTRYPOINT commands, due to not understanding how they interact when used together. I've found this link in the Docker documentation, but still don't fully understand how to apply it to my case after reading it.
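For anyone hitting the same wall: exec-form ENTRYPOINT and CMD entries are never processed by a shell, so $JASYPT_ENCRYPTOR_PASSWORD is passed through literally; the EDIT 3 variant works precisely because /bin/sh -c supplies that shell. A sketch that keeps the wait script while still getting expansion, assuming wait-for-elastic.sh finishes by exec'ing its remaining arguments (which I can't verify from the question):
ENTRYPOINT ["/home/config/wait-for-elastic.sh", "http://localhost:9200", "/home/config/createElasticIndex.sh"]
# Exec-form CMD with an explicit /bin/sh -c: that shell expands
# $JASYPT_ENCRYPTOR_PASSWORD at container start.
CMD ["/bin/sh", "-c", "exec java -Xdebug -Xrunjdwp:transport=dt_socket,address=0.0.0.0:9700,server=y,suspend=n -jar -Djasypt.encryptor.password=$JASYPT_ENCRYPTOR_PASSWORD -Dlog4j.configurationFile=/home/config/log4j2.yaml application.jar --spring.config.name=application,kafka --spring.config.location=/home/config"]
If the wait script handles its arguments differently, that would explain why EDIT 3 only works once the ENTRYPOINT is removed.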
In my company we use GitLab as a source control and also use GitLab's Docker registry.
Each repository contains a Dockerfile for building an image and Kubernetes yaml's for defining the Pod/Service/Deployment/etc of that project.
The problem is that in the YAMLs the image references the GitLab registry by its URL.
(Example from the Redis repository)
I don't like this for two reasons:
It would make switching to another registry provider really hard since you'll have to go over your entire code base and change all the Kubernetes yaml files.
If a developer wants to test his app via minikube, he has to make sure that the image is stored and up to date in the registry. In our company, pushing to the registry is something that is done as part of the CI pipeline.
Is there a way to avoid storing the registry url in the repository?
Logging in to the docker registry beforehand doesn't solve it, you still have to provide the full image name with the url.
You can use an environment variable with shell scripts.
Here is the working example.
1) Create an environment variable
export IMAGE_URL=node:7-alpine
2) Create a one-line shell script. Its purpose is to replace the $IMAGE_URL placeholder in the .yaml file with the value of your actual environment variable (using | as the sed delimiter, since image URLs contain slashes).
echo 'sed "s|\$IMAGE_URL|$IMAGE_URL|g"' > image_url.sh
3) Create a sample yaml, for example mypod.yaml.tmpl
cat > mypod.yaml.tmpl << 'EOL'
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: $IMAGE_URL
    # Just spin & wait forever
    command: [ "/bin/ash", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
EOL
(Quoting the EOL delimiter keeps $IMAGE_URL literal in the template instead of expanding it while the file is created.)
4) Run kubectl apply
cat mypod.yaml.tmpl | sh image_url.sh | kubectl apply -f -
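As an aside, if the gettext package is available, envsubst does the same substitution without the generated sed script:
cat mypod.yaml.tmpl | envsubst | kubectl apply -f -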
A Dockerfile has an ENTRYPOINT parameter, and when writing a Kubernetes deployment YAML file there is a COMMAND parameter in the Container spec.
I am not able to figure out what the difference is and how each is used.
Kubernetes provides us with multiple options on how to use these commands:
When you override the default Entrypoint and Cmd in a Kubernetes .yaml file, these rules apply:
If you do not supply command or args for a Container, the defaults defined in the Docker image are used.
If you supply only args for a Container, the default Entrypoint defined in the Docker image is run with the args that you supplied.
If you supply a command for a Container, only the supplied command is used. The default EntryPoint and the default Cmd defined in the Docker image are ignored. Your command is run with the args supplied (or no args if none supplied).
Here is an example:
Dockerfile:
FROM alpine:latest
COPY "executable_file" /
ENTRYPOINT [ "./executable_file" ]
Kubernetes yaml file:
spec:
  containers:
  - name: container_name
    image: image_name
    args: ["arg1", "arg2", "arg3"]
https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
The key difference is terminology. Kubernetes thought that the terms Docker used to define the interface to a container were awkward, and so they used different, overlapping terms. Since the vast majority of containers Kubernetes orchestrates are Docker, confusion abounds.
Specifically, docker entrypoints are kubernetes commands, and docker commands are kubernetes args, as indicated here.
-------------------------------------------------------------------------------------
| Description | Docker field name | Kubernetes field name |
-------------------------------------------------------------------------------------
| The command run by the container | Entrypoint | command |
| The arguments passed to the command | Cmd | args |
-------------------------------------------------------------------------------------
@Berk's description of how Kubernetes uses those runtime options is correct, but it's also correct for how docker run uses them, as long as you translate the terms. The key is to understand the interplay between image and run specifications in either system, and to translate terms whenever speaking of the other.
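To make the mapping concrete, here is the same startup expressed in both vocabularies (the image name and paths are placeholders, not taken from any question above):
# Dockerfile
ENTRYPOINT ["/app/server"]   # what Kubernetes calls "command"
CMD ["--port", "8080"]       # what Kubernetes calls "args"
# Equivalent overrides in a Pod spec
spec:
  containers:
  - name: app
    image: example/app:latest
    command: ["/app/server"]   # replaces ENTRYPOINT
    args: ["--port", "8080"]   # replaces CMD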
The COMMAND in the YAML file overrides anything specified in the ENTRYPOINT of the Dockerfile.
Simple example:
To override the Dockerfile ENTRYPOINT, just add these fields to your K8s template (look at command and args):
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
  labels:
    purpose: demonstrate-command
spec:
  containers:
  - name: command-demo-container
    image: debian
    command: ["/bin/sh"]
    args: ["-c", "printenv; #OR WHATEVER COMMAND YOU WANT"]
  restartPolicy: OnFailure
From the K8s docs: the command field corresponds to entrypoint in some container runtimes. See the Notes section of the Kubernetes documentation for a better understanding of how command overrides the Dockerfile ENTRYPOINT.
I have a Node.js application built using a Dockerfile that defines:
CMD node dist/bin/index.js
I'd like to "append" a parameter to the command as it is defined in the Dockerfile, i.e. I want to execute the program as node dist/bin/index.js foo.
In docker land I am able to achieve this via:
docker build --tag test .
docker run test foo
In Kubernetes I cannot use command because that will override the ENTRYPOINT. I cannot use args because that will override the CMD defined in the Dockerfile. It appears that my only option is:
cmd: ["node", "dist/bin/index.js", "foo"]
Is there a way to append an argument to a container command without redefining the entire Docker CMD definition?
No way to append. You can either set the command: or args: on container.spec. You can learn more about how to override CMD/ENTRYPOINT here: https://kubernetes.io/docs/concepts/configuration/container-command-args/
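Spelled out, that redefinition would look like this in the Pod spec (a sketch; the container and image names are placeholders, and it assumes the base image defines no ENTRYPOINT of its own):
spec:
  containers:
  - name: app
    image: test
    command: ["node", "dist/bin/index.js", "foo"]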
I don't think there is a way to achieve what you want, except for setting the full command, as you mentioned.
Let me know if I misunderstood your question, but if you just want to pass a parameter to the Pod, you can run your command inside an entrypoint script and use an environment variable that you pass to the container for that command. This should work in any Pod spec (as part of the deployment description, for example).
Example
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
  - name: envar-demo-container
    image: gcr.io/google-samples/node-hello:1.0
    env:
    - name: DEMO_GREETING
      value: "Hello from the environment"
For more see this doc.
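The entrypoint script itself would then consume the variable; a minimal sketch (the script name and the EXTRA_ARG variable are made up for illustration):
#!/bin/sh
# docker-entrypoint.sh: pass the optional parameter through only when it is set
exec node dist/bin/index.js ${EXTRA_ARG:+"$EXTRA_ARG"}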
You might be able to achieve the same effect using an optional param in your Dockerfile:
ENV YOUR_OPTIONAL_PARAM=""
CMD node dist/bin/index.js $YOUR_OPTIONAL_PARAM
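The per-environment value can then come from the Pod spec, reusing the env: syntax from the example above (the value here is illustrative):
env:
- name: YOUR_OPTIONAL_PARAM
  value: "foo"
When the variable is unset, the shell-form CMD expands it to nothing, so the plain node dist/bin/index.js behaviour is preserved.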