We have a Dockerfile that uses both ENTRYPOINT and CMD: ENTRYPOINT executes a script and CMD starts our Java application. It contains other instructions, but essentially looks like this:
ENTRYPOINT ["/home/config/wait-for-elastic.sh", "http://localhost:9200", "home/config/createElasticIndex.sh"]
CMD java -Xdebug -Xrunjdwp:transport=dt_socket,address=0.0.0.0:9700,server=y,suspend=n -jar -Dlog4j.configurationFile=/home/config/log4j2.yaml application.jar --spring.config.name=application,kafka --spring.config.location=/home/config
This had been working fine until today, when I went to add some encryption libraries (Jasypt) to our project to secure our passwords. Having set up the application to use Jasypt, I now have to add the following argument to my CMD command to provide the application with the master password: -Djasypt.encryptor.password=password. I've tested that with this the container runs fine without issue. However, this is obviously not very secure, providing the password in plaintext on the command line. So I've added the password as an environment variable called JASYPT_ENCRYPTOR_PASSWORD into the container via my Kubernetes deployment.yaml file. I've also verified it exists within the container by using printenv.
CMD java -Xdebug -Xrunjdwp:transport=dt_socket,address=0.0.0.0:9700,server=y,suspend=n -jar -Djasypt.encryptor.password=$JASYPT_ENCRYPTOR_PASSWORD -Dlog4j.configurationFile=/home/config/log4j2.yaml application.jar --spring.config.name=application,kafka --spring.config.location=/home/config
But this then results in the container crashing with an EncryptionOperationNotPossibleException. It looks like the command is using the literal string $JASYPT_ENCRYPTOR_PASSWORD as opposed to the value of the environment variable itself. I do not understand why this is happening. I have tried a number of different approaches, such as using CMD with an array of arguments like
CMD ["java", "-Xdebug", "-Xrunjdwp:transport=dt_socket,address=0.0.0.0:9700,server=y,suspend=n", "-jar", "-Djasypt.encryptor.password=$JASYPT_ENCRYPTOR_PASSWORD", "-Dlog4j.configurationFile=/home/config/log4j2.yaml", "application.jar", "--spring.config.name=application,kafka", "--spring.config.location=/home/config"]
But that seems to crash with other errors about the formatting being incorrect. So I am stuck as to how to resolve this.
I was able to get it to work by switching from CMD to ENTRYPOINT with the above command, like so:
ENTRYPOINT ["java", "-Xdebug", "-Xrunjdwp:transport=dt_socket,address=0.0.0.0:9700,server=y,suspend=n", "-jar", "-Djasypt.encryptor.password=${JASYPT_ENCRYPTOR_PASSWORD}", "-Dlog4j.configurationFile=/home/config/log4j2.yaml", "application.jar", "--spring.config.name=application,kafka", "--spring.config.location=/home/config"]
But this then creates a problem: I can't have two ENTRYPOINT instructions in the same Dockerfile.
Can anyone help with why the CMD command doesn't recognise the environment variable?
EDIT: Also, if it's important, this container is being deployed to a Minikube environment via a normal Kubernetes deployment YAML file with the kubectl apply command.
EDIT 2: How I set the env variable in the deployment.yaml file:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: my-app
          image: my-image
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: my-app-logs
              mountPath: /home/logs
          env:
            - name: JASYPT_ENCRYPTOR_PASSWORD
              value: "password"
EDIT 3:
I've found something unusual. If I change my CMD command to be
CMD ["/bin/sh", "-c", "java -Xdebug -Xrunjdwp:transport=dt_socket,address=0.0.0.0:9700,server=y,suspend=n -jar -Djasypt.encryptor.password=$JASYPT_ENCRYPTOR_PASSWORD -Dlog4j.configurationFile=/home/config/log4j2.yaml application.jar --spring.config.name=application,kafka --spring.config.location=/home/config"]
and remove the ENTRYPOINT line, everything works fine, because the environment variable is read correctly. I've verified this by simply running CMD ["/bin/sh", "-c", "echo $JASYPT_ENCRYPTOR_PASSWORD"], which prints "password", the correct value. However, if I add the ENTRYPOINT line back in, then everything crashes again, since the env var is read in as an empty string. I have no idea why the CMD command above behaves differently from the one I had before.
EDIT 4: I believe I've narrowed this down to an issue with how I'm using the CMD and ENTRYPOINT instructions, due to not understanding how they interact with one another when used in the same Dockerfile. I've found this link in the Docker documentation, but still don't fully understand how to apply it in my case after reading it.
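For what it's worth, the behaviour in EDIT 3/4 follows from two documented rules: exec-form (JSON array) instructions perform no shell variable expansion, and when both instructions are present, CMD is appended as arguments to ENTRYPOINT. A common pattern (a sketch, assuming the wait script can be edited) is to end the ENTRYPOINT script with `exec "$@"` so it hands off to whatever CMD supplied, keeping the variable expansion inside an `sh -c` CMD:

```shell
# Stand-in for wait-for-elastic.sh: do the setup work, then replace the
# shell with whatever CMD supplied ("$@" receives the CMD as arguments).
cat > /tmp/entry.sh <<'EOF'
#!/bin/sh
# ... wait for Elasticsearch, run createElasticIndex.sh, etc. ...
exec "$@"
EOF
chmod +x /tmp/entry.sh

# An sh -c CMD expands variables at run time, after the env var exists:
JASYPT_ENCRYPTOR_PASSWORD=secret \
  /tmp/entry.sh sh -c 'echo "password is: $JASYPT_ENCRYPTOR_PASSWORD"'
# prints: password is: secret
```

In the Dockerfile this would look like `ENTRYPOINT ["/home/config/wait-for-elastic.sh"]` plus the `sh -c` CMD from EDIT 3, with the script's trailing `exec "$@"` doing the hand-off.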
Related
I have a project which I had previously successfully deployed to Google Cloud Run, and set up with a trigger such that upon pushing to the repo's main branch on Github, it would automatically deploy. It worked great.
Then I tried to rename the github repo, which meant deleting and creating a new trigger, and now I cannot get it working again.
Every time, the build succeeds but deployment fails with this error in Cloud Build:
Step #2 - "Deploy": ERROR: (gcloud.run.services.update) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
I have not changed anything other than the repo name, leading me to believe the fix is not with my code, but I tried some changes there anyway.
I have looked into the solutions set forth in this post. However, I believe I am listening on the correct port.
My app is using Python and Flask, and contains this:
if __name__ == "__main__":
    app.run(debug=False, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
This should use the PORT env var and otherwise default to 8080. I also tried just using port=8080.
I tried explicitly exposing the port in the Dockerfile, which also did not work:
FROM python:3.7
#Copy files into docker image dir, and make that the current working dir
COPY . /docker-image
WORKDIR /docker-image
RUN pip install -r requirements.txt
CMD ["flask", "run", "--host", "0.0.0.0"]
EXPOSE 8080
Cloud Run does seem to be using port 8080 - if I dig into the response, I see this nested under Response.spec.container.0 :
ports: [
0: {
containerPort: 8080
name: "http1"
}
]
All that said, if I look at the logs, it shows "Now running on Port 5000".
I have no idea where that Port 5000 is coming from or being set, but trying to change the ports in Python/Flask and the Dockerfile to 5000 leads to the same errors.
How do I get it to run on Port 8080? It's very strange to me that this was working FINE prior to renaming the repo and creating a new trigger. How is this setup different? The Trigger does not give an option to set the port so I'm not sure how that caused this error.
You have mixed things up. The flask run command's default port is indeed 5000. If you want to change it, you need to add the --port parameter to your flask run command:
CMD ["flask", "run", "--host", "0.0.0.0","--port","8080"]
In addition, flask run uses the Flask runtime and totally ignores the standard Python entry point if __name__ == "__main__":. If you want to use that entry point, use the Python runtime:
CMD ["python", "<main file>.py"]
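If you would rather honor Cloud Run's PORT variable instead of hard-coding 8080, a shell-form CMD can expand it with a default, e.g. `CMD flask run --host 0.0.0.0 --port ${PORT:-8080}` (a sketch; the flag names are as above). The expansion behaves like this:

```shell
# ${PORT:-8080} falls back to 8080 when PORT is unset or empty --
# the same expansion a shell-form CMD performs inside the container.
unset PORT
echo "port=${PORT:-8080}"   # no PORT set
PORT=5000
echo "port=${PORT:-8080}"   # PORT supplied by the platform
```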
I am kind of stuck with running a docker container as part of a kubernetes job and specifying runtime arguments in the job template.
My Dockerfile specifies an entrypoint and no CMD directive:
ENTRYPOINT ["python", "script.py"]
From what I understand, this means that when running the docker image and specifying arguments, the container will run using the entrypoint specified in the Dockerfile and pass the arguments to it. I can confirm that this is actually working, because running the container using docker does the trick:
docker run --rm image -e foo -b bar
In my case this will start script.py, which is using argument parser to parse named arguments, with the intended arguments.
The problem starts to arise when I am using a kubernetes job to do the same:
apiVersion: batch/v1
kind: Job
metadata:
  name: pipeline
spec:
  template:
    spec:
      containers:
        - name: pipeline
          image: test
          args: ["-e", "foo", "-b", "bar"]
In the pod that gets deployed the correct entrypoint will be run, but the specified arguments vanish. I also tried specifying the arguments like this:
args: ["-e foo", "-b bar"]
But this didn't help either. I don't know why this is not working, because the documentation clearly states: "If you supply only args for a Container, the default Entrypoint defined in the Docker image is run with the args that you supplied." The default entrypoint is running, that is correct, but the arguments are lost between Kubernetes and Docker.
Does somebody know what I am doing wrong?
I actually got it working using the following yaml syntax:
args:
  - "-e"
  - "foo"
  - "-b"
  - "bar"
The array syntax that I used beforehand doesn't seem to work at all, as everything was passed to the -e argument of my script like this:
-e " foo -b bar"
That's why the -b argument was marked as missing even though the arguments were populated in the container.
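For reference, each item in the args list becomes exactly one entry in the container's argv, which is why quoting a flag and its value together confuses argparse. A small sketch of the difference (printf repeats its format once per argument, so each bracketed group below is one argv entry as the process would see it):

```shell
# Two argv entries -- flag and value fused, which argparse rejects:
printf '[%s]' "-e foo" "-b bar"; echo
# Four argv entries -- what the list-style YAML produces:
printf '[%s]' -e foo -b bar; echo
```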
I've been struggling with this concept. To start, I'm new to Docker and teaching myself (slowly). I am using a Docker Swarm instance and trying to leverage Docker secrets for a simple username and password to an existing rocker/rstudio image. I've set up the reverse proxy and can successfully use HTTPS to access RStudio via my browser.

Now, when I pass the paths /run/secrets/user and /run/secrets/pass to the environment variables, it doesn't work: it essentially thinks the path is the actual username and password. I need the environment variables to actually pull the values (in this case user=test, pass=test123, as set up using the docker secret command). I've looked around and am at a bit of a loss on how to accomplish this. I know some have mentioned leveraging a custom entrypoint shell script, and I'm a bit confused on how to do this. Here is what I've tried:
Rebuilding a brand new image from the existing R image with a Dockerfile that adds entrypoint.sh to the image -> it can't find the entrypoint.sh file.
Adding entrypoint: entrypoint.sh as part of my docker-compose. Same issue.
I'm trying to use docker stack to build the containers. The stack gets built but the containers keep restarting to the point they are unusable.
Here are my files
Dockerfile
FROM rocker/rstudio
COPY entry.sh /
RUN chmod +x /entry.sh
ENTRYPOINT ["entry.sh"]
Here is my docker-compose.yaml
version: '3.3'
secrets:
  user:
    external: true
  pass:
    external: true
services:
  rserver:
    container_name: rstudio
    image: rocker/rstudio:latest # <-- this is the output of the build using rocker/rstudio and the Dockerfile
    secrets:
      - user
      - pass
    environment:
      - USER=/run/secrets/user
      - PASSWORD=/run/secrets/pass
    volumes:
      - ./rstudio:/home/user/rstudio
    ports:
      - 8787:8787
    restart: always
    entrypoint: /entry.sh
Finally here is the entry.sh file that I found on another thread
#get your envs files and export envars
export $(egrep -v '^#' /run/secrets/* | xargs)
#if you need some specific file, where password is the secret name
#export $(egrep -v '^#' /run/secrets/password| xargs)
#call the dockerfile's entrypoint
source /docker-entrypoint.sh
In the end, it would be great to use my secret user and pass and feed those into the environment variables so that I can authenticate into an RStudio instance. If I just put a username and password in plain text under environment, it works fine.
Any help is appreciated. Thanks in advance
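One approach worth sketching: the entrypoint script should export the *contents* of the secret files, not their paths, and then hand off to the base image's original entrypoint (the `exec /init "$@"` hand-off is an assumption based on the s6-based rocker images; adjust to whatever entrypoint the base image actually uses). A demo with stand-in files in place of what Swarm mounts under /run/secrets/:

```shell
# Stand-in secret files, mimicking /run/secrets/user and /run/secrets/pass:
mkdir -p /tmp/run/secrets
printf 'test' > /tmp/run/secrets/user
printf 'test123' > /tmp/run/secrets/pass

# The entrypoint exports the file contents, not the paths:
export USER="$(cat /tmp/run/secrets/user)"
export PASSWORD="$(cat /tmp/run/secrets/pass)"
echo "$USER:$PASSWORD"
# prints: test:test123
# ...then: exec /init "$@"   (hand off to the base image's entrypoint)
```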
I have this Dockerfile:
FROM rabbitmq:3.7.12-management
CMD . /files/envinfo && echo $RABBITMQ_DEFAULT_USER && rabbitmq-server
In the envinfo I have this content
export RABBITMQ_DEFAULT_USER='anothername'
When the container starts up, the echo of RABBITMQ_DEFAULT_USER really prints out anothername. But when the service starts, it doesn't see it.
If I set the environment variable another way from the kubernetes file it works as it should.
You can see the rabbitmq image I extend here.
https://github.com/docker-library/rabbitmq/blob/35b41e318d9d9272126f681be74bcbfd9712d71b/3.8/ubuntu/Dockerfile
I have another process that fetches the file and puts it in /files/envinfo to make it available to this Docker image when it starts. So I can't use environment settings from Kubernetes.
Looking forward to hear some suggestions =)
I agree with @code-gorilla: use Kubernetes environment variables. But another way to do it is to source the environment variables before the entrypoint:
ENTRYPOINT ["/bin/sh", "-c", ". /files/envinfo && exec docker-entrypoint.sh rabbitmq-server"]
Overriding CMD only changes the arguments to ENTRYPOINT; that's probably why it doesn't work for you.
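A quick sketch of why sourcing before an exec works, using a stand-in file and printenv in place of the real entrypoint: exporting in the same shell that execs the final process means the variable survives into that process's environment.

```shell
# Stand-in for /files/envinfo:
cat > /tmp/envinfo <<'EOF'
export RABBITMQ_DEFAULT_USER='anothername'
EOF

# Source the file, then exec the real process in the same shell:
sh -c '. /tmp/envinfo && exec printenv RABBITMQ_DEFAULT_USER'
# prints: anothername
```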
You can try to debug it further:
1) Connect to the container and check whether the env variable is set within it:
docker exec -it <container-id> /bin/bash
Then in your container:
echo $RABBITMQ_DEFAULT_USER
2) Use the Kubernetes environment variable configuration instead of script execution in CMD.
See the docs.
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
    - name: envar-demo-container
      image: gcr.io/google-samples/node-hello:1.0
      env:
        - name: DEMO_GREETING
          value: "Hello from the environment"
        - name: DEMO_FAREWELL
          value: "Such a sweet sorrow"
Note: I suspect that an environment variable set within the CMD command is not available in all shells of the container, e.g. when you open a new bash within it. This is what the Kubernetes config takes care of.
I have a Node.js application built using a Dockerfile that defines:
CMD node dist/bin/index.js
I'd like to "append" a parameter to the command as it is defined in Dockerfile, i.e. I want to execute the program as node dist/bin/index.js foo.
In docker land I am able to achieve this via:
docker build --tag test .
docker run test foo
In Kubernetes I cannot use command because that will override the ENTRYPOINT. I cannot use args because that will override the CMD defined in the Dockerfile. It appears that my only option is:
cmd: ["node", "dist/bin/index.js", "foo"]
Is there a way to append an argument to a container command without redefining the entire Docker CMD definition?
No way to append. You can either set the command: or args: on container.spec. You can learn more about how to override CMD/ENTRYPOINT here: https://kubernetes.io/docs/concepts/configuration/container-command-args/
I don't think there is a way to achieve what you want, except for setting the cmd, as you mentioned.
Let me know if I misunderstood your question, but if you just want to pass a parameter to the POD you can run your command inside the entrypoint script and use an environment variable that you pass to the container for that command. This should work in any Pod spec (as part of the deployment description for example).
Example
apiVersion: v1
kind: Pod
metadata:
name: envar-demo
labels:
purpose: demonstrate-envars
spec:
containers:
- name: envar-demo-container
image: gcr.io/google-samples/node-hello:1.0
env:
- name: DEMO_GREETING
value: "Hello from the environment"
For more see this doc.
You might be able to achieve the same effect using an optional param in your Dockerfile:
ENV YOUR_OPTIONAL_PARAM=""
CMD node dist/bin/index.js $YOUR_OPTIONAL_PARAM
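Since that CMD is shell form, the variable is expanded when the container starts; with an empty default the command is unchanged unless a value is supplied at run time (e.g. via env: in the Pod spec). A sketch of the expansion:

```shell
# An unquoted empty variable expands to nothing, so no stray argument
# is added; a non-empty value is appended as an extra argument.
YOUR_OPTIONAL_PARAM=""
echo node dist/bin/index.js $YOUR_OPTIONAL_PARAM
YOUR_OPTIONAL_PARAM="foo"
echo node dist/bin/index.js $YOUR_OPTIONAL_PARAM
```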