I have a Docker container that executes a bash script as its ENTRYPOINT. This script does lots of things that rely on environment variables being configured.
The strangest thing happens: when I run the container, the entrypoint script executes and, for lack of a better phrase, eventually fails.
Now, if I enter the container manually
$ docker exec -it <id> bash
and then manually run the SAME script, it works!
What's going on here? Why does Docker executing the script differ from myself manually executing the script?
UPDATE for more context
Dockerfile
FROM cuda:torch:cudnn # Not real source, but these are what are in play
# setup lua and python
COPY . /app
WORKDIR /app
ENTRYPOINT ["./entrypoint.py"]
CMD ["start"]
entrypoint.py
import subprocess

class DoSomething:
    def methods_that_work(self):
        ...

    def run_torch(self):
        """
        I do NOT work if called from the Dockerfile's ENTRYPOINT.
        I DO work if I manually run ./entrypoint.py start from within the container.
        """
        cmd = ['th', ...]  # actual arguments elided
        subprocess.run(cmd)
Torch and Lua need to know where CUDA and cuDNN are located. I can confirm all the ENV vars are set. When run via the Docker ENTRYPOINT, torch just hangs: no errors, no output, nothing.
When I bash into the container and manually run ./entrypoint.py, it works.
For anyone who runs into this situation: this was explicitly an issue with Lua.
Lua path variables expect their entries to be delimited with ; rather than : the way $PATH is, for example:
export LUA_PATH="/some/path;/some/other/path"
(The quotes matter here: in a shell, an unquoted ; would end the command.)
Now, why did this work in an interactive bash shell but not via the Docker ENTRYPOINT? Inside the .bashrc there was an "activate torch" function that essentially did a find-and-replace from : to ; on those variables.
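For reference, a minimal sketch of what such a .bashrc helper might look like; the function name and the variable list are assumptions, not the actual file:

activate_torch() {
    # Lua wants ';' separators, so rewrite the ':' that a PATH-style setup produced
    export LUA_PATH="${LUA_PATH//:/;}"
    export LUA_CPATH="${LUA_CPATH//:/;}"
}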
At the end of the day, this was not a Docker issue, but simply incorrectly formatted Lua environment variables.
Related
I had browsed through a lot of related posts but still didn’t resolve this issue. I am quite new to Docker so sorry if this is repeated.
So for my project, I have a shell script named vault-util.sh, which gets secrets from Vault and exports them.
Like export DB_Password_Auto=(some Vault operation)
What I want to achieve is to copy this file into the Docker container and source it in the Dockerfile, so that those secrets can be accessed as environment variables inside the container.
The code I have right now inside the Dockerfile is:
COPY vault-util.sh /build/
RUN chmod +x /build/vault-util.sh
RUN /bin/sh -c "source /build/vault-util.sh"
After I log in to the container through "docker exec -it -u build container-name /bin/bash",
the environment variable is still empty.
It shows up only after I type the source command again in the CLI.
So I am wondering: is this mechanism of accessing Vault secrets as env vars actually plausible? If so, what do I need to modify in the Dockerfile to make this work? Thank you!
If you have a script that gets secrets from Vault, you probably need to re-run it every time the container starts. You don't want to compromise the secrets by putting them in a Docker image where they can be easily extracted, and you don't want an old version of a credential "baked into" an image if it changes in Vault.
You can use an entrypoint wrapper script to run this when the container starts up. This is a script you set as the container ENTRYPOINT; it does first-time setup like setting dynamic environment variables and then runs whatever is the container CMD.
#!/bin/sh
# entrypoint.sh
# Get a set of credentials from Vault.
. /build/vault-util.sh
# Run the main container command.
exec "$#"
In your Dockerfile, you need to make sure you COPY this in and set it as the ENTRYPOINT, but you don't need to immediately RUN it.
COPY vault-util.sh entrypoint.sh /build/
# the ENTRYPOINT must use JSON-array syntax
ENTRYPOINT ["/build/entrypoint.sh"]
CMD same command as originally
You won't be able to see the secrets with tools like docker inspect (this is good!). But if you want to, you can run a test container to dump out the results of this setup. For example,
docker run --rm ... your-image env
replaces the Dockerfile's CMD with env, which prints out the environment and exits. This gets passed as arguments to the entrypoint, so first it runs the script to fetch environment variables and then runs env and then exits.
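For completeness, the sourced script just needs to export its values. A hypothetical sketch of vault-util.sh, assuming the vault CLI is available in the image; the secret path and field name are placeholders, not from the original post:

#!/bin/sh
# Fetch a secret from Vault and export it for the main process.
# secret/myapp/db and the password field are made-up examples.
DB_Password_Auto="$(vault kv get -field=password secret/myapp/db)"
export DB_Password_Auto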
How can I run a command against a container and tell docker not to run the entry point? e.g.
docker-compose run foo bash
The above will run the entry point of the foo machine. How to prevent it temporarily without modifying Dockerfile?
docker-compose run --entrypoint=bash foo bash
It'll run a nested bash, a bit useless, but you'll have your prompt.
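For reference, the same trick with plain docker rather than Compose looks like this (your-image is a placeholder):

docker run --rm -it --entrypoint bash your-image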
If you control the image, consider moving the entire default command line into the CMD instruction. Docker concatenates the ENTRYPOINT and CMD together when running a container, so you can do this yourself at build time.
# Bad: prevents operators from running any non-Python command
ENTRYPOINT ["python"]
CMD ["myapp.py"]
# Better: allows overriding command at runtime
CMD ["python", "myapp.py"]
This is technically "modifying the Dockerfile" but it won't change the default operation of your container: if you don't specify entrypoint: or command: in the docker-compose.yml then it will run the exact same command, but it also allows running things like debug shells in the way you're trying to.
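With the whole command line in CMD, both of these then work against the same image (foo being the service name from the question):

docker-compose run foo bash   # debug shell; bash cleanly replaces the CMD
docker-compose up foo         # default operation; runs python myapp.py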
I tend to reserve ENTRYPOINT for two cases:

There's a common pattern of using an ENTRYPOINT to do some first-time setup (e.g., running database migrations) and then exec "$@" to run whatever was passed as CMD. This preserves the semantics of CMD (your docker-compose run bash will still work, but migrations will happen first).

If I'm building a FROM scratch or other "distroless" image where it's actually impossible to run other commands (there isn't a /bin/sh at all), then making the single thing in the image be the ENTRYPOINT makes sense.
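A minimal sketch of that second case, assuming a statically linked binary named myapp (the name is a placeholder):

# No shell exists in this image, so ENTRYPOINT is the natural choice
FROM scratch
COPY myapp /myapp
ENTRYPOINT ["/myapp"]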
I am trying to figure out how to get the CMD command in a Dockerfile to run a script on startup for docker run. I know that using the RUN command will get the image to run that script once, when building the image, but I want it to run the script every time I run a new container from that image. The script is just a simple script that outputs the current date/time to a file.
Here is the Dockerfile that works if I use RUN
# Pull base image
FROM alpine:latest
# gcr.io/dev-ihm-analytics-platform/practice_docker:ulta
WORKDIR /root/
RUN apk --update upgrade && apk add bash
ADD ./script.sh ./
RUN ./script.sh
Here is the same Dockerfile that doesn't work with CMD
# Pull base image
FROM alpine:latest
# gcr.io/dev-ihm-analytics-platform/practice_docker:ulta
WORKDIR /root/
RUN apk --update upgrade && apk add bash
ADD ./script.sh ./
CMD ["./script.sh"]
I have tried all sorts of things after the CMD command, like ["/script.sh"], ["bash script.sh"], ["bash", "./script.sh"], and bash script.sh, but I always get an error and I don't know what I am doing wrong. All I want is to
docker run -it name_of_container bash
and then find that the script has executed, by seeing an output file with the run information once I am inside the container.
There are three basic ways to do this:
You can RUN ./script.sh. It will happen once, at docker build time, and be baked into your image.
You can CMD ./script.sh. It will happen once, and be the single command the container runs. If you provide some alternate command (docker run ... bash, for instance), that runs instead of this CMD.
You can write a custom entrypoint script that does this first-time setup, then runs the CMD or whatever got passed on the command line. The main container process is the entrypoint, and it gets passed the command as arguments. This script (and whatever it does inside) will get run on every startup. This script can look something like
#!/bin/sh
./script.sh
exec "$#"
It needs to be separately COPYed into the image, and then you'd set something like ENTRYPOINT ["./entrypoint.sh"].
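Wired into your Dockerfile, the whole thing might look like this sketch (entrypoint.sh is the wrapper above; CMD ["bash"] is one reasonable default given how you want to use the container):

# Pull base image
FROM alpine:latest
WORKDIR /root/
RUN apk --update upgrade && apk add bash
COPY ./script.sh ./entrypoint.sh ./
RUN chmod +x ./script.sh ./entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
CMD ["bash"]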
(Given the problem as you’ve actually described it — you have a shell script and you want to run it and inspect the file output in an interactive shell — I’d just run it at your local command prompt and not involve Docker at all. This avoids all of these sequencing and filesystem mapping issues.)
There are multiple ways to achieve what you want, but your first attempt, with the RUN ./script.sh line, is probably the best.
The CMD and ENTRYPOINT commands are overridable on the command-line as flags to the container run command. So, if you want to ensure that this is run every time you start the container, then it shouldn't be part of the CMD or ENTRYPOINT commands.
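To make that concrete with the image from your question (name_of_container, as in your original command):

docker run --rm name_of_container            # runs ./script.sh, the image's CMD
docker run --rm -it name_of_container bash   # bash replaces the CMD; script.sh never runs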
Well, I am using the CMD command to start my Java applications, and when the container is inside the WORKDIR I am executing the following:
CMD ["/usr/bin/java", "-jar", "-Dspring.profiles.active=default", "/app.jar"]
Have you tried removing the "." in the CMD command so it looks like this:
CMD ["/script.sh"]
There might be a difference in how the path is resolved when using RUN versus CMD.
Say I have a Dockerfile:
.
.
RUN echo 'source /root/script.sh' >> /etc/bash.bashrc
(The script adds some env variables)
If I:
1) Do this:
docker run -it -v /home/user/script.sh:/root/script.sh image
It takes me to a shell where, if I call "env", I see the variable set by the script.
But if I:
2) Do this:
docker run -it -v /home/user/script.sh:/root/script.sh image env
It prints out the environment and exits, and my variable is missing.
What am I missing? I need the variable to exist even if I specify a command/script like "env" at the end of the docker run command.
When you run a command like
docker run ... image command
Docker directly runs the command you give; it doesn’t launch any kind of shell, and there’s no opportunity for a .bashrc or similar file to be read.
I’d suggest two things here:
If your program does need environment variables set in some form, set them directly using Dockerfile ENV directives. Don’t try to edit .bashrc or /etc/profile or any other shell dotfile; they won’t reliably get run.
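A minimal sketch of that; the variable names here are placeholders, not anything from your script:

# Set at build time; visible to every process in the container,
# including docker exec debugging shells
ENV APP_HOME=/app
ENV PATH="$APP_HOME/bin:$PATH"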
As much as you can, install things in places where you don't need to change environment variables. For instance, Python supports a "virtual environment" concept that allows an isolated library environment, which requires changing $PATH and similar things; but Docker provides the same isolation on its own, so just install things into the "global" package space.
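In a Dockerfile that usually reduces to an ordinary system-wide install; a typical line, assuming a requirements.txt:

# Installs into the image's global site-packages; no virtualenv needed
RUN pip install -r requirements.txt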
If you really can’t manage either of these things, then you can write an entrypoint script that sets environment variables and then launches the container’s command. This might look like
#!/bin/sh
. /root/script.sh
exec "$#"
And then you could include this in your Dockerfile like
...
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/app/myapp"]
(If you need to use docker exec to get a debugging shell in the container, that won’t be a child process of the entrypoint and won’t get its environment variables.)
I have a Dockerfile with the following CMD as the last line
CMD ["/usr/local/myapp/bin/startup.sh", "docker"]
Part of a script that is executed against the docker image during startup is as follows
# find directory of cacerts file in DOCKER_JAVA_HOME of container
DOCKER_CACERTS_DIR=$(dirname "$(docker run "$DOCKER_IMAGE_ID" find "$DOCKER_JAVA_HOME" -name cacerts)")
However, this still executes the CMD line from my Dockerfile.
I have found that I can alter this behaviour by changing the line in the script as follows.
# find directory of cacerts file in DOCKER_JAVA_HOME of container
DOCKER_CACERTS_DIR=$(dirname "$(docker run --entrypoint find "$DOCKER_IMAGE_ID" "$DOCKER_JAVA_HOME" -name cacerts)")
However, I didn't think this would be necessary. Is it normal for docker to execute the CMD when overridden in the docker run command? I thought this was supposed to be one of the differences between using CMD and ENTRYPOINT, that you could easily override CMD without using the --entrypoint flag.
In case it's important, this is using docker version 17.03.0-ce
The image being run has an ENTRYPOINT defined somewhere, probably in the image you are building FROM if there isn't one in your Dockerfile.
When ENTRYPOINT and CMD are defined, Docker will pass the CMD to the ENTRYPOINT as arguments. From there, it's up to the ENTRYPOINT executable to decide what to do.
The arguments could be ignored completely, modified as the entrypoint sees fit, or passed through unchanged to be run. That behaviour is image-specific.
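If you want to confirm where that ENTRYPOINT is coming from, docker inspect will show what is baked into the image:

docker inspect --format '{{json .Config.Entrypoint}}' "$DOCKER_IMAGE_ID"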