I have to use a custom docker image, which defines Entrypoint like this:
.
.
"Entrypoint": [
"tini",
"-g",
"--"
],
.
.
If I run docker-compose up without specifying any argument for this, the container exits like this:
ml4t | tini (tini version 0.18.0)
ml4t | Usage: tini [OPTIONS] PROGRAM -- [ARGS] | --version
ml4t |
ml4t | Execute a program under the supervision of a valid init process (tini)
ml4t |
ml4t | Command line options:
ml4t |
ml4t | --version: Show version and exit.
ml4t | -h: Show this help message and exit.
ml4t | -p SIGNAL: Trigger SIGNAL when parent dies, e.g. "-p SIGKILL".
ml4t | -v: Generate more verbose output. Repeat up to 3 times.
ml4t | -w: Print a warning when processes are getting reaped.
ml4t | -g: Send signals to the child's process group.
ml4t | -e EXIT_CODE: Remap EXIT_CODE (from 0 to 255) to 0.
ml4t | -l: Show license and exit.
ml4t |
ml4t | Environment variables:
ml4t |
ml4t | TINI_VERBOSITY: Set the verbosity level (default: 1).
ml4t | TINI_KILL_PROCESS_GROUP: Send signals to the child's process group.
ml4t |
ml4t exited with code 1
I realized that I have to pass bash to this tini command. How can I do this without creating a new Dockerfile?
That sounds like your Dockerfile is missing a CMD. That wouldn't usually be bash, but instead the actual application you'd want the container to run.
ENTRYPOINT ["tini", "-g", "--"] # must be JSON-array form
CMD my_app --foreground
If you have both an ENTRYPOINT and a CMD, they are simply combined, and the CMD is passed as arguments to the ENTRYPOINT. This is a simplified form of a very common pattern where CMD contains the actual command you want the container to run, and ENTRYPOINT is a script or wrapper that does some first-time setup and then executes the CMD (in this case, it wraps it with a lightweight single-purpose init process).
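With the two lines above, Docker concatenates them, so the container's main process becomes (my_app being the placeholder from the sketch):
tini -g -- my_app --foreground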
After you make this edit to your existing Dockerfile, you can run docker build to rebuild the image, or docker-compose build or docker-compose up --build if you're using Compose. You don't need to create a new Dockerfile just to add the CMD (though you could if you wanted).
You can override CMD in a couple of ways: by including a command: in your docker-compose.yml file, or by including a command after the image name in docker run, or by including a command after the service name in docker-compose run. It's better to put CMD in the Dockerfile than to put command: in docker-compose.yml, lest other users run into the same problem you have.
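If you do want to override it without touching the image, a minimal Compose sketch might look like this (the service name ml4t is taken from the log prefix above; the image name and command are assumptions):
services:
  ml4t:
    image: your-custom-image   # assumption: whatever image you are already using
    command: ["my_app", "--foreground"]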
Use docker-compose run instead of up; it lets you pass a command after the service name.
Doc: https://docs.docker.com/compose/reference/run/
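For example, to get the bash shell you mentioned without creating a new Dockerfile (the service name ml4t is taken from the logs above):
docker-compose run --rm ml4t bash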
Related
I am new to the docker world. I have to invoke a shell script that takes command line arguments through a docker container.
Ex: My shell script looks like:
#!/bin/bash
echo $1
Dockerfile looks like this:
FROM ubuntu:14.04
COPY ./file.sh /
CMD /bin/bash file.sh
I am not sure how to pass the arguments while running the container
With this script in file.sh
#!/bin/bash
echo Your container args are: "$@"
and this Dockerfile
FROM ubuntu:14.04
COPY ./file.sh /
ENTRYPOINT ["/file.sh"]
you should be able to:
% docker build -t test .
% docker run test hello world
Your container args are: hello world
Use the same file.sh
#!/bin/bash
echo $1
Build the image using the existing Dockerfile:
docker build -t test .
Run the image with arguments abc or xyz or something else.
docker run -ti --rm test /file.sh abc
docker run -ti --rm test /file.sh xyz
There are a few things interacting here:
docker run your_image arg1 arg2 will replace the value of CMD with arg1 arg2. That's a full replacement of the CMD, not appending more values to it. This is why you often see docker run some_image /bin/bash to run a bash shell in the container.
When you have both an ENTRYPOINT and a CMD value defined, docker starts the container by concatenating the two and running that concatenated command. So if you define your entrypoint to be file.sh, you can now run the container with additional args that will be passed as args to file.sh.
Entrypoints and commands in docker have two syntaxes: a string syntax that will launch a shell, and a JSON syntax that will perform an exec. The shell is useful to handle things like IO redirection, chaining multiple commands together (with things like &&), variable substitution, etc. However, that shell gets in the way of signal handling (if you've ever seen a 10 second delay when stopping a container, this is often the cause) and of concatenating an entrypoint and command together. If you define your entrypoint as a string, it would run /bin/sh -c "file.sh", which alone is fine. But if you have a command defined as a string too, you'll see something like /bin/sh -c "file.sh" /bin/sh -c "arg1 arg2" as the command being launched inside your container, which is not so good. See the table in the Dockerfile reference (https://docs.docker.com/engine/reference/builder/#understand-how-cmd-and-entrypoint-interact) for more on how these two options interact.
The shell -c option only takes a single argument. Everything after that is passed to that single argument as $0, $1, $2, etc., but it does not reach an embedded shell script unless you explicitly pass the args along. I.e. /bin/sh -c 'file.sh "$1" "$2"' sh "arg1" "arg2" would work, but /bin/sh -c "file.sh" "arg1" "arg2" would not, since file.sh would be called with no args.
Putting that all together, the common design is:
FROM ubuntu:14.04
COPY ./file.sh /
RUN chmod 755 /file.sh
# Note the json syntax on this next line is strict, double quotes, and any syntax
# error will result in a shell being used to run the line.
ENTRYPOINT ["file.sh"]
And you then run that with:
docker run your_image arg1 arg2
There's a fair bit more detail on this at:
https://docs.docker.com/engine/reference/run/#cmd-default-command-or-options
https://docs.docker.com/engine/reference/builder/#exec-form-entrypoint-example
With Docker, the proper way to pass this sort of information is through environment variables.
So with the same Dockerfile, change the script to
#!/bin/bash
echo $FOO
After building, use the following docker command:
docker run -e FOO="hello world!" test
What I have is a script file that actually runs things. This script file might be relatively complicated. Let's call it "run_container". This script takes arguments from the command line:
run_container p1 p2 p3
A simple run_container might be:
#!/bin/bash
echo "argc = ${#*}"
echo "argv = ${*}"
What I want to do is, after "dockering" this, be able to start up this container with the parameters on the docker command line, like this:
docker run image_name p1 p2 p3
and have the run_container script be run with p1 p2 p3 as the parameters.
This is my solution:
Dockerfile:
FROM docker.io/ubuntu
ADD run_container /
ENTRYPOINT ["/bin/bash", "-c", "/run_container \"$#\"", "--"]
If you want to bake the argument into the image:
CMD /bin/bash /file.sh arg1
If you want to be able to supply or override it at run time:
ENTRYPOINT ["/bin/bash"]
CMD ["/file.sh", "arg1"]
Then in the host shell:
docker build -t test .
docker run -i -t test
I wanted to use the string version of ENTRYPOINT so I could use the interactive shell.
FROM docker.io/ubuntu
...
ENTRYPOINT python -m server "$@"
And then the command to run (note the --):
docker run -it server -- --my_server_flag
The way this works is that the string version of ENTRYPOINT runs a shell with the command specified as the value of the -c flag. Arguments passed to the shell after -- are provided as arguments to the command where "$@" is located. See the table here: https://tldp.org/LDP/abs/html/options.html
(Credit to @jkh's and @BMitch's answers for helping me understand what's happening.)
Another option...
To make this work:
docker run -d --rm $IMG_NAME "bash:command1&&command2&&command3"
in dockerfile
ENTRYPOINT ["/entrypoint.sh"]
in entrypoint.sh
#!/bin/sh
entrypoint_params=$1
printf "==>[entrypoint.sh] %s\n" "entry_point_param is $entrypoint_params"
PARAM1=$(echo "$entrypoint_params" | cut -d':' -f1) # first field; must be 'bash' (tested below)
PARAM2=$(echo "$entrypoint_params" | cut -d':' -f2) # the real command list, separated by &&
printf "==>[entrypoint.sh] %s\n" "PARAM1=$PARAM1"
printf "==>[entrypoint.sh] %s\n" "PARAM2=$PARAM2"
if [ "$PARAM1" = "bash" ]; then
    printf "==>[entrypoint.sh] %s\n" "about to run the $PARAM2 commands"
    # tr works per character, so each '&' becomes a newline; the resulting empty lines are harmless
    echo "$PARAM2" | tr '&' '\n' | while read cmd; do
        $cmd
    done
fi
I'm relatively new to docker (at least to doing more than running images others have built) and I'm stuck on this one. I'm building an app using Deno and trying to get it running in docker. My base image is the official deno image with Alpine, which uses an .sh script as its entry point. The entry point script as defined in the official image is supposed to look at the first argument in the CMD (in this case "run") and, if it's in a list of known deno commands (which it is), run it with the deno command. Instead I get an error.
the CMD
CMD run --allow-net --allow-read --lock=lock.json mod.ts
the error
/bin/sh: run: not found
When I hard-code deno into the CMD, it runs fine:
CMD deno run --allow-net --allow-read --lock=lock.json mod.ts
I can't figure why it's not working through the script as an entry point. What am I doing wrong?
docker-entry.sh
#!/bin/sh
set -e
if [ "$1" != "${1#-}" ]; then
# if the first argument is an option like `--help` or `-h`
exec deno "$#"
fi
case "$1" in
bundle | cache | compile | completions | coverage | doc | eval | fmt | help | info | install | lint | lsp | repl | run | test | types | uninstall | upgrade | vendor )
# if the first argument is a known deno command
exec deno "$#";;
esac
exec "$#"
my Dockerfile
FROM denoland/deno:alpine-1.19.2
# The port that your application listens to.
EXPOSE MYPORTNUMBER
WORKDIR /app
# Prefer not to run as root.
USER deno
# Cache the dependencies as a layer (the following two steps are re-run only when deps.ts is modified).
# Ideally cache deps.ts will download and compile _all_ external files used in main.ts.
COPY deps.ts .
RUN deno cache deps.ts
# These steps will be re-run upon each file change in your working directory:
ADD . .
# Compile the main app so that it doesn't need to be compiled each startup/entry.
RUN deno cache mod.ts
CMD run --allow-net --allow-read --lock=lock.json mod.ts
base image info here
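Given the string-vs-exec behavior described in an earlier answer, the shell-form CMD runs as /bin/sh -c "run --allow-net ...", so the entrypoint script sees /bin/sh as $1, falls through to exec "$@", and sh then can't find a run binary. A plausible fix (a sketch; it assumes nothing else in the image depends on the shell form) is to write the CMD in exec form so that "run" reaches the script as $1:
CMD ["run", "--allow-net", "--allow-read", "--lock=lock.json", "mod.ts"]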
I have the following line in my Dockerfile which is supposed to capture the display number of the host:
RUN DISPLAY_NUMBER="$(echo $DISPLAY | cut -d. -f1 | cut -d: -f2)" && echo $DISPLAY_NUMBER
When I try to build the Dockerfile, DISPLAY_NUMBER is empty. However, when I run the same command directly in the terminal, I see the result. Is there anything that I'm doing wrong here?
Commands specified with RUN are executed when the image is built. There is no display during the build, hence the output is empty.
You can exchange RUN for ENTRYPOINT; then the command is executed when the container starts.
But how to forward the host's display to the container is another matter entirely.
Host environment variables cannot be passed during the build, only at run-time.
Only build args can be specified, by first declaring the arg:
ARG DISPLAY_NUMBER
and then running
docker build . --no-cache -t disp --build-arg DISPLAY_NUMBER=$DISPLAY_NUMBER
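The declared arg can then be referenced by later build instructions, for example (a sketch; persisting it with ENV is optional):
ARG DISPLAY_NUMBER
RUN echo "Display number at build time: $DISPLAY_NUMBER"
ENV DISPLAY_NUMBER=$DISPLAY_NUMBER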
You can work around this issue using the envsubst trick
RUN echo $DISPLAY_NUMBER
And on the command line:
envsubst < Dockerfile | docker build . -f -
This rewrites the Dockerfile in memory and passes it to docker build with the environment variable substituted.
Edit: Note that this solution is of limited use though, because you probably want to do this at run-time anyway: the value should depend not on where the image is built, but on where it is run.
I would personally move that logic into your ENTRYPOINT or CMD script.
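A minimal run-time sketch along those lines (the file name entrypoint.sh is an assumption):
#!/bin/sh
# Compute the display number when the container starts, not when the image is built
DISPLAY_NUMBER="$(echo "$DISPLAY" | cut -d. -f1 | cut -d: -f2)"
echo "$DISPLAY_NUMBER"
exec "$@"
with ENTRYPOINT ["/entrypoint.sh"] in the Dockerfile and the host's DISPLAY passed in at run time, e.g. docker run -e DISPLAY.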
I'm trying to dockerize my Django project. In order to run the project with gunicorn from the shell, I use:
gunicorn --bind :8000 --workers $(( 2 * `cat /proc/cpuinfo | grep 'core id' | wc -l` + 1 )) MyQ.wsgi:application
which works great;
the idea is to utilize as many cores as I can, as described in the gunicorn documentation.
The $(( 2 * `cat /proc/cpuinfo | grep 'core id' | wc -l` + 1 )) part simply evaluates to 2n+1, where n is the number of cores in the system.
However, I'm having some trouble rewriting this command in a Dockerfile. Here is my current attempt:
CMD ["gunicorn", "--bind :8000", "--workers", "$(( 2 * `cat /proc/cpuinfo | grep 'core id' | wc -l` + 1 ))", "MyQ.wsgi:application"]
This crashes with the following error when I run docker run:
gunicorn: error: argument -w/--workers: invalid int value: "$(( 2 * `cat /proc/cpuinfo | grep 'core id' | wc -l` + 1 ))"
so basically the "$..." is not being evaluated, and I don't know how to fix that.
I think it's better to define an environment variable with the ENV instruction in your Dockerfile and use it in your CMD instruction. This way you can set the variable when creating a container from your image.
Define the environment variable like this in your Dockerfile:
ENV WORKERS 1
Then change your CMD instruction to this:
CMD ["sh", "-c", "gunicorn --bind :8000 --workers $WORKERS MyQ.wsgi:application"]
Finally, when you are creating the container, pass your WORKERS environment variable with the -e argument.
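For example (the image name is an assumption):
docker run -e WORKERS=5 my-django-image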
There are two forms of the CMD (and ENTRYPOINT and RUN) commands. The form you wrote is preferred:
CMD ["command_name", "--option", "value"]
But, it doesn't run a shell to preprocess the command line. So if you run, for instance,
CMD ["ls", ">", "/host/directory/foo.ls"]
it will pass > as an argument to the program and not do a shell redirect.
So for your construct to work, you need to use the other form, which implicitly wraps the command in a shell execution (/bin/sh -c '...'):
CMD gunicorn --bind :8000 ...
In practice, trying to force runtime constraints like worker count via the Dockerfile isn't what you want; you should allow things like this to be specified in the docker run command or similar. @HassanMusavi's answer is a better one.
Dockerfiles do not support run-time arguments (values you want computed when the container runs). But in your case you can write a script, say test.sh, that invokes gunicorn with the computed parameters, and point CMD at it, e.g. CMD ["test.sh"]. When you create a container from the image, the script runs inside the container and evaluates the $(( ... )) expression, reading the core count at run time (cat /proc/cpuinfo runs in your container, but it lists the cores of the machine the container runs on). This way you don't have to depend on calculating the cores yourself and passing the result with -e.
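A minimal test.sh along those lines (a sketch; the gunicorn arguments are taken from the question):
#!/bin/sh
# Compute workers as 2*cores+1 when the container starts, then hand off to gunicorn
exec gunicorn --bind :8000 --workers $(( 2 * $(grep -c 'core id' /proc/cpuinfo) + 1 )) MyQ.wsgi:application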
We have a custom C++ daemon application that forks once. So we've been doing this in our Upstart script on Ubuntu 12.04 and it works perfectly:
expect fork
exec /path/to/the/app
However now we need to pass in an argument to our app which contains the number of CPUs on the machine on which it runs:
cat /proc/cpuinfo | grep processor | wc -l
Our first attempt was this:
expect fork
exec /path/to/the/app -t `cat /proc/cpuinfo | grep processor | wc -l`
While that starts our app with the correct -t value, Upstart tracks the wrong pid value, I'm assuming because those cat, grep & wc commands all launch processes in exec before our app.
I also tried this, and even this doesn't work; I guess because setting an env var runs a process? Upstart still tracks the wrong pid:
expect fork
script
NUM_CORES=32
/path/to/the/app -t $NUM_CORES
end script
I've also tried doing this in an env stanza but apparently those don't run commands:
env num_cores=`cat /proc/cpuinfo | grep processor | wc -l`
Also tried doing this in pre-start, but env vars set there don't have any values in the exec stanza:
pre-start script
NUM_CORES=32
end script
Any idea how to get this NUM_CORES set properly, and still get Upstart to track the correct pid for our app that forks once?
It's awkward. The recommended method is to write an env file in the pre-start stanza and then source it in the script stanza. It's ridiculous, I know.
expect fork
pre-start script
exec >"/tmp/$UPSTART_JOB"
echo "NUM_CORES=$(cat /proc/cpuinfo | grep processor | wc -l)"
end script
script
. "/tmp/$UPSTART_JOB"
/path/to/app -t "$NUM_CORES"
end script
post-start script
rm -f "/tmp/$UPSTART_JOB"
end script
I use the exec line in the pre-start because I usually have multiple env variables and I don't want to repeat the redirection code.
This only works because the '. ' command is a built-in in dash and thus no process is spawned.
According to zram-config's upstart config:
script
NUM_CORES=$(grep -c ^processor /proc/cpuinfo | sed 's/^0$/1/')
/path/to/the/app -t $NUM_CORES
end script
I would add
export NUM_CORES
after assigning it a value in the script stanza. Keep in mind that /bin/sh may be symlinked to a non-Bash shell and may be what runs these scripts, so I would avoid Bash-only constructs.
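Putting that together with the zram-config approach above (a sketch; as the question notes, command substitution inside script can still confuse Upstart's pid tracking when expect fork is in effect):
script
    NUM_CORES=$(grep -c ^processor /proc/cpuinfo | sed 's/^0$/1/')
    export NUM_CORES
    /path/to/the/app -t "$NUM_CORES"
end script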
Re: using the "env" stanza, it passes values literally and does not process them using shell conventions.