The question Bazel run - passing main arguments asks how to pass arguments to the container when using bazel run //package:target. My question is how to pass docker arguments via bazel run //package:target, e.g. --gpus=all.
The only workaround that I am aware of is to split it into two commands (the first builds the image and loads it into the local Docker daemon without running it), e.g.
bazel run //package:target -- --norun
docker run --gpus=all package:target
I was not able to find docker arguments in the container_image rules.
Is there a better way?
I would like to encode docker arguments like --gpus=all in the container_image configuration.
Related
I have a Dockerfile that I use to build the same image but for slightly different purposes. Most of the time I want it to just be an "environment" without a specific entrypoint so that the user just specifies that on the Docker run line:
docker run --rm -it --name ${CONTAINER} ${IMAGE} any_command parameters
But for some applications I want users to download the container and run it without having to set a command.
docker build -t ${IMAGE}:demo (--entrypoint ./demo.sh) <== would be nice to have
Yes, I can have a different Dockerfile for that, append an entrypoint to the basic Dockerfile during builds, or use various other mickey-mouse hacks, but those are all just one more thing that can go wrong; they add complexity and merely work around the essential requirement.
Any ideas? staged builds?
The Dockerfile CMD directive sets the default command. So if your Dockerfile ends with
CMD default_command
then you can run the image in multiple ways
docker run "$IMAGE"
# runs default_command
docker run "$IMAGE" any_command parameters
# runs any_command instead
A container must run a command; you can't "just run a container" with no process in it.
You do not want ENTRYPOINT here since its syntax is noticeably harder to work with at the command line. Your any_command would be passed as arguments to the entrypoint process, rather than replacing the built-in default.
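As a concrete sketch (the base image and echo command are placeholders):

FROM ubuntu:22.04
# Default command; anything given after the image name replaces it entirely
CMD ["sh", "-c", "echo running default_command"]

docker build -t my-env .
docker run --rm my-env
# prints "running default_command"
docker run --rm my-env ls /
# lists the image's root directory instead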
I am trying to run tests in Docker as part of my build process. What I'd like to do is start the Docker container, ignore the normal entry point, run a test command, and immediately exit with the test status.
Something like:
results=`docker run my_image --entrypoint python -m unittest discover`
When I try this I get: entrypoint requires the handler name to be the first argument
I believe this is specific to the image I am building from (AWS Lambda).
So far I'm only seeing options to either A) start the container and issue an arbitrary command, or B) have a second Dockerfile just for testing.
Is it possible to run a docker image with an arbitrary command (ignoring the default entrypoint) where after the command is executed the container is killed?
Ideally you should restructure your application to avoid needing to override the entrypoint.
Remember that, when you run an image, the ENTRYPOINT and CMD are combined to form a single command. If you'll frequently be replacing this (combined) command string, it's best to put the whole command into CMD. If you have an ENTRYPOINT at all, it should be a wrapper that runs the command passed to it as arguments (in a shell script, with exec "$@").
# Optional entrypoint -- MUST be JSON-array syntax, and MUST end with `exec "$@"`
# ENTRYPOINT ["/entrypoint.sh"]
CMD python ... whatever you had before
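For reference, a minimal sketch of such a wrapper entrypoint (matching the commented /entrypoint.sh above; the setup step is a placeholder):

#!/bin/sh
# ... any one-time setup goes here ...
# then hand control to the CMD, or to whatever was passed on the docker run line
exec "$@"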
Then once you do this, you can easily override the command part on the docker run command line:
docker run my_image python -m unittest discover
(There are two other ENTRYPOINT patterns I've seen. One is a "container as command" pattern, where the entire command line is in ENTRYPOINT, and the command part is used to take additional arguments; this supports a docker run imagename --extra-args pattern. If you really need this pattern, see below to override the whole thing. The second arbitrarily splits ENTRYPOINT ["python"], CMD ["script.py"], but there's no particular reason to do this; just combine them into CMD.)
If you can't refactor your image's Dockerfile, then you need to override --entrypoint. This option only takes a single command word, though, and it's treated as a Docker option so it needs to come before the image name. That leads to this awkward construction (split into multiple lines for readability):
docker run \
--entrypoint python \
my_image \
-m unittest discover
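Since the goal is to exit with the test status: docker run (in the foreground) exits with the container's exit code, so a sketch like the following, with --rm added to clean up the stopped container, gives you the result directly:

docker run --rm --entrypoint python my_image -m unittest discover
echo $?
# 0 if the tests passed, non-zero otherwise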
Also consider the possibilities of using a non-Docker host virtual environment for routine tasks like running your service's unit tests.
I have a Dockerfile like this:
FROM java:8
ARG cName
ADD target/jar1.jar p2p.jar
ADD ci/docker_entrypoint.sh .
CMD ["bash", "docker_entrypoint.sh" , "$cName"]
I have a docker_entrypoint.sh which look :
java -cp p2p.jar $1
I have multiple classes to run, and I am providing the class name as an input parameter to the Dockerfile. I am running a couple of commands to build and run the Docker image.
docker build -f Dockerfile -t docker-p2p --build-arg cName=com.HelloWorld .
docker run docker-p2p
after running the second command I am getting below error:
Error: Could not find or load main class $cName
I am new to Docker and I have not been able to parameterise my Dockerfile: when I hard-code a class name like "HelloWorld" in the Dockerfile it runs well, but when I try to pass it as a parameter, it fails with this error.
You have to distinguish between docker run, CMD, and ENTRYPOINT.
For your example you can use an entrypoint and set the parameter via an environment variable.
One simple and easy Dockerfile example could be:
FROM java:8
ENV NAME="John Dow"
ENTRYPOINT ["/bin/bash", "-c", "echo Hello, $NAME"]
with docker build . -t test and docker run -e NAME="test123" test
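Assuming the build succeeds, that run should print:

$ docker run -e NAME="test123" test
Hello, test123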
Also have a look at some further docu: docker-run-vs-cmd-vs-entrypoint.
If you do wind up with a Docker image that can do multiple things, it's a little unusual to create one image per task the way you're describing. You can pass additional command-line parameters in docker run or most other ways to start a container, and you can use that to control what the image does.
For example, you might want to set up your image so that you can run
docker run ... docker-p2p com.HelloWorld
passing the class name as an argument. I'd write an entrypoint script that wrapped this in a java call if appropriate (but passed through non-class names, like docker run ... sh):
#!/bin/sh
set -e
case "$1" of
com.*) exec java "$#" ;;
*) exec "$#" ;;
esac
The corresponding Dockerfile doesn't take any ARGs; it could be
FROM java:8
# I prefer COPY to ADD, unless you explicitly want automatic
# HTTP fetches and/or tar file extraction.
COPY target/jar1.jar /p2p.jar
COPY ci/docker_entrypoint.sh /
# Globally set the class path. (A Docker image only does one thing.)
ENV CLASSPATH /p2p.jar
# Always launch the entrypoint script.
ENTRYPOINT ["/docker_entrypoint.sh"]
# Give a default command, which with our script is a class name.
CMD ["com.HelloWorld"]
If you actually want a container per task, you could create a base image that contains everything up to the ENTRYPOINT line, and then create derived images FROM that base image that just set a different CMD.
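A sketch of that layout (the tag p2p-base is a placeholder):

# Dockerfile.base -- everything except the default class
FROM java:8
COPY target/jar1.jar /p2p.jar
COPY ci/docker_entrypoint.sh /
ENV CLASSPATH /p2p.jar
ENTRYPOINT ["/docker_entrypoint.sh"]

# Dockerfile.helloworld -- one derived image per task
FROM p2p-base
CMD ["com.HelloWorld"]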
I have a bunch of Dockerfiles that are built from a common automated place using the same build command:
docker build -t $name:$tag --build-arg BRANCH=$branch .
Some of the Dockerfiles contain this:
ARG BRANCH=master
And that argument is used for some steps of the image build.
But for the Dockerfiles that don't need that argument, I get this error at the end:
One or more build-args [BRANCH] were not consumed, failing build.
How can I overcome this problem without including the argument to all the Dockerfiles?
Have you considered grepping your Dockerfile for BRANCH and using the result to decide whether you should supply your ARG or not?
You could replace your automation build trigger with something like:
if grep BRANCH Dockerfile; then docker build -t $name:$tag --build-arg BRANCH=$branch .; else docker build -t $name:$tag . ; fi
I don't see any documented way to avoid this error without changing your input or your Dockerfile. robertobado already covers changing your input. As a second option, you can include an effectively unused build arg at the end of your Dockerfile which would have a very minor impact on your build.
ARG BRANCH=undefined
RUN echo "Built from branch ${BRANCH}"
Since this doesn't modify the filesystem, I believe the image checksum will be identical.
My specific use case is that I want to gather some data about the EC2 instance a container is running on and make it available as an environment variable. I'd like to do this when the container is built.
I was hoping to be able to do something like ENV VAR_NAME $(./script/that/gets/var) in my Dockerfile, but unsurprisingly that does not work (you just get the literal string $(./script...)).
I should mention that I know the docker run --env... will do this, but I specifically want it to be built into the container.
Am I missing something obvious? Is this even possible?
Docker v1.9 or newer
If you are using Docker v1.9 or newer, this is possible via support for build time arguments. Arguments are declared in the Dockerfile by using the ARG statement.
ARG REQUIRED_ARGUMENT
ARG OPTIONAL_ARGUMENT=default_value
When you later actually build your image using docker build you can pass arguments via the flag --build-arg as described in the docker docs.
$ docker build --build-arg REQUIRED_ARGUMENT=this-is-required .
Please note that it is not recommended to use build-time variables for passwords or secrets such as keys or credentials.
Furthermore, build-time variables may have great impact on caching. Therefore the Dockerfile should be constructed with great care to be able to utilize caching as much as possible and therein speed up the building process.
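A sketch of the caching concern (the package install is illustrative): declare the ARG as late as possible, because RUN instructions after an ARG declaration can miss the cache when the argument's value changes, while layers before it stay cached.

FROM ubuntu:22.04
# Cached regardless of the build argument's value
RUN apt-get update && apt-get install -y curl
# RUN instructions from here on may be re-executed whenever BUILD_ID changes
ARG BUILD_ID=dev
RUN echo "building ${BUILD_ID}"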
Edit: the "Docker newer than v1.9" part was added after input from leedm777's answer.
Docker before v1.9
If you are using a Docker version before 1.9, the ARG/--build-arg approach is not available. You cannot resolve this kind of info during the build, so you have to pass it as parameters to the docker run command.
Docker images are meant to be consistent over time, whereas containers can be tweaked and considered "throw-away processes".
More info about ENV
A docker discussion about dynamic builds
The old solution to this problem was to use templating. This is not a neat solution but was one of very few viable options at the time. (Inspiration from this discussion).
save all your dynamic data in a json or yaml file
create a Dockerfile "template" in which the dynamic data can later be expanded
write a script that creates a Dockerfile from the config data using some templating library that you are familiar with (a minimal sketch follows)
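A minimal shell sketch of that flow, using envsubst from gettext as the templating tool (Dockerfile.template and the script path are placeholders):

# Dockerfile.template contains, e.g.:
#   FROM ubuntu:22.04
#   ENV VAR_NAME ${VAR_VALUE}
export VAR_VALUE=$(./script/that/gets/var)
envsubst < Dockerfile.template | docker build -t myimage -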
Docker 1.9 has added support for build time arguments.
In your Dockerfile, you add an ARG statement, which has a similar syntax to ENV.
ARG FOO_REQUIRED
ARG BAR_OPTIONAL=something
At build time, you can pass a --build-arg argument to set the argument for that build. An ARG that was not given a default value in the Dockerfile evaluates to an empty string unless it is specified.
$ docker build --build-arg FOO_REQUIRED=best-foo-ever .
To build ENV VAR_NAME $(./script/that/gets/var) into the container, create a dynamic Dockerfile at build time:
$ docker build -t awesome -f Dockerfile .
$ # get VAR_NAME value:
$ VAR_VALUE=`docker run --rm awesome \
bash -c 'echo $(./script/that/gets/var)'`
$ # use dynamic Dockerfile:
$ {
echo "FROM awesome"
echo "ENV VAR_NAME $VAR_VALUE"
} | docker build -t awesome -
https://github.com/42ua/docker-autobuild/blob/master/emscripten-sdk/README.md#build-docker-image