How do I pass default CMD to ENTRYPOINT with variable expansion? - docker

I'm trying to use ENTRYPOINT and CMD such that ENTRYPOINT is the script I am calling and CMD provides the default arguments to the ENTRYPOINT command, but is overridden by any arguments given to docker run.
The part I'm struggling with is how to have environment variables expanded in my default arguments supplied by CMD.
For example, given this Dockerfile built with the tag test:
FROM busybox
ENV AVAR=hello
ENTRYPOINT ["/bin/sh", "-c", "exec echo \"$#\""]
CMD ["${AVAR}"]
I am expecting the following results:
docker run -it --rm test
> hello
docker run -it --rm test world
> world
Note: I'm just using echo here as an example. In my actual Dockerfile I'll be calling ./bin/somescript.sh which is a script to launch an application I have no control over and is what I am trying to pass arguments to.
This question is similar, but it asks about expanding variables in the ENTRYPOINT; I'm trying to expand variables in CMD.
I've tried many combinations of shell/exec form for both ENTRYPOINT and CMD but I just can't seem to find the magic combination:
FROM busybox
ENV AVAR=hello
ENTRYPOINT ["/bin/sh", "-c", "exec echo \"$#\""]
CMD ${AVAR}
docker run -it --rm test
> -c ${AVAR}
Is what I'm trying to do possible?
Many more failed attempts
This is the closest I can get:
FROM busybox
ENV AVAR=hello
ENV AVAR2=world
ENTRYPOINT ["/bin/sh", "-c", "echo $#", "$#"]
CMD ["${AVAR}", "${AVAR2}"]
This works fine when I pass in an argument to the run command:
docker run -it --rm test world
> world
But it doesn't expand the default arguments when not given a command:
docker run -it --rm test
> ${AVAR} ${AVAR2}

Found the magic. I don't completely follow what's going on here, but I'll try to explain it.
FROM busybox
ENV AVAR=hello
ENV AVAR2=world
ENTRYPOINT ["/bin/sh", "-c", "echo $(eval echo $#)", "$#"]
CMD ["${AVAR}", "${AVAR2}"]
docker run -it --rm test
> hello world
docker run -it --rm test world
> world
My attempt at explanation (I'm really not sure if this is right):
CMD in exec form is passed as arguments to ENTRYPOINT without shell substitution. I'm taking those values and passing them as positional arguments to /bin/sh -c ..., which is why I need the "extra" "$@" at the end of the ENTRYPOINT array.
Within ENTRYPOINT I need to expand $@ and then do parameter substitution on the result of that expansion. So inside a command substitution ($(...)) I call eval to do the parameter substitution and then echo the result, which ends up being the contents of CMD but with variables substituted.
If I pass an argument to docker run it simply takes the place of CMD and is evaluated correctly.
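To make that concrete, here is a rough sketch of the command line Docker effectively assembles from that ENTRYPOINT and CMD, run outside Docker (assuming AVAR and AVAR2 are exported the way ENV does in the image). The trailing "$@" element becomes $0 of the -c shell, and the CMD items become $1 and $2:
AVAR=hello AVAR2=world /bin/sh -c 'echo $(eval echo $@)' '$@' '${AVAR}' '${AVAR2}'
# $0 = '$@' (the "extra" element), $1 = '${AVAR}', $2 = '${AVAR2}'
# $@ expands to the literal strings ${AVAR} ${AVAR2}; eval re-parses them, so
# eval echo $@  ->  echo hello world  ->  hello world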

Related

Cannot write ssh key to Docker container in CMD

I am trying to write ssh keys into a docker image using CMD.
I have a Dockerfile like the one below.
FROM public.ecr.aws/ubuntu/ubuntu:18.04_stable
CMD ["sh", "-c", "echo $PUBLIC_KEY >> ./.ssh/id_rsa.pub"]
CMD ["sh", "-c", "echo $PRIVATE_KEY >> ./.ssh/id_rsa"]
I run the container with env var like so:
docker run -it -d -e PUBLIC_KEY="key1" -e PRIVATE_KEY="key2" my-image
As a result, neither key gets written. However, when I manually run these two commands with docker exec against the running container, both the public and private keys are written to the correct location.
Can anyone explain this? How should I make the CMD work?
CMD is a way to define a default command to run when starting a container, and there can be only one default command. In the example you have given, the second CMD becomes the default command and the first CMD never runs. The default command runs only when you do not specify a command on the docker run command line, i.e. as part of the command line
docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
If you provide a COMMAND, the CMD in the Dockerfile will not be run.
When you issue docker exec, you explicitly run the command line, so it will always run.
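If the goal is simply to have both keys written when the container starts, one option (a minimal sketch, assuming the ./.ssh directory already exists relative to the working directory in the image) is to fold both writes into a single default command; note the container will still exit once that command finishes unless something long-running follows:
FROM public.ecr.aws/ubuntu/ubuntu:18.04_stable
CMD ["sh", "-c", "echo $PUBLIC_KEY >> ./.ssh/id_rsa.pub && echo $PRIVATE_KEY >> ./.ssh/id_rsa"]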

Append argument to ENTRYPOINT in Docker run where some args are already defined

I'm using ENTRYPOINT to pass an argument in at docker run time, but I cannot get the runtime argument to reach my script as an argument.
Dockerfile
FROM debian:latest
ENV a="my arg in Dockerfile"
COPY . .
RUN chmod +x myscript.sh
ENTRYPOINT ["/bin/bash", "-c", "/myscript.sh ${a}"]
with myscript.sh
#!/bin/bash
echo "From script: $#"
When I run docker build -t my_image . then docker run -it --rm my_image I get the result as expected: From script: my arg in Dockerfile
But when I run docker run -it --rm my_image from_run I get the same result: From script: my arg in Dockerfile so the "from_run" is not being passed down to the script through ENTRYPOINT.
I read that arguments passed after the image name are appended to the ENTRYPOINT, but clearly I'm not understanding something here.
Same result when using the shell form instead of the JSON (exec) form:
ENTRYPOINT /myscript.sh ${a}
https://docs.docker.com/engine/reference/run/#entrypoint-default-command-to-execute-at-runtime
The main container command is made up of two parts. The string you pass after the docker run image-name replaces the Dockerfile CMD, and it's appended to the Dockerfile ENTRYPOINT.
For your docker run command to work, you need to provide the command you want to run as ENTRYPOINT and its arguments as CMD. You do not need an environment variable here. However, it is important that both parts use JSON-array syntax and that neither invokes a shell. If ENTRYPOINT includes a shell then things get syntactically complex (see @KamilCuk's answer); if CMD is written in shell form then that shell never actually runs, and your ENTRYPOINT command receives /bin/sh and -c as literal parameters instead.
FROM debian:latest
COPY myscript.sh /usr/local/bin/myscript # preserves execute permissions
ENTRYPOINT ["myscript"] # in a $PATH directory
CMD ["my", "arg", "in", "Dockerfile"]
docker run --rm the-image
docker run --rm the-image my arg from command line
If you want the initial set of command-line arguments to be included and the docker run arguments to be appended, you can move them into the ENTRYPOINT line; note that overriding this with docker run --entrypoint is syntactically awkward if you ever decide you need to remove some of the options.
ENTRYPOINT ["myscript", "--first-default", "--second-default"]
# CMD []
docker run --rm the-image
docker run --rm the-image --user-option
docker run --entrypoint myscript the-image --first-default --no-second-default
If you can update your application to accept options as environment variables in addition to command-line settings, this makes all of this syntactically easier.
ENV FIRST_DEFAULT=yes
ENV SECOND_DEFAULT=yes
CMD ["myscript"]
docker run --rm the-image
docker run --rm -e USER_OPTION=yes the-image
docker run --rm -e SECOND_DEFAULT=no the-image
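For completeness, a minimal sketch of what such an environment-driven script might look like (the variable names are just the hypothetical ones used above):
#!/bin/sh
# hypothetical myscript: read options from the environment, with defaults
: "${FIRST_DEFAULT:=yes}"
: "${SECOND_DEFAULT:=yes}"
echo "first-default=$FIRST_DEFAULT second-default=$SECOND_DEFAULT user-option=${USER_OPTION:-unset}"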
Bash is Bash; see the bash manual for how -c passes arguments. The following:
/bin/bash -c "/myscript.sh ${a}" from_run
passes myscript.sh only the unquoted expansion of $a, so $a undergoes word splitting and filename expansion, and the argument from_run is assigned to $0 of the -c shell. I would do:
ENTRYPOINT ["/bin/bash", "-c", "./myscript.sh \"$a\" \"$#\"", "--"]
Note that it's typical to use upper-case (and more descriptive) names for environment variables like $a.
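To see how -c hands out arguments without Docker in the picture, a quick illustration (plain bash, nothing image-specific):
bash -c 'echo "0=$0 1=$1 all=$@"' from_run another_arg
# prints: 0=from_run 1=another_arg all=another_arg
# the first word after the -c string becomes $0, not $1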

I want to run a script during container run based on env variable that I pass

I want to run a script at container run time, not during the image build.
The script depends on an env variable that I pass when running the container.
Script:
#!/bin/bash
touch $env
Docker file
FROM busybox
ENV env parm
RUN mkdir PRATHAP
ADD apt.sh /PRATHAP
WORKDIR /PRATHAP
RUN chmod 777 apt.sh
CMD sh apt.sh
When I try to run: docker container run -it -e env=test.txt sh
the script does not run.
I just get the sh terminal. If I remove the trailing sh, the container does not stay alive. Please help me understand how to achieve this.
Your docker run command ends with sh, which overrides the CMD in your Dockerfile. To get around this, you need to replicate the original CMD on the command line.
$ docker run -it -e env=test.txt <image:tag> sh -c "sh apt.sh; sh"
Remember that a Docker container runs a single command, and then exits. If you docker run your image without overriding the command, the only thing the container will do is touch a file inside the isolated container filesystem, and then it will promptly exit.
If you need to do some startup-time setup, a useful pattern is to write it into an entrypoint script. When a container starts up, Docker runs whatever you have named as the ENTRYPOINT, passing the CMD as additional parameters (or it just runs CMD if there is no ENTRYPOINT). You can use the special shell command exec "$@" to run the command. So revisiting your script as an entrypoint script:
#!/bin/sh
# ^^ busybox image doesn't have bash (nor does alpine)
# Do the first-time setup
touch "$env"
# Launch the main container process
exec "$#"
In your Dockerfile set this script to be the ENTRYPOINT, and then whatever long-running command you actually want the container to do to be the CMD.
FROM busybox
WORKDIR /PRATHAP # Also creates the directory
COPY apt.sh . # Generally prefer COPY to ADD
RUN chmod 0755 apt.sh # Not world-writable
ENV env parm
ENTRYPOINT ["./apt.sh"] # Must be JSON-array syntax
# Do not need to name interpreter, since
# it is executable with #! line
CMD sh # Or whatever the container actually does
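With that in place, the env variable controls the one-time setup and the CMD (or anything passed after the image name) becomes the main process; a sketch of the usage (the image name here is illustrative):
docker run -it -e env=test.txt my-image          # apt.sh touches test.txt, then execs the default sh
docker run -it -e env=other.txt my-image ls -l   # apt.sh touches other.txt, then execs ls -l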

How to pass dynamic values to Docker container?

I am running a perl script from a Dockerfile and I would like to pass dynamic command-line arguments to the perl script when running the docker image (container).
Ex: CMD perl test.pl <args>. I am new to Docker.
Is there any possible way to pass dynamic values to the docker container like
docker run <image name> <args>?
You could use an Entrypoint script:
$ docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
and
If the image also specifies an ENTRYPOINT then the CMD or COMMAND get appended as arguments to the ENTRYPOINT.
So depending on your Dockerfile you'd have something like this (python sample app):
FROM jfloff/alpine-python:3.6
# add entrypoint script
USER root
COPY start.sh /
RUN chmod a+x /start.sh
ENTRYPOINT ["/start.sh"]
CMD ["arg1"]
and start.sh:
#!/bin/bash
echo $1
# don't exit
/usr/bin/tail -f /dev/null
Now you can do something like:
15:19 $ docker run f49b567f05f1 Hello
Hello
15:21 $ docker run f49b567f05f1
arg1
Now if your script is set up to take those arguments, you should be able to run it as you want. Reference from Docker is attached, search for "Overriding Dockerfile image defaults" in this and then look in the CMD section.
I am not sure whether you can do it with CMD, but if you just want to execute the perl script with some passed-in arguments, use ENTRYPOINT.
ENTRYPOINT ["perl", "test.pl"]
CMD ["default-arg"]
Run the container with:
docker run <image-name> overriding-arg
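Either way, the effective command inside the container is the ENTRYPOINT array followed by whatever ends up as CMD:
docker run <image-name>                   # runs: perl test.pl default-arg
docker run <image-name> overriding-arg    # runs: perl test.pl overriding-arg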

Enter into docker container after shell script execution is complete

I want to execute a shell script as the ENTRYPOINT and then enter the docker container once the shell script execution is complete.
My Dockerfile has following lines at the end:
WORKDIR artifacts
ENTRYPOINT ./my_shell.sh
When I run it with the following command, it executes the shell script but doesn't drop me into the docker container.
docker run -it testub /bin/bash
Can someone please let me know if I am missing anything here?
There are two options that control what a container runs when it starts, the entrypoint (ENTRYPOINT) and the command (CMD). They follow the following logic:
If the entrypoint is defined, then it is run with the value for the command included as additional arguments.
If the entrypoint is not defined, then the command is run by itself.
You can override one or both of the values defined in the image. docker run -it --entrypoint /bin/sh testub would run /bin/sh instead of ./my_shell.sh, overriding the entrypoint. And docker run -it testub /bin/bash will override the command, making the container start with ./my_shell.sh /bin/bash.
The quick answer is to run docker run -it --entrypoint /bin/bash testub and from there, kick off your ./my_shell.sh. A better solution is to update ./my_shell.sh to check for any additional parameters and run them with the following at the end of the script:
if [ $# -gt 0 ]; then
exec "$#"
fi
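One caveat (an assumption worth checking against your Dockerfile): for those extra arguments to reach my_shell.sh at all, the ENTRYPOINT needs to be in exec (JSON-array) form; with the shell form ENTRYPOINT ./my_shell.sh shown above, anything after the image name is effectively dropped by the implicit /bin/sh -c wrapper. A sketch of the combination:
ENTRYPOINT ["./my_shell.sh"]

docker run -it testub /bin/bash   # my_shell.sh runs, then exec's into an interactive bash
docker run -it testub             # my_shell.sh runs and the container exits when it finishes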
