Can you pass flags to the command that docker runs? - docker

The documentation for the run command gives the following syntax:
docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
However, I've found that at times I want to pass a flag to [COMMAND].
For example, I've been working with this image, where the [COMMAND] as specified in the Dockerfile is:
CMD ["/bin/bash", "-c", "/opt/solr/bin/solr -f"]
Is there any way to tack on flags to "/opt/solr/bin/solr -f" so that it's in the form "/opt/solr/bin/solr -f [-MY FLAGS]"?
Do I need to edit the Dockerfile, or is there some built-in functionality for this?

There is a special directive, ENTRYPOINT, which fits your needs. Unlike CMD, the arguments you pass to docker run are appended to the end of the ENTRYPOINT command rather than replacing it.
For example, you can write
ENTRYPOINT ["python"]
and run it with
docker run <image_name> -c "print(1)"
Note that this only works if you write the command in exec form (i.e. ["...", "..."]); otherwise ENTRYPOINT will invoke a shell and pass your arguments to that shell, not to your program.
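For contrast, with a shell-form ENTRYPOINT such as the sketch below, the extra arguments from docker run never reach python (they become positional parameters of the implicit sh -c wrapper instead):
ENTRYPOINT python
# docker run <image_name> -c "print(1)"  effectively runs /bin/sh -c "python",
# with -c and print(1) going to the wrapper shell, so python starts with no arguments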
More generally, you can combine ENTRYPOINT and CMD
ENTRYPOINT ["ping"]
CMD ["www.google.com"]
Here CMD supplies the default arguments for your ENTRYPOINT. Now you can run either of
docker run <image_name>
docker run <image_name> yandex.ru
and only CMD will be replaced.
A full reference on how ENTRYPOINT and CMD interact can be found here.
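Applied to the Solr image from the question, a minimal sketch (assuming you can edit its Dockerfile; -m 2g below is just an illustrative Solr option) would be:
ENTRYPOINT ["/opt/solr/bin/solr", "-f"]
docker run <image_name>          # runs: /opt/solr/bin/solr -f
docker run <image_name> -m 2g    # runs: /opt/solr/bin/solr -f -m 2g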

The CMD directive of a Dockerfile is the command that would be run when the container starts if no command was specified in the docker run command.
The main purpose of a CMD is to provide defaults for an executing container.
In your case, just use the docker run command as follows to override the default command specified in the Dockerfile:
docker run makuk66/docker-solr /bin/bash -c "/opt/solr/bin/solr -f [your flags]"

Related

Cannot write ssh key to Docker container in CMD

I am trying to write SSH keys into a Docker image using CMD.
I have a Dockerfile like the one below.
FROM public.ecr.aws/ubuntu/ubuntu:18.04_stable
CMD ["sh", "-c", "echo $PUBLIC_KEY >> ./.ssh/id_rsa.pub"]
CMD ["sh", "-c", "echo $PRIVATE_KEY >> ./.ssh/id_rsa"]
I run the container with env var like so:
docker run -it -d -e PUBLIC_KEY="key1" -e PRIVATE_KEY="key2" my-image
As a result, neither key gets written. However, when I manually docker exec these two commands against the running container, both the public key and the private key are written to the correct locations.
Can anyone explain this? How should I make the CMD work?
CMD defines the default command to run when a container starts, and there can be only one default command. In the example you have given, the second CMD becomes the default command, and the first CMD will never run. The default command runs only when you do not specify a command on the command line, i.e. as part of the command line
docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
If you provide a COMMAND, the CMD in the Dockerfile will not be run.
When you issue docker exec, you explicitly run the command line, so it will always run.
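One way to make both writes happen is to put them into a single CMD (a sketch, assuming the ./.ssh directory already exists relative to the working directory in the image):
FROM public.ecr.aws/ubuntu/ubuntu:18.04_stable
CMD ["sh", "-c", "echo $PUBLIC_KEY >> ./.ssh/id_rsa.pub && echo $PRIVATE_KEY >> ./.ssh/id_rsa"]
Running the same docker run -it -d -e PUBLIC_KEY="key1" -e PRIVATE_KEY="key2" my-image then writes both files from one default command.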

Append argument to ENTRYPOINT in Docker run where some args are already defined

I'm using ENTRYPOINT to pass in an argument at docker run time, but I cannot get the runtime argument to surface in my script as an argument.
Dockerfile
FROM debian:latest
ENV a="my arg in Dockerfile"
COPY . .
RUN chmod +x myscript.sh
ENTRYPOINT ["/bin/bash", "-c", "/myscript.sh ${a}"]
with myscript.sh
#!/bin/bash
echo "From script: $#"
When I run docker build -t my_image . and then docker run -it --rm my_image, I get the result as expected: From script: my arg in Dockerfile
But when I run docker run -it --rm my_image from_run, I get the same result: From script: my arg in Dockerfile, so "from_run" is not being passed down to the script through ENTRYPOINT.
I read that arguments passed after the image name are appended to the ENTRYPOINT, but clearly I'm not understanding something here.
I get the same result when using the shell form instead of the exec (JSON) form:
ENTRYPOINT /myscript.sh ${a}
https://docs.docker.com/engine/reference/run/#entrypoint-default-command-to-execute-at-runtime
The main container command is made up of two parts. The string you pass after the docker run image-name replaces the Dockerfile CMD, and it's appended to the Dockerfile ENTRYPOINT.
For your docker run command to work, you need to provide the command you want to run as ENTRYPOINT and its arguments as CMD. You do not need an environment variable here. However, it is important that both parts use JSON-array syntax and that neither invokes a shell. If ENTRYPOINT includes a shell then things get syntactically complex (see @KamilCuk's answer); if CMD includes a shell then it won't get invoked but the command will be invoked with /bin/sh and -c as parameters instead.
FROM debian:latest
# COPY preserves the execute permission bit on myscript.sh
COPY myscript.sh /usr/local/bin/myscript
# /usr/local/bin is already on $PATH
ENTRYPOINT ["myscript"]
CMD ["my", "arg", "in", "Dockerfile"]
docker run --rm the-image
docker run --rm the-image my arg from command line
If you want the initial set of command-line arguments to be included and the docker run arguments to be appended, you can move them into the ENTRYPOINT line; note that docker run --entrypoint is syntactically awkward if you ever decide you need to remove some of the options.
ENTRYPOINT ["myscript", "--first-default", "--second-default"]
# CMD []
docker run --rm the-image
docker run --rm the-image --user-option
docker run --entrypoint myscript the-image --first-default --no-second-default
If you can update your application to accept options as environment variables in addition to command-line settings, this makes all of this syntactically easier.
ENV FIRST_DEFAULT=yes
ENV SECOND_DEFAULT=yes
CMD ["myscript"]
docker run --rm the-image
docker run --rm -e USER_OPTION=yes the-image
docker run --rm -e SECOND_DEFAULT=no the-image
Bash is Bash; see the Bash manual for how -c passes arguments. The following:
/bin/bash -c "/myscript.sh ${a}" from_run
passes only ${a} to myscript.sh, and because it is unquoted it undergoes word splitting and filename expansion. The extra argument from_run is assigned to $0 of the -c shell, not passed to your script. I would do:
ENTRYPOINT ["/bin/bash", "-c", "./myscript.sh \"$a\" \"$#\"", "--"]
Note that it's conventional to use upper-case (and more descriptive) names for environment variables such as $a.
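As a quick check (a sketch, assuming the image is tagged my_image and myscript.sh still echoes "$@"), the corrected ENTRYPOINT gives:
docker run -it --rm my_image             # From script: my arg in Dockerfile
docker run -it --rm my_image from_run    # From script: my arg in Dockerfile from_run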

How to pass dynamic values to Docker container?

I am running a Perl script from a Dockerfile and I would like to pass dynamic command-line arguments to the script while running the Docker image (container).
Ex: CMD perl test.pl <args>. I am new to Docker.
Is there any possible way to pass dynamic values to the docker container like
docker run <image name> <args>?
You could use an Entrypoint script:
$ docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
and
If the image also specifies an ENTRYPOINT then the CMD or COMMAND get appended as arguments to the ENTRYPOINT.
So depending on your Dockerfile you'd have something like this (python sample app):
FROM jfloff/alpine-python:3.6
# add entrypoint script
USER root
COPY start.sh /
RUN chmod a+x /start.sh
ENTRYPOINT ["/start.sh"]
CMD ["arg1"]
and start.sh:
#!/bin/bash
echo $1
# don't exit
/usr/bin/tail -f /dev/null
Now you can do something like:
15:19 $ docker run f49b567f05f1 Hello
Hello
15:21 $ docker run f49b567f05f1
arg1
Now if your script is set up to take those arguments, you should be able to run it as you want. The Docker reference is attached; search for "Overriding Dockerfile image defaults" in it and then look at the CMD section.
Or check this post.
I am not sure whether you can do it with CMD alone, but if you just want to execute the Perl script with some passed-in arguments, use ENTRYPOINT:
ENTRYPOINT ["perl", "test.pl"]
CMD ["default-arg"]
Run the container with:
docker run <image-name> overriding-arg
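With that ENTRYPOINT/CMD pair, the container command line resolves as follows (a sketch; overriding-arg is just the placeholder argument from above):
docker run <image-name>                  # runs: perl test.pl default-arg
docker run <image-name> overriding-arg   # runs: perl test.pl overriding-arg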

Passing arguments from CMD in docker

I have the Dockerfile below.
FROM node:boron
# Create app directory
RUN mkdir -p /usr/src/akamai
WORKDIR /usr/src/akamai
# Install app dependencies
COPY package.json /usr/src/akamai/
RUN npm install
# Bundle app source
COPY . /usr/src/akamai
#EXPOSE 8080
CMD ["node", "src/akamai-client.js", "purge", "https://www.example.com/main.css"]
Below is the command I run from the command prompt after the Docker image build:
docker run -it "akamaiapi"
It executes the CMD as given in the Dockerfile above:
CMD ["node", "src/akamai-client.js", "purge", "https://www.example.com/main.css"]
I want those two arguments passed directly on the docker command line instead of hard-coded in the Dockerfile, so my docker run commands could look like these:
docker run -it "akamaiapi" queue
docker run -it "akamaiapi" purge "https://www.example.com/main.css"
docker run -it "akamaiapi" purge-status "b9f80d960602b9f80d960602b9f80d960602"
You can do that through a combination of ENTRYPOINT and CMD.
The ENTRYPOINT specifies a command that will always be executed when the container starts.
The CMD specifies arguments that will be fed to the ENTRYPOINT.
So, with Dockerfile:
FROM node:boron
...
ENTRYPOINT ["node", "src/akamai-client.js"]
CMD ["purge", "https://www.example.com/main.css"]
The default behavior when running the container:
docker run -it akamaiapi
is equivalent to the command:
node src/akamai-client.js purge "https://www.example.com/main.css"
And if you do:
docker run -it akamaiapi queue
the underlying execution in the container is:
node src/akamai-client.js queue

Enter into docker container after shell script execution is complete

I want to execute a shell script as the ENTRYPOINT and then enter the Docker container once the shell script has finished.
My Dockerfile has following lines at the end:
WORKDIR artifacts
ENTRYPOINT ./my_shell.sh
When I run it with the following command, it executes the shell script but doesn't drop me into the container.
docker run -it testub /bin/bash
Can someone please let me know if I am missing anything here?
There are two options that control what a container runs when it starts: the entrypoint (ENTRYPOINT) and the command (CMD). They interact as follows:
If the entrypoint is defined, then it is run with the value for the command included as additional arguments.
If the entrypoint is not defined, then the command is run by itself.
You can override one or both of the values defined in the image. docker run -it --entrypoint /bin/sh testub would run /bin/sh instead of ./my_shell.sh, overriding the entrypoint. And docker run -it testub /bin/bash will override the command, making the container start with ./my_shell.sh /bin/bash.
The quick answer is to run docker run -it --entrypoint /bin/bash testub and from there, kick off your ./my_shell.sh. A better solution is to update ./my_shell.sh to check for any additional parameters and run them with the following at the end of the script:
if [ $# -gt 0 ]; then
  exec "$@"
fi
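Putting that together, a minimal sketch (assuming my_shell.sh sits in the artifacts directory and is executable, and switching to the exec form of ENTRYPOINT so that appended arguments reach the script rather than the implicit sh -c wrapper):
WORKDIR artifacts
ENTRYPOINT ["./my_shell.sh"]
With the exec "$@" block at the end of my_shell.sh, the original command now behaves as intended:
docker run -it testub /bin/bash   # runs my_shell.sh, then replaces it with an interactive bash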
