How to pass dynamic values to Docker container? - docker

I am running a perl script from a Dockerfile and I would like to pass dynamic command-line arguments to the perl script when running the image (container).
Ex: CMD perl test.pl <args>. I am new to Docker.
Is there any way to pass dynamic values to the container, like
docker run <image name> <args>?

You could use an Entrypoint script:
$ docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
and
If the image also specifies an ENTRYPOINT, then the CMD or COMMAND is appended as arguments to the ENTRYPOINT.
So depending on your Dockerfile you'd have something like this (python sample app):
FROM jfloff/alpine-python:3.6
# add entrypoint script
USER root
COPY start.sh /
RUN chmod a+x /start.sh
ENTRYPOINT ["/start.sh"]
CMD ["arg1"]
and start.sh:
#!/bin/bash
echo $1
# don't exit
/usr/bin/tail -f /dev/null
Now you can do something like:
15:19 $ docker run f49b567f05f1 Hello
Hello
15:21 $ docker run f49b567f05f1
arg1
Now if your script is set up to take those arguments, you should be able to run it as you want. See the Docker run reference: search for "Overriding Dockerfile image defaults" and read the CMD section.
Or check this post.

I am not sure whether you can do it with CMD, but if you just want to execute the perl script with some passed-in arguments, use ENTRYPOINT.
ENTRYPOINT ["perl", "test.pl"]
CMD ["default-arg"]
Run the container with:
docker run <image-name> overriding-arg
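To see how the two parts combine without building an image, here is a small sketch that mimics Docker's rule; the perl command and default argument are the ones above, and run_image is a hypothetical stand-in for docker run with arguments after the image name:

```shell
# Mimics how Docker assembles the container command from ENTRYPOINT and CMD:
# ENTRYPOINT is fixed; CMD supplies defaults that docker run arguments replace.
run_image() {
    entrypoint="perl test.pl"     # from: ENTRYPOINT ["perl", "test.pl"]
    default_cmd="default-arg"     # from: CMD ["default-arg"]
    if [ "$#" -gt 0 ]; then
        echo "$entrypoint $*"     # arguments after the image name replace CMD
    else
        echo "$entrypoint $default_cmd"
    fi
}

run_image                     # -> perl test.pl default-arg
run_image overriding-arg      # -> perl test.pl overriding-arg
```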

Related

Cannot write ssh key to Docker container in CMD

I am trying to write ssh keys to docker image using CMD.
I have docker file like below.
FROM public.ecr.aws/ubuntu/ubuntu:18.04_stable
CMD ["sh", "-c", "echo $PUBLIC_KEY >> ./.ssh/id_rsa.pub"]
CMD ["sh", "-c", "echo $PRIVATE_KEY >> ./.ssh/id_rsa"]
I run the container with env var like so:
docker run -it -d -e PUBLIC_KEY="key1" -e PRIVATE_KEY="key2" my-image
As a result, neither write happens. However, when I manually run these two commands with docker exec against the running container, both the public key and the private key are written to the correct location.
Can anyone explain this? How should I make the CMD work?
CMD defines the default command to run when a container starts. There can be only one default command: in your example the second CMD wins, and the first CMD never runs. The default command runs only when you do not specify a COMMAND on the command line, i.e. as part of
docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
If you provide a COMMAND, the CMD in the Dockerfile will not be run.
When you issue docker exec, you explicitly run the command line, so it will always run.
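One way to make both writes happen is to fold them into a single script and run that as the one CMD. A sketch, wrapped in a function so it can be tried outside a container; the file paths and variable names follow the question, and write_keys.sh is a hypothetical filename:

```shell
#!/bin/sh
# write_keys.sh - write both keys in one command at container start.
# In the Dockerfile:  COPY write_keys.sh /   and   CMD ["/write_keys.sh"]
write_keys() {
    key_dir="$1"
    mkdir -p "$key_dir"
    # Quote the variables so multi-line key material survives word splitting
    printf '%s\n' "$PUBLIC_KEY" > "$key_dir/id_rsa.pub"
    printf '%s\n' "$PRIVATE_KEY" > "$key_dir/id_rsa"
}
```

Running the container stays the same: docker run -e PUBLIC_KEY="key1" -e PRIVATE_KEY="key2" my-image.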

Append argument to ENTRYPOINT in Docker run where some args are already defined

I'm using ENTRYPOINT to pass in an argument when running docker run but I cannot get the runtime argument to surface in my script as an argument.
Dockerfile
FROM debian:latest
ENV a="my arg in Dockerfile"
COPY . .
RUN chmod +x myscript.sh
ENTRYPOINT ["/bin/bash", "-c", "/myscript.sh ${a}"]
with myscript.sh
#!/bin/bash
echo "From script: $@"
When I run docker build -t my_image . then docker run -it --rm my_image I get the result as expected: From script: my arg in Dockerfile
But when I run docker run -it --rm my_image from_run I get the same result: From script: my arg in Dockerfile so the "from_run" is not being passed down to the script through ENTRYPOINT.
I read that arguments passed after the image name are appended to the ENTRYPOINT, but clearly I'm not understanding something here.
Same result when using the shell form as opposed to the exec (JSON) form:
ENTRYPOINT /myscript.sh ${a}
https://docs.docker.com/engine/reference/run/#entrypoint-default-command-to-execute-at-runtime
The main container command is made up of two parts. The string you pass after the docker run image-name replaces the Dockerfile CMD, and it's appended to the Dockerfile ENTRYPOINT.
For your docker run command to work, you need to provide the command you want to run as ENTRYPOINT and its arguments as CMD. You do not need an environment variable here. However, it is important that both parts use JSON-array syntax and that neither invokes a shell. If ENTRYPOINT includes a shell then things get syntactically complex (see @KamilCuk's answer); if CMD includes a shell then the shell won't be invoked, and the command receives /bin/sh and -c as literal parameters instead.
FROM debian:latest
# COPY preserves execute permissions; install into a $PATH directory
COPY myscript.sh /usr/local/bin/myscript
ENTRYPOINT ["myscript"]
CMD ["my", "arg", "in", "Dockerfile"]
docker run --rm the-image
docker run --rm the-image my arg from command line
If you want the initial set of command-line arguments to be included and the docker run arguments to be appended, you can move them into the ENTRYPOINT line; note that the docker run --entrypoint is syntactically awkward if you ever do decide you need to remove some of the options.
ENTRYPOINT ["myscript", "--first-default", "--second-default"]
# CMD []
docker run --rm the-image
docker run --rm the-image --user-option
docker run --entrypoint myscript the-image --first-default --no-second-default
If you can update your application to accept options as environment variables in addition to command-line settings, this makes all of this syntactically easier.
ENV FIRST_DEFAULT=yes
ENV SECOND_DEFAULT=yes
CMD ["myscript"]
docker run --rm the-image
docker run --rm -e USER_OPTION=yes the-image
docker run --rm -e SECOND_DEFAULT=no the-image
Bash is Bash; see the bash manual for how -c passes arguments. The following:
/bin/bash -c "/myscript.sh ${a}" from_run
passes unquoted $a to myscript.sh, so $a undergoes word splitting and filename expansion, and the trailing argument from_run is assigned to $0 of the command string rather than passed to the script. I would do:
ENTRYPOINT ["/bin/bash", "-c", "./myscript.sh \"$a\" \"$@\"", "--"]
Note that it's conventional to use upper-case (and more descriptive) names for environment variables than $a.
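The $0 behaviour of bash -c is easy to check outside Docker:

```shell
# With bash -c, the first operand after the command string becomes $0,
# not $1 - which is why "from_run" silently disappears in the question.
bash -c 'echo "0=$0 1=$1"' from_run
# -> 0=from_run 1=

# Supplying "--" (or any placeholder) as $0 lets real arguments through:
bash -c 'echo "args=$*"' -- first second
# -> args=first second
```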

I want to run a script during container run based on env variable that I pass

I want to run a script during run time and not during image build.
The script runs based on env variable that I pass during container run.
Script:
#!/bin/bash
touch $env
Docker file
FROM busybox
ENV env parm
RUN mkdir PRATHAP
ADD apt.sh /PRATHAP
WORKDIR /PRATHAP
RUN chmod 777 apt.sh
CMD sh apt.sh
when I try to run: docker container run -it -e env=test.txt sh
the script does not run;
I just get the sh prompt. If I remove sh, the container does not stay alive. Please help me achieve this.
Your docker run starts sh which overrides your CMD in Dockerfile. To get around this, you need to replicate the original CMD via the command line.
$ docker run -it -e env=test.txt <image:tag> sh -c "sh apt.sh; sh"
Remember that a Docker container runs a single command, and then exits. If you docker run your image without overriding the command, the only thing the container will do is touch a file inside the isolated container filesystem, and then it will promptly exit.
If you need to do some startup-time setup, a useful pattern is to write it into an entrypoint script. When a container starts up, Docker runs whatever you have named as the ENTRYPOINT, passing the CMD as additional parameters (or it just runs CMD if there is no ENTRYPOINT). You can use the special shell command exec "$@" to run the command. So revisiting your script as an entrypoint script:
#!/bin/sh
# ^^ busybox image doesn't have bash (nor does alpine)
# Do the first-time setup
touch "$env"
# Launch the main container process
exec "$@"
In your Dockerfile set this script to be the ENTRYPOINT, and then whatever long-running command you actually want the container to do to be the CMD.
FROM busybox
# WORKDIR also creates the directory
WORKDIR /PRATHAP
# Generally prefer COPY to ADD
COPY apt.sh .
# Executable, but not world-writable
RUN chmod 0755 apt.sh
ENV env parm
# ENTRYPOINT must use JSON-array syntax here; no interpreter needed,
# since the script is executable and has a #! line
ENTRYPOINT ["./apt.sh"]
# CMD is whatever the container actually does
CMD sh
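The touch-then-exec behaviour can be tried without Docker at all. This sketch recreates apt.sh in a throwaway directory and invokes it the way Docker would, entrypoint script first, then the CMD as its arguments:

```shell
# Recreate apt.sh and run it as Docker would: the ENTRYPOINT script
# receives the CMD as its arguments and exec's them after the setup step.
tmp=$(mktemp -d)
cat > "$tmp/apt.sh" <<'EOF'
#!/bin/sh
touch "$env"
exec "$@"
EOF
chmod 0755 "$tmp/apt.sh"
# Local equivalent of: docker run -e env=test.txt image echo "main process ran"
( cd "$tmp" && env=test.txt ./apt.sh echo "main process ran" )
ls "$tmp"   # test.txt was created by the setup step before the exec
```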

how to pass command line arguments to a python script running in docker

I have a python file called perf_alarm_checker.py, this python file requires two command line arguments: python perf_alarm_checker.py -t something -d something, the Dockerfile looks like this:
# Base image
FROM some base image
ADD perf_alarm_checker.py /perf-test/
CMD python perf_alarm_checker.py
How do I pass the two command-line arguments, -t and -d, to docker run? I tried docker run -w /perf-test alarm-checker -t something -d something but it doesn't work.
Use an ENTRYPOINT instead of CMD and then you can use command line options in the docker run like in your example.
ENTRYPOINT ["python", "perf_alarm_checker.py"]
You cannot use -t and -d as you intend, because those are parsed as options to docker run itself.
-t allocates a pseudo-terminal.
-d runs the container detached (in the background).
For setting environment variables in your Dockerfile use the ENV command.
ENV <key>=<value>
See the Dockerfile reference.
Another option is to pass environment variables through docker run:
docker run ... -e "key=value" ...
See the docker run reference.
Those environment variables can be accessed from the CMD.
CMD python perf_alarm_checker.py -t $ENV1 -d $ENV2
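Because a shell-form CMD is run through sh -c, the variables expand when the container starts, which you can simulate directly; echo stands in for the python invocation, and ENV1/ENV2 are the names from the line above:

```shell
# sh -c expands $ENV1 and $ENV2 from the environment at start time,
# exactly as a shell-form CMD does inside the container.
out=$(ENV1=something ENV2=else \
      sh -c 'echo python perf_alarm_checker.py -t $ENV1 -d $ENV2')
echo "$out"   # -> python perf_alarm_checker.py -t something -d else
```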

Can you pass flags to the command that docker runs?

The documentation for the run command follows the following syntax:
docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
however I've found at times that I want to pass a flag to [COMMAND].
For example, I've been working with this image, where the [COMMAND] as specified in the Dockerfile is:
CMD ["/bin/bash", "-c", "/opt/solr/bin/solr -f"]
Is there any way to tack on flags to "/opt/solr/bin/solr -f" so that it's in the form "/opt/solr/bin/solr -f [-MY FLAGS]"?
Do I need to edit the Dockerfile, or is there some built-in functionality for this?
There is a special directive, ENTRYPOINT, which fits your needs: unlike with CMD, arguments given to docker run are appended to the end of your command instead of replacing it.
For example, you can write
ENTRYPOINT ["python"]
and run it with
docker run <image_name> -c "print(1)"
Note that this only works if you write the command in exec form (via ["...", "..."]); otherwise ENTRYPOINT is wrapped in a shell, and arguments from docker run are ignored rather than passed to your script.
More generally, you can combine ENTRYPOINT and CMD
ENTRYPOINT ["ping"]
CMD ["www.google.com"]
Where CMD means default args for your ENTRYPOINT. Now you can run both of
docker run <image_name>
docker run <image_name> yandex.ru
and only CMD will be replaced.
A full reference on how ENTRYPOINT and CMD interact can be found in the Dockerfile reference.
The CMD directive of a Dockerfile is the command that would be run when the container starts if no command was specified in the docker run command.
The main purpose of a CMD is to provide defaults for an executing container.
In your case, just use the docker run command as follows to override the default command specified in the Dockerfile:
docker run makuk66/docker-solr /bin/bash -c "/opt/solr/bin/solr -f [your flags]"
