Cannot write ssh key to Docker container in CMD - docker

I am trying to write SSH keys to a Docker image using CMD.
I have a Dockerfile like below:
FROM public.ecr.aws/ubuntu/ubuntu:18.04_stable
CMD ["sh", "-c", "echo $PUBLIC_KEY >> ./.ssh/id_rsa.pub"]
CMD ["sh", "-c", "echo $PRIVATE_KEY >> ./.ssh/id_rsa"]
I run the container with env var like so:
docker run -it -d -e PUBLIC_KEY="key1" -e PRIVATE_KEY="key2" my-image
As a result, neither key gets written. However, when I manually run these two commands with docker exec against the running container, both the public key and the private key are written to the correct location.
Can anyone explain this? How should I make the CMD work?

CMD defines a default command to run when a container starts. There can be only one default command: in the example you have given, the second CMD becomes the default and the first CMD never runs. The default command runs only when you do not specify a command on the command line, i.e. as the COMMAND part of
docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
If you provide a COMMAND, the CMD in the Dockerfile will not be run.
When you issue docker exec, you explicitly run the command line, so it will always run.
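One way to make it work is to combine both writes into a single default command (or an entrypoint script) and create the target directory first. A minimal sketch, assuming the keys should end up in /root/.ssh (the original ./.ssh is relative to whatever the working directory happens to be):
FROM public.ecr.aws/ubuntu/ubuntu:18.04_stable
# one default command: create the directory, then write both keys
CMD ["sh", "-c", "mkdir -p /root/.ssh && echo \"$PUBLIC_KEY\" >> /root/.ssh/id_rsa.pub && echo \"$PRIVATE_KEY\" >> /root/.ssh/id_rsa && chmod 600 /root/.ssh/id_rsa"]
The docker run invocation with -e PUBLIC_KEY=... -e PRIVATE_KEY=... stays the same; both variables are expanded by the single sh -c command.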

Related

Run a command line when starting a docker container

As far as I know, you can run a command when building an image with RUN, or when running a container with CMD. Is there any way to do so when starting a Docker container?
My goal is to run the gcloud datastore automatically just after typing docker start my_container_name.
If this is possible, which changes should I apply to my Dockerfile?
(I have already installed all the required packages and I can run that command after docker run --name my_container_name -i -t my_image_name, but I want it to run whenever the container is started.)
Docker executes the RUN commands when you build the image.
Docker executes the ENTRYPOINT command when you start the container; CMD is passed as arguments to ENTRYPOINT. Both of these can be overridden when you create a container from an image. Their purpose in the Dockerfile is to provide defaults for when you or someone else later creates containers from this image.
Consider the example:
FROM debian:buster
RUN apt update && apt install -y procps
ENTRYPOINT ["/usr/bin/ps"]
CMD ["aux"]
The RUN command adds the ps command to the image; ENTRYPOINT and CMD are not executed during the build, but they will be when you run the container:
# create a container named 'ps' using default CMD and ENTRYPOINT
docker run --name ps my_image
# equivalent to /usr/bin/ps aux
# start the existing container 'ps'
docker start ps
# equivalent to /usr/bin/ps aux
# override CMD
docker run my_image au
# equivalent to /usr/bin/ps au
# override both CMD and ENTRYPOINT
docker run --entrypoint=/bin/bash my_image -c 'echo "Hello, world!"'
# will print Hello, world! instead of using ps aux
# no ENTRYPOINT, only CMD
docker run --entrypoint="" my_image /bin/bash -c 'echo "Hello, world!"'
# the output is the same as above
Each time you use docker run you create a new container. The ENTRYPOINT and CMD used are saved as properties of that container and are executed each time you start it.
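Applied to the question: to have a command run on every docker start, make it the container's ENTRYPOINT/CMD. A rough sketch, assuming the image already has the gcloud SDK installed and that the goal is the Datastore emulator (the exact gcloud subcommand and flags may differ for your setup):
FROM my_image_name
# runs on every docker run / docker start of containers created from this image
ENTRYPOINT ["gcloud", "beta", "emulators", "datastore", "start"]
CMD ["--host-port=0.0.0.0:8081"]
After docker run --name my_container_name my_image_name creates the container, every subsequent docker start my_container_name re-runs that same command.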

How to pass dynamic values to Docker container?

I am running a Perl script from a Dockerfile and I would like to pass dynamic command line arguments to the Perl script while running the Docker image (container).
For example: CMD perl test.pl <args>. I am new to Docker.
Is there any possible way to pass dynamic values to the Docker container, like
docker run <image name> <args>?
You could use an Entrypoint script:
$ docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
and
If the image also specifies an ENTRYPOINT then the CMD or COMMAND get appended as arguments to the ENTRYPOINT.
So depending on your Dockerfile you'd have something like this (python sample app):
FROM jfloff/alpine-python:3.6
# add entrypoint script
USER root
COPY start.sh /
RUN chmod a+x /start.sh
ENTRYPOINT ["/start.sh"]
CMD ["arg1"]
and start.sh:
#!/bin/bash
echo $1
# don't exit
/usr/bin/tail -f /dev/null
Now you can do something like:
15:19 $ docker run f49b567f05f1 Hello
Hello
15:21 $ docker run f49b567f05f1
arg1
Now if your script is set up to take those arguments, you should be able to run it as you want. See the docker run reference from Docker: search for "Overriding Dockerfile image defaults" and then look at the CMD section.
I am not sure whether you can do it with CMD, but if you just want to execute the Perl script with some passed-in arguments, use ENTRYPOINT:
ENTRYPOINT ["perl", "test.pl"]
CMD ["default-arg"]
Run the container with:
docker run <image-name> overriding-arg
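For completeness, a quick sketch of how the default and the override behave with that ENTRYPOINT/CMD pair (image name assumed):
docker run <image-name>                  # runs: perl test.pl default-arg
docker run <image-name> overriding-arg   # runs: perl test.pl overriding-arg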

Enter into docker container after shell script execution is complete

I want to execute a shell script as the ENTRYPOINT and then enter the Docker container once the shell script has finished.
My Dockerfile has following lines at the end:
WORKDIR artifacts
ENTRYPOINT ./my_shell.sh
When I run it with the following command, it executes the shell script but does not drop me into the Docker container.
docker run -it testub /bin/bash
Can someone please let me know if I am missing anything here?
There are two options that control what a container runs when it starts: the entrypoint (ENTRYPOINT) and the command (CMD). They follow this logic:
If the entrypoint is defined, then it is run with the value for the command included as additional arguments.
If the entrypoint is not defined, then the command is run by itself.
You can override one or both of the values defined in the image. docker run -it --entrypoint /bin/sh testub would run /bin/sh instead of ./my_shell.sh, overriding the entrypoint. And docker run -it testub /bin/bash will override the command, making the container start with ./my_shell.sh /bin/bash.
The quick answer is to run docker run -it --entrypoint /bin/bash testub and from there, kick off your ./my_shell.sh. A better solution is to update ./my_shell.sh to check for any additional parameters and run them with the following at the end of the script:
if [ $# -gt 0 ]; then
  exec "$@"
fi
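Note that the extra arguments only reach the script if the ENTRYPOINT uses the exec form; with the shell form from the question (ENTRYPOINT ./my_shell.sh), Docker ignores any CMD or command-line arguments. A minimal sketch of the whole combination, assuming my_shell.sh is executable:
WORKDIR artifacts
ENTRYPOINT ["./my_shell.sh"]
and my_shell.sh itself, with the argument check at its end:
#!/bin/sh
# ... the script's real work goes here ...
# if extra arguments were passed (e.g. /bin/bash), replace this process with them
if [ $# -gt 0 ]; then
  exec "$@"
fi
With that in place, docker run -it testub /bin/bash runs the script and then drops you into a bash shell inside the container.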

how to pass command line arguments to a python script running in docker

I have a python file called perf_alarm_checker.py, this python file requires two command line arguments: python perf_alarm_checker.py -t something -d something, the Dockerfile looks like this:
# Base image
FROM some base image
ADD perf_alarm_checker.py /perf-test/
CMD python perf_alarm_checker.py
How do I pass the two command line arguments, -t and -d, to docker run? I tried docker run -w /perf-test alarm-checker -t something -d something but it doesn't work.
Use an ENTRYPOINT instead of CMD and then you can use command line options in the docker run like in your example.
ENTRYPOINT ["python", "perf_alarm_checker.py"]
Note that -t and -d placed before the image name are options for docker run itself, not arguments to your script:
-t allocates a pseudo-terminal (TTY).
-d runs the container detached, in the background (as a daemon).
For setting environment variables in your Dockerfile use the ENV command.
ENV <key>=<value>
See the Dockerfile reference.
Another option is to pass environment variables through docker run:
docker run ... -e "key=value" ...
See the docker run reference.
Those environment variables can be accessed from the CMD.
CMD python perf_alarm_checker.py -t $ENV1 -d $ENV2
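A hedged end-to-end example of that approach, using the variable names above (image name assumed):
docker run -e ENV1=something -e ENV2=something alarm-checker
# the shell-form CMD expands the variables at run time:
# python perf_alarm_checker.py -t something -d something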

Can you pass flags to the command that docker runs?

The documentation for the run command shows the following syntax:
docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
however I've found at times that I want to pass a flag to [COMMAND].
For example, I've been working with this image, where the [COMMAND] as specified in the Dockerfile is:
CMD ["/bin/bash", "-c", "/opt/solr/bin/solr -f"]
Is there any way to tack on flags to "/opt/solr/bin/solr -f" so that it's in the form "/opt/solr/bin/solr -f [-MY FLAGS]"?
Do I need to edit the Dockerfile, or is there some built-in functionality for this?
There is a special directive, ENTRYPOINT, which fits your needs. Unlike with CMD, arguments you pass on the docker run command line are appended to your command instead of replacing it.
For example, you can write
ENTRYPOINT ["python"]
and run it with
docker run <image_name> -c "print(1)"
Note that this only works if you write the ENTRYPOINT in exec form (via ["...", "..."]); the shell form wraps your command in /bin/sh -c and prevents any CMD or run-time arguments from being used.
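To make the exec-form versus shell-form difference concrete, a small sketch:
# exec form: run-time arguments are appended
ENTRYPOINT ["python"]
# docker run <image_name> -c "print(1)"   ->   python -c "print(1)"
# shell form: wrapped in /bin/sh -c, so CMD and run-time arguments are dropped
# ENTRYPOINT python
# docker run <image_name> -c "print(1)"   ->   python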
More generally, you can combine ENTRYPOINT and CMD
ENTRYPOINT ["ping"]
CMD ["www.google.com"]
where CMD provides the default arguments for your ENTRYPOINT. Now you can run either of
docker run <image_name>
docker run <image_name> yandex.ru
and only CMD will be replaced.
The full reference on how ENTRYPOINT and CMD interact can be found in the Dockerfile reference.
The CMD directive of a Dockerfile is the command that would be run when the container starts if no command was specified in the docker run command.
The main purpose of a CMD is to provide defaults for an executing container.
In your case, just use the docker run command as follows to override the default command specified in the Dockerfile:
docker run makuk66/docker-solr /bin/bash -c "/opt/solr/bin/solr -f [your flags]"
