pass in command in 'exec form' with docker run

https://docs.docker.com/engine/reference/builder/#cmd says that in a Dockerfile
CMD ["executable","param1","param2"]
is the 'exec form' and
CMD command param1 param2
is the 'shell form', and they are run slightly differently.
Note: Unlike the shell form, the exec form does not invoke a command shell
But when passing in the command with docker run, I can't find any way to make it run without a command shell.
docker run container-name echo test # shell form I guess
What is the exec form?

Why guess? Just check.
docker run ubuntu sh -c 'echo "Hello world!"'
creates a container, which one may discover with docker container ls -a
and inspect by id.
The inspect output reveals that no additional sh -c was added beyond the explicit one:
"Cmd": [
"sh",
"-c",
"echo \"Hello world!\""
],
Thus the current hypothesis is that docker run uses the exec form.
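For reference, the whole check can be scripted like this (<container-id> is a placeholder for whatever id docker container ls -a reports):
docker run ubuntu sh -c 'echo "Hello world!"'
docker container ls -a
docker inspect --format '{{json .Config.Cmd}}' <container-id>
# prints: ["sh","-c","echo \"Hello world!\""]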
Just to mention: my tests for docker-compose run have shown the same results.
This is very poorly documented for both docker and docker-compose, that's true. The --entrypoint option is slightly luckier and is documented in the Dockerfile docs:
You can override the ENTRYPOINT setting using --entrypoint, but this can only set the binary to exec (no sh -c will be used).
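For example (a quick sketch; any image that contains /bin/ls would do), the binary set with --entrypoint receives everything after the image name as arguments, with no shell in between:
docker run --rm --entrypoint /bin/ls ubuntu -al /tmp
# runs /bin/ls -al /tmp inside the container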

Related

Cannot write ssh key to Docker container in CMD

I am trying to write ssh keys into a docker image using CMD.
I have a Dockerfile like the one below.
FROM public.ecr.aws/ubuntu/ubuntu:18.04_stable
CMD ["sh", "-c", "echo $PUBLIC_KEY >> ./.ssh/id_rsa.pub"]
CMD ["sh", "-c", "echo $PRIVATE_KEY >> ./.ssh/id_rsa"]
I run the container with env var like so:
docker run -it -d -e PUBLIC_KEY="key1" -e PRIVATE_KEY="key2" my-image
As a result, writing both of them doesn't work. However, when I manually docker exec these 2 commands against the running container, it writes both the public key and the private key to the correct location.
Can anyone explain this? How should I make the CMD work?
CMD is a way to define a default command to run when starting a container, and there can be only one default command. In the example you have given, the second CMD will be the default command, and the first CMD will never run. The default command runs only when you do not specify a COMMAND on the command line:
docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
If you provide a COMMAND, the CMD in the Dockerfile will not be run.
When you issue docker exec, you explicitly run the command line, so it will always run.
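A straightforward fix is therefore to combine both writes into a single default command. A sketch based on the Dockerfile above (untested):
FROM public.ecr.aws/ubuntu/ubuntu:18.04_stable
CMD ["sh", "-c", "echo $PUBLIC_KEY >> ./.ssh/id_rsa.pub && echo $PRIVATE_KEY >> ./.ssh/id_rsa"]
Because the variables are expanded by sh at container start, both keys are written from the environment passed with -e.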

pass parameters to entrypoint in the docker

I have a console tool that can be executed like this
tool -r -b -n -x -k 'Some data'
I want to run the tool in the container but pass arguments from outside.
My Dockerfile installs the tool and the dependencies. I set there the entry point as
ENTRYPOINT ["tool"]
I want to execute it like this
docker exec --env USER=user1 .. -r -b -n -x
which would be equivalent to tool -r -b -n -x. But it fails because exec doesn't have a -r parameter. How do I make it pass the parameters to the container itself?
docker exec executes an arbitrary command in the container and does not take the ENTRYPOINT into account.
If you want to add arguments to the ENTRYPOINT, you should pass them when you execute docker run; you cannot pass arguments to the ENTRYPOINT once the container is started.
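In other words, the arguments go after the image name in docker run (a sketch; my-tool-image is a placeholder for your image):
docker run --env USER=user1 my-tool-image -r -b -n -x -k 'Some data'
# with ENTRYPOINT ["tool"], this runs: tool -r -b -n -x -k 'Some data'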

override default docker run from host with options/arguments

FROM alpine:3.5
CMD ["echo", "hello world"]
So after building docker build -t hello . I can run hello by calling docker run hello and I get the output hello world.
Now let's assume I wish to run ls or sh - this is fine. But what I really want is to be able to pass arguments. e.g. ls -al, or even tail -f /dev/null to keep the container running without having to change the Dockerfile
How do I go about doing this? My attempt at exec mode fails miserably... docker run hello --cmd=["ls", "-al"]
Anything after the image name in the docker run command becomes the new value of CMD. So you can run:
docker run hello ls -al
Note that if an ENTRYPOINT is defined, the ENTRYPOINT will receive the value of CMD as args rather than running CMD directly. So you can define an entrypoint as a shell script with something like:
#!/bin/sh
echo "running the entrypoint code"
# if no args are passed, default to a /bin/sh shell
if [ $# -eq 0 ]; then
  set -- /bin/sh
fi
# run the "CMD" with exec to replace the pid 1 of this shell script
exec "$@"
Q. But what I really want is to be able to pass arguments. e.g. ls -al, or even tail -f /dev/null to keep the container running without having to change the Dockerfile
This is just achieved with:
docker run -d hello tail -f /dev/null
So the container is running in the background, and it lets you execute arbitrary commands inside it:
docker exec <container-id> ls -la
And, for example a shell:
docker exec -it <container-id> bash
Also, I recommend what @BMitch says.

Enter into docker container after shell script execution is complete

I want to execute a shell script as the ENTRYPOINT and enter the docker container once the shell script execution is complete.
My Dockerfile has following lines at the end:
WORKDIR artifacts
ENTRYPOINT ./my_shell.sh
When I run it with the following command, it executes the shell script but doesn't enter the docker container.
docker run -it testub /bin/bash
Can someone please let me know if I am missing anything here?
There are two options that control what a container runs when it starts, the entrypoint (ENTRYPOINT) and the command (CMD). They follow the following logic:
If the entrypoint is defined, then it is run with the value for the command included as additional arguments.
If the entrypoint is not defined, then the command is run by itself.
You can override one or both of the values defined in the image. docker run -it --entrypoint /bin/sh testub would run /bin/sh instead of ./my_shell.sh, overriding the entrypoint. And docker run -it testub /bin/bash will override the command, making the container start with ./my_shell.sh /bin/bash.
The quick answer is to run docker run -it --entrypoint /bin/bash testub and from there, kick off your ./my_shell.sh. A better solution is to update ./my_shell.sh to check for any additional parameters and run them with the following at the end of the script:
if [ $# -gt 0 ]; then
  exec "$@"
fi
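One caveat worth adding (not stated in the original answer): with the shell-form ENTRYPOINT ./my_shell.sh from the question, the command is not forwarded to the script as arguments, so switch the ENTRYPOINT to exec form for the hand-off to work:
WORKDIR artifacts
ENTRYPOINT ["./my_shell.sh"]
Then docker run -it testub /bin/bash runs ./my_shell.sh /bin/bash, and the exec "$@" at the end of the script replaces the script with bash.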

Running a script in Docker container by accepting -e parameter

I have a very simple script called myscript.sh:
echo "this is test " > /tmp/myfile.txt
echo $TEST >> /tmp/myfile.txt
I have stored this script on my disk, and I plan to pass it to the container as a volume like this:
docker run -d --name test \
-v /home/docker/test/myscript.sh:/tmp/myscript.sh \
-e TESTING=just-a-test \
test
The Dockerfile looks like this below
FROM ubuntu
CMD ["bash", "/tmp/myscript.sh"]
So the thought process is to get this script executed and end up with a file myfile.txt that contains the value passed with -e.
Instead I am getting:
docker@boot2docker:~/test$ docker exec -it test /bin/bash
Error response from daemon: Container test is not running
Which means that this simplest of programs did not execute as a container.
I could not figure it out.
The container ran, executed the script, then exited. A container only runs as long as its main process. When that stops, the container stops.
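You can confirm this from the host (a quick check, not part of the original answer):
docker ps -a --filter name=test   # the STATUS column shows Exited (...)
docker logs test                  # any stdout the script produced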
A simpler test would be to change your test script to:
#!/bin/bash
echo $TEST
I would change your Dockerfile to copy the file in and remove the "bash" part of the CMD instruction:
FROM ubuntu
COPY myscript.sh /myscript.sh
CMD /myscript.sh
Now rebuild and run:
$ docker build -t test .
...
$ docker run -e TEST=VAL test
...
The container should echo the value of the test variable and exit. (I haven't tested any of this, so apologies for any mistakes).
The answer to this question is to use ENTRYPOINT instead of CMD.
I did some research and came up with a solution that looks like this:
ENTRYPOINT ["bash", "<script>"]
To run the script just use
docker run -d --name <name> [--privileged] -p <host-port>:<container-port> \
  -v <host-path>/script.sh:/tmp/script.sh \
  <image>
where -v <host-path>:<container-path> mounts the script into the container. You can also use wget, like most people, to fetch the script and execute it at runtime.
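Putting it together with the files from this question, a concrete version might look like this (names and paths are illustrative):
# Dockerfile
FROM ubuntu
ENTRYPOINT ["bash", "/tmp/myscript.sh"]
# build and run
docker build -t test .
docker run -d --name test \
  -e TEST=just-a-test \
  -v /home/docker/test/myscript.sh:/tmp/myscript.sh \
  test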
I appreciate all the people who tried to solve the query.
