override default docker run from host with options/arguments - docker

FROM alpine:3.5
CMD ["echo", "hello world"]
So after building with docker build -t hello ., I can run the image by calling docker run hello and I get the output hello world.
Now let's assume I wish to run ls or sh instead - this is fine. But what I really want is to be able to pass arguments, e.g. ls -al, or even tail -f /dev/null to keep the container running, without having to change the Dockerfile.
How do I go about doing this? My attempt at exec mode fails miserably: docker run hello --cmd=["ls", "-al"]

Anything after the image name in the docker run command becomes the new value of CMD. So you can run:
docker run hello ls -al
Note that if an ENTRYPOINT is defined, the ENTRYPOINT will receive the value of CMD as args rather than running CMD directly. So you can define an entrypoint as a shell script with something like:
#!/bin/sh
echo "running the entrypoint code"
# if no args are passed, default to a /bin/sh shell
if [ $# -eq 0 ]; then
  set -- /bin/sh
fi
# run the "CMD" args with exec so they replace PID 1 of this shell script
exec "$@"

Q. But what I really want is to be able to pass arguments. e.g. ls -al, or even tail -f /dev/null to keep the container running without having to change the Dockerfile
This can be achieved with:
docker run -d hello tail -f /dev/null
The container is now running in the background, and you can execute arbitrary commands inside it:
docker exec <container-id> ls -la
And, for example, a shell (note that alpine does not ship bash, so use sh):
docker exec -it <container-id> sh
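Naming the container avoids having to look up its id for the exec calls (the name hello1 is just an example):
docker run -d --name hello1 hello tail -f /dev/null
docker exec hello1 ls -al
docker exec -it hello1 sh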
Also, I recommend what @BMitch says above.

Related

Require environment variables to be given to image when run using `-e`

I have a container image that requires an environment variable to be set in order to run. But if run with -d, unless the container is monitored, the person running the container won't notice something is missing. Is there a way that docker [container] run can check that an environment variable has been given to the container before starting it?
In detached mode it is not possible to print a message that an env variable is required (in your words, when running with -d), but you can try a workaround:
Dockerfile
FROM alpine
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
entrypoint.sh
#!/bin/sh
echo "starting container $hostname"
if [ -z "$REQUIRED_ENV" ]; then
echo "Container failed to start, pls pass -e REQUIRED_ENV=sometest"
exit 1
fi
echo "starting container with $REQUIRED_ENV"
#your long-running command from CMD
exec "$#"
So when you run with
docker run -it --name envtest --rm env-test-image
it will exit with the message
starting container <container-hostname>
Container failed to start, please pass -e REQUIRED_ENV=sometest
The workaround in detached mode:
docker run -it --name envtest -d --rm env-test-image && docker logs envtest
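For comparison, passing the required variable lets the same container start normally:
docker run -it --name envtest --rm -e REQUIRED_ENV=sometest env-test-image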
No: there is currently no way to make Docker aware of "dependencies" in the form of environment variables. This answer is for those who actually came here just looking for a canonical way to exit an entrypoint script in case of missing values.
A POSIX shell accepts the options -u (treat expansion of an unset variable as an error) and -e (exit the shell when any command returns a non-zero status). We also have the null utility : to evaluate-but-not-invoke an expression. Putting these together, we can write an entrypoint.sh that works irrespective of the sh implementation:
#!/bin/sh -eu
( : "$USERNAME" )
( : "$PASSWORD" )
exec "$@"
which will exit the script with an error akin to PASSWORD: parameter not set.
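For example, with an image built on this entrypoint (the image name my-image and command some-command are assumptions), the first run below aborts with the PASSWORD error while the second proceeds to exec some-command:
docker run --rm -e USERNAME=alice my-image some-command
docker run --rm -e USERNAME=alice -e PASSWORD=secret my-image some-command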
Another way of doing the same thing, but in the context of the original example with a custom error message, makes use of the "Display Error if Null or Unset" operator (${parm:?error}) and looks something like this:
#!/bin/sh
echo "starting container $(hostname)"
( : "${REQUIRED_ENV:?please pass -e REQUIRED_ENV=sometest}" ) || exit 1
echo "starting container with $REQUIRED_ENV"
# your long-running command from CMD
exec "$@"

Entering docker container with exec losing PATH environment variable

Here is my Dockerfile:
FROM ros:kinetic-ros-core-xenial
CMD ["bash"]
If I run docker build -t ros . && docker run -it ros, and then from within the container echo $PATH, I'll get:
/opt/ros/kinetic/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
If I exec into the container (docker exec -it festive_austin bash) and run echo $PATH, I'll get:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Why are the environment variables different? How can I get a new bash process on the container with the same initial environment?
The ENTRYPOINT command is only invoked on docker run, not on docker exec.
I assume that this /ros_entrypoint.sh script is responsible for adding stuff to PATH. If so, then you could do something like this for docker exec:
docker exec -it <CONTAINER_ID> /ros_entrypoint.sh bash
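For context, /ros_entrypoint.sh in the official ROS images is roughly a thin wrapper of this shape (a sketch, not the verbatim file): it sources the ROS setup script, which is what prepends /opt/ros/kinetic/bin to PATH, and then execs the requested command:
#!/bin/bash
set -e
# set up the ROS environment, then hand off to the requested command
source "/opt/ros/$ROS_DISTRO/setup.bash"
exec "$@"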
docker exec only gets the environment variables defined in the Dockerfile with the ENV instruction. With docker exec [...] bash you additionally get whatever is defined in bash's startup files.
Add this line to your Dockerfile:
ENV PATH=/opt/ros/kinetic/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
or shorter:
ENV PATH=/opt/ros/kinetic/bin:$PATH
This is an old question, but since it's where Google directed me, I thought I'd share the solution I ended up using.
In your entrypoint script add a section similar to this:
cat >> ~/.bashrc << EOF
export PATH="$PATH"
export OTHER="$OTHER"
EOF
Once you rebuild your image you can exec into your container (notice bash is invoked in interactive mode):
docker run -d --rm --name container-name your_image
docker exec -it container-name /bin/bash -i
If you echo $PATH now, it should match what was appended to .bashrc.
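Put together, a sketch of such an entrypoint script (assuming the image's real startup command is passed in as CMD):
#!/bin/bash
# persist the runtime environment so that later `docker exec ... bash -i` shells pick it up
cat >> ~/.bashrc << EOF
export PATH="$PATH"
export OTHER="$OTHER"
EOF
# then run whatever CMD/arguments were supplied
exec "$@"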

Enter into docker container after shell script execution is complete

I want to execute a shell script as the ENTRYPOINT and then enter the docker container once the shell script's execution is complete.
My Dockerfile has following lines at the end:
WORKDIR artifacts
ENTRYPOINT ./my_shell.sh
When I run it with the following command, it executes the shell script but doesn't enter the docker container.
docker run -it testub /bin/bash
Can someone please let me know if I am missing anything here?
There are two options that control what a container runs when it starts: the entrypoint (ENTRYPOINT) and the command (CMD). They follow this logic:
If the entrypoint is defined, then it is run with the value for the command included as additional arguments.
If the entrypoint is not defined, then the command is run by itself.
You can override one or both of the values defined in the image. docker run -it --entrypoint /bin/sh testub would run /bin/sh instead of ./my_shell.sh, overriding the entrypoint. And docker run -it testub /bin/bash will override the command, making the container start with ./my_shell.sh /bin/bash.
The quick answer is to run docker run -it --entrypoint /bin/bash testub and from there, kick off your ./my_shell.sh. A better solution is to update ./my_shell.sh to check for any additional parameters and run them with the following at the end of the script:
if [ $# -gt 0 ]; then
  exec "$@"
fi
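One caveat: for those extra parameters to reach my_shell.sh at all, the ENTRYPOINT must use the exec (JSON) form; the shell form in the question wraps the script in /bin/sh -c and drops the CMD arguments. The end of the Dockerfile would then read:
WORKDIR artifacts
ENTRYPOINT ["./my_shell.sh"]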

How to continue running scripts when exiting docker containers

My script is as follows:
# start a ubuntu container in the background
docker run -it --name ub -d ubuntu /bin/bash
sleep 1
# run a command in the container
docker exec -it ub bash
echo 234
# exit the container
exit
sleep 1
# do something else
echo 123
But the script would just stop right after exit and hang there. Does anyone know why is that?
p.s: My Docker version is: 17.03.0-ce, build 60ccb22
You have given -it to docker exec, which opens an interactive /bin/bash in your container and waits there. The next command in the script won't get executed until you exit that shell (and the echo 234 then runs on the host, not in the container).
It's better to create a script file and copy it into the container while building the image, then run the script when the container starts. You can specify that using CMD in the Dockerfile.
You won't be needing an additional exec command.
The corresponding Dockerfile would be
FROM ubuntu:latest
COPY <path-to-script> <dest>
CMD [" <path-to-script> "]
You have to create the script file alongside the Dockerfile. Build the image using the command
docker build -t <image-name> <location of Dockerfile>
The execution command would be
docker run -d --name <name> <image-name>
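For concreteness, a sketch with assumed names (script file myscript.sh copied to /myscript.sh, image tagged ub-script):
FROM ubuntu:latest
COPY myscript.sh /myscript.sh
RUN chmod +x /myscript.sh
CMD ["/myscript.sh"]
Then build and run:
docker build -t ub-script .
docker run -d --name ub ub-script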

Running a script in Docker container by accepting -e parameter

I have a very simple script called myscript.sh:
echo "this is test " > /tmp/myfile.txt
echo $TEST >> /tmp/myfile.txt
I have stored this script on disk, and I plan to pass it to the container as a volume, like this:
docker run -d --name test \
-v /home/docker/test/myscript.sh:/tmp/myscript.sh \
-e TESTING=just-a-test \
test
The Dockerfile looks like this:
FROM ubuntu
CMD ["bash", "/tmp/myscript.sh"]
So the thought process is to get this script executed and end up with a file myfile.txt containing the value passed with -e. Instead I am getting:
docker@boot2docker:~/test$ docker exec -it test /bin/bash
Error response from daemon: Container test is not running
Which means that this simple program did not execute as a container. I could not figure it out.
The container ran, executed the script, then exited. A container only runs as long as its main process. When that stops, the container stops.
A simpler test would be to change your test script to:
#!/bin/bash
echo $TEST
I would change your Dockerfile to copy the file in, make it executable, and remove the "bash" part of the CMD instruction:
FROM ubuntu
COPY myscript.sh /myscript.sh
RUN chmod +x /myscript.sh
CMD /myscript.sh
Now rebuild and run:
$ docker build -t test .
...
$ docker run -e TEST=VAL test
...
The container should echo the value of the test variable and exit. (I haven't tested any of this, so apologies for any mistakes).
The answer to this question is to use ENTRYPOINT instead of CMD. I did some research and came up with a solution that looks like this:
ENTRYPOINT ["bash", "<script>"]
To run the script, just use:
docker run -d --name <container-name> [--privileged] -p <host-port>:<container-port> \
  -v /script.sh:/tmp/script.sh \
  <image-name>
Instead of mounting the script with -v, you can also use wget to fetch the script, as most people do, and execute it at runtime.
Thanks to all the people who tried to solve the query.
