If I reference an environment variable that is set inside the container as an argument to docker run, my host shell evaluates it first. For example:
I want my container to print the value of $FOO, which is bar. Neither of these works:
# Prints blank line
$ docker run -e FOO=bar ubuntu echo $FOO
# Prints '$FOO'
$ docker run -e FOO=bar ubuntu echo \$FOO
It works if you run echo in a shell:
$ sudo docker run --rm -e FOO=bar ubuntu bash -c 'echo $FOO'
bar
This is because echo is a command (/bin/echo), but it's the shell that does the variable substitution.
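The quoting behavior can be checked without Docker at all, since it's the host shell doing the work in every case:

```shell
# Host-side demonstration of when the shell expands a variable.
FOO=bar
echo "$FOO"                  # double quotes: the current shell expands it -> bar
echo '$FOO'                  # single quotes: passed through literally -> $FOO
sh -c 'echo $FOO'            # inner sh never received FOO (not exported) -> blank
FOO=bar sh -c 'echo $FOO'    # FOO placed in the inner sh's environment -> bar
```

The last two lines mirror the docker cases: the container, like the inner `sh`, only sees variables explicitly put in its environment.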
Related
I'm trying to pass an env variable into my docker container and then print it:
docker exec -e VAR1=1 backend echo $VAR1
But in this case I get empty output. Why is the variable not set in the container?
From the official Docker documentation:
COMMAND should be executable, a chained or a quoted command will not work. Example: docker exec -ti my_container "echo a && echo b" will not work, but docker exec -ti my_container sh -c "echo a && echo b" will.
https://docs.docker.com/engine/reference/commandline/exec/
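The same quoting rule can be reproduced with plain sh on the host: single quotes defer expansion to the inner shell, which does have the variable in its environment, just as the container does with docker exec -e:

```shell
# VAR1 is put in the inner shell's environment, as `docker exec -e` would do.
VAR1=1 sh -c 'echo $VAR1'    # inner shell expands it -> 1
VAR1=1 sh -c "echo $VAR1"    # outer shell expands first; if VAR1 is unset
                             # there, the inner shell just runs a bare `echo`
```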
Why is the environment variable not visible to the command that is run as the entrypoint?
Examples:
$ docker run -it -e "name=JD" --entrypoint 'echo' ubuntu 'Hello $name'
Hello $name
$ docker run -it -e "name=JD" --entrypoint 'echo' ubuntu "Hello $name"
Hello
But when I start the shell the environment variable is there:
$ docker run -it -e "name=JD" ubuntu /bin/bash
root@c3e513390184:/# echo "$name"
JD
Why, in the first case with echo as the entrypoint, does it not find the env variable that was set?
First case
docker run -it -e "name=JD" --entrypoint 'echo' ubuntu 'Hello $name'
Single quotes always prevent variable expansion: whatever you write inside them is passed through unchanged. Try echo '$PWD' in your terminal and you will see $PWD as the output; try echo "$PWD" and you will get your working directory printed.
Second case
docker run -it -e "name=JD" --entrypoint 'echo' ubuntu "Hello $name"
Here the expansion happens before Docker runs: your host shell expands the whole string and then executes the command. At that moment $name is not declared on the host, so it expands to nothing. That means the container receives the command "Hello ", not "Hello $name".
If you want to echo an environment variable from inside the container, the simplest way is to wrap the command in a shell script, which prevents host-side expansion, and pass that file to the container.
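A minimal sketch of that approach (print-name.sh is a hypothetical file name):

```shell
#!/bin/sh
# print-name.sh -- the expansion of $name happens here, inside the
# container's shell; the host shell never sees the reference.
echo "Hello $name"
```

Bake the script into the image or bind-mount it, then run it with something like docker run -e name=JD -v "$PWD/print-name.sh:/print-name.sh" ubuntu sh /print-name.sh.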
The third case is obvious, I guess, and doesn't need explanation.
Nothing shows up for env var FOO even though I've set it:
$ docker run -e FOO=foofoo ubuntubr echo $PATH
/bin:/usr/ucb:/usr/bin:/usr/sbin:/sbin:/usr/etc:/etc
$ docker run -e FOO=foofoo ubuntubr echo $FOO
$
What did I do wrong?
But I was able to modify the path:
docker run -e PATH=/nopath ubuntubr echo $PATH
docker: Error response from daemon: OCI runtime create failed: container_linux.go:296: starting container process caused "exec: \"echo\": executable file not found in $PATH": unknown.
ERRO[0000] error waiting for container: context canceled
Then why isn't docker run -e FOO=foofoo ubuntubr echo $FOO printing foofoo?
The variables are set. The way you are trying to verify that they are set is wrong.
It's called "variable expansion", and I quote from the answer @larsks has given here:
$ docker run -it alpine echo $HOME
/home/lars
$ docker run -it alpine echo '$HOME'
$HOME
$ docker run -it alpine sh -c 'echo $HOME'
/root
it prints the $HOME variable of your host, not the container's. In your case $FOO doesn't exist on the host, so it prints an empty line
it prints the literal string $HOME
this works, and prints the $HOME variable of the container
Example for your case:
$ docker run -e FOO=foofoo alpine sh -c 'echo $FOO'
foofoo
Thanks @vmonteco for pointing out docker inspect for debugging. If you want to see what your containers were actually doing, below are the CMD values for the 3 different cases discussed previously:
"Cmd": ["echo"]
"Cmd": ["echo","$FOO"]
"Cmd": ["sh","-c","echo $FOO"]
Here is my Dockerfile:
FROM ros:kinetic-ros-core-xenial
CMD ["bash"]
If I run docker build -t ros . && docker run -it ros, and then from within the container echo $PATH, I'll get:
/opt/ros/kinetic/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
If I exec into the container (docker exec -it festive_austin bash) and run echo $PATH, I'll get:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Why are the environment variables different? How can I get a new bash process on the container with the same initial environment?
The ENTRYPOINT command is only invoked on docker run, not on docker exec.
I assume that this /ros_entrypoint.sh script is responsible for adding stuff to PATH. If so, then you could do something like this for docker exec:
docker exec -it <CONTAINER_ID> /ros_entrypoint.sh bash
docker exec only gets the environment variables defined in the Dockerfile with the ENV instruction. With docker exec [...] bash you additionally get those defined in bash's startup files.
Add this line to your Dockerfile:
ENV PATH=/opt/ros/kinetic/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
or shorter:
ENV PATH=/opt/ros/kinetic/bin:$PATH
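Putting it together, a sketch of the amended Dockerfile:

```dockerfile
FROM ros:kinetic-ros-core-xenial
# Bake the ROS bin directory into the image environment, so that
# processes started by both `docker run` and `docker exec` see it.
ENV PATH=/opt/ros/kinetic/bin:$PATH
CMD ["bash"]
```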
This is an old question, but since it's where Google directed me, I thought I'd share the solution I ended up using.
In your entrypoint script, add a section similar to this:
cat >> ~/.bashrc << EOF
export PATH="$PATH"
export OTHER="$OTHER"
EOF
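Note that the EOF delimiter above is unquoted, so $PATH and $OTHER are expanded when the entrypoint script runs; .bashrc ends up with the container's concrete values, not literal $PATH. A quick local check of that heredoc behavior (using a temp file in place of ~/.bashrc):

```shell
# With an unquoted delimiter, the here-document body is expanded at write time.
OTHER=demo                       # hypothetical stand-in for a real variable
rm -f /tmp/rcfile
cat >> /tmp/rcfile << EOF
export OTHER="$OTHER"
EOF
cat /tmp/rcfile                  # -> export OTHER="demo"
```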
Once you rebuild your image, you can exec into your container (note that bash is invoked in interactive mode):
docker run -d --rm --name container-name your_image
docker exec -it container-name /bin/bash -i
If you echo $PATH now, it should match what you set in .bashrc.
FROM alpine:3.5
CMD ["echo", "hello world"]
So after building docker build -t hello . I can run hello by calling docker run hello and I get the output hello world.
Now let's assume I wish to run ls or sh; this is fine. But what I really want is to be able to pass arguments, e.g. ls -al, or even tail -f /dev/null to keep the container running, without having to change the Dockerfile.
How do I go about doing this? My attempt at exec mode fails miserably: docker run hello --cmd=["ls", "-al"]
Anything after the image name in the docker run command becomes the new value of CMD. So you can run:
docker run hello ls -al
Note that if an ENTRYPOINT is defined, the ENTRYPOINT will receive the value of CMD as args rather than running CMD directly. So you can define an entrypoint as a shell script with something like:
#!/bin/sh
echo "running the entrypoint code"
# if no args are passed, default to a /bin/sh shell
if [ $# -eq 0 ]; then
set -- /bin/sh
fi
# run the "CMD" with exec to replace the pid 1 of this shell script
exec "$@"
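The entrypoint pattern can be dry-run on the host without Docker; writing it to a temp file and calling it with arguments shows the pass-through and fallback behavior:

```shell
# Write the entrypoint script above to a temp file and exercise it.
cat > /tmp/entrypoint.sh << 'EOF'
#!/bin/sh
echo "running the entrypoint code"
if [ $# -eq 0 ]; then
  set -- /bin/sh
fi
exec "$@"
EOF
sh /tmp/entrypoint.sh echo hello   # banner line, then "hello"
```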
Q. But what I really want is to be able to pass arguments. e.g. ls -al, or even tail -f /dev/null to keep the container running without having to change the Dockerfile
This is just achieved with:
docker run -d hello tail -f /dev/null
So the container runs in the background, and it lets you execute arbitrary commands inside it:
docker exec <container-id> ls -la
And, for example a shell:
docker exec -it <container-id> bash
Also, I recommend what @BMitch says.