Docker: not being able to set an env variable

Nothing shows up for env var FOO even though I've set it:
$ docker run -e FOO=foofoo ubuntubr echo $PATH
/bin:/usr/ucb:/usr/bin:/usr/sbin:/sbin:/usr/etc:/etc
$ docker run -e FOO=foofoo ubuntubr echo $FOO
$
What did I do wrong?
But I was able to modify the path:
docker run -e PATH=/nopath ubuntubr echo $PATH
docker: Error response from daemon: OCI runtime create failed: container_linux.go:296: starting container process caused "exec: \"echo\": executable file not found in $PATH": unknown.
ERRO[0000] error waiting for container: context canceled
Then why isn't docker run -e FOO=foofoo ubuntubr echo $FOO printing foofoo?

The variables are set. The way you are trying to verify that they are set is wrong.
It's called "variable expansion", and I quote from the answer @larsks has given here:
$ docker run -it alpine echo $HOME
/home/lars
$ docker run -it alpine echo '$HOME'
$HOME
$ docker run -it alpine sh -c 'echo $HOME'
/root
it prints the $HOME variable of your host, not the container's. In your case $FOO doesn't exist on the host, so it prints an empty line
it prints $HOME (like a string)
this works, and prints the $HOME variable of the container
Example for your case:
$ docker run -e FOO=foofoo alpine sh -c 'echo $FOO'
foofoo
Thanks @vmonteco for pointing out docker inspect for debugging. If you want to learn more about what your containers were actually doing, below are the CMDs set for the three different cases discussed previously:
"Cmd": ["echo"]
"Cmd": ["echo","$FOO"]
"Cmd": ["sh","-c","echo $FOO"]
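The expansion rule behind all three cases can be checked without Docker at all; a minimal sketch in plain sh (the variable name is illustrative):

```shell
#!/bin/sh
# FOO is set in this shell but not exported, mirroring the host shell
# in the docker examples above.
FOO=host_value

# Unquoted: $FOO is expanded *here*, before the child command runs,
# exactly like `docker run ... echo $FOO` expands on the host.
expanded=$(echo $FOO)

# Single-quoted and handed to a child sh: expansion happens in the
# child process, where FOO was never exported, so the result is empty.
deferred=$(sh -c 'echo "$FOO"')

echo "expanded: $expanded"    # expanded: host_value
echo "deferred: [$deferred]"  # deferred: []
```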

Related

Why variable in docker is empty?

I am trying to pass an env variable into my docker container and then print it:
docker exec -e VAR1=1 backend echo $VAR1
But in this case I get empty output. Why is the variable not set in the container?
From the official Docker documentation:
COMMAND should be executable, a chained or a quoted command will not work. Example: docker exec -ti my_container "echo a && echo b" will not work, but docker exec -ti my_container sh -c "echo a && echo b" will.
https://docs.docker.com/engine/reference/commandline/exec/
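This restriction is not specific to docker exec: any exec-style invocation treats its first argument as a single executable name, and only a shell can parse a && chain. A sketch of the same two cases in plain sh:

```shell
#!/bin/sh
# Passed as one word, the whole string is looked up as an executable
# name on PATH -- which is what `docker exec -ti my_container "echo a && echo b"`
# attempts, and why it fails.
if command -v "echo a && echo b" >/dev/null 2>&1; then
    echo "found an executable literally named 'echo a && echo b'"
else
    echo "not found: the chain must be parsed by a shell"
fi

# Handing the string to `sh -c` lets a shell parse the && chain,
# just like `docker exec -ti my_container sh -c "echo a && echo b"`.
sh -c "echo a && echo b"
```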

docker run --entrypoint "python /path/to/file.py" cause "no such file or directory: unknown." error

Running docker run -it --entrypoint "bash" fnb-backend and then python /app/main/src/api/frontend/customer_api/customer_api.py in container shell works fine.
Consider Dockerfile below:
FROM python:2.7.18-slim
COPY main/requirements.txt .
RUN cat requirements.txt | xargs -n 1 pip install --no-cache-dir; exit 0
ENV PYTHONUNBUFFERED True
COPY . ./app/
RUN mkdir -p /app/main/logs/flask/ && touch /app/main/logs/flask/webhook_api.log
# below works
#ENTRYPOINT python /app/main/src/api/frontend/customer_api/customer_api.py
The commented-out line works fine too.
However running docker run --entrypoint "python /app/main/src/api/frontend/customer_api/customer_api.py" fnb-backend cause:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:367: starting container process caused: exec: "python /app/main/src/api/frontend/customer_api/customer_api.py": stat python /app/main/src/api/frontend/customer_api/customer_api.py: no such file or directory: unknown.
ERRO[0000] error waiting for container: context canceled
What you used in the Dockerfile is the shell form of ENTRYPOINT, which looks like this:
ENTRYPOINT command param1 param2
The --entrypoint flag of docker run overrides the ENTRYPOINT set in the Dockerfile, but it accepts only the executable itself, not the command param1 param2 syntax. From the docs:
Passing --entrypoint will clear out any default command set on the image
In fact, Docker treats the value of --entrypoint as a single executable, so if you pass command param1 param2 it will try to find an executable literally named command param1 param2, which results in the "file not found" error.
For your case, the correct way is to pass only the executable to --entrypoint and move the parameters to the command part of docker run:
docker run --entrypoint python fnb-backend /app/main/src/api/frontend/customer_api/customer_api.py
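Alternatively, if the interpreter and script should be baked into the image itself, the exec (JSON) form of ENTRYPOINT in the Dockerfile sidesteps the single-executable limitation of --entrypoint entirely; a sketch reusing the path from the question:

```dockerfile
ENTRYPOINT ["python", "/app/main/src/api/frontend/customer_api/customer_api.py"]
```

With this in place, a plain docker run fnb-backend starts the API without any --entrypoint flag.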

Require environment variables to be given to image when run using `-e`

I have a container image that requires an environment variable to be set in order to run. But if it is run with -d, unless the container is monitored, the person running the container won't notice that something is missing. Is there a way for docker [container] run to check that an environment variable has been given to the container before starting it?
It is not possible to print a message that an env variable is required when running in detached mode (with -d, as you say), but you can try a workaround:
Dockerfile
FROM alpine
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
entrypoint.sh
#!/bin/sh
echo "starting container $hostname"
if [ -z "$REQUIRED_ENV" ]; then
echo "Container failed to start, pls pass -e REQUIRED_ENV=sometest"
exit 1
fi
echo "starting container with $REQUIRED_ENV"
#your long-running command from CMD
exec "$@"
So when you run with
docker run -it --name envtest --rm env-test-image
it will exit with the message
starting container
Container failed to start, pls pass -e REQUIRED_ENV=sometest
And the workaround for detached mode:
docker run -it --name envtest -d --rm env-test-image && docker logs envtest
No: there is currently no way to make Docker aware of "dependencies" in the form of environment variables. This answer is for those who actually came here just looking for a canonical way to exit an entrypoint script in case of missing values.
A POSIX shell takes the option -u (error when expanding unset variables) and -e (exit the shell when any statement returns an error/non-zero exit code). We also have : to evaluate-but-not-invoke an expression. Putting these together, we should be able to make an entrypoint.sh that works irrespective of the sh implementation, like this:
#!/bin/sh -e -u
( : $USERNAME )
( : $PASSWORD )
exec "$@"
which will exit the script with an error akin to PASSWORD: parameter not set.
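The effect of -u can be demonstrated in any POSIX shell, no container needed; a minimal sketch with an illustrative variable name:

```shell
#!/bin/sh
# Under -u, expanding an unset variable is a fatal error, so the
# `( : $VAR )` lines above abort the entrypoint before exec runs.
if sh -eu -c ': "$DEMO_REQUIRED_VAR"' 2>/dev/null; then
    echo "variable was set"
else
    echo "aborted: DEMO_REQUIRED_VAR is unset"
fi

# With the variable present in the environment, the same check passes.
DEMO_REQUIRED_VAR=x sh -eu -c ': "$DEMO_REQUIRED_VAR"' && echo "check passed"
```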
Another way of doing the same thing, but in context of the original example with a custom error message, could make use of the "Display Error if Null or Unset" operator (${parm:?error}) and look something like so:
#!/bin/sh
echo "starting container $hostname"
( : ${REQUIRED_ENV:?"pls pass -e REQUIRED_ENV=sometest"} ) || exit 1
echo "starting container with $REQUIRED_ENV"
#your long-running command from CMD
exec "$@"

Entering docker container with exec losing PATH environment variable

Here is my Dockerfile:
FROM ros:kinetic-ros-core-xenial
CMD ["bash"]
If I run docker build -t ros . && docker run -it ros, and then from within the container echo $PATH, I'll get:
/opt/ros/kinetic/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
If I exec into the container (docker exec -it festive_austin bash) and run echo $PATH, I'll get:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Why are the environment variables different? How can I get a new bash process on the container with the same initial environment?
The ENTRYPOINT command is only invoked on docker run, not on docker exec.
I assume that this /ros_entrypoint.sh script is responsible for adding stuff to PATH. If so, then you could do something like this for docker exec:
docker exec -it <CONTAINER_ID> /ros_entrypoint.sh bash
docker exec only gets the environment variables defined in the Dockerfile with the ENV instruction. With docker exec [...] bash you additionally get those defined in bash's startup files.
Add this line to your Dockerfile:
ENV PATH=/opt/ros/kinetic/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
or shorter:
ENV PATH=/opt/ros/kinetic/bin:$PATH
This is an old question, but since it's where Google directed me, I thought I'd share the solution I ended up using.
In your entrypoint script add a section similar to this:
cat >> ~/.bashrc << EOF
export PATH="$PATH"
export OTHER="$OTHER"
EOF
Once you rebuild your image you can exec into your container (notice bash is invoked in interactive mode):
docker run -d --rm --name container-name your_image
docker exec -it container-name /bin/bash -i
If you echo $PATH now it should be the same as what you have set in .bashrc
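The reason this works is that the unquoted EOF delimiter makes the shell expand $PATH and $OTHER when the heredoc is written, not when .bashrc is later read. A sketch of that timing in plain sh (the file and variable names are illustrative):

```shell
#!/bin/sh
# The value at the time the heredoc is written is what gets captured.
DEMO_VAR="value-at-write-time"
demo_rc=$(mktemp)
cat >> "$demo_rc" << EOF
export DEMO_COPY="$DEMO_VAR"
EOF

# Changing the variable afterwards does not affect the written file,
# just as later PATH changes would not affect the .bashrc lines above.
DEMO_VAR="changed-later"
. "$demo_rc"
echo "$DEMO_COPY"   # value-at-write-time
rm -f "$demo_rc"
```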

override default docker run from host with options/arguments

FROM alpine:3.5
CMD ["echo", "hello world"]
So after building docker build -t hello . I can run hello by calling docker run hello and I get the output hello world.
Now let's assume I wish to run ls or sh - this is fine. But what I really want is to be able to pass arguments. e.g. ls -al, or even tail -f /dev/null to keep the container running without having to change the Dockerfile
How do I go about doing this? My attempt at exec mode fails miserably... docker run hello --cmd=["ls", "-al"]
Anything after the image name in the docker run command becomes the new value of CMD. So you can run:
docker run hello ls -al
Note that if an ENTRYPOINT is defined, the ENTRYPOINT will receive the value of CMD as args rather than running CMD directly. So you can define an entrypoint as a shell script with something like:
#!/bin/sh
echo "running the entrypoint code"
# if no args are passed, default to a /bin/sh shell
if [ $# -eq 0 ]; then
set -- /bin/sh
fi
# run the "CMD" with exec to replace the pid 1 of this shell script
exec "$@"
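The dispatch logic can be exercised outside Docker; a sketch that writes the script above to a temporary file and calls it both ways (the default is swapped from /bin/sh to an echo so the demo terminates instead of opening an interactive shell):

```shell
#!/bin/sh
# Write the entrypoint sketch to a temporary file.
ep=$(mktemp)
cat > "$ep" << 'EOF'
#!/bin/sh
# if no args are passed, fall back to a default command
# (echo here instead of /bin/sh, so the demo does not block)
if [ $# -eq 0 ]; then
    set -- echo "default command"
fi
# replace the pid of this script with the requested command
exec "$@"
EOF
chmod +x "$ep"

sh "$ep" echo hello   # args act as the "CMD": prints hello
sh "$ep"              # no args: prints default command
rm -f "$ep"
```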
Q. But what I really want is to be able to pass arguments. e.g. ls -al, or even tail -f /dev/null to keep the container running without having to change the Dockerfile
This is just achieved with:
docker run -d hello tail -f /dev/null
So the container runs in the background, and that lets you execute arbitrary commands inside it:
docker exec <container-id> ls -la
And, for example a shell:
docker exec -it <container-id> bash
Also, I recommend what @BMitch says.