Docker passing environment variable

Here is my Dockerfile. It has a default ENV, which I would like to override upon container deployment if specified.
ENV domain example.com
CMD ["cd","/etc/httpd/conf.d/"]
CMD [ "cp", "VirtualHost", "${domain}" ]
However, when passing the environment variable using the -e flag:
docker run -it -e domain="test.com" container_id
I'm able to log in to the container and echo $domain, and it displays the value that was passed; however, the copy command didn't copy the file.
Any ideas on what possibly I'm doing wrong?
Thanks

You can't have two CMD lines; the second one simply overrides the first. But in your case, I think you want to use the WORKDIR instruction to set the directory, not CMD, e.g.:
WORKDIR /etc/httpd/conf.d/
This should set the current directory for all following instructions and container start-up.
BTW I'm not sure how you're logging into this container - when you run it, the cp command will fire and the container will exit once it has completed. If you override the CMD to get a shell (for example with docker run -it mycontainer bash) the cp command will never be executed.
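Putting that together, a corrected Dockerfile might look roughly like this (just a sketch; note that ${domain} only gets expanded if CMD uses the shell form, not the JSON-array form from the question):
ENV domain example.com
WORKDIR /etc/httpd/conf.d/
# shell form, so the shell expands ${domain} when the container starts
CMD cp VirtualHost "${domain}"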

Related

Environment variables exported via a Docker entrypoint are not shown when logging into the container

Assume a simple Dockerfile
FROM php-fpm:8
ADD entrypoint.sh .
RUN chmod +x entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
CMD ["php-fpm"]
In the entry-point script I just export a variable and print the environment
#!/bin/bash
set -e
export FOO=bar
env  # just print the environment while the entrypoint is running
exec "$@"
Then I build the image as myimage and use it to deploy a stack in docker swarm mode
docker stack deploy -c docker-compose.yml teststack
The docker-compose.yml file used is the following:
app:
  image: myimage:latest
  environment:
    APP_ENV: production
Now the question: if I check the logs of the app service, I can see (because of the env command in the entrypoint) that the FOO variable is exported:
docker service logs teststack_app
teststack_app.1.nbcqgnspn1te@soulatso | PWD=/var/www/html
teststack_app.1.nbcqgnspn1te@soulatso | FOO=bar
teststack_app.1.nbcqgnspn1te@soulatso | HOME=/root
However, if I log in to the running container and manually run env, the FOO variable is not shown:
docker container exec -it teststack_app.1.nbcqgnspn1tebirfatqiogmwp bash
root@df9c6d9c5f98:/var/www/html# env   # run env inside the container
PWD=/var/www/html
HOME=/root
# No FOO variable :(
What am I missing here?
A debugging shell you launch via docker exec isn't a child process of the main container process, and doesn't run the entrypoint itself, so it doesn't see the environment variables that are set there.
Depending on what you're trying to do, there are a couple of options to get around this.
If you're just trying to inspect what your image build produced, you can launch a debugging container instead. The command you pass here will override the CMD in the Dockerfile, and when your entrypoint script does something like exec "$#" to run the command it gets passed, it will run this command instead. This lets you inspect things in an environment just after your entrypoint's first-time setup has happened.
docker-compose run app env | grep FOO
docker-compose run app bash
Or, if the only thing your entrypoint script does is set environment variables, you can explicitly invoke it:
docker-compose exec app ./entrypoint.sh bash
It is important that your entrypoint script accept an ordinary command as parameters. If it is a shell script, it should use something like exec "$@" to launch the main container process. If your entrypoint ignores its parameters and launches a fixed command, or if you've set ENTRYPOINT to a language interpreter and CMD to a script name, these debugging techniques will not work.
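For instance, with the question's entrypoint.sh (which does end in exec "$@"), either technique should surface the variable:
docker-compose run app env | grep FOO                       # prints FOO=bar
docker-compose exec app ./entrypoint.sh sh -c 'echo $FOO'   # prints bar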

I want to run a script during container run based on an env variable that I pass

I want to run a script at run time, not during the image build.
The script runs based on an env variable that I pass during container run.
Script:
#!/bin/bash
touch $env
Docker file
FROM busybox
ENV env parm
RUN mkdir PRATHAP
ADD apt.sh /PRATHAP
WORKDIR /PRATHAP
RUN chmod 777 apt.sh
CMD sh apt.sh
When I try to run docker container run -it -e env=test.txt sh,
the script does not run;
I just get the sh terminal. If I remove the sh, the container does not stay alive. Please help me figure out how to achieve this.
Your docker run starts sh, which overrides the CMD in your Dockerfile. To get around this, you need to replicate the original CMD on the command line.
$ docker run -it -e env=test.txt <image:tag> sh -c "sh apt.sh; sh"
Remember that a Docker container runs a single command, and then exits. If you docker run your image without overriding the command, the only thing the container will do is touch a file inside the isolated container filesystem, and then it will promptly exit.
If you need to do some startup-time setup, a useful pattern is to write it into an entrypoint script. When a container starts up, Docker runs whatever you have named as the ENTRYPOINT, passing the CMD as additional parameters (or it just runs CMD if there is no ENTRYPOINT). You can use the special shell command exec "$@" to run the command. So revisiting your script as an entrypoint script:
#!/bin/sh
# ^^ busybox image doesn't have bash (nor does alpine)
# Do the first-time setup
touch "$env"
# Launch the main container process
exec "$@"
In your Dockerfile set this script to be the ENTRYPOINT, and then whatever long-running command you actually want the container to do to be the CMD.
FROM busybox
# WORKDIR also creates the directory
WORKDIR /PRATHAP
# Generally prefer COPY to ADD
COPY apt.sh .
# Executable, but not world-writable
RUN chmod 0755 apt.sh
ENV env parm
# ENTRYPOINT must be JSON-array syntax; no need to name an
# interpreter, since the script is executable with a #! line
ENTRYPOINT ["./apt.sh"]
# Or whatever the container actually does
CMD sh
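With that layout, a quick check might look like this (the image tag myimage is hypothetical):
docker build -t myimage .
docker run -it -e env=test.txt myimage sh
# the entrypoint runs `touch test.txt` first, then exec's sh,
# so the file is already there in the interactive shell:
ls test.txt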

Exporting an environment variable in an entrypoint file does not work?

I have a problem with exporting an environment variable in a Docker entrypoint file.
This is my docker file content:
FROM ubuntu:16.04
ADD entrypoint.sh .
RUN chmod 777 entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
CMD ["/bin/bash"]
In the entrypoint file, I run the command "export TOKEN=$client_token". Then I create a container from that image, run "docker exec -it /bin/bash" to get into the container, and run "set" inside it to show all environment variables. However, I cannot find the TOKEN variable that I exported before.
How can I export an environment variable in the entrypoint file?
You must inject your host environment variable (client_token) into the docker container using '-e' when running:
docker run -it --rm -e client_token=<whatever> <your image>
This works for example with this kind of entrypoint:
#!/bin/bash
export TOKEN=$client_token
echo "The TOKEN is: ${TOKEN}"
# do stuff ...
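For example, assuming the image is tagged myimage (a placeholder name), the variable is visible while the entrypoint runs:
docker run -it --rm -e client_token=abc123 myimage
# prints: The TOKEN is: abc123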
If you don't know the token value when the container is started, you can inject it when attaching (docker exec) and perform the required operations from there, but that won't help you if the running container already needed that information at start-up:
docker exec -it -e TOKEN=<whatever> <your container>
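For example, to inject the token into a one-off command in the running container (values are placeholders):
docker exec -it -e TOKEN=abc123 <your container> bash -c 'echo "The TOKEN is: $TOKEN"'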

Docker bind-mount not working as expected within AWS EC2 Instance

I have created the following Dockerfile to run a Spring Boot app, myapp, within an EC2 instance.
# Use an official java runtime as a parent image
FROM openjdk:8-jre-alpine
# Add a user to run our application so that it doesn't need to run as root
RUN adduser -D -s /bin/sh myapp
# Set the current working directory to /home/myapp
WORKDIR /home/myapp
#copy the app to be deployed in the container
ADD target/myapp.jar myapp.jar
#create a file entrypoint-dos.sh and put the project entrypoint.sh content in it
ADD entrypoint.sh entrypoint-dos.sh
#Get rid of windows characters and put the result in a new entrypoint.sh in the container
RUN sed -e 's/\r$//' entrypoint-dos.sh > entrypoint.sh
#set the file as an executable and set myapp as the owner
RUN chmod 755 entrypoint.sh && chown myapp:myapp entrypoint.sh
#set the user to use when running the image to myapp
USER myapp
# Make port 9010 available to the world outside this container
EXPOSE 9010
ENTRYPOINT ["./entrypoint.sh"]
Because I need to access myapp's logs from the EC2 host machine, I want to bind-mount a folder onto the logs folder sitting within the "myapp" container here: /home/myapp/logs
This is the command that I use to run the image in the EC2 console:
docker run -p 8090:9010 --name myapp myapp:latest -v home/ec2-user/myapp:/home/myapp/logs
The container starts without any issues, but the mount is not achieved as noticed in the following docker inspect extract:
...
"Mounts": [],
...
I have tried the following actions but ended up with the same result:
--mount type=bind instead of -v
use volumes instead of bind-mount
I have even tried the --privileged option
In the Dockerfile: I tried to use the USER root instead of myapp
I believe this has nothing to do with the EC2 machine but with my container, since running other containers with bind mounts on the same host works like a charm.
I am pretty sure I am messing something up in my Dockerfile.
But what am I doing wrong in that Dockerfile?
Or what am I missing?
Here you have the entrypoint.sh if needed:
#!/bin/sh
echo "The app is starting ..."
exec java ${JAVA_OPTS} -Djava.security.egd=file:/dev/./urandom -jar -Dspring.profiles.active=${SPRING_ACTIVE_PROFILES} "${HOME}/myapp.jar" "$@"
I think the issue might be the order of the options on the command line. Docker expects the last two arguments to be the image id/name and (optionally) a command/args to run as pid 1.
https://docs.docker.com/engine/reference/run/
The basic docker run command takes this form:
$ docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
You have the mount option (-v in the example you provided) after the image name (myapp:latest). I'm not sure, but the -v ... is probably being interpreted as arguments to be passed to your entrypoint script (where it is ignored), so docker run never sees it as a mount option.
Also, the source of the mount here (home/ec2-user/myapp) doesn't start with a leading forward slash (/). Docker only treats the source of -v as a host path when it is an absolute path (otherwise it tries to interpret it as a named volume), so make sure the source starts with a forward slash (i.e. /home/ec2-user/myapp) so that you're sure it will always mount the directory you expect.
Have you tried this order:
docker run -p 8090:9010 --name myapp -v /home/ec2-user/myapp:/home/myapp/logs myapp:latest
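To confirm the mount took effect, docker inspect should then report it under "Mounts", e.g. something like:
docker inspect -f '{{ json .Mounts }}' myapp
# should now list a bind mount from /home/ec2-user/myapp to /home/myapp/logs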

Enter into docker container after shell script execution is complete

I want to execute a shell script as the ENTRYPOINT and then enter the Docker container after the shell script execution is complete.
My Dockerfile has following lines at the end:
WORKDIR artifacts
ENTRYPOINT ./my_shell.sh
When I run it with the following command, it executes the shell script but doesn't drop me into the container.
docker run -it testub /bin/bash
Can someone please let me know if I am missing anything here?
There are two options that control what a container runs when it starts, the entrypoint (ENTRYPOINT) and the command (CMD). They follow the following logic:
If the entrypoint is defined, then it is run with the value for the command included as additional arguments.
If the entrypoint is not defined, then the command is run by itself.
You can override one or both of the values defined in the image. docker run -it --entrypoint /bin/sh testub would run /bin/sh instead of ./my_shell.sh, overriding the entrypoint. And docker run -it testub /bin/bash will override the command; with an exec-form (JSON-array) entrypoint, that makes the container start with ./my_shell.sh /bin/bash (a shell-form ENTRYPOINT like the one above ignores the command entirely).
The quick answer is to run docker run -it --entrypoint /bin/bash testub and from there, kick off your ./my_shell.sh. A better solution is to update ./my_shell.sh to check for any additional parameters and run them with the following at the end of the script:
if [ $# -gt 0 ]; then
  exec "$@"
fi
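Note that the extra parameters only reach the script if the ENTRYPOINT uses the exec (JSON-array) form; a sketch of the adjusted Dockerfile ending:
WORKDIR artifacts
# exec form, so whatever follows the image name in `docker run` reaches my_shell.sh as "$@"
ENTRYPOINT ["./my_shell.sh"]
With that in place, docker run -it testub /bin/bash runs the script and then drops you into the shell once it finishes.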
