Workaround to docker run "--env-file" supplied file not being evaluated as expected - docker

My current setup for running a docker container is along the lines of this:
I've got a main.env file:
# Main
export PRIVATE_IP=`echo localhost`
export MONGODB_HOST="$PRIVATE_IP"
export MONGODB_URL="mongodb://$MONGODB_HOST:27017/development"
In my service file (upstart), I source this file: . /path/to/main.env
I then call docker run with multiple -e for each of the environment variables I want inside of the container. In this case I would call something like: docker run -e MONGODB_URL=$MONGODB_URL ubuntu bash
I would then expect MONGODB_URL inside of the container to equal mongodb://localhost:27017/development. Note that in reality echo localhost is replaced by a curl to Amazon's API to get the actual PRIVATE_IP.
This becomes a bit unwieldy as the number of environment variables you need to give your container grows. There is a fine point here: the environment variables need to be resolved at run time, for example with a call to curl or by referring to other environment variables.
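For context, the real lookup presumably looks something like this (a sketch; the EC2 instance metadata endpoint is my assumption of what "amazon's api" refers to):
# Resolve the host's private IP at run time via the EC2 instance metadata endpoint
export PRIVATE_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)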
The solution I was hoping to use is:
calling docker run with an --env-file parameter such as this:
# Main
PRIVATE_IP=`echo localhost`
MONGODB_HOST="$PRIVATE_IP"
MONGODB_URL="mongodb://$MONGODB_HOST:27017/development"
Then my docker run command would be significantly shortened to docker run --env-file=/path/to/main.env ubuntu bash (keep in mind I've usually got around 12-15 environment variables).
This is where I hit my problem: inside the container none of the variables resolve as expected. Instead I end up with:
PRIVATE_IP=`echo localhost`
MONGODB_HOST="$PRIVATE_IP"
MONGODB_URL="mongodb://$MONGODB_HOST:27017/development"
I could circumvent this by doing the following:
Sourcing the main.env file.
Creating a file containing just the names of the variables I want (meaning docker would search for them in the environment).
Then calling docker run with this file as an argument to --env-file, as sketched below. This would work, but would mean I need to maintain two files instead of one, and really wouldn't be that big of an improvement over the current situation.
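For completeness, here is a sketch of that two-file workaround; names.env is the hypothetical second file holding only variable names, which docker run fills in from the already-sourced host environment:
# names.env -- names only, no values
PRIVATE_IP
MONGODB_HOST
MONGODB_URL
# in the service file: resolve the values on the host, then pass them through by name
. /path/to/main.env
docker run --env-file=/path/to/names.env ubuntu bash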
What I would prefer is to have the variables resolve as expected.
The closest question to mine that I could find is:
12factor config approach with Docker

Create a .env file, for example:
test=123
val=Guru
Execute the command:
docker run -it --env-file=.env bash
Inside the bash shell, verify using:
echo $test (should print 123)

Both --env and --env-file set up variables as-is and do not expand nested variables.
Solomon Hykes talks about configuring containers at run time and the various approaches. The one that should work for you is volume-mounting main.env from the host into the container and sourcing it.
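A minimal sketch of that volume-mount approach, assuming the container's command is a shell that can source the file:
# Mount main.env into the container and source it before running the real command;
# backticks and $VAR references then resolve inside the container
docker run -v /path/to/main.env:/main.env ubuntu \
  bash -c '. /main.env && echo "$MONGODB_URL"'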

So I just faced this issue as well; what solved it for me was specifying --env-file or -e KEY=VAL before the name of the container image. For example:
Broken:
docker run my-image --env-file .env
Fixed:
docker run --env-file .env my-image

An env file that is nothing more than key/value pairs can be processed with normal shell commands and appended to the environment. Look at bash's -a (allexport) option.
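A sketch of that approach: with allexport enabled, sourcing a plain KEY=VAL file exports everything it assigns, and docker run -e NAME (with no value) copies a variable from the host environment:
set -a                    # allexport: every variable assigned from here on is exported
. /path/to/main.env       # plain KEY=VAL lines become exported, fully resolved env vars
set +a
docker run -e MONGODB_URL ubuntu bash   # -e with just a name copies the value from the host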

What you can do is create a startup script that runs when the container starts. So if your current Dockerfile looks something like this:
FROM ...
...
CMD command
Change it to
FROM ...
...
ADD start.sh start.sh
CMD ["./start.sh"]
In your start.sh script do the following:
export PRIVATE_IP=`echo localhost`
export MONGODB_HOST="$PRIVATE_IP"
export MONGODB_URL="mongodb://$MONGODB_HOST:27017/development"
command
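One refinement worth considering (my addition, not part of the original answer): end the script with exec so the main process replaces the shell as PID 1 and receives container stop signals:
exec command   # 'command' stands for whatever the image originally ran as its CMD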

I had a very similar problem to this. If I passed the contents of the env file to docker as separate -e directives then everything ran fine; however, if I passed the file using --env-file, the container failed to run properly.
It turns out there were some spurious line endings in the file (I had copied it from Windows and was running Docker on Ubuntu). When I removed them, the container ran the same with --env or --env-file.
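If you hit the same thing, a quick way to strip Windows line endings (assuming the file is called main.env):
sed -i 's/\r$//' main.env   # remove trailing carriage returns
# or, if installed:
dos2unix main.env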

I had this issue when using docker run in a separate run script, run.sh, since I wanted the credentials ADMIN_USER and ADMIN_PASSWORD to be accessible in the container but not show up in the command.
Following the other answers and passing a separate environment file with --env or --env-file didn't work for my image (though it worked for the Bash image). What worked was creating a separate env file...
# env.list
ADMIN_USER='username'
ADMIN_PASSWORD='password'
...and sourcing it in the run script when launching the container:
# run.sh
source env.list
docker run -d \
-e ADMIN_USER="$ADMIN_USER" \
-e ADMIN_PASSWORD="$ADMIN_PASSWORD" \
image_repo/name:tag
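A caveat that explains why sourcing was needed here: --env-file does no shell parsing, so the single quotes in env.list would have become part of the values if the file were passed to docker directly. A quick way to see the difference:
docker run --rm --env-file env.list ubuntu bash -c 'echo "$ADMIN_USER"'
# prints 'username' -- with the quotes included in the value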

Related

How to set proxy inside docker container using powershell

I am working on Microsoft Translator and the API is not working inside the container.
I am trying to set a proxy server inside my Docker container but it is not working. When I run the commands in PowerShell they work:
[Environment]::SetEnvironmentVariable("HTTP_PROXY", "http://1.1.1.1:3128", [EnvironmentVariableTarget]::Machine)
[Environment]::SetEnvironmentVariable("HTTPS_PROXY", "http://1.1.1.1:3128", [EnvironmentVariableTarget]::Machine)
But when I try to run the same commands inside the Docker container, they do not execute; I get an error.
docker container exec -it microsofttranslator /bin/sh
ERROR
/bin/sh: 1: Syntax error: word unexpected (expecting ")")
The error occurs because the syntax in your Docker container's start script cannot be executed by plain sh; you should use bash instead.
I have reproduced it with a simple example.
$ cat sh_bash.sh
winner=bash_or_sh
if [[ ( $winner == "bash_or_sh" ) ]]
then
echo " bash is winner"
else
echo "sh is looser"
fi
$ sh sh_bash.sh
sh_bash.sh: 2: Syntax error: word unexpected (expecting ")")
$ bash sh_bash.sh
bash is winner
So, try docker container exec -it microsofttranslator /bin/bash
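Alternatively, if you would rather make the script portable than switch shells, the POSIX test syntax runs under both sh and bash; a sketch:
winner=bash_or_sh
if [ "$winner" = "bash_or_sh" ]; then   # single brackets are POSIX and work in plain sh
  echo "works in both sh and bash"
fi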
Should you need to pass proxy environment variables, please read this.
There could be various reasons for it. Considering there is not much detail, I will point out some of the common issues that might be present.
If you are using any script in the Dockerfile: although your script can be run by sh, it might require bash. In such cases you may need to install bash in your Dockerfile.
There could also be a syntax error, e.g. extra spaces introduced by your editor.
Ensure that files edited on a Windows machine and uploaded to a Linux machine still work; if not, use a command like dos2unix on your files. If you are using Windows, you can open the files in Notepad++ and make sure the encoding is UTF-8, not UTF-8 BOM.
And to run Docker containers with a proxy inside them, you can go through this solution:
How to configure docker container proxy?
This is one of the common issues that might be causing the problem; otherwise there could be many other reasons.
If you have a Dockerfile, add these lines and give it a try:
# Adding proxy
ENV HTTP_PROXY "http://1.1.1.1:3128"
ENV HTTPS_PROXY "http://1.1.1.1:3128"
# if needed:
ENV NO_PROXY ""
You can easily set up a proxy for a specific container, or for all containers, by using the two environment variables HTTP_PROXY and HTTPS_PROXY.
1. For a specific container
Proxy for a specific container using the Dockerfile:
# Add these env vars to your Dockerfile
ENV HTTP_PROXY="http://1.1.1.1:3128"
ENV HTTPS_PROXY="http://1.1.1.1:3128"
Proxy for a specific container without defining the variables in the Dockerfile:
docker run -d -e HTTP_PROXY="http://1.1.1.1:3128" -e HTTPS_PROXY="http://1.1.1.1:3128" image:tag
2. For all containers
You need to execute the following commands:
mkdir /etc/systemd/system/docker.service.d
vim /etc/systemd/system/docker.service.d/http-proxy.conf
Paste the following content into the file and save it:
[Service]
Environment="HTTP_PROXY=http://user01:password#10.10.10.10:8080/"
Environment="HTTPS_PROXY=https://user01:password#10.10.10.10:8080/"
Environment="NO_PROXY= hostname.example.com,172.10.10.10"
# reload the systemd daemon
systemctl daemon-reload
# restart docker
systemctl restart docker
# Verify that the configuration has been loaded
systemctl show docker --property Environment

Execute local shell script using docker run interactive

Can I execute a local shell script within a docker container using docker run -it?
Here is what I can do:
$ docker run -it 5ee0b7440be5
bash-4.2# echo "Hello"
Hello
bash-4.2# exit
exit
I have a shell script on my local machine
hello.sh:
echo "Hello"
I would like to execute the local shell script within the container and read the value returned:
$ docker run -it 5e3337440be5 #Some way of passing a reference to hello.sh to the container.
Hello
A specific design goal of Docker is that you can't. A container can't access the host filesystem at all, except to the extent that an administrator explicitly mounts parts of the filesystem into the container. (See @tentative's answer for a way to do this for your use case.)
In most cases this means you need to COPY all of the scripts and support tools into your image. You can create a container running any command you want, and one typical approach is to set the image's CMD to do "the normal thing the container will normally do" (like run a Web server) but to allow running the container with a different command (an admin task, a background worker, ...).
# Dockerfile
FROM alpine
...
COPY hello.sh /usr/local/bin
...
EXPOSE 80
CMD httpd -f -h /var/www
docker build -t my/image .
docker run -d -p 8000:80 --name web my/image
docker run --rm --name hello my/image \
hello.sh
In normal operation you should not need docker exec, though it's really useful for debugging. If you are in a situation where you're really stuck, you need more diagnostic tools to understand how to reproduce a situation, and you have no choice but to look inside the running container, you can also docker cp the script or tool into the container before you docker exec there. If you do this, remember that the image also needs to contain any dependencies for the tool (interpreters like Python or GNU Bash, C shared libraries), and that any docker cp'd files will be lost when the container exits.
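A sketch of that debugging escape hatch, reusing the web container from above (debug-tool.sh is a hypothetical script):
docker cp debug-tool.sh web:/tmp/debug-tool.sh   # copied files vanish when the container is removed
docker exec web /bin/sh /tmp/debug-tool.sh       # the image must contain the interpreter and any dependencies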
You can use a bind mount to mount a local file into the container and execute it. When you do that, however, be aware that you'll need to provide the container process with read/execute access to the folder or the specific script you want to run. Depending on your objective, using Docker for this purpose may not be the best idea.
See @David Maze's answer for reasons why. However, here's how you can do it:
Assuming you're on a Unix based system and the hello.sh script is in your current directory, you can mount that single script to the container with -v $(pwd)/hello.sh:/home/hello.sh.
This command will mount the file into your container, start a shell in the folder where it is mounted, and let you run it:
docker run -it -v $(pwd)/hello.sh:/home/hello.sh --workdir /home ubuntu:20.04 /bin/sh
root@987eb876b:/home# ./hello.sh
Hello World!
This command will run that script directly and save the output into the variable output:
output=$(docker run -it -v $(pwd)/hello.sh:/home/hello.sh ubuntu:20.04 /home/hello.sh)
echo $output
Hello World!
References for more information:
https://docs.docker.com/storage/bind-mounts/#start-a-container-with-a-bind-mount
https://docs.docker.com/storage/bind-mounts/#use-a-read-only-bind-mount

docker entrypoint script fails to run

I have a script startScript.sh that I want to run after the docker container completely starts (including loading all services that the container is supposed to start)
After that I want to run a script: startScript.sh
This is what I do:
sudo docker run -p 8080:8080 <docker image name> " /bin/bash -c ./startScript.sh"
However this gives me an error:
WFLYSRV0073: Invalid option '/bin/bash'
I even tried different shells; still the same error. I even tried passing just the script file name; it did not help.
Note: I know that the above file is in the container in the root folder: /
In fact, I once entered the container using sudo docker exec, manually ran that script file, and it worked.
But when I try to automatically do it as above, it does not work for me.
Some questions:
1. Please suggest what could be the issue.
2. I want to run that script after the container has started completely and is up and running - including all the services that are part of it. Is this the right way to even do it? Or does this try to run while the container is starting up?
When you pass arguments after the image name, you are not modifying the entrypoint but the command (CMD). It seems your image has the WildFly start script as its entrypoint, so the actual executed binary is that entrypoint, with your command as its arguments. That is why WildFly fails with WFLYSRV0073 when trying to parse '/bin/bash' as an argument.
To run just your script, you could override the image's entrypoint with an empty string, making it run your command's first element. Notice I also remove the quotes, or else Docker will search for a binary with the name containing spaces, which of course doesn't exist.
sudo docker run --entrypoint "" -p 8080:8080 <docker image name> /bin/bash -c ./startScript.sh
However, this is probably not what you want: it won't run what the image should actually be running, only your setup script. The correct thing to do here is to modify the image's Dockerfile to run the setup script as the entrypoint, and at the end of that script run the image's current entrypoint (the actual thing you want to run).
Alternatively, if you do not control the image you are running, you can use FROM <the current image> in a new Dockerfile to build another image based on it, setting the entrypoint to your script.
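A sketch of that wrapper image; <original-image> and original-start-command are placeholders for whatever image you currently run and whatever it currently starts:
# Dockerfile
FROM <original-image>
COPY startScript.sh /startScript.sh
RUN chmod +x /startScript.sh
# run the setup script, then hand off to the image's real process
ENTRYPOINT ["/bin/bash", "-c", "/startScript.sh && exec original-start-command"]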
Edit:
An example of how the above can be done can be seen in MariaDB's entrypoint: you first start a temporary server, run your setup, then restart it, running the definitive service (which is the CMD) at the end.
The above solutions are good in case you want to perform initialization for an image, but if you just want to be able to run a script for development reasons, instead of wiring it into the image's entrypoint, you can copy it into your container and then use docker exec <container name> your-command and-arguments to run it.

How to set a docker env file that is inside the image

I am a total Docker newb, so sorry for that.
I have a stand-alone Docker image (some Node app) that I want to run in different environments.
I want to set up the env file with docker run --env-file <path>.
However, I want to use the env files that are inside the image (so I can use a different file per environment), not ones on the server.
So <path> would be a path inside the image.
Is there any way to do so?
Perhaps something like cp (docker cp [OPTIONS] CONTAINER:<path>), but that doesn't seem to work.
What is the best practice here? Am I making sense?
Thanks!!
Docker bind mounts are a fairly effective way to inject configuration files like this into a running container. I would not try to describe every possible configuration in your built image; instead, let that be configuration that's pushed in from the host.
Pick some single specific file to hold the configuration. For the sake of argument, let's say it's /usr/src/app/env. Set up your application however it's built to read that file at startup time. Either make sure the application can still start up if the file is missing, or build your image with some file there with reasonable default settings.
Now when you run your container, it will always read settings from that known file; but, you can specify a host file that will be there:
docker run -v $PWD/env.development:/usr/src/app/env myimage
Now you can locally have an env.development that specifies extended logging and a local database, and an env.production with minimal logging and pointing at your production database. If you set up a third environment (say a shared test database with some known data in it) you can just run the container with this new configuration, without rebuilding it.
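If the env files really must live inside the image, a hedged variant of the same idea is to bake several files in and let a single variable select one at startup; APP_ENV and the file layout here are hypothetical:
#!/bin/sh
# entrypoint.sh -- baked into the image alongside env.development, env.production, ...
. /usr/src/app/env.${APP_ENV:-production}   # pick the baked-in file named by APP_ENV
exec node app.js
The environment is then selected at run time with docker run -e APP_ENV=development myimage, without rebuilding.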
The following is the command to run docker:
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Example
docker run --name test -it debian
Focus on the following switches:
--env , -e      Set environment variables
--env-file      Read in a file of environment variables
You can pass environment variables to your containers with the -e flag.
An example from a startup script:
sudo docker run -d -t -i -e REDIS_NAMESPACE='staging' \
-e POSTGRES_ENV_POSTGRES_PASSWORD='foo' \
-e POSTGRES_ENV_POSTGRES_USER='bar' \
-e POSTGRES_ENV_DB_NAME='mysite_staging' \
-e POSTGRES_PORT_5432_TCP_ADDR='docker-db-1.hidden.us-east-1.rds.amazonaws.com' \
-e SITE_URL='staging.mysite.com' \
-p 80:80 \
--link redis:redis \
--name container_name dockerhub_id/image_name
In case you have many environment variables, and especially if they're meant to be secret, you can use an env file:
$ docker run --env-file ./env.list ubuntu bash
The --env-file flag takes a filename as an argument and expects each line to be in the VAR=VAL format, mimicking the argument passed to --env. Comment lines need only be prefixed with #.
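For example, an env.list in that format:
# env.list -- comment lines start with #
REDIS_NAMESPACE=staging
SITE_URL=staging.mysite.com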

set environment variable in running docker container

I need to set an environment variable in a running docker container. I am already aware of how to set environment variables when creating a container. As far as I have found, there is no straightforward way to do this with Docker, and Docker is planning to add something for it in version 1.13.
But I found that some people were able to manage it, though it is not working for me now. I tried the following ways, but they did not work:
docker exec -it -u=root test /bin/bash -c "export port=8090"
echo "export port=8090" to /etc/bash.bashrc using a script and then source it
docker exec -it test /bin/bash -c "source /etc/bash.bashrc"
Configuring the whole thing in a script and running it from the host also did not work. When running the script from the host, every other command executes successfully except "export port=8090", "source /etc/bash.bashrc", and "source /root/.bashrc".
Can anyone explain why sourcing a file from the host does not work in the docker container, even when I set the user ("-u=root")? Can anyone help me solve this? When I source the file from inside the container it works perfectly, but in my case I have to do it from the host machine.
NOTE: I am using docker 1.12 and tried the above in ubuntu:16.04 and ubuntu:14.04.
If you have a running process in the container and you are attempting to change its environment variables so that the running process picks up the change dynamically, this will not work. The environment variables of a process are set when it starts. You can see here some ways to overcome that, but I don't think that is the right way to go.
I would instead have a configuration file that the process reads (or listens to) periodically; when you want to change the configuration, change the file.
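A sketch of that pattern, with /config/app.env as a hypothetical file bind-mounted from the host:
# The process re-reads its configuration instead of expecting its environment to mutate
while sleep 10; do
  . /config/app.env              # pick up any values changed on the host
  echo "port is currently $port"
done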
If this isn't your scenario, please describe your scenario so we can better assist you.
I found a way to provide environment variables to a running container. First, upgrade your docker-engine; I am using v1.12.5.
Create a script with the environment variables:
#!/bin/bash
echo "export VAR1=VAL1
export VAR2=VAL2" >> /etc/bash.bashrc
source /etc/bash.bashrc
Now start a container. Here, 'test' is the container name:
docker run -idt --name=test ubuntu
Copy your script to the container:
docker cp script.sh test:/
Run the script:
docker exec -it test /bin/bash -c "/script.sh"
Restart your container:
docker restart test
Go to the container shell:
docker exec -it test /bin/bash
Check the variable:
echo $VAR1
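One caveat to add here (my note, not the answer author's): /etc/bash.bashrc is only read by interactive bash shells, so the variable shows up in docker exec ... /bin/bash sessions but not in the container's main process or in non-shell commands:
docker exec test printenv VAR1   # prints nothing: printenv does not read /etc/bash.bashrc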
