How to set default docker environment variables - docker

I'd like to set the env variable SERVICE_CHECK_TTL for all containers by default. Can I somehow use the docker daemon for that, as in this broken example of setting a default env variable for all containers:
ExecStart=/usr/bin/docker daemon --env SERVICE_CHECK_TTL=30s -H fd://
The failing example is part of the docker.service file. The env variable SERVICE_CHECK_TTL is used by Registrator, which registers containers in Consul.
EDIT:
I don't want to set this env variable in a Dockerfile or a docker-compose file if there is another way of setting env variables that are the same for all containers (a default). The reason is that I'd like to avoid changing every single Dockerfile and every single docker-compose file.

The ENV instruction in a Dockerfile is designed for that; have a look at the Docker docs, they are very good.
So let's suppose all your containers use Debian Jessie. You could put in a Dockerfile:
FROM debian
ENV xxx yyy
Then build your specific Debian image with docker build -t mydebian . and have every one of your Dockerfiles start with FROM mydebian.
You now have your specific ENV value in all your containers.
Of course, you may replace debian with ubuntu, centos or any other base image.
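As a concrete sketch of this base-image approach (the tag mydebian comes from the answer above; the file names and the use of SERVICE_CHECK_TTL from the question are my assumptions):

```dockerfile
# Dockerfile.base -- build once with: docker build -t mydebian -f Dockerfile.base .
FROM debian
ENV SERVICE_CHECK_TTL=30s

# Dockerfile.app -- every application image then inherits the variable:
# FROM mydebian
# ... rest of the application image, no ENV needed
```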

Use this command (remember to change the Docker host):
docker exec -i CONTAINER_ID /bin/bash -c "export DOCKER_HOST=tcp://localhost:port"
OR
echo 'export DOCKER_HOST=tcp://localhost:port' >> ~/.bashrc

Related

How to set proxy inside docker container using PowerShell

I am working on Microsoft Translator and the API is not working inside the container.
I am trying to set a proxy server inside my docker container but it is not working. When I run the commands in PowerShell on the host, they work:
[Environment]::SetEnvironmentVariable("HTTP_PROXY", "http://1.1.1.1:3128", [EnvironmentVariableTarget]::Machine)
[Environment]::SetEnvironmentVariable("HTTPS_PROXY", "http://1.1.1.1:3128", [EnvironmentVariableTarget]::Machine)
But when I tried to run the same commands inside the docker container, they did not execute and gave me an error.
docker container exec -it microsofttranslator /bin/sh
ERROR
/bin/sh: 1: Syntax error: word unexpected (expecting ")")
The error occurs because the start script of your docker container uses syntax that plain sh cannot execute; you should use bash instead.
I have re-produced with a simple example.
cat sh_bash.sh
winner=bash_or_sh
if [[ ( $winner == "bash_or_sh" ) ]]
then
echo " bash is winner"
else
echo "sh is loser"
fi
$ sh sh_bash.sh
sh_bash.sh: 2: Syntax error: word unexpected (expecting ")")
$ bash sh_bash.sh
bash is winner
So, try docker container exec -it microsofttranslator /bin/bash
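For reference, the same check rewritten with the POSIX single-bracket test runs identically under both shells (a minimal sketch, not part of the original answer):

```shell
#!/bin/sh
# portable: [ ... ] with = is POSIX, unlike bash's [[ ... == ... ]]
winner=bash_or_sh
if [ "$winner" = "bash_or_sh" ]; then
    echo "works in both sh and bash"
fi
```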
Should you need to pass proxy env variables, please read this.
There could be various reasons for it. Considering there is not much detail, I will point out some of the common issues that might be at play.
If you are using any script in your Dockerfile: although your script can be run by sh, it might require bash. In such cases you may need to install bash in your image.
There could also be a syntax error, e.g. extra spaces introduced by your editor.
Make sure files that were edited and uploaded from a Windows machine to a Linux machine still work; if not, run a command like dos2unix on them. On Windows, you can open the file in Notepad++ and check that the encoding is UTF-8, not UTF-8 BOM.
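The line-ending problem can be reproduced and fixed even without dos2unix, since stripping carriage returns is all that tool does for this case (a minimal sketch; the file names are made up):

```shell
#!/bin/sh
# a script as it would arrive from Windows, with CRLF line endings
printf 'echo hello\r\n' > crlf.sh
# strip the \r characters -- the essence of what dos2unix does
tr -d '\r' < crlf.sh > fixed.sh
sh fixed.sh   # prints "hello"
```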
And to run the docker container with a proxy inside it, you can go through this solution:
How to configure docker container proxy?
This is one of the common issues that might be causing this; otherwise there could be many other reasons.
If you have a Dockerfile, could you add these lines and give it a try:
# Adding proxy
ENV HTTP_PROXY "http://1.1.1.1:3128"
ENV HTTPS_PROXY "http://1.1.1.1:3128"
# if needed:
ENV NO_PROXY ""
You can easily set up a proxy for a specific container, or for all containers, using the two environment variables HTTP_PROXY and HTTPS_PROXY.
1. For a specific container
Proxy for a specific container using a Dockerfile:
#Add these env vars to your dockerfile
ENV HTTP_PROXY="http://1.1.1.1:3128"
ENV HTTPS_PROXY="http://1.1.1.1:3128"
Proxy for specific container without defining them in Dockerfile:
docker run -d -e HTTP_PROXY="http://1.1.1.1:3128" -e HTTPS_PROXY="http://1.1.1.1:3128" image:tag
2. For all containers
You need to execute the commands below:
mkdir /etc/systemd/system/docker.service.d
vim /etc/systemd/system/docker.service.d/http-proxy.conf
Paste the content below into the file and save it:
[Service]
Environment="HTTP_PROXY=http://user01:password@10.10.10.10:8080/"
Environment="HTTPS_PROXY=https://user01:password@10.10.10.10:8080/"
Environment="NO_PROXY=hostname.example.com,172.10.10.10"
# reload the systemd daemon
systemctl daemon-reload
# restart docker
systemctl restart docker
# Verify that the configuration has been loaded
systemctl show docker --property Environment

Can Docker environment variables be used as a dynamic entrypoint runtime arg?

I'm trying to parameterize my Dockerfile running Node.js so that my entrypoint command's args are customizable at docker run time, letting me maintain one container artifact that can be deployed repeatedly with variations in its runtime args.
I've tried a few different ways, the most basic being
ENV CONFIG_FILE=default.config.js
ENTRYPOINT node ... --config ${CONFIG_FILE}
What I'm finding is that whatever value is defaulted remains in effect for my docker run command even when I use -e to pass in new values, such as:
docker run -e CONFIG_FILE=desired.config.js
Another Dockerfile form I've tried is this:
ENTRYPOINT node ... --config ${CONFIG_FILE:-default.config.js}
This does not declare the environment variable with an ENV instruction, but uses shell parameter expansion to supply a default value when the variable is unset or null. It gives me the same behavior though.
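The ${VAR:-default} expansion itself behaves as expected in a plain shell, independent of Docker, which can be checked in isolation (a minimal demonstration, not from the question):

```shell
#!/bin/sh
# unset: the default kicks in
unset CONFIG_FILE
echo "--config ${CONFIG_FILE:-default.config.js}"   # prints --config default.config.js
# set: the variable wins
CONFIG_FILE=desired.config.js
echo "--config ${CONFIG_FILE:-default.config.js}"   # prints --config desired.config.js
```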
Lastly, I tried creating a bash script file containing the same entrypoint command, ADDing it to the docker context and invoking it in my ENTRYPOINT. This also gives the same behavior.
Is what I'm attempting even possible?
EDIT:
Here is a minimal dockerfile that reproduces this behavior for me:
FROM alpine
ENV CONFIG "no"
ENTRYPOINT echo "CONFIG=${CONFIG}"
Here is the build command:
docker build -f test.Dockerfile -t test .
Here is the run command, which echoes no despite the -e arg:
docker run -t test -e CONFIG=yes
Some additional details,
I'm running macOS Sierra with Docker version 18.09.2, build 6247962.
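One detail worth checking (an observation about docker run in general, not stated in the question): everything after the image name is passed as arguments to the entrypoint, so in docker run -t test -e CONFIG=yes the -e flag never reaches docker; it must appear before test. The shell-form ENTRYPOINT itself does honor an environment override, as a plain-shell simulation shows:

```shell
#!/bin/sh
# simulate ENTRYPOINT echo "CONFIG=${CONFIG}" with the environment that
# `docker run -e CONFIG=yes test` (flag before image name) would provide
CONFIG=yes sh -c 'echo "CONFIG=${CONFIG}"'   # prints CONFIG=yes
```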

Override ENV variable in base docker image

I have a base docker image, call it docker-image, with this Dockerfile:
FROM ubuntu
ENV USER default
CMD ["start-application"]
a customized docker image, based on docker-image
FROM docker-image
ENV USER username
I want to overwrite the USER environment variable without changing the base image (before the application starts). Is that possible?
If you cannot build another image, as described in "Dockerfile Overriding ENV variable", you can at least modify the variable when starting the container with docker run -e.
See "ENV (environment variables)"
the operator can set any environment variable in the container by using one or more -e flags, even overriding those mentioned above, or already defined by the developer with a Dockerfile ENV
$ docker run -e "deep=purple" -e today --rm alpine env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=d2219b854598
deep=purple <=============

How to properly give an argument to the docker entrypoint when running a container (docker run ...)?

My goal is to properly share a docker image between 2 servers.
I need to pass the hostname when creating my containers.
How can I give an arg to the docker run command that will be taken into account by the entrypoint script?
You can use the -e option of the docker run command like this:
docker run -it -e ARG1=foo -e ARG2=bar ubuntu
In the previous example, we define 2 variables called ARG1 and ARG2.
For the hostname: when you specify the --hostname option of docker run, it sets the HOSTNAME env variable in the container, which can then be used.

Workaround to docker run "--env-file" supplied file not being evaluated as expected

My current setup for running a docker container is along these lines:
I've got a main.env file:
# Main
export PRIVATE_IP=`echo localhost`
export MONGODB_HOST="$PRIVATE_IP"
export MONGODB_URL="mongodb://$MONGODB_HOST:27017/development"
In my service file (upstart), I source this file . /path/to/main.env
I then call docker run with multiple -e for each of the environment variables I want inside of the container. In this case I would call something like: docker run -e MONGODB_URL=$MONGODB_URL ubuntu bash
I would then expect MONGODB_URL inside the container to equal mongodb://localhost:27017/development. Note that in reality `echo localhost` is replaced by a curl call to Amazon's API for the actual PRIVATE_IP.
This becomes unwieldy as the number of environment variables you need to give your container grows. The fine point here is that the environment variables need to be resolved at run time, for instance by a call to curl or by referring to other env variables.
The solution I was hoping to use is:
calling docker run with an --env-file parameter such as this:
# Main
PRIVATE_IP=`echo localhost`
MONGODB_HOST="$PRIVATE_IP"
MONGODB_URL="mongodb://$MONGODB_HOST:27017/development"
Then my docker run command would be significantly shortened to docker run --env-file=/path/to/main.env ubuntu bash (keep in mind I usually have around 12-15 environment variables).
This is where I hit my problem which is that inside the container none of the variables resolve as expected. Instead I end up with:
PRIVATE_IP=`echo localhost`
MONGODB_HOST="$PRIVATE_IP"
MONGODB_URL="mongodb://$MONGODB_HOST:27017/development"
I could circumvent this by doing the following:
Sourcing the main.env file.
Creating a file containing just the names of the variables I want (meaning docker would search for them in the environment).
Then calling docker run with this file as an argument to --env-file. This would work, but it would mean maintaining two files instead of one, which really wouldn't be much of an improvement over the current situation.
What I would prefer is to have the variables resolve as expected.
The closest question to mine that I could find is:
12factor config approach with Docker
Create a .env file, for example:
test=123
val=Guru
Execute the command:
docker run -it --env-file=.env bash
Inside the bash shell, verify using echo $test (it should print 123).
Both --env and --env-file set variables as-is and do not resolve nested variables.
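The verbatim behavior is easy to see outside Docker: reading KEY=VAL lines without shell evaluation keeps the backticks and $ references literal, which is essentially what --env-file does (a minimal sketch; the loop is an illustration, not Docker's actual parser):

```shell
#!/bin/sh
cat > main.env <<'EOF'
PRIVATE_IP=`echo localhost`
MONGODB_HOST="$PRIVATE_IP"
EOF
# like --env-file: split on the first '=', no evaluation of the value
while IFS='=' read -r key val; do
    printf '%s=%s\n' "$key" "$val"
done < main.env
```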
Solomon Hykes talks about configuring containers at run time and the various approaches. The one that should work for you is to volume-mount main.env from the host into the container and source it.
I just faced this issue as well; what solved it for me was specifying --env-file or -e KEY=VAL before the name of the container image. For example:
Broken:
docker run my-image --env-file .env
Fixed:
docker run --env-file .env my-image
An env file that is nothing more than KEY=VAL pairs can be processed by normal shell commands and appended to the environment; look at bash's set -a (allexport) option.
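The set -a approach would look like this (a sketch; the file contents mirror the question's main.env, using $(...) in place of backticks):

```shell
#!/bin/sh
cat > main.env <<'EOF'
PRIVATE_IP=$(echo localhost)
MONGODB_HOST="$PRIVATE_IP"
MONGODB_URL="mongodb://$MONGODB_HOST:27017/development"
EOF
set -a          # every variable assigned from here on is exported
. ./main.env    # the shell resolves the command substitution and nesting
set +a
echo "$MONGODB_URL"   # prints mongodb://localhost:27017/development
```

The now-exported variables can then be forwarded with a bare flag like docker run -e MONGODB_URL ..., which copies the value from the calling environment.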
What you can do is create a startup script that can be run when the container starts. So if your current docker file looks something like this
FROM ...
...
CMD command
Change it to
FROM ...
...
ADD start.sh start.sh
CMD ["sh", "start.sh"]
In your start.sh script do the following:
export PRIVATE_IP=`echo localhost`
export MONGODB_HOST="$PRIVATE_IP"
export MONGODB_URL="mongodb://$MONGODB_HOST:27017/development"
command
I had a very similar problem to this. If I passed the contents of the env file to docker as separate -e directives, everything ran fine; however, if I passed the file using --env-file, the container failed to run properly.
It turned out there were some spurious line endings in the file (I had copied it from Windows and ran docker on Ubuntu). When I removed them, the container ran the same with --env or --env-file.
I had this issue when using docker run in a separate run script run.sh file, since I wanted the credentials ADMIN_USER and ADMIN_PASSWORD to be accessible in the container, but not show up in the command.
Following the other answers and passing a separate environment file with --env or --env-file didn't work for my image (though it worked for the Bash image). What worked was creating a separate env file...
# env.list
ADMIN_USER='username'
ADMIN_PASSWORD='password'
...and sourcing it in the run script when launching the container:
# run.sh
source env.list
docker run -d \
-e ADMIN_USER=$ADMIN_USER \
-e ADMIN_PASSWORD=$ADMIN_PASSWORD \
image_repo/name:tag