How to load Docker environment variables in a container

I'm building an image that is based on an Ubuntu image with systemd. I need to start
TigerVNC as a service, which depends on some environment variables that I have defined in my
Dockerfile, like the password.
FROM ubuntu-systemd
ENV VNC_PW="some-password"
ENTRYPOINT ["/lib/systemd/systemd"]
The unit file for this service has a line that is:
ExecStart=/usr/sbin/runuser -l root -c "/some/script.sh"
Since systemd has its own environment, I don't have access to the environment variables
defined in my Dockerfile. I was expecting that running the script as root with
a login shell (the '-l' flag) would give me access to these variables, but it does not.
I know the variables I need are in /proc/1/environ, but I don't know how to load them, for
example by adding something to the .profile file for root.
Thank you.

It's a little bit tricky, but I can load the environment variables in the /some/script.sh invoked by the ExecStart property by including the following line:
export `xargs --null --max-args=1 echo < /proc/1/environ`
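
Note that the backtick version word-splits, so it breaks if any value contains spaces. A more robust sketch, assuming the script runs under bash (read -d '' is a bashism), reads the NUL-separated entries directly:

# Export each NUL-terminated NAME=value entry from PID 1's environment
while IFS= read -r -d '' kv; do
    export "$kv"
done < /proc/1/environ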

Related

How do I pass in configuration settings to a docker image for local development?

I'm working on a dotnet core docker container (not aspnet). I'd like to specify configuration options for it through appsettings.json. These values will eventually be filled in through environment variables in Kubernetes.
However, for local development, how do we easily pass in these settings without storing them in the container?
You can map local volumes into the container with docker run -v local_path:container_path.
If you're going to use Kubernetes, you can use a ConfigMap as well.
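For example, a minimal sketch for local development (the image name and in-container path are assumptions):

# Mount a local appsettings.json over the one baked into the image
docker run -v "$PWD/appsettings.json":/app/appsettings.json my-dotnet-image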
You can pass env variables while running the container with -e flag of the command docker run.
With this method, you’ll have to pass each variable in the command line. For example, docker run -e VAR1=value1 -e VAR2=value2
If this gets cumbersome, you can write these values to an env file and use this file like so: docker run --env-file=filename
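For instance, a sketch with hypothetical file and variable names:

# vars.env -- one VAR=value per line; no quoting or shell syntax
VAR1=value1
VAR2=value2

docker run --env-file=vars.env my-image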
For reference, you can check out the official docs.

Docker environment variables from a file

I am trying to use one Dockerfile for both production and development. The only difference between the two is the environment variables I set. Therefore I would like some way to import the environment variables from a file. Before using Docker, I would simply do the following:
. ./setvars
./main.py
However, if I replace ./main.py with the Docker equivalent
. ./setvars
docker run .... ./main.py
then the variables will be set on the host and not be accessible from inside the Docker container. Of course, a quick and dirty hack would be to make a file with
#!/bin/bash
. ./setvars
./main.py
and run that in the container. That would, however, be really annoying, since I have lots of scripts I would like to run (with the same environment variables) and would then have to create an extra script for every one of them.
Is there any other solution to get my environment variables inside Docker without using a different Dockerfile or the method described above?
Your best option is to use either the -e flag or the --env-file option of the docker run command.
The -e flag allows you to specify key/value pairs of env variables,
for example:
docker run -e ENVIRONMENT=PROD
You can use the -e flag several times to define multiple env
variables. For example, the Docker registry itself is configurable
with -e flags; see:
https://docs.docker.com/registry/deploying/#running-a-domain-registry
The --env-file option allows you to specify a file, but each line of the file
must be of the form VAR=VAL.
Full documentation:
https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables-e-env-env-file
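In your case, if setvars contains plain VAR=VAL lines (no export statements or shell expansions, since --env-file does not interpret shell syntax), you may be able to reuse it directly; a sketch, with my-image as a placeholder:

docker run --env-file=./setvars my-image ./main.py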

Can I run a bash script from a file in a separate docker volume before a container starts?

I have a situation where I need to source a bash environment file that lives in a separate volume (pulled in via volumes_from in docker-compose) when a container starts, so that all future commands run on the container will execute under that bash environment (it runs some scripts and sets a lot of dynamic variables pulled in from other places). The reason I'm using a volume instead of just adding this command directly to the image is that the environment file I need to include is outside the Dockerfile context, and Dockerfiles don't support that.
I tried adding a source /path/to/volume/envfile line to the root user's .bashrc file in the hope that it would be run when the container started, but that didn't work. I'm assuming that's because the volumes aren't actually mounted until after the container / shell has started and the .bashrc commands have already run (which makes sense).
Does anyone have any idea on how I can accomplish something like this? I'm open to alternative methods, however the one thing I can't change here is moving the file I need inside of the Docker context, as that would break quite a number of other things.
My (slightly edited) Dockerfile and docker-compose.yml files: https://gist.github.com/joeellis/235d90799eb647ab00ec
EDIT: And as a test, I'm trying to run rake db:create:all on the container, like docker-compose run app rake db:create:all, which is returning an error that the environment file I need cannot be found / loaded. Interestingly enough, if I shell into the container and run the command, it all seems to work just fine. So maybe when a container is given a command via run, it doesn't necessarily open up a shell, but uses something else?
The problem is that the shell within which your /src/app/bin/start-app is run is not an interactive shell => .bashrc is not read!
You can fix this in two steps:
Source the env file from /root/.profile instead, by adding this to your Dockerfile:
RUN echo "source /src/puppet/path/to/file/env/bootstrap" >> /root/.profile
And also run your command as a login shell (sh is bash anyhow for you via the hack in the Dockerfile :) ) by using
command: "sh -lc '/src/app/bin/start-app'"
as the command. This should work fine :)
The problem really is just that the file is never sourced, because you're running in a non-interactive shell when running via the docker command instruction.
It works when you shell into the container because, bam, you get an interactive shell that sources that file :)

Is it possible to customize environment variable by linking two docker containers?

I've created a docker image for my database server and one for the web application. Using the documentation, I'm able to link both containers using environment variables as follows:
value="jdbc:postgresql://${DB_PORT_5432_TCP_ADDR}:${DB_PORT_5432_TCP_PORT}/db_name"
It works fine now, but it would be better if the environment variables were more general and did not contain a static port number. Something like:
value="jdbc:postgresql://${DB_URL}:${DB_PORT}/db_name"
Is there any way to link the environment variables? For example, by using the ENV instruction in the Dockerfile (ENV DB_URL=$DB_PORT_5432_TCP_ADDR), or by using the -e argument when running the image (docker run ... -e DB_URL=$DB_PORT_5432_TCP_ADDR docker_image)?
Without building this kind of functionality into your Docker startup shell scripts or other orchestration mechanism, this is not possible at the moment. You do mention a couple of workarounds. However, the problem, at least with using -e DB_URL=$DB_PORT_5432_TCP_ADDR in your docker run command, is that $DB_PORT_5432_TCP_ADDR is only set inside the linked container, not in the shell on the host, so it expands to nothing when you run the image. Typically, this is what your orchestration layer is for: service discovery and passing this kind of data among your containers. There is at least one workaround mentioned here on SO that involves constructing a special shell script that you put in your CMD or ENTRYPOINT directives and that maps the environment variables before starting the container's main process.
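A minimal sketch of that entrypoint approach (the script name, the generic variable names, and the final CMD are illustrative assumptions, not from the original answer):

#!/bin/sh
# entrypoint.sh (hypothetical): copy the link-generated variables into
# the generic names the application expects, then hand off to the CMD.
export DB_URL="$DB_PORT_5432_TCP_ADDR"
export DB_PORT="$DB_PORT_5432_TCP_PORT"
exec "$@"

And in the Dockerfile:

ENTRYPOINT ["/entrypoint.sh"]
CMD ["/start-webapp.sh"]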

How to send files as arguments of docker commands?

I'm not sure that I'm trying to do this the right way, but I would like to use docker.io as a way to package some programs that need to be run from the host.
However, these applications take filenames as arguments and need at least read access to them. Some other applications generate files as output, and the user expects to retrieve those files.
What is the docker way of dealing with files as program parameters?
Start Docker with a mounted volume and use this directory to manipulate files.
See: https://docs.docker.com/engine/tutorials/dockervolumes/
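For example, a sketch with hypothetical image and tool names:

# Mount the current host directory at /data inside the container;
# the program can read its input there and write its output back to the host
docker run -v "$PWD":/data my-image mytool /data/input.txt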
If you have apps that require args when they're run, then you can just inject your parameters as environment variables when you run your docker container
e.g.
docker run -e ENV_TO_INJECT=my_value .....
Then in your entrypoint (or cmd) make sure you just run a shell script
e.g. (in Dockerfile)
CMD["/my/path/to/run.sh"]
Then in your run.sh file that gets run at container launch you can just access the environment variables
e.g.
./runmything.sh "$ENV_TO_INJECT"
Would that work for you?
