Docker: share environment variables using volumes

How can I share environment variables since the --link feature was deprecated?
The Docker documentation (https://docs.docker.com/network/links/) states
Warning: The --link flag is a legacy feature of Docker. It may
eventually be removed. Unless you absolutely need to continue using
it, we recommend that you use user-defined networks to facilitate
communication between two containers instead of using --link. One
feature that user-defined networks do not support that you can do with
--link is sharing environment variables between containers. However, you can use other mechanisms such as volumes to share environment
variables between containers in a more controlled way.
But how do I share environment variables by using volumes? I did not find anything about environment variables in the volumes section.
The problem I have is that I want to set a database password as an environment variable when I start the container. Another container loads data into the database, and to do that it needs to connect to the database and provide the credentials. So far the loading container discovered the password on its own by reading the environment variable. How do I do that now without --link?

Generally, you do it by explicitly providing the same environment variable to other containers. This is easy if you're using a docker-compose.yml to manage your containers, because then you can do this:
version: "3"
services:
  database:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: $MYSQL_ROOT_PASSWORD
  frontend:
    image: webserver
    environment:
      MYSQL_ROOT_PASSWORD: $MYSQL_ROOT_PASSWORD
Then if you set MYSQL_ROOT_PASSWORD in your .env file, the same value will be provided to both the database and frontend container. If you're not using docker-compose, you can still simplify things by using an environment file. Create a file named, e.g., database.env that contains:
MYSQL_ROOT_PASSWORD=secret
Then point your containers at that using docker run --env-file database.env ....
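For example (a sketch; the image names and the user-defined network are placeholders), both containers can be started from the same env file:
docker network create appnet
docker run -d --name database --network appnet --env-file database.env mysql
docker run -d --name loader --network appnet --env-file database.env my-loader-image
Both containers now see MYSQL_ROOT_PASSWORD, and the loader can reach the database over the user-defined network by its container name, which replaces what --link used to do.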
You can't share environment variables using volumes, but you can of course share files. So another option would be to have your database container write a file containing the password to a shared volume, and then read that in your other containers.
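A rough sketch of that file-based approach (the paths, image names, and load-data.sh script are illustrative assumptions, not from the question):
docker volume create dbsecrets
# start the database with the password and a shared volume
docker run -d --name database -e MYSQL_ROOT_PASSWORD=secret -v dbsecrets:/run/secrets mysql
# write the password into the shared volume from inside the database container
docker exec database sh -c 'echo "$MYSQL_ROOT_PASSWORD" > /run/secrets/db_password'
# the loading container mounts the same volume read-only and reads the file
docker run --rm -v dbsecrets:/run/secrets:ro my-loader-image \
  sh -c 'MYSQL_ROOT_PASSWORD="$(cat /run/secrets/db_password)" ./load-data.sh'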

Related

Failed to read environment variables from the file declared in env_file

In my docker-compose.yml, I defined two services, app and db.
version: "3.7"
services:
app:
image: my_app
container_name: my-app
ports:
- ${MY_PORT}:${MY_PORT}
env_file:
- ./app.env
...
depends_on:
- db
environment:
- DATABASE_URL=${DB_URL}
db:
image: my_db
container_name: my-db
env_file:
- ./db.env
ports:
- ${DB_PORT}:${DB_PORT}
As you can see above, I have defined two env files, app.env and db.env in the env_file option of app and db services.
app.env:
MY_PORT=8081
db.env:
DB_PORT=4040
DB_URL=postgres://myapp:app@db:4040/myapp
I want to check if my docker-compose can successfully read the environment variables. So, I run the command docker-compose config. However the output is
$ docker-compose config
WARNING: The MY_PORT variable is not set. Defaulting to a blank string.
WARNING: The DB_URL variable is not set. Defaulting to a blank string.
WARNING: The DB_PORT variable is not set. Defaulting to a blank string.
ERROR: The Compose file './docker-compose.yml' is invalid because:
services.app.ports is invalid: Invalid port ":", should be [[remote_ip:]remote_port[-remote_port]:]port[/protocol]
services.db.ports is invalid: Invalid port ":", should be [[remote_ip:]remote_port[-remote_port]:]port[/protocol]
Why can't my docker-compose read the environment variables from the env files I declared in the env_file option of my docker-compose.yml?
I also have another question: I understand that normally the env file shouldn't be version controlled, since it could contain credentials. How should the env file be used for different environments, e.g. development, staging and production? Imagine each environment has different values for those variables. Could someone please provide some examples?
The reason this is failing is that the environment variables you are defining in the external app.env and db.env files, specified in the env_file option, are only set inside the containers that are started. They are not used for variable expansion inside the docker-compose.yml file when it is parsed by docker-compose.
This is easily confused with supplying a file named .env in the same location as the docker-compose.yml file. docker-compose looks for a file specifically named .env next to the docker-compose.yml file (or next to the file you specify with the -f switch) and uses the environment variables in that file for variable expansion in the docker-compose.yml file before parsing it.
In other words:
The env_file option
Will set environment variables inside your container; it is just a convenience feature that allows you to externalise the environment variables from the docker-compose.yml file.
Environment variables in these files will NOT be used for variable expansion in the docker-compose.yml file before it is parsed by docker-compose.
The .env file
Will be used for environment variable expansion inside the docker-compose.yml file before parsing.
Will NOT set environment variables inside the started container.
Suggested solution to the first question
If you migrate your values into a single .env file and place it in the same directory as your docker-compose.yml file, this should work.
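For example, merging the values from app.env and db.env into a single .env next to your docker-compose.yml lets docker-compose config resolve the ports:
MY_PORT=8081
DB_PORT=4040
DB_URL=postgres://myapp:app@db:4040/myapp
You can then still list app.env and db.env (or the same .env) under env_file if the containers themselves also need those variables at runtime.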
Second question
As I understand your second question, you are asking how the .env file, or the env_file option, should be used to configure your services for your different environments.
I do not think there is a single, simple answer to this. It can be solved in a number of ways, and it also depends on what you are deploying to: is it Kubernetes? Docker Swarm? Or just a single-node Docker host?
Kubernetes and Docker swarm have different means of helping you out with this.
Kubernetes secrets
Docker swarm secrets
Those are highly secure solutions, where access to the secrets can be limited, and the secrets will not be seen by developers or operators that do not have access.
But for a single-node Docker host not operating in swarm mode (secrets only work in swarm mode), there really aren't many fancy options. You will have to manage this fairly manually in your build and deploy pipelines, as far as I am aware.
You are right that the sensitive configuration of your services should not go in the same repository as the service definition. Things like the root password for a database, or credentials to your service discovery service for your production environment, do not need to live next to the sources.
Traditionally, another repository would contain this, giving you the opportunity to limit the group of people that have access. The build/deployment server or service will check out the new revision of your service, perhaps build it, then check out the configuration repository and start the services with the configuration from there, making sure to remove the configuration files afterwards.
That would be the solution I would recommend for a single node docker host deployment regime - two repositories, and some scripting that ensures that the correct .env file is put in place during deployment, and removed again.
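A rough sketch of what that scripting could look like (the repository names and paths are made up for illustration):
# deploy.sh -- run by the build/deployment server
git clone git@example.com:myorg/my-service.git
git clone git@example.com:myorg/my-service-config.git
cp my-service-config/production/.env my-service/.env
(cd my-service && docker-compose up -d)
rm my-service/.env   # remove the sensitive configuration again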
I hope this is helpful?

Is it possible to read network name in docker-compose from env?

I'm trying to not hard-code my network name since it's for an open source project (and I have multiple instances running on the same server for different apps).
Is it possible to use environment variables when defining the network?
This doesn't work:
networks:
  ${DOCKER_NETWORK_NAME}:
    name: ${DOCKER_NETWORK_NAME}
Compose has an internal notion of a project name and most Docker object names are prefixed with that name. For example, if you are in a directory named foo and your Compose file has
networks:
  something:
and you run docker network ls, you will see a network named foo_something.
I would generally recommend not manually specifying the names of networks, volumes, or containers. You can choose any name you want to be used within the docker-compose.yml file and it will be scoped to that file.
Conversely, this requires that different installations of the system either be in directories with different names, set the COMPOSE_PROJECT_NAME environment variable (possibly in a .env file), or consistently use the docker-compose -p flag.
In the very specific case of networks, Compose provides a network named default which is the default if you don't actually have networks: blocks. There's not really any downside to using this, and most applications won't need multiple internal networks. I'd just leave out networks: entirely.
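A minimal sketch of that approach (the project name myapp and the web service are placeholders):
# .env next to docker-compose.yml
COMPOSE_PROJECT_NAME=myapp

# docker-compose.yml with no networks: block
services:
  web:
    image: nginx
With this, Compose creates and uses a network named myapp_default, so different installations only need different project names, not different network names in the file.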

Running application within Docker containers

Does there need to be a separate Dockerfile for the database and for the service itself if you want to run an application within Docker containers?
It's not quite clear where to specify the external database and server name. Is it in the .env file?
https://github.com/gurock/testrail-docker/blob/master/README.md
http://docs.gurock.com/testrail-admin/installation-docker/migrating-upgrading-testrail
Yes, you should run the application and the database in separate containers.
It's not quite clear where to specify the external database and server
name, is it in the .env file?
You have two options for specifying environment variables:
.env file
Environment variables
Place the .env file in the root of your docker-compose project and reference it in your docker-compose file:
services:
  api:
    image: 'node:6-alpine'
    env_file:
      - .env
Using Environment
environment:
  MYSQL_USER: "${DB_USER:-testrail}"
  MYSQL_PASSWORD: "${DB_PWD:-testrail}"
  MYSQL_DATABASE: "${DB_NAME:-testrail}"
  MYSQL_ROOT_PASSWORD: "${DB_ROOT_PWD:-my-secret-password}"
  MYSQL_ALLOW_EMPTY_PASSWORD: 'false'
does it need to be separate Dockerfile for a database and service
It is better to use the official database image, and for the service you can customize the image, but the link you provided is a good place to start with docker-compose.yml.
Also, the docker-compose documentation is already given in that link.
Theoretically you can have an application and the database running in the same container, but this will have all kinds of unintended consequences; for example, if the database falls over the application might still be running, but Docker won't notice that the database fell over if it is not aware of it.
Something to wrap your mind around when running the database in a container is data persistence: the data should survive even when the container is killed or deleted, so that once you recreate the container it can still access the databases and other data.
Here is a good article explaining volumes in docker in the context of running mysql in its own container with a volume to hold the data:
https://severalnines.com/database-blog/mysql-docker-containers-understanding-basics
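A rough compose sketch of that separation (the image names, credentials, and volume here are illustrative, not taken from the linked repository):
services:
  app:
    image: my-app-image
    depends_on:
      - db
    environment:
      DB_HOST: db
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - db-data:/var/lib/mysql   # named volume so the data survives container removal
volumes:
  db-data: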
In the context of the repo that you linked, it seems there is a separate Dockerfile for the database, and you have the option to use either MariaDB or MySQL; see here:
https://github.com/gurock/testrail-docker/tree/master/Dockerfiles/testrail_mariadb
and here:
https://github.com/gurock/testrail-docker/tree/master/Dockerfiles/testrail_mysql

Makefile: extract an env variable from a docker container and use it in another docker container

I have an app that runs on several Docker containers. To simplify my problem, let's say I have three containers: one for MySQL and two for two instances of the API (sharing the same volume where the code is, but with a different env specifying different database settings), as configured in the following docker-compose.yml:
services:
  api-1:
    image: mynamespace/my-image-name:1.0
    environment:
      DB_NAME: db_api_1
  api-2:
    image: mynamespace/my-image-name:1.0
    environment:
      DB_NAME: db_api_2
In a Makefile I have rules for deploying the containers and installing the database for each of my api instances.
What I am trying to achieve is to create a make rule that dumps a database given an env. Since I have no MySQL client installed on my api instances, I thought there should be a way to extract the env variables I need (with printenv VARNAME) from an api container and then use them in the database container.
Does anyone know how this could be achieved?
Assuming that it's an environment variable that you set using the -e option to docker run, you could do something like this:
docker exec api_container sh -c 'echo $VARNAME'
If it is an environment variable that was set inside the container, e.g. from a script, then you're mostly out of luck. You could of course inspect /proc/<pid>/environ, but that's hacky and I wouldn't recommend it.
It also sounds as if you would benefit from using something like docker-compose to manage your containers.
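A rough Makefile sketch of that idea (the container names api-1 and db, the mysql client in the database container, and the root credentials are assumptions, not from the question):
# recipe lines must be indented with a tab
dump-db-api-1:
	DB_NAME=$$(docker exec api-1 printenv DB_NAME) && \
	docker exec db sh -c "exec mysqldump -uroot -p\"\$$MYSQL_ROOT_PASSWORD\" $$DB_NAME" > db_api_1.sql
The DB name is read from the api container on the host side, while $MYSQL_ROOT_PASSWORD is left escaped so it is expanded inside the database container.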

Managing dev/test/prod environments with Docker

There seems to be sparse and conflicting information around on this subject. I'm new to Docker and need some help. I have several Docker containers that run an application; some require different config files for local development than they do for production. I don't seem to be able to find a neat way to automate this with Docker.
My containers that include custom config are Nginx and FreeRADIUS, and my code/data container runs Laravel, which therefore requires a .env.php file (L4.2 at the moment).
I have tried Docker's environment variables in docker-compose:
docker-compose.yml:
freeradius:
  env_file: ./env/freeradius.env
./env/freeradius.env:
DB_HOST=123.56.12.123
DB_DATABASE=my_database
DB_USER=me
DB_PASS=itsasecret
Except I can't pick those variables up in /etc/freeradius/mods-enabled/sql where they need to be.
How can I get Docker to run as a 'local' container with local config, or as a 'production' container with production config, without having to actually build different containers, and without having to attach to each container and configure it manually? I need this automated, as it will eventually be used on quite a large production environment with a large cluster of servers running many instances.
Happy to learn Ansible if this is how people achieve this.
If you can't use environment variables to configure the application (which is my understanding of the problem), then the other option is to use volumes to provide the config files.
You can use either "data volume containers" (which are containers with the sole purpose of sharing files and directories) with volumes_from, or you can use a named volume.
Data Volume container
If you go with the "data volume container" route, you would create a container with all the environment configuration files. Every service that needs a file uses volumes_from: - configs. In dev you'd have something like:
configs:
  build: dev-configs/
freeradius:
  volumes_from:
    - configs
The dev-configs directory will need a Dockerfile to build the image, which will have a bunch of VOLUME directives for all the config paths.
For production (and other environments) you can create an override file which replaces the configs service with a different container:
docker-compose.prod.yml:
configs:
  build: prod-configs/
You'll probably have other settings you want to change between dev and prod, which can go into this file as well. Then you run compose with the override file:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
You can learn more about this here: http://docs.docker.com/compose/extends/#multiple-compose-files
Named Volume
If you go with the "named volume" route, it's a bit easier to configure. On dev you create a volume with docker volume create thename and put some files into it. In your config you use it directly:
freeradius:
  volumes:
    - thename:/etc/freeradius/mods-enabled/sql
In production you'll either need to create that named volume on every host, or use a volume driver plugin that supports multihost (I believe flocker is one example of this).
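Populating a named volume on a host can be done with a throwaway container, roughly like this (the local configs/ directory is an assumption):
docker volume create thename
docker run --rm -v thename:/dest -v "$(pwd)/configs:/src:ro" alpine sh -c 'cp -a /src/. /dest/'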
Runtime configs using Dockerize
Finally, another option that doesn't involve volumes is to use https://github.com/jwilder/dockerize which lets you generate the configs at runtime from environment variables.
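As a sketch of how that looks (the template path, the FreeRADIUS-style keys, and the start command are illustrative, not an exact module config), a template references environment variables and dockerize renders it before starting the main process:
# sql.tmpl
server = "{{ .Env.DB_HOST }}"
login = "{{ .Env.DB_USER }}"
password = "{{ .Env.DB_PASS }}"

# container command
dockerize -template /templates/sql.tmpl:/etc/freeradius/mods-enabled/sql freeradius -f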
