In my docker-compose.yml, I defined two services, app and db.
version: "3.7"
services:
app:
image: my_app
container_name: my-app
ports:
- ${MY_PORT}:${MY_PORT}
env_file:
- ./app.env
...
depends_on:
- db
environment:
- DATABASE_URL=${DB_URL}
db:
image: my_db
container_name: my-db
env_file:
- ./db.env
ports:
- ${DB_PORT}:${DB_PORT}
As you can see above, I have defined two env files, app.env and db.env, in the env_file options of the app and db services.
app.env:
MY_PORT=8081
db.env:
DB_PORT=4040
DB_URL=postgres://myapp:app#db:4040/myapp
I want to check whether my docker-compose can successfully read the environment variables, so I run the command docker-compose config. However, the output is:
$ docker-compose config
WARNING: The MY_PORT variable is not set. Defaulting to a blank string.
WARNING: The DB_URL variable is not set. Defaulting to a blank string.
WARNING: The DB_PORT variable is not set. Defaulting to a blank string.
ERROR: The Compose file './docker-compose.yml' is invalid because:
services.app.ports is invalid: Invalid port ":", should be [[remote_ip:]remote_port[-remote_port]:]port[/protocol]
services.db.ports is invalid: Invalid port ":", should be [[remote_ip:]remote_port[-remote_port]:]port[/protocol]
Why can't my docker-compose read the environment variables from the env files I declared in the env_file option of my docker-compose.yml?
Besides, I have another question: I understand that the env file normally shouldn't be version controlled, since it could contain credentials. How should the env file normally be used for different environments, e.g. development, staging, and production? Imagine that each environment has different values for those variables. Could someone please provide some examples?
The reason this is failing is that the environment variables you are defining in the external app.env and db.env files, referenced via the env_file option, are only set inside the containers that are started. They are not used for variable expansion inside the docker-compose.yml file when it is parsed by docker-compose.
This is easily confused with the option of supplying a file named .env in the same location as the docker-compose.yml file. docker-compose looks for a file specifically named .env next to the docker-compose.yml file (or next to the file you specify with the -f switch) and uses the environment variables in that file for variable expansion in the docker-compose.yml file before parsing it.
In other words:
The env_file option
Will set environment variables inside your container; it is just a convenience feature that allows you to externalise the environment variables from the docker-compose.yml file.
Environment variables in these files will NOT be used for variable expansion in the docker-compose.yml file before it is parsed by docker-compose.
The .env file
Will be used for environment variable expansion inside the docker-compose.yml file before parsing.
Will NOT set environment variables inside the started container.
Suggested solution to the first question
If you migrate your values into a single .env file and place it in the same directory as your docker-compose.yml file, this should work.
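For illustration, that single .env (placed next to docker-compose.yml, with the values copied from your two files above) could look like this:
# .env - read by docker-compose for variable expansion before parsing
MY_PORT=8081
DB_PORT=4040
DB_URL=postgres://myapp:app#db:4040/myapp
The app.env and db.env files can still stay in the env_file options if the containers themselves need those values at runtime.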
Second question
As I understand your second question, you are asking how the .env file, or the env_file option should be used to configure your services for your different environments.
I do not think that there is a simple, single answer to this. It can be solved in a number of ways. But it also depends on what you are deploying to: is it Kubernetes, Docker Swarm, or just a single-node docker host?
Kubernetes and Docker swarm have different means of helping you out with this.
Kubernetes secrets
Docker swarm secrets
Those are highly secure solutions, where the set of people who can operate on the secrets can be limited, and the secrets will not be seen by developers or operators that do not have access.
But for a single-node docker host not operating in swarm mode (secrets only work in swarm mode), there really aren't a lot of fancy options. You will have to manage this fairly manually in your build and deploy pipelines, as far as I am aware.
You are right that the sensitive configuration of your services, should not go in the same repository as the service definition. Things like root password for a database, or credentials to your service discovery service for your production environment do not need to live next to the sources.
Traditionally, another repository would contain this, giving you the opportunity to limit the group of people that have access. The build/deployment server/service would check out the new revision of your service, build it perhaps, then check out the configuration repository and start the services with the configurations from there. And make sure to remove the configuration files afterwards.
That would be the solution I would recommend for a single node docker host deployment regime - two repositories, and some scripting that ensures that the correct .env file is put in place during deployment, and removed again.
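A rough sketch of what such a deployment script might look like (the repository URL, paths, and environment names here are hypothetical):
#!/bin/sh
# Hypothetical deploy script: fetch configuration from a separate, access-restricted repository,
# drop the environment-specific .env next to docker-compose.yml, deploy, then clean up.
set -e
ENVIRONMENT="$1"                                   # e.g. staging or production
git clone git@example.com:ops/service-config.git   # hypothetical config repository
cp "service-config/$ENVIRONMENT/.env" ./.env       # pick the right environment's values
docker-compose up -d
rm -f ./.env && rm -rf service-config              # don't leave credentials lying around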
I hope this is helpful?
Related
I need some help with the following template:
services:
  nginx:
    image: nginx
    restart: unless-stopped
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.nginx-${COMPOSE_PROJECT_NAME}.rule=Host(`fuu.bar`)"
    networks:
      - treafik
My goal is to create a template which I can use, e.g. in portainer, with almost zero configuration.
I thought that this variable would be available for interpolation, but the expression ${COMPOSE_PROJECT_NAME} results in an empty string when I run docker-compose config:
services:
  nginx:
    image: nginx
    restart: unless-stopped
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.nginx-.rule=Host(`fuu.bar`)"
    networks:
      - treafik
Are there any default environment variables provided by docker-compose which I can use for environment interpolation?
---- Update
I use traefik (v2) as a reverse proxy. To make the containers available through traefik, you need to define routers on every service. The router name has to be unique. Let's imagine you deploy 2 or more stacks of the above template. The router name has to be unique for all services across all stacks. Because I'm a lazy guy, I tried to simply integrate the environment variable COMPOSE_PROJECT_NAME (which I know is already unique in my setup because every stack must have a unique name). But the variable is not available when deploying the stack.
Of course, I could simply define the variable COMPOSE_PROJECT_NAME myself in a .env file, but I hoped that there are some default environment variables provided by docker.
You can use environment variables to pass strings into your Compose file.
There are several ways, covered in the Docker documentation. For example:
You can set default values for any environment variables referenced in the Compose file, or used to configure Compose, in an environment file named .env. The .env file path is as follows:
Starting with +v1.28, the .env file is placed at the base of the project directory.
The project directory can be explicitly defined with the --file option or the COMPOSE_FILE environment variable. Otherwise, it is the current working directory where the docker compose command is executed (+1.28).
For previous versions, it might have trouble resolving the .env file with --file or COMPOSE_FILE. To work around it, it is recommended to use --project-directory, which overrides the path for the .env file. This inconsistency is addressed in +v1.28 by limiting the filepath to the project directory.
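Applied to the traefik template above, one option is simply a .env file next to the compose file that sets the variable yourself (the value here is just an assumed example):
# .env - picked up automatically by docker-compose for variable substitution
COMPOSE_PROJECT_NAME=mystack
Note that setting COMPOSE_PROJECT_NAME this way also overrides the project name docker-compose would otherwise derive, so each stack would need its own value.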
How can I share environment variables since the --link feature was deprecated?
The Docker documentation (https://docs.docker.com/network/links/) states
Warning: The --link flag is a legacy feature of Docker. It may eventually be removed. Unless you absolutely need to continue using it, we recommend that you use user-defined networks to facilitate communication between two containers instead of using --link. One feature that user-defined networks do not support that you can do with --link is sharing environment variables between containers. However, you can use other mechanisms such as volumes to share environment variables between containers in a more controlled way.
But how do I share environment variables by using volumes? I did not find anything about environment variables in the volumes section.
The problem I have is that I want to set a database password as an environment variable when I start the container. Another container loads data into the database and needs to connect to it and provide the credentials. So far, the loading container discovered the password on its own by reading the environment variable. How do I do that now without --link?
Generally, you do it by explicitly providing the same environment variable to other containers. This is easy if you're using a docker-compose.yml to manage your containers, because then you can do this:
version: "3"
services:
  database:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: $MYSQL_ROOT_PASSWORD
  frontend:
    image: webserver
    environment:
      MYSQL_ROOT_PASSWORD: $MYSQL_ROOT_PASSWORD
Then if you set MYSQL_ROOT_PASSWORD in your .env file, the same value will be provided to both the database and frontend container. If you're not using docker-compose, you can still simplify things by using an environment file. Create a file named, e.g., database.env that contains:
MYSQL_ROOT_PASSWORD=secret
Then point your containers at that using docker run --env-file database.env ....
You can't share environment variables using volumes, but you can of course share files. So another option would be to have your database container write a file containing the password to a shared volume, and then read that in your other containers.
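A minimal sketch of that shared-volume idea (the service names, volume name, and file path are assumptions, and the database entrypoint would have to be the piece that actually writes the file):
version: "3"
services:
  database:
    image: mysql
    volumes:
      - shared-creds:/run/creds       # an entrypoint script would write e.g. /run/creds/db_password here
  loader:
    image: my-loader                  # hypothetical data-loading image
    volumes:
      - shared-creds:/run/creds:ro    # reads the password file written by the database container
volumes:
  shared-creds: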
I have written a docker-compose.yml file to create a multi-container app with nodejs and mongodb (so 2 containers), and I want to make some options configurable, such as the server address and port.
To do that, I have written what follows in docker-compose.yml to set them as env variables:
..
web:
  environment:
    - PORT=3000
    - ADDRESS=xxx.xxx.xxx.xxx
..
Within the source code of my application, I use process.env.PORT and process.env.ADDRESS to refer to these variables.
But what can I do if I want to change those values, for example set PORT=3001?
Do I have to use docker-compose build and docker-compose up again and rebuild the whole application, including the mongodb container?
Do I have to use docker-compose build and docker-compose up again and rebuild the whole application, including the mongodb container?
Not build, just up. They are runtime options, not build options. Changing them in the docker-compose.yml file and then doing a docker-compose up again should recreate the containers with the new environment variables.
Alternatively, you can specify environment variables outside the docker-compose.yml file to make for easier changes:
One of these methods is to use a .env file in the same folder that you execute docker-compose from (see https://docs.docker.com/compose/env-file/ for information on it).
Another option is to interpolate environment variables from your shell. You could specify them such as - PORT=${APP_PORT} and then export APP_PORT=3000 on your shell before running docker-compose. See https://docs.docker.com/compose/environment-variables/ for more information.
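Putting that together, a small sketch of the shell-interpolation variant (APP_PORT and APP_ADDRESS are just illustrative names):
# docker-compose.yml excerpt
web:
  environment:
    - PORT=${APP_PORT}
    - ADDRESS=${APP_ADDRESS}

# on the shell, before (re)creating the containers
export APP_PORT=3001
export APP_ADDRESS=192.168.1.10
docker-compose up -d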
I'm trying to get a docker-compose file working with multiple .env files, and I'm not having any luck. I'm trying to set up three .env files:
global settings that are the same across all container instances
environment-specific settings (stuff just for test or dev)
local settings - overridable things that a developer might need to change in case they have conflicts with, say, a port number
My docker-compose.yml file looks like this:
version: '2'
services:
  db:
    env_file:
      - ./.env
      - ./.env.${ENV}
      - ./.env.local
    image: postgres
    ports:
      - ${POSTGRES_PORT}:5432
.env looks like this:
POSTGRES_USER=myapp
and the .env.development looks like this:
POSTGRES_PASSWORD=supersecretpassword
POSTGRES_HOST=localhost
POSTGRES_PORT=25432
POSTGRES_DB=myapp_development
.env.local doesn't exist in this case.
After running ENV=development docker-compose up, I receive the following output:
$ ENV=development docker-compose up
WARNING: The POSTGRES_PASSWORD variable is not set. Defaulting to a blank string.
WARNING: The POSTGRES_DB variable is not set. Defaulting to a blank string.
WARNING: The POSTGRES_PORT variable is not set. Defaulting to a blank string.
ERROR: The Compose file './docker-compose.yml' is invalid because:
services.db.ports is invalid: Invalid port ":5432", should be [[remote_ip:]remote_port[-remote_port]:]port[/protocol]
From that error message, it looks like none of my environment variables are being used. I just upgraded to the newest available docker-compose as well - same errors:
$ docker-compose --version
docker-compose version 1.8.0-rc1, build 9bf6bc6
Any ideas here? Would be nice to have a single docker-compose.yml that would work across multiple environments.
In order to apply different/multiple env_files depending on the running environment, such as development/staging/production, I think a better way with docker-compose is to use multiple Compose YAML files.
For example:
1. Start with a base file that defines the canonical configuration for the services.
docker-compose.yml
web:
  image: example/my_web_app:latest
  env_file:
    - .env
2. Add an override file for development which, as its name implies, can contain configuration overrides for existing services or entirely new services.
docker-compose.override.yml
web:
  build: .
  volumes:
    - '.:/code'
  ports:
    - 8883:80
  env_file:
    - .env.dev
When you run docker-compose up it reads the overrides automatically.
3. Create another override file for the production environment.
docker-compose.prod.yml
web:
  ports:
    - 80:80
  env_file:
    - .env.prod
To deploy with this production Compose file, you can run:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up
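If typing both -f flags every time gets tedious, the COMPOSE_FILE environment variable mentioned above can list the files instead (separated by a colon on Linux/macOS), which should be equivalent:
export COMPOSE_FILE=docker-compose.yml:docker-compose.prod.yml
docker-compose up -d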
Note
My Docker version:
$ docker -v
Docker version 18.06.1-ce, build e68fc7a
$ docker-compose -v
docker-compose version 1.22.0, build f46880fe
Reference: https://docs.docker.com/compose/extends/
Keep in mind that there are 2 different environments where you are defining variables: the host machine where you are executing the docker-compose command, and the container itself (running the db service in your case).
Your docker-compose.yml file has access to your host's environment variables. Hence ENV is reachable from the docker-compose command, but the variables in your .env files are not.
On the contrary, the value of ENV is not reachable inside the container, but all variables defined in your .env files will be.
I don't know if you really need your db container to access the variables defined in your .env.development. But it at least seems that your host machine needs the content of that file, so that when the docker-compose command is called, the POSTGRES_PORT variable is defined.
To fix your specific problem you would need to define the environment variables on your host machine too, not only for the container. You could do something like this:
# Set ENV for the host shell (exported so docker-compose can see it)
export ENV=development
# Also set the variables from the env file on the host; set -a exports everything that is sourced
set -a; source ./.env.$ENV; set +a
# POSTGRES_PORT defined in .env.development is used here for substitution
docker-compose up
# Since env_file also contains .env.development, the variables will be reachable from the container.
Hope that helps.
There is a misconception regarding the .env file and the env_file option in docker-compose.yml, as the distinction is quite subtle. Shin points it out very nicely in the GitHub issue "docker-compose doesn't use env_file". I will just quote his summary:
Variable substitution in your docker-compose.yml file will be pulled (in decreasing order of priority) from your shell's environment and your .env file.
Variables available in your container are a combination of values found in your env_file files and values described in the environment section of the service.
Those are two entirely separate sets of features.
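A tiny side-by-side illustration of the two features (the file contents here are made up for the example):
# .env (next to docker-compose.yml) - used for substitution while the file is parsed
TAG=1.2.3

# web.env - passed into the container via env_file, invisible to the parser
GREETING=hello

# docker-compose.yml
services:
  web:
    image: example/web:${TAG}   # substituted from .env (or the shell) at parse time
    env_file:
      - web.env                 # GREETING becomes an environment variable inside the container only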
From reading this page: https://docs.docker.com/compose/environment-variables/
my understanding is that you should do the following:
For the global variables (that should not change), make an env file like so:
VAR1=VALUE1
VAR2=VALUE2
And for the others (that might change), you should add their names under environment in docker-compose.yml like this:
environment:
  - VAR1
  - VAR2
This will take the VAR1 and VAR2 values from the shell in which you are running docker-compose.
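So, before starting the stack, you would export them in that shell (illustrative values):
export VAR1=value1
export VAR2=value2
docker-compose up -d   # VAR1 and VAR2 are passed through into the container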
I hope this helps.
There seems to be sparse, conflicting information around on this subject. I'm new to Docker and need some help. I have several docker containers to run an application; some require different config files for local development than they do for production. I don't seem to be able to find a neat way to automate this with Docker.
My containers that include custom config are Nginx, Freeradius, and my code/data container, which is Laravel and therefore requires a .env.php file (L4.2 at the moment).
I have tried Docker's environment variables in docker-compose:
docker-compose.yml:
freeradius:
  env_file: ./env/freeradius.env
./env/freeradius.env:
DB_HOST=123.56.12.123
DB_DATABASE=my_database
DB_USER=me
DB_PASS=itsasecret
Except I can't pick those variables up in /etc/freeradius/mods-enabled/sql where they need to be.
How can I get Docker to run as a 'local' container with local config, or as a 'production' container with production config, without having to actually build different containers and without having to attach to each container to manually configure them? I need it automated, as this is to eventually be used on quite a large production environment which will have a large cluster of servers with many instances.
Happy to learn Ansible if this is how people achieve this.
If you can't use environment variables to configure the application (which is my understanding of the problem), then the other option is to use volumes to provide the config files.
You can use either "data volume containers" (which are containers with the sole purpose of sharing files and directories) with volumes_from, or you can use a named volume.
Data Volume container
If you go with the "data volume container" route, you would create a container with all the environment configuration files. Every service that needs a file uses volumes_from: - configs. In dev you'd have something like:
configs:
  build: dev-configs/
freeradius:
  volumes_from:
    - configs
The dev-configs directory will need a Dockerfile to build the image, which will have a bunch of VOLUME directives for all the config paths.
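A minimal sketch of such a Dockerfile for dev-configs/ (the copied paths are assumptions based on the freeradius example):
# dev-configs/Dockerfile - an image whose only job is to carry config files
FROM busybox
COPY freeradius/ /etc/freeradius/mods-enabled/
VOLUME /etc/freeradius/mods-enabled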
For production (and other environments) you can create an override file which replaces the configs service with a different container:
docker-compose.prod.yml:
configs:
  build: prod-configs/
You'll probably have other settings you want to change between dev and prod, which can go into this file as well. Then you run compose with the override file:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
You can learn more about this here: http://docs.docker.com/compose/extends/#multiple-compose-files
Named Volume
If you go with the "named volume" route, it's a bit easier to configure. On dev you create a volume with docker volume create thename and put some files into it. In your config you use it directly:
freeradius:
  volumes:
    - thename:/etc/freeradius/mods-enabled/sql
In production you'll either need to create that named volume on every host, or use a volume driver plugin that supports multihost (I believe flocker is one example of this).
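One way to populate such a volume on the dev host (a sketch; the helper container and local path are arbitrary):
docker volume create thename
# copy local config files into the volume via a throwaway container
docker run --rm -v thename:/dest -v "$PWD/configs":/src busybox cp -r /src/. /dest/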
Runtime configs using Dockerize
Finally, another option that doesn't involve volumes is to use https://github.com/jwilder/dockerize which lets you generate the configs at runtime from environment variables.
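As a rough sketch of how that looks (the template path, variable names, and final command are illustrative; see the dockerize README for the exact flags):
# sql.tmpl - a Go template rendered from environment variables when the container starts
server = "{{ .Env.DB_HOST }}"
login = "{{ .Env.DB_USER }}"
password = "{{ .Env.DB_PASS }}"

# container command
dockerize -template /templates/sql.tmpl:/etc/freeradius/mods-enabled/sql freeradius -X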