I need some help with the following template:
services:
  nginx:
    image: nginx
    restart: unless-stopped
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.nginx-${COMPOSE_PROJECT_NAME}.rule=Host(`fuu.bar`)"
    networks:
      - treafik
My goal is to create a template which I can use, e.g. in Portainer, with almost zero configuration.
I thought this variable would be available for interpolation, but the expression ${COMPOSE_PROJECT_NAME} resolves to an empty string. Output of docker-compose config:
services:
  nginx:
    image: nginx
    restart: unless-stopped
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.nginx-.rule=Host(`fuu.bar`)"
    networks:
      - treafik
Are there any default environment variables provided by docker-compose which I can use for environment interpolation?
---- Update
I use Traefik (v2) as a reverse proxy. To make containers available through Traefik, you need to define a router on every service, and the router name has to be unique. Let's imagine you deploy 2 or more stacks of the above template: the router name has to be unique for all services across all stacks. Because I'm a lazy guy, I tried to simply integrate the environment variable COMPOSE_PROJECT_NAME (which I know is already unique in my setup, because every stack must have a unique name). But the variable is not available when deploying the stack.
Of course, I could simply define COMPOSE_PROJECT_NAME myself in a .env file, but I hoped there were some default environment variables provided by Docker.
You can use environment variables to pass strings into your Compose file.
The Docker documentation describes several ways to do this. For example:
You can set default values for any environment variables referenced in the Compose file, or used to configure Compose, in an environment file named .env. The .env file path is as follows:
Starting with +v1.28, the .env file is placed at the base of the project directory.
The project directory can be explicitly defined with the --file option or the COMPOSE_FILE environment variable. Otherwise, it is the current working directory where the docker compose command is executed (+1.28).
For previous versions, it might have trouble resolving the .env file with --file or COMPOSE_FILE. To work around it, it is recommended to use --project-directory, which overrides the path for the .env file. This inconsistency is addressed in +v1.28 by limiting the file path to the project directory.
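As a sketch of that approach for the template above (the stack name my-stack is just a placeholder), you can either ship a .env file next to the compose file, or give the variable a fallback using Compose's ${VAR:-default} syntax so the label still resolves when nothing is set:

# .env (hypothetical example, placed in the project directory next to docker-compose.yml)
COMPOSE_PROJECT_NAME=my-stack

# or, in the template itself, fall back to a default when the variable is unset or empty
labels:
  - "traefik.http.routers.nginx-${COMPOSE_PROJECT_NAME:-default}.rule=Host(`fuu.bar`)"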
Related
I am trying to set up a cluster of 3 RavenDB instances using docker-compose, and I am having problems with the RavenDB server not picking up the values in the RAVEN_ environment variables.
At first, I was running a single instance, using this docker-compose file:
version: '3'
services:
  ravendb:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
      - "38888:38888"
    volumes:
      - ../data:/opt/RavenDB/Server/RavenData
With a simple Dockerfile that used the latest RavenDB image and copied a settings.json file into the container:
FROM ravendb/ravendb
COPY settings.json /opt/RavenDB/Server/settings.json
settings.json:
{
  "License.Eula.Accepted": true,
  "License": { /* License here */ },
  "Setup.Mode": "Unsecured",
  "Security.UnsecuredAccessAllowed": "PublicNetwork",
  "ServerUrl": "http://0.0.0.0:8080",
  "ServerUrl.Tcp": "tcp://0.0.0.0:38888"
}
Now that I am trying to set up 3 instances, I wanted to avoid this way of creating the containers, since I would need a different Dockerfile and settings.json file for each one.
Therefore, I thought of using a single docker-compose file that creates three containers and configures each one with environment variables.
I started with a single instance, to see if any problems would arise:
version: '3'
services:
  raven1:
    container_name: raven1
    image: ravendb/ravendb
    ports:
      - "8080:8080"
      - "38888:38888"
    environment:
      - RAVEN_Security_UnsecuredAccessAllowed=PublicNetwork
      - RAVEN_Setup_Mode=Unsecured
      - RAVEN_License_Eula_Accepted=true
      - "RAVEN_ServerUrl=http://0.0.0.0:8080"
      - "RAVEN_ServerUrl_Tcp=tcp://0.0.0.0:38888"
    volumes:
      - ../data:/opt/RavenDB/Server/RavenData
And arise they did! Despite the environment variables being set correctly, they are not picked up by the server, and the settings.json file is the default one.
root@8ad95cc439d4:/opt/RavenDB/Server# env
RAVEN_ARGS=
RAVEN_Security_UnsecuredAccessAllowed=PublicNetwork
RAVEN_AUTO_INSTALL_CA=true
RAVEN_ServerUrl=http://0.0.0.0:8080
RAVEN_SETTINGS=
RAVEN_ServerUrl_Tcp=tcp://0.0.0.0:38888
RAVEN_IN_DOCKER=true
RAVEN_Setup_Mode=Unsecured
RAVEN_License_Eula_Accepted=true
RAVEN_DataDir=RavenData
root@8ad95cc439d4:/opt/RavenDB/Server# cat settings.json
{
  "Security.UnsecuredAccessAllowed": "PrivateNetwork"
}
Any idea why this might be happening? I can't seem to find any mention of issues regarding this.
Do I understand correctly you expected the configuration from environment variables to be included in the settings.json file in the container?
If that's the case, I would like to clarify that passing environment variables does not modify RavenDB's settings.json file. Instead, RavenDB loads them directly from the environment.
Configuration options are loaded in the following order of precedence:
command line arguments
settings.json configuration file
RAVEN_ prefixed environment variables
So if you wanted to override the configuration option Security.UnsecuredAccessAllowed found in the settings.json file, you would need to either change the file in the container or pass it as a CLI argument: --Security.UnsecuredAccessAllowed PublicNetwork.
Both cases are supported by RavenDB docker images:
To clear the default settings.json, you can pass the RAVEN_SETTINGS={} environment variable to the container.
To pass command line arguments to the RavenDB server binary, you can use the RAVEN_ARGS environment variable, e.g. RAVEN_ARGS=--Security.UnsecuredAccessAllowed PublicNetwork.
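As a rough sketch, the compose service from the question could use those two escape hatches like this (the quoting may need adjusting for your shell and Compose version):

services:
  raven1:
    image: ravendb/ravendb
    environment:
      # wipe the image's default settings.json so it cannot take precedence
      - "RAVEN_SETTINGS={}"
      # pass the option as a command line argument instead of a RAVEN_ config variable
      - "RAVEN_ARGS=--Security.UnsecuredAccessAllowed PublicNetwork"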
Is there a way to pass environment variables from one service to another inside docker-compose.yml?
services:
  testService:
    environment:
      TEST_KEY: 1234
  testServiceTests:
    environment:
      TEST_KEY: I want to pull in the value 1234 here from service1
No.
However, there's an alternative. You may provide environment variables to all the services within the Docker Compose file by exposing them either from your shell when you run Compose, or by using a special .env file; see the documentation.
Using this approach, you would have a global (for the Compose file) environment variable, say GLOBAL_TEST_KEY (it needn't have a different name), and you would be able to share it across multiple services:
services:
  testService:
    environment:
      TEST_KEY: ${GLOBAL_TEST_KEY}
  testServiceTests:
    environment:
      TEST_KEY: ${GLOBAL_TEST_KEY}
And then: docker-compose run -e GLOBAL_TEST_KEY="Some value" ....
Or, create a file called .env alongside docker-compose.yaml and, in .env:
GLOBAL_TEST_KEY="Some value"
And then: docker-compose run ...
NOTE: There is no need to reference .env explicitly, as it is included by default.
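Alternatively (a minimal sketch reusing the same GLOBAL_TEST_KEY name), you can export the variable from the shell before starting the stack, which also covers the docker-compose up case:

# exported shell variables are visible to docker-compose for interpolation
export GLOBAL_TEST_KEY="Some value"
docker-compose up

# or inline, for a single invocation
GLOBAL_TEST_KEY="Some value" docker-compose up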
In my docker-compose.yml, I defined two services, app and db.
version: "3.7"
services:
app:
image: my_app
container_name: my-app
ports:
- ${MY_PORT}:${MY_PORT}
env_file:
- ./app.env
...
depends_on:
- db
environment:
- DATABASE_URL=${DB_URL}
db:
image: my_db
container_name: my-db
env_file:
- ./db.env
ports:
- ${DB_PORT}:${DB_PORT}
As you can see above, I have defined two env files, app.env and db.env in the env_file option of app and db services.
app.env:
MY_PORT=8081
db.env:
DB_PORT=4040
DB_URL=postgres://myapp:app@db:4040/myapp
I want to check if my docker-compose can successfully read the environment variables, so I run the command docker-compose config. However, the output is:
$ docker-compose config
WARNING: The MY_PORT variable is not set. Defaulting to a blank string.
WARNING: The DB_URL variable is not set. Defaulting to a blank string.
WARNING: The DB_PORT variable is not set. Defaulting to a blank string.
ERROR: The Compose file './docker-compose.yml' is invalid because:
services.app.ports is invalid: Invalid port ":", should be [[remote_ip:]remote_port[-remote_port]:]port[/protocol]
services.db.ports is invalid: Invalid port ":", should be [[remote_ip:]remote_port[-remote_port]:]port[/protocol]
Why can't my docker-compose read the environment variables from the env files I declared in the env_file option of my docker-compose.yml?
Besides, I have another question: I understand that normally the env file shouldn't be version controlled, since it could contain credentials. How should the env file normally be used for different environments, e.g. development, staging and production, given that each environment has different values for those variables? Could someone please provide some examples?
The reason this is failing is that the environment variables you are defining in the externally named app.env and db.env files, and specifying in the env_file option, are only set inside the container that is started; they are not used for variable expansion inside the docker-compose.yml file when it is parsed by docker-compose.
This is easily confused with the option of supplying a file named .env in the same location as the docker-compose.yml file. docker-compose looks for a file specifically named .env next to the docker-compose.yml file (or next to the file that you are specifying with the -f switch) and uses the environment variables in that file for variable expansion in the docker-compose.yml file before parsing it.
In other words:
The env_file option
Will set environment variables inside your container; it is just a convenience feature that allows you to externalise the environment variables from the docker-compose.yml file.
Environment variables in these files will NOT be used for variable expansion in the docker-compose.yml file before it is parsed by docker-compose.
The .env file
Will be used for environment variable expansion inside the docker-compose.yml file before parsing.
Will NOT set environment variables inside the started container.
Suggested solution to the first question
If you migrate your values into a single .env file and place it in the same directory as your docker-compose.yml file, this should work.
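As a sketch, such a merged .env file placed next to docker-compose.yml could look like this (values taken from the question), so that the ${MY_PORT}, ${DB_PORT} and ${DB_URL} expansions resolve; you can still keep app.env and db.env in env_file for whatever the containers need at runtime:

# .env -- read by docker-compose for variable expansion before parsing
MY_PORT=8081
DB_PORT=4040
DB_URL=postgres://myapp:app@db:4040/myapp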
Second question
As I understand your second question, you are asking how the .env file, or the env_file option should be used to configure your services for your different environments.
I do not think that there is a single, simple answer to this. It can be solved in a number of ways, but it also depends on what you are deploying to: is it Kubernetes, Docker Swarm, or just a single-node Docker host?
Kubernetes and Docker swarm have different means of helping you out with this.
Kubernetes secrets
Docker swarm secrets
Those are highly secure solutions, where operators of the secrets can be limited, and the secrets will not be seen by developers or operators that do not have access.
But for a single-node Docker host not operating in swarm mode (secrets only work in swarm mode), there really aren't a lot of fancy options. As far as I am aware, you will have to manage this fairly manually in your build and deploy pipelines.
You are right that the sensitive configuration of your services should not go in the same repository as the service definition. Things like the root password for a database, or credentials for your service discovery service in your production environment, do not need to live next to the sources.
Traditionally, another repository would contain these, giving you the opportunity to limit the group of people that have access. The build/deployment server or service would check out the new revision of your service, perhaps build it, then check out the configuration repository and start the services with the configuration from there, and make sure to remove the configuration files afterwards.
That would be the solution I would recommend for a single-node Docker host deployment regime: two repositories, and some scripting that ensures that the correct .env file is put in place during deployment (see the sketch below) and removed again.
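A minimal sketch of such a deployment script, assuming a hypothetical configuration repository that holds one env file per environment (env/development.env, env/production.env, and so on); repository URLs and paths are placeholders:

#!/bin/sh
# deploy.sh <environment> -- hypothetical example, adjust repositories and paths to your setup
set -e
ENVIRONMENT="$1"

# fetch the service sources and the separately guarded configuration repository
git clone git@example.com:myorg/my-service.git
git clone git@example.com:myorg/my-service-config.git

# put the environment-specific file in place as .env so docker-compose picks it up
cp "my-service-config/env/${ENVIRONMENT}.env" my-service/.env

(cd my-service && docker-compose up -d)

# remove the sensitive configuration afterwards
rm -f my-service/.env
rm -rf my-service-config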
I hope this is helpful?
I want to pass environment variables that are readable by the applications spun up by docker-compose up.
What is the proper way of using docker-compose up with varying configuration settings?
I don't want to use .env and the environment: config, as the environment variables change frequently and it is insecure to save tokens in a file.
docker-compose run -e does work to an extent, but it loses a lot:
It does not map the ports that are defined in the docker-compose.yml services.
Also, multiple services are defined in docker-compose.yml, and I don't want to use depends_on just because docker-compose up doesn't work.
Let's say I define service in docker-compose.yml
service-a:
build:
context: .
dockerfile: DockerfileA
command: node serviceA.js
In my serviceA.js, I simply use the environment variable:
console.log("This is ", process.env.KEY, "running in service A");
When I run docker-compose run -e KEY=DockerComposeRun service-a
I do get the environment variable KEY read by serviceA.js
This is DockerComposeRun running in service A
However, I could only get a single service running this way.
I could have used environment: in docker-compose.yml:
environment:
  - KEY=DockerComposeUp
But in my use case, each docker-compose invocation would have different environment variable values, meaning I would need to edit the file each time before running docker-compose.
Also, more than one service uses the same environment variable; a .env file would actually do a better job here, but it is not desired.
There doesn't seem to be a way to do the same for docker-compose up.
I have tried KEY=DockerComposeUp docker-compose up,
but what I get is undefined.
export doesn't work for me either; it seems these approaches are all about using environment variables inside docker-compose.yml rather than in the applications running in the container.
To safely pass sensitive configuration data to your containers, you can use Docker secrets. Everything passed through secrets is encrypted.
You can create and manage secrets using the commands below:
docker secret create
docker secret inspect
docker secret ls
docker secret rm
And use them in your docker-compose file, either referring to existing secrets (external) or using a file:
secrets:
  my_first_secret:
    file: ./secret_data
  my_second_secret:
    external: true
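To make the values available inside a container, you also attach the secrets to the service. A minimal sketch, reusing the secret names above and the service-a definition from the question (note that, as mentioned elsewhere on this page, secrets are primarily a swarm-mode feature; inside the container each secret appears as a file under /run/secrets/):

services:
  service-a:
    build:
      context: .
      dockerfile: DockerfileA
    command: node serviceA.js
    secrets:
      - my_first_secret
      - my_second_secret

secrets:
  my_first_secret:
    file: ./secret_data
  my_second_secret:
    external: true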
You can use environment like this:
service-a:
  build:
    context: .
    dockerfile: DockerfileA
  command: node serviceA.js
  environment:
    - KEY=DockerComposeRun
Refer to: https://docs.docker.com/compose/environment-variables/
I'm trying to get a docker-compose file working with multiple .env files, and I'm not having any luck. I'm trying to set up three .env files:
global settings that are the same across all container instances
environment-specific settings (stuff just for test or dev)
local settings - overridable things that a developer might need to change in case they have conflicts with, say, a port number
My docker-compose.yml file looks like this:
version: '2'
services:
  db:
    env_file:
      - ./.env
      - ./.env.${ENV}
      - ./.env.local
    image: postgres
    ports:
      - ${POSTGRES_PORT}:5432
.env looks like this:
POSTGRES_USER=myapp
and the .env.development looks like this:
POSTGRES_PASSWORD=supersecretpassword
POSTGRES_HOST=localhost
POSTGRES_PORT=25432
POSTGRES_DB=myapp_development
.env.local doesn't exist in this case.
After running ENV=development docker-compose up, I receive the following output:
$ ENV=development docker-compose up
WARNING: The POSTGRES_PASSWORD variable is not set. Defaulting to a blank string.
WARNING: The POSTGRES_DB variable is not set. Defaulting to a blank string.
WARNING: The POSTGRES_PORT variable is not set. Defaulting to a blank string.
ERROR: The Compose file './docker-compose.yml' is invalid because:
services.db.ports is invalid: Invalid port ":5432", should be [[remote_ip:]remote_port[-remote_port]:]port[/protocol]
From that error message, it looks like none of my environment variables are being used. I just upgraded to the newest available docker-compose as well - same errors:
$ docker-compose --version
docker-compose version 1.8.0-rc1, build 9bf6bc6
Any ideas here? Would be nice to have a single docker-compose.yml that would work across multiple environments.
In order to apply different/multiple env_file files depending on the running environment, such as development/staging/production, I think a better way for docker-compose is to use multiple docker-compose YAML files.
For example:
1. Start with a base file that defines the canonical configuration for the services.
docker-compose.yml
web:
  image: example/my_web_app:latest
  env_file:
    - .env
2. Add the override file for development, which, as its name implies, can contain configuration overrides for existing services or entirely new services.
docker-compose.override.yml
web:
  build: .
  volumes:
    - '.:/code'
  ports:
    - 8883:80
  env_file:
    - .env.dev
When you run docker-compose up, it reads the overrides automatically.
3. Create another override file for the production environment.
docker-compose.prod.yml
web:
  ports:
    - 80:80
  env_file:
    - .env.prod
To deploy with this production Compose file, you can run:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up
Note
My Docker version:
$ docker -v
Docker version 18.06.1-ce, build e68fc7a
$ docker-compose -v
docker-compose version 1.22.0, build f46880fe
Reference: https://docs.docker.com/compose/extends/
Keep in mind that there are 2 different environments where you are defining variables: the host machine where you are executing the docker-compose command, and the container itself (running the db service in your case).
Your docker-compose.yml file has access to your host's environment variables. Hence ENV is reachable from the docker-compose command, but the variables in your .env files are not.
On the contrary, the value of ENV is not reachable inside the container, but all variables defined in your .env files will be.
I don't know if you really need your db container to access the variables defined in your .env.development. But it at least seems that your host machine needs to have the content of that file, so that when the docker-compose command is called, the POSTGRES_PORT variable is defined.
To fix your specific problem you would need to define the environment variables on your host machine too, not only for the container. You could do something like this:
# Set for host
ENV=development
# Also set the variables on the host; set -a exports everything sourced below,
# so that docker-compose (a child process) can actually see them
set -a
source ./.env.$ENV
set +a
# POSTGRES_PORT defined in .env.development is used here
docker-compose up
# since env_file also contains .env.development, the variables will be reachable from the container
Hope that helps.
There is a misconception regarding the .env file and the env_file option in docker-compose.yml, as it is very ambiguous. Shin points it out very nicely in the GitHub issue "docker-compose doesn't use env_file". I will just quote his summary:
Variable substitution in your docker-compose.yml file will be pulled (in decreasing order of priority) from your shell's environment and your .env file.
Variables available in your container are a combination of values found in your env_file files and values described in the environment section of the service.
Those are two entirely separate sets of features.
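A minimal sketch illustrating the two separate mechanisms side by side (the TAG variable and the web.env file name are just illustrative):

# .env -- read by docker-compose itself, used only for ${...} expansion in the YAML
TAG=1.2.3

# docker-compose.yml
services:
  web:
    image: example/my_web_app:${TAG}   # substituted from .env (or the shell) before parsing
    env_file:
      - web.env                        # its contents become environment variables inside the container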
While reading this page: https://docs.docker.com/compose/environment-variables/
my understanding is that you should do the following:
For the global variables (that should not change), make an env file like so:
VAR1=VALUE1
VAR2=VALUE2
and for the others (that might change), you should add their names under environment in docker-compose.yml like this:
environment:
  - VAR1
  - VAR2
This will take the VAR1 and VAR2 values from the shell in which you are running docker-compose.
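A hypothetical invocation, reusing the VAR1/VAR2 values from the env file above:

# values are picked up from the invoking shell and passed through to the containers
VAR1=VALUE1 VAR2=VALUE2 docker-compose up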
I hope this helps.