I am working on a Docker Compose project and we take advantage of the .env file mechanism. However, I discovered that I cannot reuse one environment variable while constructing another, or reference existing OS-level environment variables.
For example, this doesn't work:
VIRTUAL_HOST=domain.com
LETSENCRYPT_HOST=${VIRTUAL_HOST}
LETSENCRYPT_EMAIL=contact@${VIRTUAL_HOST}
Any ways around it?
Create an entrypoint script similar to this:
#!/usr/bin/env bash
set -e
# Run a substitution because docker-compose doesn't support nested variables
export LETSENCRYPT_HOST=$(echo ${LETSENCRYPT_HOST} | envsubst)
export LETSENCRYPT_EMAIL=$(echo ${LETSENCRYPT_EMAIL} | envsubst)
exec "$@"
Related
In docker run one can do
docker run --env-file <(env | grep ^APP_) ...
Is there a similar way for docker-compose?
I would like to avoid a physical env file.
The equivalent of the --env-file option of the docker CLI in docker-compose is the env_file configuration option in the Compose file. But I think this still requires a physical env file.
If you want to use the environment variables of your host machine, you can reference them in docker-compose (with an optional fallback value):
version: "3.9"
services:
  app:
    image: myapp
    environment:
      - APP_MYVAR=${APP_MYVAR-fallbackvalue}
It's not as convenient as grepping your ^APP_ vars, but it is one way to avoid the physical file.
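The `${APP_MYVAR-fallbackvalue}` fallback is ordinary POSIX parameter expansion, which Compose interpolation mirrors; you can verify the behavior in any shell:

```shell
#!/bin/sh
# ${VAR-fallback} uses the fallback only when VAR is unset;
# ${VAR:-fallback} would also use it when VAR is set but empty.
unset APP_MYVAR
echo "${APP_MYVAR-fallbackvalue}"   # prints: fallbackvalue
APP_MYVAR=hostvalue
echo "${APP_MYVAR-fallbackvalue}"   # prints: hostvalue
```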
You can do it by supplying multiple compose files, as documented here. In your case the first one is the physical docker-compose.yml and the second one a generated Compose file containing only the environment variables for the needed service. The variables must be properly formatted as a YAML list, so a sed that prepends the string "- " to each one is necessary.
docker-compose -f docker-compose.yml -f <(printf "services:\n  your_service:\n    environment:\n$(env | grep ^APP_ | sed -e "s/^/      - /")") up
This is how Docker behaves:
When you supply multiple files, Compose combines them into a single configuration. Compose builds the configuration in the order you supply the files. Subsequent files override and add to their predecessors
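You can preview the fragment the process substitution feeds to Compose by running the printf pipeline on its own (the APP_ variables here are examples):

```shell
#!/bin/sh
# Emit the override fragment without invoking docker-compose
export APP_FOO=1 APP_BAR=two
printf "services:\n  your_service:\n    environment:\n%s\n" \
  "$(env | grep ^APP_ | sed -e 's/^/      - /')"
```

Each matching variable becomes a "      - NAME=value" list item nested under environment; the order follows whatever order env prints them in.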
I have a simple docker-compose file which is used to launch my containers. I wish to have another yaml file which contains additional, optional containers. It can live in a separate directory. My goal is to find a way to force the namespace of the created services so they exist within the same network/namespace and can talk to each other.
compose1.yaml
services:
web:
build: .
compose2.yaml
services:
web1:
build: .
So if I run both of these, their services are prefixed with the name of the folder they exist in: in my case a and b respectively.
I wanted to ensure that they flow together, despite not being in the same file hierarchy.
I have been going through the docker-compose documentation and was not sure what the best way to do this in the YAML file would be, but noticed that the CLI might be able to update various names.
How does one accomplish this?
Note: I have also created a third file under the b directory, a sibling to compose2.yaml, so I can run those separately and they work just fine.
a/
  compose.yaml
b/
  compose2.yaml
  another.yaml
So I have been able to merge them by running cd b/ && docker-compose -f compose2.yaml -f another.yaml up -d, which runs the 2 files together, and they exist under the b namespace. Likewise, I can also run them sequentially instead of referencing them in one command.
So my question is how can I do something like:
docker-compose --namespace test compose.yaml up
docker-compose --namespace test compose2.yaml up
such that I could view them together with docker? It seems that I would need to run the commands from under the first shared parent folder?
so if a and b existed under test, I could just do:
cd /test
docker-compose -f a/compose.yaml up -d
docker-compose -f b/compose2.yaml up -d
then my services would be listed as: test_web, test_db-box, etc.
So I found out that one person's namespace is another person's project-name.
That being said, after understanding the nuances, the project name ( -p | --project-name ) is the prefix for the compose services.
docker-compose --project-name foo -f a/compose.yaml up
docker-compose --project-name foo -f b/compose2.yaml up
This will create services like: foo_web_1
The format for this is: {project-name}_{service-name}_{number}
The remaining question is whether this CLI property can be set from within the YAML file, possibly as a config option. The Docker Compose documentation states that you can supply an environment variable (COMPOSE_PROJECT_NAME) to change the project name from the default (the name of the base directory), BUT not from within a Compose YAML file.
If I then want to launch multiple compose files under a particular project, I can simply wrap the commands in a bash or shell script.
#!/bin/bash
export COMPOSE_PROJECT_NAME=ultimate-project
docker-compose -f a/compose.yaml up -d
docker-compose -f b/compose2.yaml up -d
and that would create the services:
ultimate-project_web_1
ultimate-project_web1_1
I have a docker-compose file that allows me to pass environment variables as a file (.env file). As I have multiple ENV variables, is there any option in Dockerfile, like env_file in docker-compose, for passing multiple environment variables during docker build?
This is the docker-compose.yml
services:
  web:
    image: "node"
    links:
      - "db"
    env_file: "env.app"
AFAIK, there is no way to inject environment variables from a file during the build step of a Dockerfile. However, in most cases people end up using an entrypoint script and injecting variables during docker run or docker-compose up.
In case it's a necessity, you might need to write a shell wrapper that changes the values in the Dockerfile dynamically, taking a key-value pair text file as input, or do something like the following (but then the env file name needs to be hardcoded in the Dockerfile):
COPY my-env-vars /
RUN export $(cat my-env-vars | xargs)
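Outside an image build, what that RUN line does can be sketched in plain shell (the file name and variables are illustrative):

```shell
#!/bin/sh
# Turn KEY=VALUE lines from a file into exported variables
cat > /tmp/my-env-vars <<'EOF'
DB_HOST=localhost
DB_PORT=3306
EOF
export $(cat /tmp/my-env-vars | xargs)
echo "$DB_HOST:$DB_PORT"        # prints: localhost:3306
```

Note that in a Dockerfile each RUN instruction starts a fresh shell, so variables exported this way are gone by the next instruction; the export only helps commands chained within the same RUN.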
It's an open issue - https://github.com/moby/moby/issues/28617
PS - You need to be extra careful while using this approach because the secrets are baked into the image itself.
I have an app that runs on several docker containers. To simplify my problem, let's say I have 3 containers: one for MySQL and 2 for 2 instances of the API (sharing the same volume where the code lives, but each with a different env specifying different database settings), as configured in the following docker-compose.yml:
services:
  api-1:
    image: mynamespace/my-image-name:1.0
    environment:
      DB_NAME: db_api_1
  api-2:
    image: mynamespace/my-image-name:1.0
    environment:
      DB_NAME: db_api_2
In a Makefile I have rules for deploying the containers and installing the database for each of my api instances.
What I am trying to achieve is a make rule that dumps a database for a given env. Since I have no MySQL client installed on my API instances, I thought there should be a way to extract the env variables I need (with printenv VARNAME) from an API container and then use them in the database container.
Does anyone know how this could be achieved?
Assuming that it's an environment variable that you set using the -e option to docker run, you could do something like this:
docker exec api_container sh -c 'echo $VARNAME'
If it is an environment variable that was set inside the container, e.g. from a script, then you're mostly out of luck. You could of course inspect /proc/<pid>/environ, but that's hacky and I wouldn't recommend it.
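The single quotes in that docker exec command are deliberate: they keep your local shell from expanding $VARNAME, so the shell inside the container expands it instead. The same quoting behavior can be seen with a plain sh (VARNAME is a stand-in name):

```shell
#!/bin/sh
unset VARNAME   # make sure the calling shell has no value for it
# Single quotes: the inner shell expands $VARNAME, which sees the
# per-command environment assignment.
VARNAME=inner-value sh -c 'echo $VARNAME'   # prints: inner-value
# Double quotes: the calling shell expands $VARNAME first, before the
# per-command assignment takes effect, so nothing is printed here.
VARNAME=inner-value sh -c "echo $VARNAME"   # prints an empty line
```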
It also sounds as if you would benefit from using something like docker-compose to manage your containers.
I'm trying to wrap my head around Docker, but I'm having a hard time figuring it out. I tried to use it in my small project (MERN stack), and I was wondering how you distinguish between development, (maybe staging,) and production environments.
I saw one example where they used 2 Dockerfiles and 2 docker-compose files (one pair per environment: Dockerfile + docker-compose.yml for prod, Dockerfile-dev + docker-compose-dev.yml for dev).
But this just seems like a bit of overkill to me. I would prefer to have it in only two files.
Also, one of the problems is that, e.g., for development I want to install nodemon globally, but not for production.
In a perfect solution I imagine running something like:
docker-compose -e ENV=dev build
docker-compose -e ENV=dev up
Keep in mind, that I still don't fully get docker, so if you caught some of mine misconceptions about docker, you can point them out.
You could take some clues from "Using Compose in production"
You’ll almost certainly want to make changes to your app configuration that are more appropriate to a live environment. These changes may include:
Removing any volume bindings for application code, so that code stays inside the container and can’t be changed from outside
Binding to different ports on the host
Setting environment variables differently (e.g., to decrease the verbosity of logging, or to enable email sending)
Specifying a restart policy (e.g., restart: always) to avoid downtime
Adding extra services (e.g., a log aggregator)
The advice there is then not quite the same as the example you mention:
For this reason, you’ll probably want to define an additional Compose file, say production.yml, which specifies production-appropriate configuration. This configuration file only needs to include the changes you’d like to make from the original Compose file.
docker-compose -f docker-compose.yml -f production.yml up -d
This overriding mechanism is better than trying to mix dev and prod logic in one compose file, with an environment variable to select between them.
Note: if you name your second compose file docker-compose.override.yml, a simple docker-compose up will read the overrides automatically.
But in your case, a name based on the environment is clearer.
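A minimal sketch of that layering; the service and variable names here are illustrative, not from the question:

```yaml
# docker-compose.yml -- the base shared by every environment
services:
  web:
    image: myapp
    environment:
      - LOG_LEVEL=debug

# production.yml -- only the production deltas
services:
  web:
    restart: always
    environment:
      - LOG_LEVEL=warning   # same key: the later file wins
```

Running docker-compose -f docker-compose.yml -f production.yml config prints the merged result, so you can verify the override before bringing anything up.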
Docker Compose will read docker-compose.yml and docker-compose.override.yml by default. Understanding-Multiple-Compose-Files
You can keep a default docker-compose.yml and several overriding compose files, for example docker-compose.prod.yml and docker-compose.test.yml. Keep them in the same place.
Then, in each environment, create a symbolic link named docker-compose.override.yml pointing at that environment's file.
Track docker-compose.{env}.yml files and add docker-compose.override.yml to .gitignore.
In prod env: ln -s ./docker-compose.prod.yml ./docker-compose.override.yml
In test env: ln -s ./docker-compose.test.yml ./docker-compose.override.yml
The project structure will then look like this:
project/
- docker-compose.yml # tracked
- docker-compose.prod.yml # tracked
- docker-compose.test.yml # tracked
- docker-compose.override.yml # ignored & linked to the compose file for the current env
- src/
- ...
Then you're done. In each environment you can use the same command: docker-compose up
If you are not sure, run docker-compose config to check whether the overrides are applied properly.
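The symlink switch itself can be sketched in a throwaway directory:

```shell
#!/bin/sh
# Simulate selecting the prod override via the symlink
mkdir -p /tmp/compose-demo && cd /tmp/compose-demo
touch docker-compose.yml docker-compose.prod.yml docker-compose.test.yml
ln -sf ./docker-compose.prod.yml ./docker-compose.override.yml
readlink docker-compose.override.yml    # prints: ./docker-compose.prod.yml
```

Switching the environment is then just re-pointing the link (ln -sf ./docker-compose.test.yml ./docker-compose.override.yml); the tracked files never change.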