How to conditionally run different configs in docker-compose.yml? - docker

I have configured SSL via certbot on live server. I have a volume mapping for this in nginx section of docker-compose.yml:
volumes:
...
- /etc/letsencrypt:/etc/letsencrypt
This works just fine on the live server, but I have a different setup on my local machine, where I run the app and view it at http://localhost. I suppose I don't need SSL locally, so I could probably just exclude this part of the setup when the app runs locally.
This case also makes me think I will potentially have to configure other things differently locally vs. live.
So, the question is how to properly distinguish these differences between local and live setups and apply them (semi)automatically depending on the environment?

Environment variables are typically a good way to create simple runtime portability like this, and many tools/apps/packages support runtime configuration through nothing more than environment variables. Unfortunately, nginx is not one of those apps.
For nginx, try something like this:
environment:
- NGINX_CONF=localhost.conf
Here, localhost.conf would be your nginx configuration for your local machine. Run an entrypoint.sh of some kind, and symlink the config specified by NGINX_CONF to wherever nginx will pick it up (usually /etc/nginx/conf.d or /etc/nginx/sites-enabled).
ln -s /app/nginx-confs/${NGINX_CONF} /etc/nginx/conf.d/running.conf
This assumes you have all of your confs copied to the container in /app/nginx-confs, but they can live wherever you like. The localhost.conf would serve your site as http://localhost.
For your live server, pass NGINX_CONF=liveserver.conf. This conf would serve https://www.liveserver.com or whatever your host is.
At this point, you can choose which nginx configuration you'll run when you start your container, using environment variables. Even if you don't want to do it this way, hopefully it gets you moving in the right direction and thinking about environment variables as a way to configure at runtime.
There are other, more granular ways of managing nginx confs at runtime. Something like confd, or a templating engine such as mustache, is an option. There are also roundabout ways within nginx itself, using the env directive and set_by_lua, but that feels like the hackiest solution to me, so I prefer the others.
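As an aside: if you're on the official nginx image (1.19 or later), its entrypoint already runs envsubst over *.template files in /etc/nginx/templates and writes the results to /etc/nginx/conf.d at startup, which covers simple cases without extra tooling. A compose fragment might look like this (SERVER_NAME is an illustrative variable; check the image's docs for the exact behavior):

```yaml
nginx:
  image: nginx:1.21
  environment:
    - SERVER_NAME=localhost
  volumes:
    # ./templates/default.conf.template becomes /etc/nginx/conf.d/default.conf
    - ./templates:/etc/nginx/templates
```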

You can write a Makefile to generate different docker-compose.yml files depending on your needs.
Makefile:
# Makefile
-include config.mk
A_CONFIG_VAR ?= default_value

all: docker-compose.yml

docker-compose.yml: docker-compose.yml.in config.mk
	@echo 'Generating docker-compose.yml'
	@sed -e 's|##A_CONFIG_VAR##|$(A_CONFIG_VAR)|g' $< > $@
config.mk: put your configuration variables in this file.
# config.mk
A_CONFIG_VAR = "a_value"
docker-compose.yml.in: write an input docker-compose.yml file like so
volumes:
- /path/to/somewhere:##A_CONFIG_VAR##
Change the contents of config.mk to suit your needs. And then run
make
This should generate a docker-compose.yml file for you.
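In isolation, the sed step behaves like this (the /data value is just an example; normally it would come from config.mk):

```shell
# Recreate the input fragment and run the same substitution the Makefile does
printf 'volumes:\n  - /path/to/somewhere:##A_CONFIG_VAR##\n' > docker-compose.yml.in
A_CONFIG_VAR=/data   # example value, normally set in config.mk
sed -e "s|##A_CONFIG_VAR##|${A_CONFIG_VAR}|g" docker-compose.yml.in > docker-compose.yml
cat docker-compose.yml   # the placeholder is now replaced
```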

Related

How do I generate a secret key and share it between two containers in docker-compose? Is it possible?

The problem: I'd like to use imgproxy in my project, but I need some way to generate a signing key when the containers are first run. I then need to share that key between two containers: imgproxy, which accepts it in an environment variable, and my server application, where I could read it from wherever needed. The key needs to be unique and random for each deployment. It would be great to avoid having to run any additional commands before docker-compose up to generate these keys.
What I considered so far:
There are docker-compose secrets. Those live in files. You still need to create and fill those files before you start anything.
I can simply instruct the users to generate the key and edit docker-compose.yml to add it there.
Anyway, what's the best/correct way to approach this? This feels like a popular use case, so surely there has to be something I missed?
The best way to handle this is to create the secret externally; in Compose, perhaps in a .env file. This will translate well to other environments and doesn't require changing code at all. This also works well with secrets that require some user intervention to set up (for example, signing a TLS certificate), it will survive a docker-compose down, and it works even if you split the two halves of the application into separate environments.
If these considerations don't matter to you, and it's really important that the startup be autonomous, you could put the secret into a shared file. Decide that one of the containers is "first". Write a script that runs at startup time that generates the secret:
#!/bin/sh
# Create a random token if it doesn't already exist
if [ ! -f /secrets/token ]; then
dd if=/dev/random bs=48 count=1 | base64 > /secrets/token
fi
# Read back the token into an environment variable
SECRET_TOKEN=$(cat /secrets/token)
# Run the main container process
exec "$@"
In your Dockerfile, COPY this script in and make it be the image's ENTRYPOINT. This must use JSON-array syntax, ENTRYPOINT ["entrypoint.sh"]. If you're launching your application via ENTRYPOINT, move that command into CMD instead (or combine a split ENTRYPOINT/CMD into CMD).
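The Dockerfile changes might look like this (the CMD is a placeholder for whatever the image previously ran):

```dockerfile
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
CMD ["imgproxy"]   # the main process; entrypoint.sh exec's this via "$@"
```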
Now in your Compose setup, you need to create a volume and share it between the two containers.
version: '3.8'
volumes:
  secrets:  # empty
services:
  imgproxy:
    image: ...
    volumes:
      - secrets:/secrets  # matches the path in the entrypoint script
  server:
    image: ...
    volumes:
      - secrets:/secrets  # could be a different path
(In particular if you're considering eventually running this application on Kubernetes, this approach won't work well. Of the volume types Kubernetes supports, few can be mounted into multiple containers at the same time. There is a native Kubernetes Secret object that's intended for this use, but that then gets back to the original pattern of "create the secret separately".)

Multiple docker .env files for multiple servers (prod, dev) that containers use?

I really did not know how to word the title.
In my system I have two instances:
a Prod server
a Dev server
Dev is used mostly for testing. On each server I have a version of AMQP, and the two have different hostnames.
To avoid duplication, or unnecessary time spent rewriting the same code in multiple projects, I wanted to use the env file that Docker Compose supports. But everywhere I read, no one discusses this case: the env file a stack uses would depend on where the stack is deployed, and that env file would exist on the swarm itself rather than in the individual projects.
Hopefully I didn't miss anything when explaining this. In summary: two swarms, each with its own env file that the containers deployed to it can use. If I need to reword anything, I will do so.
You can have multiple .env files and assign them to services in docker-compose.yml like this:
web:
  env_file:
    - web-variables.env
nginx:
  env_file:
    - nginx-variables.env
and if you want to change them for the development environment, you can override docker-compose.yml with a docker-compose.development.yml file and then start it with
docker-compose -f docker-compose.yml -f docker-compose.development.yml up -d
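The development override only needs the keys that differ from the base file; for example (file names here are illustrative):

```yaml
# docker-compose.development.yml -- only the overrides
web:
  env_file:
    - web-variables.dev.env
nginx:
  env_file:
    - nginx-variables.dev.env
```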

docker-compose scaleable way to provide environment variables

I am searching for a scalable solution to the problem of having numerous possible environments for a containerised app. Let's say I am creating a web app, and I have the usual deployment environments, develop, testing, production, but I also have multiple instances of the app for different clients: client1, client2, client3, etc.
This quickly becomes a mess if I have to create separate docker-compose files:
docker-compose-client1-develop.yml
docker-compose-client1-testing.yml
docker-compose-client1-production.yml
docker-compose-client2-develop.yml
...
Breaking the client-specific configuration into a .env file and using Docker's variable substitution gets me most of the way there. I can now have one docker-compose.yml file and just do:
services:
  webapp:
    image: 'myimage:latest'
    env_file:
      - ./clients/${CLIENT}.env  # client specific .env file
    environment:
      - DEPLOY  # develop, testing, production
so now I just need the CLIENT and DEPLOY environment variables set when I run docker-compose up, which is fine, but I'm wondering about a convenient way to pass those environment variables to docker-compose. There's the potential (at least during development) for a decent amount of context switching. Is there a tidy way to pass different CLIENT and DEPLOY env vars to docker-compose up every time I run it?
What you are trying to achieve is to set environment variables per-command.
Are you running on Linux? Take a look at the env command. Just prepend it to your docker-compose command line like this:
env CLIENT=client1 DEPLOY=production docker-compose ...
On Windows, you may have to do something more complicated (for example, setting the variables with set in cmd or $env: in PowerShell first), but there could be simpler ways.
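Another option: docker-compose automatically reads a .env file from the project directory for variable substitution, so you can keep the current pair of values there and just edit the file when you switch contexts:

```shell
# .env in the project directory (read automatically by docker-compose)
CLIENT=client1
DEPLOY=develop
```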
Have you tried docker-compose file extending?
For instance you can have base docker-compose.yml file which is the production one and multiple extending files where you only change what needs to be overloaded:
docker-compose.dev.yml
version: '2'
services:
  webapp:
    env_file: path_to_the_file_env
Then you simply use both:
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
To spin up the production it's as easy as:
docker-compose up
I personally use this technique a lot in many of my projects.

Docker multiple environments

I'm trying to wrap my head around Docker, but I'm having a hard time figuring it out. I tried to implement it in my small project (MERN stack), and I was wondering how you distinguish between development, (maybe staging,) and production environments.
I saw one example where they used 2 Docker files, and 2 docker-compose files, (each pair for one env, so Dockerfile + docker-compose.yml for prod, Dockerfile-dev + docker-compose-dev.yml for dev).
But this just seems like a bit of an overkill for me. I would prefer to have it only in two files.
Also, one of the problems is that e.g. for development I want to install nodemon globally, but not for production.
In a perfect solution I imagine running something like this:
docker-compose -e ENV=dev build
docker-compose -e ENV=dev up
Keep in mind that I still don't fully get Docker, so if you catch any of my misconceptions about it, please point them out.
You could take some clues from "Using Compose in production"
You’ll almost certainly want to make changes to your app configuration that are more appropriate to a live environment. These changes may include:
Removing any volume bindings for application code, so that code stays inside the container and can’t be changed from outside
Binding to different ports on the host
Setting environment variables differently (e.g., to decrease the verbosity of logging, or to enable email sending)
Specifying a restart policy (e.g., restart: always) to avoid downtime
Adding extra services (e.g., a log aggregator)
The advice is then a bit different from the example you mention:
For this reason, you’ll probably want to define an additional Compose file, say production.yml, which specifies production-appropriate configuration. This configuration file only needs to include the changes you’d like to make from the original Compose file.
docker-compose -f docker-compose.yml -f production.yml up -d
This overriding mechanism is better than trying to mix dev and prod logic in one compose file, using environment variables to select between them.
Note: if you name your second compose file docker-compose.override.yml, a simple docker-compose up would read the overrides automatically.
But in your case, a name based on the environment is clearer.
Docker Compose will read docker-compose.yml and docker-compose.override.yml by default (see "Understanding multiple Compose files").
You can set a default docker-compose.yml and different overwrite compose file. For example, docker-compose.prod.yml docker-compose.test.yml. Keep them in the same place.
Then create a symbolic link named docker-compose.override.yml for each env.
Track docker-compose.{env}.yml files and add docker-compose.override.yml to .gitignore.
In prod env: ln -s ./docker-compose.prod.yml ./docker-compose.override.yml
In test env: ln -s ./docker-compose.test.yml ./docker-compose.override.yml
The project structure will then look like this:
project\
- docker-compose.yml # tracked
- docker-compose.prod.yml # tracked
- docker-compose.test.yml # tracked
- docker-compose.override.yml # ignored & linked to override composefile for current env
- src/
- ...
Then you're done. In each environment you can bring the stack up with the same command, docker-compose up.
If you are not sure, use docker-compose config to check whether it's been overridden properly.

Managing dev/test/prod environments with Docker

There seems to be sparse, conflicting information around on this subject. I'm new to Docker and need some help. I have several Docker containers to run an application; some require different config files for local development than they do for production. I can't seem to find a neat way to automate this with Docker.
My containers that include custom config are Nginx, Freeradius and my code/data container is Laravel therefore requires a .env.php file (L4.2 at the moment).
I have tried Dockers environment variables in docker compose:
docker-compose.yml:
freeradius:
  env_file: ./env/freeradius.env
./env/freeradius.env:
DB_HOST=123.56.12.123
DB_DATABASE=my_database
DB_USER=me
DB_PASS=itsasecret
Except I can't pick those variables up in /etc/freeradius/mods-enabled/sql, where they need to be.
How can I get Docker to run as a 'local' container with local config, or as a 'production' container with production config, without having to actually build different containers, and without having to attach to each container to configure it manually? I need this automated, as it will eventually be used on quite a large production environment with a large cluster of servers running many instances.
Happy to learn Ansible if this is how people achieve this.
If you can't use environment variables to configure the application (which is my understanding of the problem), then the other option is to use volumes to provide the config files.
You can use either "data volume containers" (which are containers with the sole purpose of sharing files and directories) with volumes_from, or you can use a named volume.
Data Volume container
If you go with the "data volume container" route, you would create a container holding all the environment's configuration files. Every service that needs a file uses volumes_from: - configs. In dev you'd have something like:
configs:
  build: dev-configs/
freeradius:
  volumes_from:
    - configs
The dev-configs directory will need a Dockerfile to build the image, which will have a bunch of VOLUME directives for all the config paths.
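That Dockerfile might be as small as this (file names and paths are illustrative):

```dockerfile
# dev-configs/Dockerfile
FROM busybox
COPY sql /etc/freeradius/mods-enabled/sql
VOLUME /etc/freeradius/mods-enabled
CMD ["true"]   # the container only exists to share its volumes
```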
For production (and other environments) you can create an override file which replaces the configs service with a different container:
docker-compose.prod.yml:
configs:
  build: prod-configs/
You'll probably have other settings you want to change between dev and prod, which can go into this file as well. Then you run compose with the override file:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
You can learn more about this here: http://docs.docker.com/compose/extends/#multiple-compose-files
Named Volume
If you go with the "named volume" route, it's a bit easier to configure. On dev you create a volume with docker volume create thename and put some files into it. In your config you use it directly:
freeradius:
  volumes:
    - thename:/etc/freeradius/mods-enabled/sql
In production you'll either need to create that named volume on every host, or use a volume driver plugin that supports multihost (I believe flocker is one example of this).
Runtime configs using Dockerize
Finally, another option that doesn't involve volumes is to use https://github.com/jwilder/dockerize which lets you generate the configs at runtime from environment variables.
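With dockerize, the container's command renders Go templates from environment variables before starting the main process. A sketch (template path and flag usage from memory; double-check against the dockerize README):

```shell
# Container command: render the template, then start freeradius in the foreground
dockerize -template /templates/sql.tmpl:/etc/freeradius/mods-enabled/sql \
  freeradius -f

# /templates/sql.tmpl would reference the variables with Go template syntax:
#   server = "{{ .Env.DB_HOST }}"
#   login = "{{ .Env.DB_USER }}"
#   password = "{{ .Env.DB_PASS }}"
```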
