Simple docker containers: Build dedicated image or mount config as volume?

I'm putting together a docker-compose.yml file to run the multiple services for a project I'm working on. This project has a Magento site and a WordPress site residing under the same domain, with that "same domain" aspect requiring a very simple nginx container to route requests to either service.
So I have this architected as 4 containers:
A "magento" container, using an in-house project-specific image.
A "wordpress" container, using an in-house project-specific image.
A "db" container running mysql:5.6, with the init db dumps mounted at /docker-entrypoint-initdb.d.
A "router" container running nginx:alpine with a custom config mounted at /etc/nginx/nginx.conf. This functions as a reverse-proxy with two location directives set up. location / routes to "magento", and location /blog routes to "wordpress".
I want to keep things simple and avoid building unnecessary custom images, but in the context of the "router" I'm not sure whether what I'm doing is the best approach, or whether it would be better off as a project-specific image.
I'm leaning toward my current approach of mounting a custom config into the nginx:alpine container, because the configuration is specific to the stack that's running – it wouldn't make sense as a single standalone container.
So, comparing the two methods: without a custom image, we have the following in docker-compose.yml:
router:
  image: nginx:alpine
  networks:
    - projectnet
  ports:
    - "80:80"
  volumes:
    - "./router/nginx.conf:/etc/nginx/nginx.conf"
Otherwise, we have a Dockerfile containing the following, as I've seen suggested across the internet and in other Stack Overflow answers.
FROM nginx:alpine
ADD nginx.conf /etc/nginx/
Does anybody have arguments for/against either approach?

If you 'bake in' the nginx config (your second approach)
ADD nginx.conf /etc/nginx/
it makes your Docker image more portable - i.e. it can be pulled and run on any server capable of running Docker, and it will just work.
If you use option 1, mounting the config file at run time, then you are moving one of your dependencies outside of your container. This makes it a dependency that must be managed outside of Docker.
In my opinion it is best to put as many dependencies as possible inside your Dockerfiles, because it makes the images more portable and more automated (great for CI pipelines, for example).
There are reasons for mounting files at run time, and these are usually centred around environment-specific settings (although these can largely be overcome within Docker too) or 'sensitive' files that application developers shouldn't or couldn't have access to - for example SSL certificates, database passwords, etc.
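To make that concrete, a compose service might bake the nginx config into the image and bind-mount only the environment-specific, sensitive files at run time. A sketch only; the image name and certificate paths are made up for illustration:

router:
  image: registry.example.com/project-router:1.0   # nginx config baked in
  ports:
    - "443:443"
  volumes:
    # only environment-specific, sensitive material is mounted at run time
    - "./certs/site.pem:/etc/nginx/certs/site.pem:ro"
    - "./certs/site.key:/etc/nginx/certs/site.key:ro"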

Related

How to expose files from a docker container through a webserver

I have this website which uses Angular for the frontend and has a Node.js backend. The backend serves the Angular files and handles client calls.
As it is now, they are both packaged and deployed as one Docker image. Meaning, if I change the frontend, I also need to build the backend in order to create a new image. So it makes sense to separate them.
But if I create an image for the backend and frontend, how can the backend serve files from the frontend container?
Is this the right approach?
I think I would like to have the frontend inside a docker image, so I can do stuff like rollback easily (which is not possible with docker volumes for example)!
Yes! Giving them their own containers is the way to go! This makes us deploy/deliver faster and also separates the build pipelines, making the steps clearer to everyone involved.
I wouldn't bother having the backend serve the frontend files. I usually create my frontend image with a web server (e.g. nginx:alpine), since the frontend and backend can then be deployed separately to different machines or systems. And don't forget to use multi-stage builds to minimize image size.
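As a sketch of that multi-stage approach (the build command and output path are assumptions; the Angular CLI writes to dist/ by default):

# build stage: compile the Angular app
FROM node:alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# serve stage: only the compiled assets end up in the final image
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html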
But if you must do that, I guess you can use docker-compose to put them on one network, and then forward requests for those static files from the backend to the frontend web server. (Just a hack; there must be a better way to handle this from more advanced people here :P)
I have something similar: an Ember.js app running in one Docker container that connects to a Node.js app running in its own container (not to mention the DB that runs in a third container). It all works rather well.
I would recommend that you create your containers using docker-compose, which will automatically create the network so that both containers can talk to each other using <container name>:<port>.
Also, I set it up so that the code is mapped from a folder on my machine to a folder in the container. This allows me to easily change stuff, work with Git, etc.
Here is a snippet of my docker-compose file as an example:
version: "3"
services:
....
ember_gui:
image: danlynn/ember-cli
container_name: ember_dev
depends_on:
- node_server
volumes:
- ./Ember:/myapp
command: ember server
ports:
- "4200:4200"
- "7020:7020"
- "7357:7357"
Here I create an ember_gui service which creates a container named ember_dev based on an existing image from Docker Hub. Then it tells Docker that this container depends on another container that needs to be started first, which I do not show in the snippet but which is defined in the same docker-compose file (node_server). After that, I map the ./Ember directory to the /myapp folder in the container so that I can share the code. Finally, I start the ember server and publish some ports.

How to change Prometheus.yml file in the container

How can I change my /prometheus/prometheus.yml on the container itself?
I want it to track
1) my appserver - a Node application in a docker container
2) my Postgres DB
3) my Apache and nginx web servers
I do know that one has to change the prometheus.yml file and add targets.
Generic mechanisms to change Docker images are:
Mount your configuration file at the desired path (a sketch of the file's contents follows this list).
Create a new image by copying the config file in via the new Dockerfile. Not recommended if you have to use different configs for different environments/apps.
Change the file on the running container if the application (Prometheus in this case) supports it. I know that some apps, like Kibana, do this. Good for debugging, not recommended for production environments.
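Whichever mechanism you choose, the file you deliver contains the scrape targets. A sketch for the three services in the question; the service names, ports, and exporters are assumptions (Postgres and nginx don't expose Prometheus metrics natively, so they typically go through postgres_exporter and an nginx exporter):

scrape_configs:
  - job_name: node_app
    static_configs:
      - targets: ['appserver:3000']        # Node app exposing /metrics
  - job_name: postgres
    static_configs:
      - targets: ['postgres-exporter:9187']
  - job_name: web
    static_configs:
      - targets: ['nginx-exporter:9113']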
It's hard to be precise with an answer given the lack of details, but in general you place your modified prometheus.yml file within the Docker build context and modify your Dockerfile to add the instruction:
COPY prometheus.yml /path/to/prometheus.yml
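For example, a sketch assuming you build on the official prom/prometheus base image, which reads its config from /etc/prometheus/prometheus.yml:

FROM prom/prometheus
# the official image reads its config from this path by default
COPY prometheus.yml /etc/prometheus/prometheus.yml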

chown docker volumes on host (possibly through docker-compose)

I have the following example
version: '2'
services:
  proxy:
    container_name: proxy
    hostname: proxy
    image: nginx
    ports:
      - 80:80
      - 443:443
    volumes:
      - proxy_conf:/etc/nginx
      - proxy_htdocs:/usr/share/nginx/html
volumes:
  proxy_conf: {}
  proxy_htdocs: {}
which works fine. When I run docker-compose up it creates those named volumes in /var/lib/docker/volumes and all is good. However, from the host, I can only access /var/lib/docker as root, because it's root:root (makes sense). I was wondering if there is a way of chowning the host's directories to something more sensible/safe (like, my relatively unprivileged user that I use to do most things on the host) or if I just have to suck it up and chown them manually. I'm starting to have a number of scripts already to work around other issues, so having an extra couple of lines won't be much of a problem, but I'd really like to keep my self-written automation minimal, if I can -- fewer chances for stupid mistakes.
By the way, no: if I mount host directories instead of creating volumes, they get overlaid, meaning that if they start empty, they stay empty, and I don't get the default configuration (or whatever) from inside the container.
Extra points: can I just move the volumes to a more convenient location? Say, /home/myuser/myserverstuff/volumes?
It's best to not try to access files inside /var/lib/docker directly. Those directories are meant to be managed by the docker daemon, and not to be messed with.
To access the data inside a volume, there are a number of options:
use a bind-mounted directory (you considered that, but it didn't fit your use case).
use a "service" container that uses the same volume and makes it accessible through that container, for example a container running ssh (to use scp) or a Samba container (such as svendowideit/samba).
use a volume-driver plugin. There are various plugins around that offer all kinds of options. For example, the local-persist plugin is a really simple one that allows you to specify where Docker should store the volume data, i.e. outside of /var/lib/docker (see the sketch below).
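For the "extra points" question, local-persist lets you declare the mountpoint directly in the compose file. A sketch; it assumes the plugin is installed on the host:

volumes:
  proxy_conf:
    driver: local-persist
    driver_opts:
      mountpoint: /home/myuser/myserverstuff/volumes/proxy_conf
  proxy_htdocs:
    driver: local-persist
    driver_opts:
      mountpoint: /home/myuser/myserverstuff/volumes/proxy_htdocs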

docker-compose: where to store configuration for services?

I'm building an ELK (Elasticsearch/Logstash/Kibana) stack using docker-compose/docker-machine. The plan is to deploy it to a DigitalOcean droplet and, if needed, use Swarm to scale it.
It works really well, but I'm a bit confused about where I should store configuration for the services (e.g. configuration files for Logstash, or the SSL certs for nginx).
At first, I just mounted a host directory as a volume. The problem with that is that all the configuration files have to be available on the Docker host, so I have to sync them to the DigitalOcean droplet.
Then I thought I had a very smart idea: create a data container with all the configuration, and let the other services access it using volumes_from:
config:
  volumes:
    - /conf
  build:
    context: .
    # this just copies the conf folder into the image
    dockerfile: /dockerfiles/config/Dockerfile

logstash:
  image: logstash:2.2
  volumes_from:
    - config
The problem with this approach became obvious quite fast: every time I change any configuration, I need to stop all containers that are linked to the config container, recreate the config image and container, and then start up the services again. Not great for uptime :(.
So, what's the best way? Ideally, the configuration files would be inside a container, so I can just ship it to wherever.
One common solution to this problem is to put a load balancer in front of the services. That way, when you want to change the configuration, you can start a new container, let the load balancer pick it up, and then stop the old container. No downtime, and it lets you reload the config.
Another option might be to use a named volume. Then you can just modify the contents of the named volume and any containers using it will see the new files. However if you are using multiple nodes with swarm, you'll need to use a volume driver that supports multi-host volumes.
Did you consider using the extension mechanism and overriding a settings file? Put a second docker-compose.override.yml in the same directory as the main compose file, or use explicit extension within the compose file. See
https://docs.docker.com/compose/extends/
That way you can integrate a configuration file in a transparent way, or control the parameters you want to change via environment variables that differ in the overriding composition.
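As a sketch (the file paths are assumptions), the override file could swap in a bind-mounted config without touching the main compose file. Compose picks up docker-compose.override.yml automatically:

# docker-compose.override.yml
logstash:
  volumes:
    - ./conf/logstash.conf:/etc/logstash/conf.d/logstash.conf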

Managing dev/test/prod environments with Docker

There seems to be sparse, conflicting information around on this subject. I'm new to Docker and need some help. I have several Docker containers to run an application; some require different config files for local development than they do for production. I don't seem to be able to find a neat way to automate this with Docker.
My containers that include custom config are Nginx and FreeRADIUS, and my code/data container runs Laravel, which therefore requires a .env.php file (L4.2 at the moment).
I have tried Docker's environment variables in docker-compose:
docker-compose.yml:
freeradius:
  env_file: ./env/freeradius.env
./env/freeradius.env:
DB_HOST=123.56.12.123
DB_DATABASE=my_database
DB_USER=me
DB_PASS=itsasecret
Except I can't pick those variables up in /etc/freeradius/mods-enabled/sql where they need to be.
How can I get Docker to run as a 'local' container with local config, or as a 'production' container with production config, without having to actually build different containers, and without having to attach to each container to configure it manually? I need it automated, as this is eventually to be used on quite a large production environment with a large cluster of servers running many instances.
Happy to learn Ansible if this is how people achieve this.
If you can't use environment variables to configure the application (which is my understanding of the problem), then the other option is to use volumes to provide the config files.
You can use either "data volume containers" (which are containers with the sole purpose of sharing files and directories) with volumes_from, or you can use a named volume.
Data Volume container
If you go with the "data volume container" route, you would create a container with all the environment configuration files. Every service that needs a file uses volumes_from: - configs. In dev you'd have something like:
configs:
  build: dev-configs/

freeradius:
  volumes_from:
    - configs
The dev-configs directory will need a Dockerfile to build the image, which will have a bunch of VOLUME directives for all the config paths.
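A minimal sketch of such a Dockerfile (the paths are illustrative; files placed at a path before its VOLUME directive get copied into new volumes):

# dev-configs/Dockerfile
FROM busybox
COPY freeradius/ /etc/freeradius/mods-enabled/
COPY nginx/ /etc/nginx/
VOLUME /etc/freeradius/mods-enabled
VOLUME /etc/nginx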
For production (and other environments) you can create an override file which replaces the configs service with a different container:
docker-compose.prod.yml:
configs:
  build: prod-configs/
You'll probably have other settings you want to change between dev and prod, which can go into this file as well. Then you run compose with the override file:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
You can learn more about this here: http://docs.docker.com/compose/extends/#multiple-compose-files
Named Volume
If you go with the "named volume" route, it's a bit easier to configure. On dev you create a volume with docker volume create thename and put some files into it. In your config you use it directly:
freeradius:
  volumes:
    - thename:/etc/freeradius/mods-enabled/sql
In production you'll either need to create that named volume on every host, or use a volume driver plugin that supports multi-host volumes (I believe Flocker is one example of this).
Runtime configs using Dockerize
Finally, another option that doesn't involve volumes is to use https://github.com/jwilder/dockerize, which lets you generate the configs at runtime from environment variables.
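A sketch of what that could look like for the FreeRADIUS sql module. The template keys mirror the env file from the question; the exact config keys and the entrypoint wiring are assumptions:

# sql.tmpl - a Go template rendered by dockerize at container start
server = "{{ .Env.DB_HOST }}"
radius_db = "{{ .Env.DB_DATABASE }}"
login = "{{ .Env.DB_USER }}"
password = "{{ .Env.DB_PASS }}"

The container command then becomes something like: dockerize -template /templates/sql.tmpl:/etc/freeradius/mods-enabled/sql freeradius -f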
