I'm building an ELK (Elasticsearch/Logstash/Kibana) stack using docker-compose/docker-machine. The plan is to deploy it to a DigitalOcean droplet and, if needed, use Swarm to scale it.
It works really well, but I'm a bit confused about where I should store configuration for the services (e.g. configuration files for Logstash, or the SSL certs for nginx).
At first, I just mounted a host directory as a volume. The problem with that is that all the configuration files have to be available on the Docker host, so I have to sync them to the DigitalOcean droplet.
Then I thought I had a very smart idea: create a data container with all the configuration, and let the other services access it using volumes_from:
config:
  volumes:
    - /conf
  build:
    context: .
    # this just copies the conf folder into the image
    dockerfile: /dockerfiles/config/Dockerfile

logstash:
  image: logstash:2.2
  volumes_from:
    - config
The problem with this approach became obvious quite fast: every time I change any configuration, I need to stop all containers that are linked to the config container, recreate the config image and container, and then start up the services again. Not great for uptime :(.
So, what's the best way? Ideally, the configuration files would be inside a container, so I can just ship it to wherever.
One common solution to this problem is to put a load balancer in front of the services. That way, when you want to change the configuration, you start a new container with the new config, the load balancer picks it up, and then you stop the old container. No downtime, and it gives you a way to roll out configuration changes.
Another option might be to use a named volume. Then you can just modify the contents of the named volume and any containers using it will see the new files. However, if you are using multiple nodes with Swarm, you'll need to use a volume driver that supports multi-host volumes.
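A minimal sketch of the named-volume approach, assuming the v2+ compose file format and a volume name I made up (logstash_conf):

version: "2"

volumes:
  # created beforehand with: docker volume create logstash_conf
  logstash_conf:
    external: true

services:
  logstash:
    image: logstash:2.2
    volumes:
      - logstash_conf:/conf

To change the configuration you then update the files inside the volume (for example from a short-lived container that mounts it) and restart or signal Logstash, instead of rebuilding an image.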
Did you consider using the extension mechanism and overriding a settings file? Put a second docker-compose.override.yml in the same directory as the main compose file, or use explicit extends within the compose file. See
https://docs.docker.com/compose/extends/
That way you could integrate a configuration file in a transparent way, or control the parameters you want to change via environment variables that are different in the overriding composition.
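As a sketch, reusing the logstash service from the question (the heap-size variable is just an illustration), an override file next to the base file could look like this:

docker-compose.yml (base):

logstash:
  image: logstash:2.2

docker-compose.override.yml:

logstash:
  volumes:
    - ./conf/dev:/conf
  environment:
    - LS_HEAP_SIZE=512m

docker-compose up merges docker-compose.yml and docker-compose.override.yml automatically; for other environments you point at a different file explicitly, e.g. docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d.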
Related
I have this website which uses Angular for the frontend and has a Node.js backend. The backend serves the Angular files and handles client calls.
As it is now, they are both packaged and deployed as one Docker image. Meaning, if I change the frontend, I also need to build the backend in order to create a new image. So it makes sense to separate them.
But if I create an image for the backend and frontend, how can the backend serve files from the frontend container?
Is this the right approach?
I think I would like to have the frontend inside a Docker image, so I can do stuff like roll back easily (which is not possible with Docker volumes, for example)!
Yes! Containerizing them so each has its own container is the way to go! This makes deploys and delivery faster and also separates the build pipelines, making the steps clearer to everyone involved.
I wouldn't bother having the backend serve the frontend files. I usually create my frontend image with a web server (e.g. nginx:alpine), since the frontend and backend can then be deployed separately to different machines or systems. And don't forget to use multi-stage builds to minimize image size.
But if you must do that, I guess you can use docker-compose to put them on one network and then forward requests for those static files from the backend to the frontend web server. (Just a hack; the more advanced people here probably have a better way to handle this :P)
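To make that concrete, a rough compose sketch of the recommended setup (image names, the backend port, and the /api prefix are assumptions, not from the question):

version: "3"
services:
  frontend:
    # nginx:alpine-based image that serves the built Angular files
    # and proxies /api to http://backend:3000 in its nginx.conf
    image: my-frontend
    ports:
      - "80:80"
    depends_on:
      - backend
  backend:
    # Node.js API image
    image: my-backend
    expose:
      - "3000"

Compose puts both services on one network, so the nginx config inside the frontend image can reach the backend by its service name (backend).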
I have something similar: an Ember.js app running in one Docker container that connects to a Node.js app running in its own container (not to mention the DB, which runs in a third container). It all works rather well.
I would recommend that you create your containers using docker-compose, which will automatically create the network so that both containers can talk to each other using their service names and ports (service_name:port).
Also, I set it up so that the code is mapped from a folder on my machine to a folder in the container. This allows me to easily change stuff, work with Git, etc.
Here is a snippet of my docker-compose file as an example:
version: "3"
services:
  ....
  ember_gui:
    image: danlynn/ember-cli
    container_name: ember_dev
    depends_on:
      - node_server
    volumes:
      - ./Ember:/myapp
    command: ember server
    ports:
      - "4200:4200"
      - "7020:7020"
      - "7357:7357"
Here I create an ember_gui service which creates a container named ember_dev based on an existing image from Docker Hub. Then it tells Docker that this container depends on another container that needs to be started first, which I do not show in the snippet but which is defined in the same docker-compose file (node_server). After that, I map the ./Ember directory to the /myapp folder in the container so that I can share the code. Finally, I start the Ember server and open some ports.
I have a docker-compose setup for development, and I need to replicate the same file for production or staging.
Currently, aside from volumes, ports, and environment, I am not quite sure what settings "may need" to be changed for production/staging.
To clarify:
I have to change volumes, because I usually mount a USB drive into my Docker container, e.g. d:/var/www
The issue with ports is that there may be other services using port 80 on my local machine, so I may need to change that.
environment is, of course, different for prod/dev (mainly authentication and database access).
Any more tips would be nice to know.
The exact list would depend on your environment/ops team requirements, but this is what seems to be useful besides ports/existing volumes:
Networks
The default network might not work for your prod environment.
As an example, your ops team might decide to put nginx/php-fpm/mariadb on different networks, as in the following example (https://docs.docker.com/compose/networking/#specify-custom-networks), or even use a pre-existing network.
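For illustration only (images and network names are examples), the custom-network layout from that page looks roughly like this:

version: "3"
services:
  nginx:
    image: nginx:alpine
    networks:
      - frontend
  php-fpm:
    image: php:7-fpm
    networks:
      - frontend
      - backend
  mariadb:
    image: mariadb:10
    networks:
      - backend

networks:
  frontend:
  backend: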
MySQL configs
These usually reside in a separate dir, i.e. /etc/my.cnf and /etc/my.cnf.d.
These configs are likely to be different between prod and dev.
I can't see them in your volume paths.
PHP-FPM 7
I haven't worked with php-fpm 7, but php-fpm 5 also had a separate folder of config files (/etc/php-fpm.conf and /etc/php-fpm.d) that is missing from your volumes. These files are also likely to differ once you handle even a moderate load (you'll need to configure the number of workers, timeouts, etc.).
Nginx
Same as for php-fpm: SSL settings, hostnames, and domain configurations are likely to be different.
Logging
Think on what logging driver might fit your needs best.
From here:
Docker includes multiple logging mechanisms to help you get
information from running containers and services. These mechanisms are
called logging drivers.
You can easily configure it in docker-compose; here's an example that brings up a dedicated fluentd container for logging:
version: "3"
services:
  randolog:
    image: golang
    command: go run /usr/src/randolog/main.go
    volumes:
      - ./randolog/:/usr/src/randolog/
    logging:
      driver: fluentd
      options:
        fluentd-address: "localhost:24224"
        tag: "docker.{{.ID}}"
  fluentd:
    build:
      context: ./fluentd/
    ports:
      - "24224:24224"
      - "24224:24224/udp"
You should follow the Use Compose in production documentation:
You probably need to make changes to your app configuration to make it
ready for production. These changes may include:
Removing any volume bindings for application code, so that code stays inside the container and can’t be changed from outside
Binding to different ports on the host
Setting environment variables differently, such as when you need to decrease the verbosity of logging, or to enable email sending
Specifying a restart policy like restart: always to avoid downtime
Adding extra services such as a log aggregator
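As a sketch of what several of those changes can look like together (the service name, ports, and environment variable are assumptions), a production override file might be:

docker-compose.prod.yml:

version: "3"
services:
  web:
    # no bind mount for application code, so the code baked into the image is used
    ports:
      - "80:8000"
    environment:
      # hypothetical variable; whatever your app reads to reduce log verbosity
      - LOG_LEVEL=warning
    restart: always

Run it with docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d so it layers on top of the base file.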
How can I change my prometheus.yml on the Prometheus container itself?
I want it to track:
1) my app server, a Node application in a Docker container
2) my Postgres DB
3) my Apache and nginx web servers
I do know that one has to change the prometheus.yml file and add targets.
Generic mechanisms to change Docker images are:
Mount your configuration file at the desired path.
Create a new image by copying the config file in via a new Dockerfile. Not recommended if you have to use different configs for different environments/apps.
Change the file on the running container if the application (Prometheus in this case) supports it. I know that some apps, like Kibana, do this. Good for debugging, not recommended for production environments.
It's hard to be precise with an answer given the lack of details, but in general you place your modified prometheus.yml file within the Docker context and modify your Dockerfile to add the instruction
COPY prometheus.yml /path/to/prometheus.yml
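To illustrate the first mechanism instead (mounting rather than baking in), a bind mount in docker-compose could look like this; /etc/prometheus/prometheus.yml is the path the official prom/prometheus image reads by default, but check the image you actually use:

prometheus:
  image: prom/prometheus
  volumes:
    - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
  ports:
    - "9090:9090"

You would then add your Node app, Postgres exporter, Apache, and nginx endpoints as scrape targets in the local prometheus.yml and restart (or reload) the container.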
I'm putting together a docker-compose.yml file to run the multiple services for a project I'm working on. This project has a Magento and Wordpress website residing under the same domain, with that "same domain" aspect requiring a very simple nginx container to route requests to either service.
So I have this architected as 4 containers (visualisation):
A "magento" container, using an in-house project-specific image.
A "wordpress" container, using an in-house project-specific image.
A "db" container running mysql:5.6, with the init db dumps mounted at /docker-entrypoint-initdb.d.
A "router" container running nginx:alpine with a custom config mounted at /etc/nginx/nginx.conf. This functions as a reverse-proxy with two location directives set up. location / routes to "magento", and location /blog routes to "wordpress".
I want to keep things simple and avoid building unnecessary custom images, but in the context of the "router" I'm not sure what I'm doing is the best approach, or if that would be better off as a project-specific image.
I'm leaning toward my current approach of mounting a custom config into the nginx:alpine container, because the configuration is specific to the stack that's running – it wouldn't make sense as a single standalone container.
So, the two methods: without a custom image, we have the following in docker-compose.yml:
router:
  image: nginx:alpine
  networks:
    - projectnet
  ports:
    - "80:80"
  volumes:
    - "./router/nginx.conf:/etc/nginx/nginx.conf"
Otherwise, we have a Dockerfile containing the following, as I've seen suggested across the internet and in other StackOverflow responses.
FROM nginx:alpine
ADD nginx.conf /etc/nginx/
Does anybody have arguments for/against either approach?
If you 'bake in' the nginx config (your second approach)
ADD nginx.conf /etc/nginx/
it makes your docker containers more portable - i.e. they can be downloaded and run on any server capable of running docker and it will just work.
If you use option 1, mounting the config file at run time, then you are transferring one of your dependencies to outside of your container. This makes it a dependency that must be managed outside of docker.
In my opinion it is best to put as many dependencies inside your dockerfiles as possible because it makes them more portable and more automated (great for CI Pipelines for example)
There are reasons for mounting files at run time, and these are usually centred around environment-specific settings (although these can largely be overcome within Docker too) or 'sensitive' files that application developers shouldn't or couldn't have access to, for example SSL certificates, database passwords, etc.
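For example, picking up the router service from the question, sensitive files can stay outside the image as read-only bind mounts (the certificate path here is a placeholder):

router:
  image: nginx:alpine
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - "./router/nginx.conf:/etc/nginx/nginx.conf:ro"
    - "/etc/ssl/private/example.pem:/etc/nginx/certs/example.pem:ro"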
There seems to be sparse, conflicting information around on this subject. I'm new to Docker and need some help. I have several Docker containers to run an application; some require different config files for local development than they do for production. I don't seem to be able to find a neat way to automate this with Docker.
My containers that include custom config are Nginx and FreeRADIUS, and my code/data container is Laravel, which therefore requires a .env.php file (L4.2 at the moment).
I have tried Docker's environment variables in docker-compose:
docker-compose.yml:
freeradius:
  env_file: ./env/freeradius.env

./env/freeradius.env:
DB_HOST=123.56.12.123
DB_DATABASE=my_database
DB_USER=me
DB_PASS=itsasecret
Except I can't pick those variables up in /etc/freeradius/mods-enabled/sql where they need to be.
How can I get Docker to run as a 'local' container with local config, or as a 'production' container with production config, without having to actually build different containers, and without having to attach to each container to manually configure them? I need this automated, as it will eventually be used in quite a large production environment with a large cluster of servers running many instances.
Happy to learn Ansible if this is how people achieve this.
If you can't use environment variables to configure the application (which is my understanding of the problem), then the other option is to use volumes to provide the config files.
You can use either "data volume containers" (which are containers with the sole purpose of sharing files and directories) with volumes_from, or you can use a named volume.
Data Volume container
If you go with the "data volume container" route, you would create a container with all the environment configuration files. Every service that needs a file uses volumes_from: - configs. In dev you'd have something like:
configs:
  build: dev-configs/

freeradius:
  volumes_from:
    - configs
The dev-configs directory will need a Dockerfile to build the image, which will have a bunch of VOLUME directives for all the config paths.
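A minimal sketch of such a Dockerfile, with example paths for the services mentioned in the question (adjust them to wherever your services actually read their config):

FROM busybox
# copy the dev config files into the image at the paths the services expect
COPY freeradius/ /etc/freeradius/mods-enabled/
COPY nginx/ /etc/nginx/conf.d/
# one VOLUME per config path so volumes_from exposes them to the other services
VOLUME /etc/freeradius/mods-enabled
VOLUME /etc/nginx/conf.d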
For production (and other environments) you can create an override file which replaces the configs service with a different container:
docker-compose.prod.yml:
configs:
  build: prod-configs/
You'll probably have other settings you want to change between dev and prod, which can go into this file as well. Then you run compose with the override file:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
You can learn more about this here: http://docs.docker.com/compose/extends/#multiple-compose-files
Named Volume
If you go with the "named volume" route, it's a bit easier to configure. On dev you create a volume with docker volume create thename and put some files into it. In your config you use it directly:
freeradius:
  volumes:
    - thename:/etc/freeradius/mods-enabled/sql
In production you'll either need to create that named volume on every host, or use a volume driver plugin that supports multi-host volumes (I believe Flocker is one example of this).
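One way to create and populate that volume on a host (just one approach; any container that mounts the volume can copy files in, and the local ./configs/sql path is a placeholder):

docker volume create thename
docker run --rm \
  -v thename:/dest \
  -v "$(pwd)/configs/sql":/src:ro \
  alpine cp -a /src/. /dest/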
Runtime configs using Dockerize
Finally, another option that doesn't involve volumes is to use https://github.com/jwilder/dockerize, which lets you generate the configs at runtime from environment variables.
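A hedged sketch of that approach for the freeradius example (the template path, and running freeradius in the foreground with -f, are assumptions; check the dockerize README for exact usage):

freeradius:
  # the image must have dockerize installed and ship a /templates/sql.tmpl
  build: ./freeradius/
  env_file: ./env/freeradius.env
  command: dockerize -template /templates/sql.tmpl:/etc/freeradius/mods-enabled/sql freeradius -f

Inside sql.tmpl you reference the variables as Go template expressions, e.g. {{ .Env.DB_HOST }}, so the env_file from the question drives the generated config.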