Docker Compose Dev and Production Environments Best Workflow

I've built a simple Docker Compose project as a development environment. I have PHP-FPM, Nginx, MongoDB, and Code containers.
Now I want to automate the process and deploy to production.
The docker-compose.yml can be extended and can define multiple environments. See https://docs.docker.com/compose/extends/ for more information.
However, each of my containers has its own Dockerfile, and the dev environment needs more packages than production.
The main question is: should I use separate Dockerfiles for dev and prod and manage them via docker-compose.yml and production.yml?
Separate Dockerfiles are the easy approach, but they lead to code duplication.
The other option is to use environment variables and handle them somehow from a bash script (maybe as the entrypoint?).
I am searching for other ideas.

According to the official docs:
... you’ll probably want to define a separate Compose file, say
production.yml, which specifies production-appropriate configuration.
Note: The extends keyword is useful for maintaining multiple Compose
files which re-use common services without having to manually copy and
paste.
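For illustration, a minimal sketch of that layout (the service and file names here are invented): a shared common.yml holds the base service definition, and production.yml extends it with production-only settings.

    # common.yml - shared base definition
    web:
      build: .
      ports:
        - "80:80"

    # production.yml - extends the base and adds production-only settings
    web:
      extends:
        file: common.yml
        service: web
      environment:
        - APP_ENV=production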

In docker-compose version >= 1.5.0 you can use environment variables; maybe this suits you?
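For example (this relies on the 1.5.0+ variable substitution; the image tag and variable names are just placeholders), values from the shell environment are interpolated into the Compose file, so one file can serve both environments:

    # docker-compose.yml - ${...} values come from the shell environment
    web:
      image: "myapp:${TAG}"
      environment:
        - APP_ENV=${APP_ENV}

Running TAG=1.2 APP_ENV=production docker-compose up -d would then start the production variant.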

If the packages needed for development aren't too heavy (i.e. the image size isn't significantly bigger) you could just create Dockerfiles that include all the components and then decide whether to activate them based on the value of an environment variable in the entrypoint.
That way the main docker-compose.yml would provide the production environment, while development.yml would just add the correct environment variable value where needed.
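A minimal sketch of that layout (the INSTALL_DEV_TOOLS variable is an invented name - the entrypoint script would check it and enable the extra packages only when it is set):

    # docker-compose.yml - production defaults
    app:
      build: .
      environment:
        - INSTALL_DEV_TOOLS=0

    # development.yml - only flips the flag the entrypoint looks at
    app:
      environment:
        - INSTALL_DEV_TOOLS=1

For development you would then combine the files, e.g. docker-compose -f docker-compose.yml -f development.yml up, while production just uses the default file.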

In this situation it might be worth considering using an "onbuild" image to handle the commonalities among environments, then using separate images to handle the specifics. Some official images have onbuild versions, e.g., Node. Or you can create your own.
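A hedged sketch of that idea (the PHP base and image names are assumptions, not from the question): the onbuild base captures the steps common to every environment, and each environment's Dockerfile stays tiny.

    # Dockerfile of the shared base, pushed e.g. as myorg/app-base:onbuild
    FROM php:7-fpm
    ONBUILD COPY . /var/www/html
    ONBUILD WORKDIR /var/www/html

    # Dockerfile.dev - the ONBUILD steps run first, then dev-only packages are added
    FROM myorg/app-base:onbuild
    RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*

    # Dockerfile.prod - just the common steps, nothing extra
    FROM myorg/app-base:onbuild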

Related

DevOps and environment variables with Docker images and containers

I am a newbie with Docker and want to understand how to deal with environment variables in images/containers and how to configure CI/CD pipelines.
First I need the big picture before deep-diving into commands etc. I searched a lot on the Internet, but in most cases I only found detailed commands for creating, building, and publishing images.
I have a .net core web application. As all of you know there are appsettings.json files for each environment, like appsettings.development.json or appsettings.production.json.
During the build you can pass the environment information so .NET builds the application with the specified environment variables, like connection strings.
I can define the same steps in the Dockerfile and pass the environment as a parameter or define it as variables. That part works fine.
My question is: do I have to create separate images for each of my environments? If not, how can I create one image and use it to create containers for all of my environments? What is the best practice?
I hope I am understanding the question correctly. If the environments run the same framework, then no, you don't need separate images. In each project, add the necessary Docker files and then update the project's docker-compose.yml - it will then create an image for that project. Using Docker Desktop (if you prefer it over the CLI) you can start and stop your containers.
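To illustrate the single-image approach the question asks about (a sketch assuming an ASP.NET Core app, where ASPNETCORE_ENVIRONMENT selects the matching appsettings.*.json at runtime; the image name and connection string key are placeholders):

    # docker-compose.yml - development
    web:
      image: myorg/webapp:1.0          # the same image is used in every environment
      environment:
        - ASPNETCORE_ENVIRONMENT=Development

    # docker-compose.prod.yml - production override
    web:
      environment:
        - ASPNETCORE_ENVIRONMENT=Production
        - ConnectionStrings__Default=...   # real value injected at deploy time, not baked into the image

The image is built once; only the runtime environment variables differ per environment.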

Where are you supposed to store your docker config files?

I'm new to docker so I have a very simple question: Where do you put your config files?
Say you want to install MongoDB. You install it, but then you need to create/edit a config file. I don't think such files belong on GitHub since they're used for deployment, though it's not a bad place to store them.
I was just wondering if docker had any support for storing such config files so you can add them as part of running an image.
Do you have to use swarms?
Typically you'll store the configuration files on the Docker host and then use volumes to bind mount your configuration files in the container. This allows you to separately manage the configuration file from the running containers. When you make a change to the configuration, you can just restart the container.
You can then use a configuration management tool like Salt, Puppet, or Chef to manage copying/storing the configuration file onto the Docker host. Things like passwords can be managed by the secrets capabilities of the tool. When set up this way, changing a configuration file just means you need to restart your container and not build a new image.
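As a small sketch of that bind-mount approach, using the MongoDB example from the question (the paths and image tag are just examples):

    mongodb:
      image: mongo:3.2
      command: ["mongod", "--config", "/etc/mongod.conf"]
      volumes:
        - ./config/mongod.conf:/etc/mongod.conf:ro   # config lives on the host, mounted read-only

After editing ./config/mongod.conf on the host, restarting the container is enough; no new image has to be built.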
Yes, in most cases you definitely want to keep your Dockerfiles in version control. If your org (or you personally) use GitHub for this, that's fine, but stick them wherever your other repos are. One of the main ideas in DevOps is to treat infrastructure as code. In fact, one of the main benefits of something like a Dockerfile (or a chef cookbook, or a puppet file, etc) is that it is "used for deployment" but can also be version-controlled, meaningfully diffed, etc.

Using docker-compose vs codeship-services in my CI pipeline

I am building an app that has a couple of microservices and trying to prototype a CI/CD pipeline using Codeship and Docker.
I am a bit confused with the difference between using codeship-services.yml and docker-compose.yml. Codeship docs say -
By default, we look for the filename codeship-services.yml. In its
absence, Codeship will automatically search for a docker-compose.yml
file to use in its place.
Per my understanding, docker-compose could be more appropriate in my case as I'd like to spin up containers for all the microservices at the same time for integration testing. codeship-services.yml would have helped if I wanted to build my services serially rather than in parallel.
Is my understanding correct?
You can use the codeship-services.yml in the same manner as the docker-compose.yml. So you can define your services and spin up several containers via the link key.
I do exactly the same in my codeship-services.yml. I do some testing on my frontend service, and that service spins up all dependent services (backend, DB, etc.) when I run it via codeship-steps.yml, just like with docker-compose.yml.
At the beginning it was a bit confusing for me to have two files which are nearly the same. I actually contacted Codeship support with that question, and the answer was that it could be the same file (because all features unavailable in the compose file are just ignored, see here), but in almost all cases they had seen, it ended up being easier to have two separate files: one for CI/CD and one for running docker-compose.
And the same turned out to be true for me as well, because I need a lot of services which are only for CI/CD, like deployment containers or special test containers that just run cURL tests, for example.
I hope that helps and doesn't confuse you more ;)
Think of codeship-services.yml as a superset of docker-compose.yml, in the sense that codeship-services.yml has additional options that Docker Compose doesn't provide. Other than that, they are totally identical. Both build images the same way, and both can start all containers at once.
That being said, I agree with Moema's answer that it is often better to have both files in your project and optimize each of them for their environment. Caching, for example, can only be configured in codeship-services.yml. For our images, caching makes a huge difference for build times, so we definitely want to use it. And just like Moema, we need a lot of auxiliary tools on CI that we don't need locally (AWS CLI, curl, test frameworks, ...). On the other hand, we often run multiple instances of a service locally, which is not necessary on Codeship.
Having both files in our projects makes it much easier for us to cover the different requirements of CI and local development.
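For illustration only (the service names are invented, and the exact option names should be checked against the Codeship docs), a trimmed-down pair of Codeship files could look like this:

    # codeship-services.yml
    app:
      build:
        image: myorg/app             # name Codeship tags the built image with
        dockerfile: Dockerfile
      cached: true                   # Codeship-specific caching option mentioned above
      links:
        - db
    db:
      image: postgres:9.6

    # codeship-steps.yml
    - name: tests
      service: app
      command: ./run_tests.sh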

Is Docker Compose suitable for production?

I like the idea of modularizing an application into containers (db, frontend, backend, ...). However, according to the Docker docs, "Compose is great for development, testing, and staging environments".
That sentence says nothing about the production environment, which is why I am confused.
Is it better to use a Dockerfile to build a production image from scratch and install the whole LAMP stack (etc.) there?
Or is it better to build the production environment with docker-compose.yml? Is there any reason (overhead, linking etc.) that Docker doesn't explicitly say that Compose is great for production?
Really you need to define "production" in your case.
Compose simply starts and stops multiple containers with a single command. It doesn't add anything to the mix you couldn't do with regular docker commands.
If "production" is a single docker host, with all instances and relationships defined, then compose can do that.
But if instead you want multiple hosts and dynamic scaling across the cluster then you are really looking at swarm or another option.
Just to extend what @ChrisSainty already mentioned: Compose is just an orchestration tool; you can use your own images, built from your own Dockerfiles, with your Compose settings on a single host. But note that it is possible to run Compose against a Swarm cluster, as it exposes the same API as a single Docker host.
In my opinion it is an easy way to implement a microservice architecture with containers, tailoring each service while keeping it efficient and highly available. In addition to that, I recommend checking the official documentation on good practices for using Compose in production environments.
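As a hedged illustration of those good practices (the service name and values are placeholders), a production override typically removes host code mounts, pins ports, and adds a restart policy:

    # docker-compose.prod.yml - production-only adjustments
    web:
      restart: always                # recover automatically if the container dies
      ports:
        - "80:80"
      environment:
        - APP_ENV=production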

Docker compose in production?

I plan to use Docker to build my dev and production environments. I'm building a Django-based app.
In dev I use docker-compose to manage all the local containers. It's a nice and convenient solution. I run Django, 3 Celery queues, RabbitMQ, and 2 PostgreSQL DBs.
But my production environment is quite different. I need to run gunicorn and nginx. Moreover, the DBs will run on AWS RDS. And of course the Django app will require more, like a different settings file or more env vars.
I'm wondering how to divide this up. Should I use docker-compose there as well? That would require separate files for dev and prod, and maybe more in the future for staging etc. If yes, how do I deploy it? Using Jenkins: pull, then restart everything with compose?
Or maybe I should use Ansible to run docker commands directly? But then I have no confidence that my dev environment is the same as live, and it's harder to predict its behaviour.
I like the idea of running compose files in all environments, but I'm not sure if maintaining multiple files for different environments is a good idea. Dev needs fewer env vars and less configuration. I can use an env file to set all of them in production. But should I keep my live settings in the same repo? Previously I set all env vars while provisioning, and that was a separate process. Now it looks like provisioning and deployment are the same thing? Maybe that's the way with Docker?
Using multiple Compose files (http://docs.docker.com/compose/extends/#multiple-compose-files) you can keep all the common stuff in docker-compose.yml and use docker-compose.prod.yml to add extra services and change links, environment variables, and ports.
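For the setup described in the question, a rough sketch (the module path, ports and file names are assumptions): docker-compose.yml keeps Django, Celery and RabbitMQ as in dev, while docker-compose.prod.yml swaps in gunicorn, adds nginx, and points at RDS via an env file kept out of the repo.

    # docker-compose.prod.yml - overrides and additions for production
    web:
      command: gunicorn myproject.wsgi:application --bind 0.0.0.0:8000
      env_file: .env.production      # RDS credentials etc., managed outside the repo
    nginx:
      image: nginx:1.17
      ports:
        - "80:80"
      links:
        - web

On the server (e.g. from a Jenkins job) you would then run: docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d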
