Docker compose in production? - docker

I plan to use Docker to build my dev and production environments. I'm building a Django-based app.
On dev I use docker-compose to manage all local containers. It's a nice and convenient solution. I run Django, 3 Celery queues, RabbitMQ, and 2 PostgreSQL DBs.
But my production environment is quite different. I need to run gunicorn and nginx, and the DBs will run on AWS RDS. Of course the Django app will require more stuff, like a different settings file and more env vars.
I'm wondering how to divide this. Should I use docker-compose there as well? That would require separate files for dev and prod, and maybe more in the future for staging etc. If yes, how do I deploy it? Using Jenkins: pull, then restart everything using compose?
Or maybe I should use Ansible to run docker commands directly? But then I have no confidence that my dev environment is the same as live, and it's harder to predict its behaviour.
I like the idea of running compose files in all environments, but I'm not sure maintaining multiple files for different environments is a good idea. Dev requires fewer env vars and less configuration. I can use an env file to set all of them in production, but should I keep my live settings in the same repo? Previously I set all env vars while provisioning, and that was a separate process. Now it looks like provisioning and deployment are the same thing? Maybe that's just the way with Docker?

Using http://docs.docker.com/compose/extends/#multiple-compose-files you can keep all the common stuff in a docker-compose.yml and use docker-compose.prod.yml to add extra services, change links, environment, and ports.
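For example, a minimal split could look like the sketch below; the service names, settings module and ports are placeholders, not taken from the question:

# docker-compose.yml - everything common to all environments
version: "3"
services:
  web:
    build: .
    env_file: .env
  worker:
    build: .
    command: celery -A myproject worker

# docker-compose.prod.yml - production-only additions and overrides
version: "3"
services:
  web:
    command: gunicorn myproject.wsgi:application --bind 0.0.0.0:8000
    environment:
      - DJANGO_SETTINGS_MODULE=myproject.settings.production
  nginx:
    image: nginx
    ports:
      - "80:80"

# run in production with both files applied
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d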

Related

Workflow: Docker Services from Development to Production - Best Practice

I have been developing a web application using Python, Flask, Docker (Compose) and git/GitHub, and I'm getting to the point where I need to figure out the best way/workflow to bring it to production. I have read some articles but I'm not sure what counts as best practice among the different approaches.
My current setup is purely development oriented:
Local Docker using docker-compose to build the various service images (such as db, backend workers, webapp (Flask & uWSGI), nginx).
using .env file for docker-compose to pass configuration to the services
Source code is bind mounted from the local docker host
db data is stored in a named volume
Using local git for source control (I have connected it to a GitHub repository, but haven't been using it much since I am the only one currently developing the application)
From what I understand the steps to production could be the following:
Implement docker-compose override to distinguish between dev and prod
Implement Dockerfile Multistage builds to create prod images which include the source code in the image and do not include dev dependencies
Tag and push the production images to a registry (Docker Hub, Google?), or is it better to push the git repo to GitHub and build from there?
[do security scans of the prod images]
deploy/pull the prod images from the registry (or build from github) on a service like GKE for instance
Is this a common way to do it? Am I missing something?
How would I best go about using an integration/staging environment between dev and prod, so that I can first test new prod builds or debug prod images in integration?
Does GKE for instance offer an easy way to setup an integration environment? Or could I use the Docker installation on my NAS for that?
Any best practices for backing up production (like db data most importantly)?
Thanks in advance!

How to use docker in the development phase of a devops life cycle?

I have a couple of questions related to the usage of Docker in a development phase.
I am going to propose three different scenarios of how I think Docker could be used in a development environment. Let's imagine that we are creating a REST API in Java and Spring Boot. For this I will need a MySQL database.
The first scenario is to have a docker-compose for development with the MySQL container and a production docker-compose with MySQL and the Java application (jar) in another container. To develop I launch the docker-compose-dev.yml to start only the database. The application is launched and debugged using the IDE, for example, IntelliJ Idea. Any changes made to the code, the IDE will recognize and relaunch the application by applying the changes.
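A docker-compose-dev.yml for this first scenario could be as small as a single database service; here is a sketch using assumed names and credentials:

version: "3"
services:
  db:
    image: mysql:8.0
    ports:
      - "3306:3306"              # exposed so the app started from the IDE can reach it
    environment:
      MYSQL_DATABASE: myapp      # placeholder database name
      MYSQL_ROOT_PASSWORD: secret
    volumes:
      - db-data:/var/lib/mysql   # keep data across container restarts
volumes:
  db-data: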
The second scenario is to have, for both the development and production environments, a docker-compose file with the database and application containers. That way, every time I make a change in the code, I have to rebuild the image so that the changes are loaded into it and the containers are launched again. This scenario may be the most typical and widely used for development with Docker, but it seems very slow due to the need to rebuild the image every time there is a change.
The third scenario is a mixture of the previous two, with two docker-compose files. The development docker-compose contains both containers, but with mechanisms that allow live reload of the application, mapping volumes and using, for example, Spring Dev Tools. In this way, the containers are launched and, in case of any change in the files, the application container will detect that there is a change and will be relaunched. For production, a docker-compose file would be created simply with both containers, but without the live-reload functionality. This would be the ideal scenario, in my opinion, but I think it is very dependent on the technologies used, since not all of them allow live reload.
The questions are as follows.
Which of these scenarios is the most typical when using Docker for development?
Is scenario 1 well raised? That is, dockerize only external services, such as databases, queues, etc. and perform the development and debugging of the application with the IDE without using Docker for it.
The doubts and the scenarios that I raise came up after I noticed the problem that scenario 2 has. With each change in the code, having to rebuild the image and start the containers again is a significant waste of time. In short, a question would be: how do I avoid this?
Thanks in advance for your time.
NOTE: It may be a question subject to opinion, but it would be nice to know how developers usually deal with these problems.
Disclaimer: this is my own opinion on the subject as asked by Mr. Mars. Even though I did my best to back my answer with actual sources, it's mostly based on my own experience and a bit of common sense.
Which of these scenarios is the most typical when using Docker for development?
I have seen all 3 scenarios in several projects, each with their advantages and drawbacks. However I think scenario 3, with a Docker Compose setup allowing dynamic code reload, is the most advantageous in terms of flexibility and consistency:
Dev and Prod Docker Compose files are close matches, meaning the Dev environment is as close as possible to the Prod environment
You do not have to rebuild the image constantly while developing, but it's easy to do when you need to
Lots of technologies support this scenario, such as Spring Dev Tools as you mentioned, but also Python Flask, etc.
You can easily leverage Docker Compose extends, a.k.a. the configuration sharing mechanism (also possible with scenario 2)
Is scenario 1 well raised? That is, dockerize only external services, such as databases, queues, etc. and perform the development and debugging of the application with the IDE without using Docker for it.
Scenario 1 is quite common, but the IDE environment will probably differ from the one in the Docker container (and it would be difficult to keep the versions of every lib, dependency, etc. matched between the IDE environment and the Docker environment). It would also probably require an intermediate step between Dev and Production to actually test the Docker image built after Dev is working, before going to Production.
In my own experience, doing this is great when you do not want to deal too much with Docker while actually doing dev, and/or the language or technology you use is not suited to dynamic reload as described in scenario 3. But in the end it only adds drift between your environments and more complexity between your Dev and Prod deployment methods.
having to rebuild the image and start the containers again is a significant waste of time. In short, a question would be: How to avoid this?
Besides the scenarios you describe, you have ways to decently (even drastically) reduce image build time by leveraging the Docker build cache and designing your Dockerfile accordingly. For example, a Python application would typically copy code as the last (or almost last) step of the build to avoid invalidating the cache, and for a Java app it would be possible to split the code so as to avoid compiling the entire application every time a bit of code changes - that depends on your actual setup.
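As a sketch of that layering idea for a Python app (the file names are just the usual conventions, adapt them to your project):

FROM python:3.8-slim
WORKDIR /app
# dependencies first: this layer stays cached as long as requirements.txt is unchanged
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# code last: editing code only invalidates this final, cheap layer
COPY . .
CMD ["python", "app.py"]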
I personally use a workflow roughly matching scenario 3 such as:
a docker-compose.yml file corresponding to my Production environment
a docker-compose.dev.yml which overrides some aspects of my main Docker Compose file, such as mounting code from my machine, adding dev-specific flags to commands, etc. (see the sketch after this list) - it would be run like this:
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
but it would also be possible to have a docker-compose.override.yml, which Docker Compose picks up by default for overrides
in some situations I would have to use other overrides for specific cases, such as docker-compose.ci.yml on my CI, but usually the main Docker Compose file is enough to describe my Prod environment (and if that's not the case, docker-compose.prod.yml does the trick)
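A minimal sketch of such a dev override, assuming a service called app defined in the main docker-compose.yml (the service name, paths and flags are illustrative):

# docker-compose.dev.yml
version: "3"
services:
  app:
    volumes:
      - ./src:/app/src                            # mount local code into the container
    environment:
      - DEBUG=1                                   # dev-specific flag
    command: flask run --host=0.0.0.0 --reload    # run the dev server with live reload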
I've seen them all used in different scenarios. There are some gotchas to avoid:
Applications inside of a container shouldn't depend on something running outside of a container on the host. So all of your dependencies should be containerized first.
File permissions with host volumes can be complicated depending on your version of docker. Some of the newer Docker Desktop installs automatically handle uid mappings, but if you develop directly on Linux you'll need to ensure the containers run as the same uid as your host user.
Avoid making changes inside the container if they aren't mapped into a host volume, since those changes will be lost when the container is recreated.
Looking at each of the options, here's my assessment of each:
Containerizing just the DB: This works well when developers already have a development environment for the language of choice, and there's no risk of external dependencies creeping in, e.g. a developer upgrading their JDK install to a newer version than the image is built with. It follows the idea of containerizing the dependencies first, while also giving developers the familiar IDE integration with their application.
Rebuilding the Image for Every Change: This tends to be the least ideal for developer workflow, but the quickest to implement when you're not familiar with the tooling. I'll give a 4th option that I consider an improvement to this.
Everything in a container, volume mounts, and live reloading: This is the most complicated to implement, and requires the language itself to support things like live reloading. However, when they do, it is nearly seamless for the developers and gets them up to speed on a new project quickly without needing to install any other tooling to get started.
Rebuild the app in the container with volume mounts: This is a halfway point between 2 and 3. When you don't have live reloading, you likely need to recompile or restart the interpreter to see any change. Rather than rebuilding the image, I put the recompile step in the entrypoint of a development image. I mount the code into the container and run a full JDK instead of just a JRE (or whatever compiler is needed). I use named volumes for any dependency caches so they don't need to be downloaded on every restart. The way to see the changes is then to restart that one container. The steps are identical to those for a compiled binary outside of a container - stop the old service, recompile, and restart the service - but now it happens inside a container that should have the same tools used when building the production image.
For option 4, I tend to use a multi-stage build that has stages for build, develop, and release. The build stage pulls in the code and compiles it, the develop stage is the same base image as build but with an entrypoint that does the compile/run, and the release stage copies the result of the build stage into a minimal runtime. Developers then have a compose file for development that creates the development image and runs that with volume mounts and any debugging ports opened.
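A rough sketch of such a multi-stage Dockerfile for a Maven-based Java project (the stage names, base images and paths are assumptions for illustration, not the answerer's actual files):

# build stage: compile the application
FROM maven:3-openjdk-11 AS build
WORKDIR /app
COPY pom.xml .
RUN mvn dependency:go-offline          # cache dependencies in their own layer
COPY src ./src
RUN mvn package -DskipTests

# develop stage: same toolchain, but recompile and run on every container start
FROM build AS develop
ENTRYPOINT ["sh", "-c", "mvn package -DskipTests && java -jar target/*.jar"]

# release stage: minimal runtime containing only the built artifact
FROM openjdk:11-jre-slim AS release
COPY --from=build /app/target/*.jar /app/app.jar
ENTRYPOINT ["java", "-jar", "/app/app.jar"]

The development compose file would then build with target: develop and mount the source plus a named Maven cache volume, while the production image is built from the release stage.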
First of all, docker-compose is meant for the development and testing phases, not for production. An example:
With a minimal and basic docker-compose setup, all your containers run on the same machine. For development purposes that is OK, but in production, putting all the apps on just one machine is a risk.
Official link: https://docs.docker.com/compose/production/
We will assume
01 java api
01 mysql database
01 web application that needs the api
all of these applications are already in production
Quick Answer
If you need to fix or add a new feature to the Java API, I advise you to use an IDE like Eclipse or IntelliJ IDEA. Why?
Because Java needs compilation.
Compiling inside a Docker container will take more time due to Maven dependencies
The IDE has code auto-completion
etc
In this development phase, Docker helps you with one of its most powerful features: "bring the production containers to your localhost". Yeah, in this case, docker-compose.yml is the best option because with one file you can start everything you need: the MySQL database and the web app, but not your Java API. Open your Java API with your favorite IDE.
Anyway, if you want to use Docker to "develop", you just need the Dockerfile and to perform a docker build ... whenever you need to run your source code on your localhost.
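For instance (the image name and port are placeholders):

docker build -t my-api:dev .
docker run --rm -p 8080:8080 my-api:dev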
Basic Devops Life cycle with docker
The developer pushes source code changes using git
Your continuous integration (CI) platform detects this change and performs
docker build ... (in this step, unit tests are triggered)
docker push to your private hub. The image is uploaded in this step and will be used for deployments to other servers.
docker run or container deploy to the next environment: testing
Human testers, Selenium or other automation start their work
If no errors are detected, your CI performs a final deploy of the uploaded image to your production environment. No docker build is required, just deploy or docker run.
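As a sketch, these CI steps could boil down to something like the following commands run by the CI agent (the registry host, image name and environment hosts are placeholders):

# build and test (unit tests run as part of the image build)
docker build -t registry.example.com/my-api:${GIT_COMMIT} .
# upload the image for later deployments
docker push registry.example.com/my-api:${GIT_COMMIT}
# deploy to the testing environment
ssh test-host "docker pull registry.example.com/my-api:${GIT_COMMIT} && docker run -d -p 8080:8080 registry.example.com/my-api:${GIT_COMMIT}"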
Some Tips
Docker features are awesome but sometimes add too much complexity. So avoid volumes, hard-disk dependencies, local logs and complex configurations. If you use volumes, what will happen when your containers are on different hosts?
Java and Node.js are stable platforms, and your REST API or web apps do not need crazy configurations: just a Maven build and java -jar ..., or npm install and npm run start.
For logs you could use https://www.graylog.org/, Google Stackdriver or another log management service.
And, like Heroku, avoid depending on the local disk as much as possible. On the Heroku platform disks are disposable, meaning they disappear when the app is restarted. So instead of local file storage, you could use an external file storage service with a lot of functionality.
With these approaches, your containers can be deployed anywhere in a simple way.
I'm using something similar to your 3rd scenario for my web development, but it is Node-based. So I have 3 docker-compose files (actually 4; one is a base holding all the stuff common to the others) for the dev, staging and production environments.
The staging docker-compose config is similar to the production config, excluding SSL, ports and other things that would prevent using it locally.
I have a separate container for each service (like DB and queue), and for dev I also have additional dev DB and queue containers, mostly for running auto-tests. In the dev environment, all sources are mounted into the containers, so it is possible to use the IDE/editor of choice outside the container and see the changes inside.
I use Supervisor to manage my workers inside the worker container and have some commands to restart my workers manually when I need to. Maybe you can have something similar to recompile/restart your Java app. Or, if you have an idea of how to organize detection of source code changes and auto-reloading of your app, that could be the best variant. By the way, you gave me an idea to research something similar suitable for my own case.
For the staging and production environments, my source code is included inside the corresponding container using the production Dockerfile. And I have some commands to restart all the stuff for the environment I need; this typically includes rebuilding the containers, but because of the Docker cache it doesn't take much time (about 20 seconds). And taking into account that switching between environments is not a frequent operation, I feel quite comfortable with this.
Production docker-compose config is used only during deployment because it enables SSL, proper ports and has some additional production stuff.
Update for details on backend app restarting using Supervisor:
This is how I use it in my projects:
A part of my Dockerfile that installs Supervisor:
FROM node:10.15.2-stretch-slim
RUN apt-get update && apt-get install -y \
# Supervisor
supervisor \
...
...
# Configs for services/workers managed by supervisor
COPY some/path/worker-configs/*.conf /etc/supervisor/conf.d/
This is an example of one of Supervisor configs for a worker:
[program:myWorkerName]
command=/usr/local/bin/node /app/workers/my-worker.js
user=root
numprocs=1
stopsignal=INT
autostart=true
autorestart=true
startretries=10
In your case, the command in this example should run your Java app instead.
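For a Java app the program section might look roughly like this (the jar path is hypothetical):

[program:myJavaApi]
command=/usr/bin/java -jar /app/target/my-api.jar
numprocs=1
stopsignal=INT
autostart=true
autorestart=true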
And this is an example of command aliases for conveniently managing Supervisor from outside the containers. I'm using a Makefile as a universal runner for all commands, but this could be something else.
# Used to run all workers
su-start:
	@docker exec -t MY-WORKERS-CONTAINER-NAME supervisorctl start all
# Used to stop all workers
su-stop:
	@docker exec -t MY-WORKERS-CONTAINER-NAME supervisorctl stop all
# Used to restart all workers
su-restart:
	@docker exec -t MY-WORKERS-CONTAINER-NAME supervisorctl restart all
# Used to check status of all workers
su-status:
	@docker exec -t MY-WORKERS-CONTAINER-NAME supervisorctl status
As I described above, these Supervisor commands need to be run manually, but I think it is possible to implement another Node-based worker, or some watcher outside of the worker container, that detects file system changes in the sources directory and runs these commands automatically. I think it is possible to implement something like this using Java as well.
On the other hand, it needs to be done carefully to avoid constantly restarting the workers on every little change.
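A crude sketch of such a watcher on the host, using inotify-tools and the Makefile aliases above (the source path is a placeholder, and debouncing is left out for brevity):

#!/bin/sh
# restart the workers whenever something in ./src changes
while inotifywait -r -e modify,create,delete ./src; do
  make su-restart
done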

How to integrate Capistrano with Docker for deployment?

I am not sure my question is relevant, as I may be trying to mix tools (Capistrano and Docker) that should not be mixed.
I have recently dockerized an application that is deployed with Capistrano. Docker compose is used both for development and staging environments.
This is what my project looks like (the application files are not shown):
Capfile
docker-compose.yml
docker-compose.staging.yml
config/
deploy.rb
deploy
staging.rb
The Docker Compose files create all the necessary containers (Nginx, PHP, MongoDB, Elasticsearch, etc.) to run the app in the development or staging environment (hence the specific parameters defined in docker-compose.staging.yml).
The app is deployed to the staging environment with this command:
cap staging deploy
The folder structure on the server is the standard Capistrano one:
current
releases
20160912150720
20160912151003
20160912153905
shared
The following command has been run in the current directory of the staging server to instantiate all the necessary containers to run the app:
docker-compose -f docker-compose.yml -f docker-compose.staging.yml up -d
So far so good. Things get more complicated on the next deploy: the current symlink will point to a new directory of the releases directory:
If deploy.rb defines commands that need to be executed inside containers (like docker-compose exec php composer install for PHP), Docker complains that the containers don't exist yet (because the existing ones were created in the previous release folder).
If a docker-compose up -d command is executed in the Capistrano deployment process, I get some errors because of port conflicts (the previous containers still exist).
Do you have an idea on how to solve this issue? Should I move away from Capistrano and do something different?
The idea would be to keep the (near) zero-downtime deployment that Capistrano offers with the flexibility of Docker containers (providing several PHP versions for various apps on the same server for instance).
As far as I understand, you are using Capistrano on the host to redeploy the whole application stack, meaning the containers. So you are using Capistrano to orchestrate building, container creation and thus deployment.
In doing so, when running cap deploy you basically:
build the app (based on the current code you pulled on the host) - probably even including gulp/grunt/build tasks
then you "package" it into your image using "volume mounts"
during that you start / replace the containers
You do so to get a 'nearly' zero downtime deployment.
If you really care about the downtime and about formalising your deployment process that much, you should do it right by using a proper pipeline implementation for
packaging / ci
deployment / distribution
I do not think Capistrano can/should be one of the tools you use in this strategy. Capistrano is meant for deploying an application directly onto a server using ssh and git as transport. Using cap to build whole images on the target server and then start those as containers is really over the top, IMHO.
packaging / building
Use a CI/CD server like Jenkins/Bamboo/GoCD to build a release image for your application. Assuming only the app is customised in terms of a 'release' - let's say you have db and app as containers/services - app will include your source code and will change regularly across releases.
Thus it's a CI/CD process to build a new app image (release) offsite on your CI server: pull the source code of your application, package it into your image using COPY, and then use any RUN statements needed to compile your assets (npm / gulp / grunt, whatever). All of that happens not on the production server, but on the CI/CD agent. Using multistage builds for slim images is encouraged.
Then you push this release image, let's call it yourregistry.com/yourapp, into your private registry as a new 'version' for deployment.
deployment
with downtime (easy)
To deploy to your production or staging server WITH downtime, you would simply do docker-compose pull && docker-compose up -d - this will pull the newer image and then start it in your stack, and your app is upgraded. Using tagged images in the release stage would require changing the docker-compose.yml.
The server should of course be able to pull from your private repository.
without downtime (more effort)
To achieve a zero-downtime deployment you should use the blue-green deployment concept. You add a proxy to your setup and no longer expose the app's port publicly, but rather expose the proxy's public port. Your current live system might be running on a random port, say 21231, with the proxy forwarding from 443 to 21231.
Random ports are used to avoid conflicts while deploying the "second" system, which covers one of the issues you mentioned.
When redeploying, you only start a "new" container based on the new app image in addition to the old one; it gets a new random port, say 12312. If you like, run your integration tests against 12312 directly (do not use the proxy). If you are done and happy, reconfigure the proxy to forward to 12312, then remove the old container (21231).
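Sketched as manual commands, with the container names, ports, health endpoint and proxy reload left as placeholders for whatever your setup actually uses:

# start the "green" container next to the live "blue" one
docker run -d --name yourapp_green -p 12312:8080 yourregistry.com/yourapp:new
# smoke-test it directly on its port, bypassing the proxy
curl -f http://localhost:12312/health
# point the proxy upstream at 12312 instead of 21231, e.g. edit the nginx upstream and run: nginx -s reload
# finally remove the old container
docker stop yourapp_blue && docker rm yourapp_blue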
If you would like to automate the proxy reconfiguration, which in detail is out of scope for this question, you can use service discovery and a registrator, which make random ports much more practical and make it easy to reconfigure your proxy (be it nginx/haproxy) while it is running. Tools would be, for example:
consul
consul watch + consul-template or tiller on the proxy to update the proxy-config
Registrator for centralized registration, or the Consul agent in client mode with a service-configuration.json (depends on your choice)
I don't think Capistrano is the right tool for the job. This was recently discussed in a PR for SSHKit, which underlies Capistrano.
https://github.com/capistrano/sshkit/pull/368
@EugenMayer does a better job of explaining a "normal" way of using Docker.

Docker Compose Dev and Production Environments Best Workflow

I've built a simple Docker Compose project as a development environment. I have PHP-FPM, Nginx, MongoDB, and Code containers.
Now I want to automate the process and deploy to production.
The docker-compose.yml can be extended and can define multiple environments. See https://docs.docker.com/compose/extends/ for more information.
However, there are Dockerfiles for my containers, and the dev environment needs more packages than production.
The main question is: should I use separate Dockerfiles for dev and prod and manage them via docker-compose.yml and production.yml?
Separate Dockerfiles are the easy approach, but there is code duplication.
The other solution is to use environment variables and somehow handle them from a bash script (maybe as the entrypoint?).
I am searching for other ideas.
According to the official docs:
... you’ll probably want to define a separate Compose file, say
production.yml, which specifies production-appropriate configuration.
Note: The extends keyword is useful for maintaining multiple Compose
files which re-use common services without having to manually copy and
paste.
In docker-compose version >= 1.5.0 you can use environment variables; maybe this suits you?
If the packages needed for development aren't too heavy (i.e. the image size isn't significantly bigger), you could just create Dockerfiles that include all the components and then decide whether to activate them based on the value of an environment variable in the entrypoint.
That way you could have the main docker-compose.yml provide the production environment, while development.yml would just add the correct environment variable value where needed.
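A sketch of such an entrypoint, assuming a made-up variable name ENABLE_DEV_TOOLS; the dev-only behaviour behind it is purely illustrative:

#!/bin/sh
# entrypoint.sh - turn the extra development components on or off per environment
if [ "$ENABLE_DEV_TOOLS" = "1" ]; then
  export APP_DEBUG=1            # e.g. enable debug mode / verbose logging
  # ...start any dev-only helpers baked into the image here...
fi
exec "$@"                       # hand over to the container's main command

development.yml would then simply set ENABLE_DEV_TOOLS=1 on the relevant services.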
In this situation it might be worth considering using an "onbuild" image to handle the commonalities among environments, then using separate images to handle the specifics. Some official images have onbuild versions, e.g., Node. Or you can create your own.
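For example, a hypothetical shared base image could defer the app-specific steps with ONBUILD, and the dev and prod Dockerfiles would then only add what differs (the image names and package commands below are assumptions; Node is used only because the linked official image is Node):

# Dockerfile for the shared base, pushed as e.g. myorg/app-base:onbuild (made-up tag)
FROM node:10
WORKDIR /app
ONBUILD COPY package*.json ./
ONBUILD RUN npm install --production
ONBUILD COPY . .

# Dockerfile.dev - the ONBUILD steps run first, then dev-only dependencies are added
FROM myorg/app-base:onbuild
RUN npm install --only=dev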

Docker and Rails environments

Just starting with this whole Docker thing and I can't wrap my head around one thing:
How does one deal with different dependencies? Let's say in production I don't want to have git, grunt, etc installed, but in development I do.
There's a difference between a container that can run tests, and container that runs in production.
Am I thinking about this wrong?
There are different philosophies on this, but personally, I use Docker to match my production environment as closely as possible, so testing with that container anywhere lets me be pretty sure things will just work once I deploy to prod. This is one of the major benefits of Docker - that you can mimic the OS, environment, dependencies, versions, etc. locally before deploying anywhere.
There's nothing wrong with having a separate development container with added dependencies that you can pass around your team, but to me the main benefit of Docker for development is the ability to test in that simulated prod environment and run the exact same container locally that you will be using in prod once you're ready. No more "but it worked on my machine!" bugs.
docker-rails is a project I just created to make Rails with Docker (and CI) very easy. I think it can help you and reduce the amount of configuration necessary to get up and running with Rails on Docker. It handles multiple environments, i.e. development | test | production, in one docker-rails.yml file, which is really just a meta configuration/inheritance wrapper over the standard docker-compose.yml.
It allows you to run a test command set in test, a development server in development, or a production setup with different containers. The example in the readme shows Elasticsearch used in dev and test, but not staging or production.
I hope that helps.

Resources