Workflow: Docker Services from Development to Production - Best Practice

I have been developing a web application using Python, Flask, Docker (with Compose) and git/GitHub, and I am getting to the point where I need to figure out the best workflow to bring it to production. I have read some articles, but I am not sure which of the different approaches counts as best practice.
My current setup is purely development oriented:
Local Docker using docker-compose to build various service images (such as db, backend workers, webapp (Flask & uWSGI), nginx), roughly like the compose sketch after this list
Using a .env file for docker-compose to pass configuration to the services
Source code is bind-mounted from the local Docker host
DB data is stored in a named volume
Using local git for source control (it is connected to a GitHub repository, but I have not used that much since I am currently the only developer on the application)
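To make that concrete, here is a minimal sketch of what such a development docker-compose.yml could look like; the service names, images and paths are placeholders, not the actual project layout:

version: "3"
services:
  db:
    image: postgres:15              # placeholder DB image
    env_file: .env                  # configuration passed in via the .env file
    volumes:
      - db-data:/var/lib/postgresql/data   # db data kept in a named volume
  webapp:
    build: ./webapp                 # Flask + uWSGI image built locally
    env_file: .env
    volumes:
      - ./webapp:/app               # source code bind-mounted from the host for live editing
    depends_on:
      - db
  nginx:
    image: nginx:alpine
    ports:
      - "8080:80"
    depends_on:
      - webapp
volumes:
  db-data:                          # named volume that survives container rebuilds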
From what I understand, the steps to production could be the following:
Implement docker-compose overrides to distinguish between dev and prod
Implement Dockerfile multi-stage builds to create prod images that include the source code in the image and do not include dev dependencies (a rough sketch follows this list)
Tag and push the production images to a registry (Docker Hub, Google?), or is it better to push the git repo to GitHub and build from there?
[do security scans of the prod images]
Deploy/pull the prod images from the registry (or build them from GitHub) on a service like GKE, for instance
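For the multi-stage build step, a hedged sketch of what such a prod Dockerfile might look like; the base images, the requirements split and the uwsgi.ini entrypoint are assumptions, not taken from the actual project:

# ---- build stage: build wheels with the full toolchain ----
FROM python:3.11-slim AS build
WORKDIR /app
COPY requirements.txt .
RUN pip wheel --wheel-dir /wheels -r requirements.txt

# ---- final stage: only runtime dependencies plus the source code ----
FROM python:3.11-slim
WORKDIR /app
COPY --from=build /wheels /wheels
COPY requirements.txt .
RUN pip install --no-index --find-links=/wheels -r requirements.txt
COPY . .                            # source code baked into the image, no bind mount
CMD ["uwsgi", "--ini", "uwsgi.ini"]

Tagging and pushing the result (registry name and tag are placeholders) would then be something like docker build -t gcr.io/your-project/webapp:1.0.0 . followed by docker push gcr.io/your-project/webapp:1.0.0.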
Is this a common way to do it? Am I missing something?
How would I best go about using an integration/staging environment between dev and prod, so that I can first test new prod builds or debug prod images in integration?
Does GKE for instance offer an easy way to setup an integration environment? Or could I use the Docker installation on my NAS for that?
Any best practices for backing up production (like db data most importantly)?
Thanks in advance!

Related

Deploying Data with containers across multiple environments

I wondered if I could pick your brains regarding deploying data across environments with containers... I have a hard time understanding the "standard" deployment process.
Let me set the scene:
You're running a CMS on your local development environment and you have two containers - a database container that runs MySQL and links to a volume (that stores the data), and the CMS container that links to it. All fine.
Now, I can deploy through GitLab/GitHub CI/CD by pushing any code changes for the CMS container to the repo, which will rebuild the image and push it to a 'Staging' environment (via a container registry). That's fine too.
Here's the issue - what's the standard accepted way of deploying the DATA in the database to the 'Staging' environment? Particularly through CI/CD?
I understand that using a managed cloud DB is best practice for staging/production, which again is totally fine for me to do, but short of writing some Bash scripts to sqldump and rsync the data, I can't see how people do it with standard CI/CD pipelines.
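For what it's worth, such a script often is exactly that: a dump-and-restore step wired into the pipeline. A minimal sketch, assuming a MySQL container as the source and staging credentials available as CI variables (all names are placeholders):

# dump the data from the source database container
docker exec mysql-src sh -c 'mysqldump -uroot -p"$MYSQL_ROOT_PASSWORD" cms_db' > dump.sql

# load it into the staging database (a container or a managed cloud DB)
mysql -h "$STAGING_DB_HOST" -u "$STAGING_DB_USER" -p"$STAGING_DB_PASSWORD" cms_db < dump.sql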
Am I missing something here?

How to design & implement a containerized architecture for Nginx with React and Laravel with MongoDB?

I'm using React for the frontend with an Nginx load balancer, and Laravel for the backend with MongoDB.
In the old architecture, the code is uploaded to GitHub in separate frontend and backend repos.
I have not used Docker and Kubernetes yet and want to introduce them in the new architecture. I am using a private cloud server, so I am restricted from deploying to AWS/Azure/GCP/etc.
Please share your architecture plan and implementation for a better approach to microservices!
As per my thinking:
first, write a Dockerfile for the React and Laravel projects
then push the images to a private Docker registry [Docker Hub]
install Docker and k8s on the VM
deploy container 1 = React and container 2 = Laravel from those images
also deploy container 3 = Nginx and container 4 = Mongo from the default public images (a rough Kubernetes sketch follows this list)
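As a starting point, a minimal sketch of how the Laravel backend could be deployed and exposed in Kubernetes so the other containers reach it by service name; all names, ports and the image are placeholder assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: laravel-api
spec:
  replicas: 2                        # more than one replica also helps with recovery
  selector:
    matchLabels:
      app: laravel-api
  template:
    metadata:
      labels:
        app: laravel-api
    spec:
      containers:
        - name: laravel-api
          image: registry.example.com/laravel-api:1.0.0   # pulled from your private registry
          ports:
            - containerPort: 9000
---
apiVersion: v1
kind: Service
metadata:
  name: laravel-api                  # Nginx/React can reach the backend at laravel-api:9000
spec:
  selector:
    app: laravel-api
  ports:
    - port: 9000
      targetPort: 9000

Rolling out a new release is then, for example, kubectl set image deployment/laravel-api laravel-api=registry.example.com/laravel-api:1.1.0, which Kubernetes applies replica by replica.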
Some of my questions:
How do I make the connections between the services?
How do I pull a new image into a running container when a new release version comes out?
How do I make replicas, for a disaster recovery plan?
How do I monitor errors and performance?
How do I build the pipeline (MOST important)?
How do I set up dev, staging, and production environments?
This is more of a planning question. Most of the tasks can be automated by the developers/DevOps, except for a few administrative tasks like monitoring and environment creation.
Still, this can be a shared responsibility, or handled by a dedicated team if one is available to manage the product/services.
You can use GitLab, which can attach directly to a Kubernetes provider and can reduce the number of build steps.

Summary of ways of using Docker (web development)

[Screenshot: my docker-compose file for WordPress]
Last week I learned how to deploy 3 containers: WordPress, phpMyAdmin and MySQL. They work fine. The containers were connected to each other using a volume and a shared network, and everything was configured from a docker-compose .yml file. I used Git on my host operating system to version the changes.
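For reference, a sketch of roughly what such a compose file looks like; versions and credentials are placeholders:

version: "3"
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: wordpress
    volumes:
      - db-data:/var/lib/mysql       # the volume that stores the data
  wordpress:
    image: wordpress:latest
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: db          # containers reach each other by service name
      WORDPRESS_DB_PASSWORD: example
    depends_on:
      - db
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    ports:
      - "8081:80"
    environment:
      PMA_HOST: db
    depends_on:
      - db
volumes:
  db-data: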
But then I found another way to do the same:
I started from a Debian image, added git, apache2, mariadb and phpmyadmin inside it, connected everything, and used "docker commit" to save the changes of my development each time.
Then a coworker told me to use a Dockerfile, add volumes, and use Git for versioning.
Which is the best way?
What problems do the first and second ways have?
Is there another way?
From my point of view you are searching for the optimal deployment structure; it's a long way to go and to find information about. Here are my opinions:
1. I wouldn't recommend this version, because the mix of operating systems (Windows/Linux) can cause big problems, for example with line breaks and folder/file names.
But the docker-compose idea is the right way to set up the test/dev environment locally.
2. This is outside of git, which is not optimal, but it is a workable solution when you want to save everything.
3. This is alright, but you have already done it with docker-compose. Here the usage of volumes can cause the same problems as in 1. You can use git versioning in command-line mode to develop, but I don't recommend it.
Alternative Ways
Use software that is able to deploy remotely to the PHP server, like PhpStorm, Eclipse or WinSCP: develop the application locally and link it to the Apache/PHP machine or container over FTP/SFTP. You work locally and transfer the changed files into the running machine or container. The Git versioning is done on the local machine. You can also use MySQL tools to back up the database locally, so if the Docker container breaks you can set it up again easily.
Make sure you also save the config files of Apache, PHP and MySQL into git; that makes re-setting up the Docker container painless.
Use (GitLab & GitLab CI), (Bitbucket & Bamboo), or (Git & Jenkins) to deploy your PHP changes to the servers or Docker containers.
Ideally, read some articles about continuous delivery and continuous integration.
This option is suitable for rolling out to customer, dev or beta systems.

How to integrate Capistrano with Docker for deployment?

I am not sure my question is relevant as I may try to mix tools (Capistrano and Docker) that should not be mixed.
I have recently dockerized an application that is deployed with Capistrano. Docker compose is used both for development and staging environments.
This is what my project looks like (the application files are not shown):
Capfile
docker-compose.yml
docker-compose.staging.yml
config/
deploy.rb
deploy
staging.rb
The Docker Compose files create all the necessary containers (Nginx, PHP, MongoDB, Elasticsearch, etc.) to run the app in the development or staging environment (hence some specific parameters defined in docker-compose.staging.yml).
The app is deployed to the staging environment with this command:
cap staging deploy
The folder architecture on the server is the one of Capistrano:
current
releases
20160912150720
20160912151003
20160912153905
shared
The following command has been run in the current directory of the staging server to instantiate all the necessary containers to run the app:
docker-compose -f docker-compose.yml -f docker-compose.staging.yml up -d
So far so good. Things get more complicated on the next deploy: the current symlink will point to a new directory of the releases directory:
If deploy.rb defines commands that need to be executed inside containers (like docker-compose exec php composer install for PHP), Docker reports that the containers don't exist yet (because the existing ones were created from the previous release folder).
If a docker-compose up -d command is executed in the Capistrano deployment process, I get some errors because of port conflicts (the previous containers still exist).
Do you have an idea on how to solve this issue? Should I move away from Capistrano and do something different?
The idea would be to keep the (near) zero-downtime deployment that Capistrano offers with the flexibility of Docker containers (providing several PHP versions for various apps on the same server for instance).
As far as I understood, you are using Capistrano on the host to redeploy the whole application stack, meaning the containers. So you are using Capistrano to orchestrate building, container creation and thus deployment.
When running cap deploy you basically:
build the app (based on the current code you pulled on the host), which probably even includes gulp/grunt/build tasks
then "package" it into your image using volume mounts
and during that you start/replace the containers
You do so to get a 'nearly' zero downtime deployment.
If you really care about the downtime and about formalising your deployment process that much, you should do it right by using a proper pipeline implementation for
packaging / ci
deployment / distribution
I do not think Capistrano can or should be one of the tools used in this strategy. Capistrano is meant for deploying an application directly onto a server, using SSH and git as transport. Using cap to build whole images on the target server and then start those as containers is really over the top, IMHO.
packaging / building
Use a CI/CD server like Jenkins/Bamboo/GoCD to build a release image for your application. Assuming only the app is customised in terms of a 'release', let's say you have db and app as containers/services; app includes your source code and changes regularly between releases.
Thus it's a CI/CD process to build a new app image (a release) offsite on your CI server: pull the source code of your application and package it into your image using COPY, then use RUN statements to compile your assets (npm / gulp / grunt, whatever). All of that happens not on the production server, but on the CI/CD agent. Using multi-stage builds for slim images is encouraged.
Then you push this release image, let's call it yourregistry.com/yourapp, into your private registry as a new 'version' for deployment.
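A hedged sketch of what such a CI-built Dockerfile could look like; the base images, the asset toolchain and the tag are assumptions:

# built on the CI/CD agent, not on the production server
FROM node:lts AS assets
WORKDIR /src
COPY . .
RUN npm ci && npm run build              # compile assets (npm / gulp / grunt, whatever applies)

FROM php:fpm
WORKDIR /var/www/app
COPY --from=assets /src /var/www/app     # source code plus compiled assets baked in

The CI job then tags and pushes it, e.g. docker build -t yourregistry.com/yourapp:2016.09.13 . && docker push yourregistry.com/yourapp:2016.09.13.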
deployment
with downtime (easy)
To deploy onto your production or staging server WITH downtime, you simply do docker-compose pull && docker-compose up: this pulls the newer image and then starts it in your stack, and your app is upgraded. Using tagged images in the release stage requires changing the docker-compose.yml accordingly (see the sketch below).
The server should of course be able to pull from your private repository.
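A sketch of the compose file on the target server under this scheme (the tag and the second service are assumptions):

version: "3"
services:
  app:
    image: yourregistry.com/yourapp:2016.09.13   # bump this tag for each release
    ports:
      - "80:8080"
  db:
    image: mongo:3.2
    volumes:
      - db-data:/data/db
volumes:
  db-data:

Deploying a new release is then a matter of editing the tag (or substituting it via an env var), followed by docker-compose pull && docker-compose up -d.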
without downtime (more effort)
To achieve a zero-downtime deployment you should use the blue-green deployment concept. You add a proxy to your setup and no longer expose the app's public port directly, but rather expose it through the proxy's public port. Your current live system might be running on a random port, say 21231, and the proxy forwards from 443 to 21231.
Random ports are used to avoid conflicts while deploying the "second" system, which covers one of the issues you mentioned.
When redeploying, you only start a "new" container based on the new app image in addition to the old one; it gets a new random port, say 12312. If you like, run your integration tests against 12312 directly (do not go through the proxy). When you are done and happy, reconfigure the proxy to forward to 12312, then remove the old container (21231). A rough shell sketch of this sequence follows the tool list below.
If you want to automate the proxy reconfiguration, which in detail is out of scope for this question, you can use service discovery and a registrator, which makes random ports much more practical and makes it easy to reconfigure your proxy (be it nginx/haproxy) while it is running. Tools would be, for example:
consul
consul watch + consul-template or tiller on the proxy to update the proxy config
Registrator for centralized registration, or the consul agent in client mode with a service-configuration.json (depending on your choice)
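Without that tooling, the manual version of such a redeploy would look roughly like this; the container names, the health endpoint and the nginx reload step are assumptions:

# start the new release next to the old one, on a fresh host port
docker run -d --name app_green -p 12312:8080 yourregistry.com/yourapp:new

# run the integration tests against the new container directly, bypassing the proxy
curl -f http://localhost:12312/health

# happy? point the proxy upstream at 12312 in its config and reload it
docker exec proxy nginx -s reload

# finally remove the old container that was listening on 21231
docker rm -f app_blue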
I don't think Capistrano is the right tool for the job. This was recently discussed in a PR for SSHKit, which underlies Capistrano.
https://github.com/capistrano/sshkit/pull/368
@EugenMayer does a better job of explaining a "normal" way of using Docker.

Docker compose in production?

I plan to use Docker to build my dev and production environments. I am building a Django-based app.
In dev I use docker-compose to manage all the local containers. It's a nice and convenient solution. I run Django, 3 Celery queues, RabbitMQ and 2 PostgreSQL DBs.
But my production environment is quite different. I need to run gunicorn and nginx. Moreover, the DBs will be run on AWS RDS. Of course the Django app will require more, like a different settings file or more env vars.
I'm wondering how to divide this. Should I use docker-compose there as well? That would require separate files for dev and prod, and maybe more in the future for staging etc. If yes, how do I deploy it? Using Jenkins to pull and restart everything with compose?
Or maybe I should use Ansible to run docker commands directly? But then I have no confidence that my dev environment is the same as live, and it's harder to predict its behaviour.
I like the idea of running compose files in all environments, but I'm not sure whether maintaining multiple files for different environments is a good idea. Dev requires fewer env vars and less configuration. I can use an env file to set all of them in production, but should I keep my live settings in the same repo? Previously I set all env vars while provisioning and that was a separate process. Now it looks like provisioning and deploy are the same? Maybe that is the way with Docker?
Using http://docs.docker.com/compose/extends/#multiple-compose-files you can keep all the common stuff in a docker-compose.yml and use docker-compose.prod.yml to add extra services, change links, environment, and ports.
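A minimal sketch of that layout, assuming a Django web service plus a prod-only nginx (names, ports and the RDS variable are placeholders):

# docker-compose.yml - the common stuff used in every environment
version: "3"
services:
  web:
    build: .
    env_file: .env

# docker-compose.prod.yml - prod-only overrides and additions
version: "3"
services:
  web:
    command: gunicorn myproject.wsgi:application --bind 0.0.0.0:8000
    environment:
      DATABASE_URL: ${RDS_DATABASE_URL}   # point at AWS RDS instead of a local container
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    depends_on:
      - web

In production you start it with docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d, while dev keeps using plain docker-compose up.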
