Docker workflow

I am developing a small social-media project using Node.js, PostgreSQL and nginx on the backend.
Locally, I work with Docker as a replacement for Vagrant: all components are split into separate containers and combined via docker-compose.
I have no production experience with Docker. How should I package the result of docker-compose and deploy it?

You can build and publish the individual Docker images, and run the same docker-compose setup on your production servers. Of course, the servers have to be logged in to the registry if it is a private one.
Sample:
version: '2'
services:
  application1:
    image: your.docker.registry/image-application1
  application2:
    image: your.docker.registry/image-application2
    depends_on:
      - application1
The images can be built and pushed to a registry as part of your regular build process.
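As a rough sketch, assuming the registry host your.docker.registry from the sample above and a per-application build context (the ./application1 path is only a placeholder), the build-and-push step and the one-time registry login on the servers could look like this:
# On the build machine: build and push each image
docker build -t your.docker.registry/image-application1 ./application1
docker push your.docker.registry/image-application1

# On each production server: log in once, then pull and start the stack
docker login your.docker.registry
docker-compose pull
docker-compose up -d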

You do not need to modify the containers to make them production-ready beyond what is described here. What you need to do is ensure you are deploying them to a high-availability system that can respond to failures by respawning processes (a minimal Kubernetes sketch follows the list below). Here are some examples:
Amazon Elastic Container Service
Kubernetes
Google Container Engine
Weave
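For instance, a minimal Kubernetes Deployment (a sketch only; the name, image and port are placeholders, not taken from the question) keeps a fixed number of replicas running and replaces any container that dies:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: application1
spec:
  replicas: 2                      # keep two copies running at all times
  selector:
    matchLabels:
      app: application1
  template:
    metadata:
      labels:
        app: application1
    spec:
      containers:
      - name: application1
        image: your.docker.registry/image-application1
        ports:
        - containerPort: 3000      # assumed Node.js port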

Related

What is the best practice for deploying my application on my VPS using Docker?

I have a (Python Flask) application that I want to deploy to my VPS using GitLab CI and Docker.
On my server I want to have a production version and a staging version of my application. Both of them require a MongoDB connection.
My plan is to automatically build the application on GitLab and push it to GitLab's Docker registry. If I want to deploy the application to staging or production, I do a docker pull, docker rm and docker run.
The plan is to store the config (e.g. secret_key) in .production.env (and .staging.env) and pass it to the application using docker run --env-file ./env.list.
I already have MongoDB installed on my server, and both environments of the application shall use the same MongoDB instance, but with different database names (configured in .env).
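Concretely, the pull/rm/run cycle for, say, staging would be something like this (image name, container name and port mapping are just examples):
# Pull the image that GitLab CI pushed to the registry
docker pull registry.gitlab.com/my-group/my-flask-app:latest

# Replace the running staging container
docker rm -f my-flask-app-staging
docker run -d --name my-flask-app-staging \
  --env-file ./.staging.env \
  -p 8001:5000 \
  registry.gitlab.com/my-group/my-flask-app:latest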
Is that the best practice for deploying my application? Do you have any recommendations? Thanks!
Here's my configuration that's worked reasonably well in different organizations and project sizes:
To build:
The applications are located in a git repository (GitLab in your case). Each application brings its own Dockerfile.
I use Jenkins for building; you can, of course, use any other CI/CD tooling. Jenkins pulls the application's repository, builds the Docker image and publishes it into a private Docker repository (Nexus, in my case).
To deploy:
1. I have one central, application-independent repository that holds a docker-compose file (or possibly multiple files that extend one central file for different environments). This file contains all service definitions and references the Docker images in my Nexus repo.
2. If I am using secrets, I store them in a HashiCorp Vault instance. Jenkins pulls them and writes them into an .env file; the docker-compose file can reference the individual environment variables (see the sketch below).
3. Jenkins pulls the docker-compose repo and, in my case via scp, uploads the docker-compose file(s) and the .env file to my server(s).
4. It then triggers a docker-compose up (for smaller applications) or re-deploys a docker stack into a swarm (for larger applications).
5. Jenkins removes everything from the target server(s).
If you like, you can do step 3 via Docker Machine. I feel, however, that its benefits don't warrant it in my case.
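To make step 2 concrete, here is a small sketch (the variable names, registry host and service name are invented for illustration). Jenkins writes the values it pulled from Vault into an .env file next to the compose file, and docker-compose substitutes them into the service definition:
# .env (generated by Jenkins on deploy, never committed)
DB_PASSWORD=value-from-vault
SECRET_KEY=another-value-from-vault

# docker-compose.yml (excerpt)
services:
  app:
    image: nexus.example.com/my-app:latest
    environment:
      - DB_PASSWORD=${DB_PASSWORD}
      - SECRET_KEY=${SECRET_KEY}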
One thing I can recommend, as I've done it in production several times, is to deploy Docker Swarm with TLS-encrypted endpoints. This link talks about how to secure the swarm via certificates. It's a bit of work, but what it allows you to do is define services for your applications.
Once online, the services can have multiple replicas, and whenever you update a service (i.e. deploy a new image) the swarm takes care of making sure one replica stays online at all times.
docker service update <service name> --image <new image name>
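If you describe the service in a version 3 compose file and deploy it with docker stack deploy, the same behaviour can be declared up front; the replica count and update policy below are assumptions you would tune for your own application:
services:
  webapp:
    image: your.docker.registry/webapp:latest
    deploy:
      replicas: 2              # keep two copies of the service running
      update_config:
        parallelism: 1         # replace one replica at a time
        delay: 10s             # pause between replacements
      restart_policy:
        condition: on-failure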
Some VPS providers actually offer Kubernetes as a service (like DigitalOcean). If yours does, that's preferable. GitLab has an Auto DevOps feature and can remotely manage your Kubernetes cluster, but you could also deploy manually with kubectl.

Docker services running on some locally and others remotely

How can I configure docker-compose to use multiple containers where some containers (especially those in active development) run on your local host computer and other services are containers on remote servers?
In docker-compose.yml:
rails:
  build: some_path
  volumes: some_volumes
mysql:
  image: xxx
  build: xxxx
nginx:
  image: xxx
  build: xxxx
other_services:
Currently I have all containers running locally and it works fine, but I've noticed that performance is slow. What if I had, for example, nginx and other_services running remotely; how do I do that? If there is a tutorial link, kindly let me know, since I didn't find one with Google.
Use Docker Swarm. You can create a swarm with many nodes (one on your local machine, one on the remote server) and then, using docker stack deploy, deploy your application to those machines.
This is the tutorial.
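A rough sketch of the commands involved (the IP address, token and stack name are placeholders):
# On your local machine (becomes a manager node)
docker swarm init --advertise-addr <your-local-ip>

# On the remote server, using the join token printed by the command above
docker swarm join --token <worker-token> <your-local-ip>:2377

# Back on the manager: deploy a version 3 compose file as a stack
docker stack deploy -c docker-compose.yml myapp
You can then use deploy.placement constraints in the compose file to pin individual services (nginx, for example) to the remote node.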

How do I run docker-compose up on a Docker swarm?

I'm new to Docker and trying to get started by deploying locally a hello-world Flask app on Docker-Swarm.
So far I have my Flask app, a Dockerfile, and a docker-compose.yml file.
version: "3"
services:
webapp:
build: .
ports:
- "5000:5000"
docker-compose up works fine and deploys my Flask app.
I have started a Docker Swarm with docker swarm init, which I understand created a swarm with a single node:
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
efcs0tef4eny6472eiffiugqp * moby Ready Active Leader
Now, I don't want workers or anything else, just this single node (the manager node created by default), and I want to deploy my image there.
Looking at these instructions https://docs.docker.com/get-started/part4/#create-a-cluster it seems like I have to create a VM driver, then scp my files there, and ssh to run docker-compose up. Is that the normal way of working? Why do I need a VM? Can't I just run docker-compose up on the swarm manager? I didn't find a way to do so, so I'm guessing I'm missing something.
Running docker-compose up will create individual containers directly on the host.
With swarm mode, all the commands to manage containers have shifted to docker stack and docker service, which manage containers across multiple hosts. The docker stack deploy command accepts a compose file with the -c arg, so you would run the following on a manager node:
docker stack deploy -c docker-compose.yml stack_name
to create a stack named "stack_name" based on the version 3 yml file. This command works the same regardless of whether you have one node or a large cluster managed by your swarm.
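Once the stack is up, you can inspect it with the matching stack commands:
docker stack services stack_name   # one line per service, with replica counts
docker stack ps stack_name         # the individual tasks and the nodes they run on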

How to put docker container for database on a different host in production?

Let's say we have a simple web app stack, something like the one described in the docker-compose docs. Its docker-compose.yml looks like this:
version: '2'
services:
  db:
    image: postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
This is great for development on a laptop. In production, though, it would be useful to require the db container to be on its own host. The tutorials I'm able to find use docker-swarm to scale out the web container, but pay no attention to the fact that the db instance and one instance of web run on the same machine.
Is it possible to require a specific container to be on its own machine (or, even better, on a specific machine) using Docker? If so, how? If not, what is the Docker way to deal with databases in multi-container apps?
In my opinion, databases sit on the edge of the container world: they're useful for development and testing, but production databases are often not very ephemeral or portable things by nature. Flocker certainly helps, as do scalable types of databases like Cassandra, but databases can have very specific requirements that might be better treated as a service that sits behind your containerised app (RDS, Cloud SQL, etc.).
In any case, you will need a container orchestration tool.
You can apply manual scheduling constraints with Compose + Swarm to dictate which Docker host a container can run on. For your database, you might have:
environment:
  - "constraint:storage==ssd"
Otherwise you can set up a more static Docker environment with Ansible, Chef or Puppet.
Use another orchestration tool that supports docker: Kubernetes, Mesos, Nomad
Use a container service: Amazon ECS, Docker Cloud/Tutum

Is it possible to deploy a Docker Hub publicly hosted image to Kubernetes Container Engine without uploading it to the Container Registry?

Still new to containers and Kubernetes here, but I am dabbling with deploying a cluster on Google Container Engine and was wondering if you can use a Docker Hub hosted image to deploy containers, so in my .yaml configuration file I'd say:
...
image: hub.docker.com/r/my-team/my-image:latest
...
Is this possible? Or does one have to download/build the image locally and then upload it to the Google Container Registry?
Thanks so much
Yes, it is possible. The Replication Controller template or Pod spec image isn't special. If you specify image: redis you will get the latest tag of the official Docker Hub library Redis image, just as if you did docker pull redis.
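For example, a minimal Pod spec pulling straight from Docker Hub (the Pod name is arbitrary) would be:
apiVersion: v1
kind: Pod
metadata:
  name: redis-test
spec:
  containers:
  - name: redis
    image: redis   # resolved against Docker Hub, same as docker pull redis
Note that the image field takes the name you would pass to docker pull (for example my-team/my-image:latest), not the hub.docker.com web URL.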
