I'm using docker-compose to deploy to a remote host. This is what my config looks like:
# stacks/web.yml
version: '2'
services:
  postgres:
    image: postgres:9.6
    restart: always
    volumes:
      - db:/var/lib/postgresql/data
  redis:
    image: redis:3.2.3
    restart: always
  web_server:
    depends_on: [postgres]
    build: ../sources/myapp
    links: [postgres]
    restart: always
    volumes:
      - nginx_socks:/tmp/socks
      - static_assets:/source/public
  sidekiq:
    depends_on: [postgres, redis]
    build: ../sources/myapp
    links: [postgres, redis]
    restart: always
    volumes:
      - static_assets:/source/public
  nginx:
    depends_on: [web_server]
    build: ../sources/nginx
    ports:
      - "80:80"
    volumes:
      - nginx_socks:/tmp/socks
      - static_assets:/public
    restart: always
volumes:
  db:
  nginx_socks:
  static_assets:
# stacks/web.production.yml
version: '2'
services:
  web_server:
    command: bundle exec puma -e production -b unix:///tmp/socks/puma.production.sock
    env_file: ../env/production.env
  sidekiq:
    command: bundle exec sidekiq -e production -c 2 -q default -q carrierwave
    env_file: ../env/production.env
  nginx:
    build:
      args:
        ENV_NAME: production
        DOMAIN: production.yavende.com
I deploy using:
eval $(docker-machine env myapp-production)
docker-compose -f stacks/web.yml -f stacks/web.production.yml -p myapp_production build --no-deps web_server sidekiq
docker-compose -f stacks/web.yml -f stacks/web.production.yml -p myapp_production up -d
Although this works perfectly locally, and I did a couple of successful deploys with this method in the past, it now hangs while building the "web_server" service and finally shows a timeout error, as I describe in this issue.
I think the problem comes from the combination of my slow connection (Argentina -> DigitalOcean servers in the USA) and the fact that I'm building the images remotely instead of using hub-hosted images.
I've been able to deploy by cloning my compose config onto the server and running docker-compose directly there.
The question is: is there a better way to automate this process? Is it good practice to use docker-compose to build images on the fly?
I've been thinking about automating this process of cloning the sources onto the server and running docker-compose there, but there may be better tooling to solve this.
I was building images remotely. This implies pushing the whole source needed to build the image over the network. For some images that was over 400 MB of data sent from Argentina to virtual servers in the USA, which proved to be terribly slow.
The solution was to completely change the approach to dockerizing my stack:
Instead of building images on the fly using Dockerfile ARGs, I modified my apps and their Docker images to accept options via environment variables at runtime.
I used Docker Hub automated builds, integrated with GitHub.
This means I only push changes - not the whole source - via git. Then Docker Hub builds the image.
Then I docker-compose pull and docker-compose up -d my site.
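For reference, a minimal sketch of what that deploy step now looks like (assuming the compose files reference the Docker Hub images via image: instead of build:; the file and project names are the ones from the question above):

# point the local docker client at the remote host, pull the prebuilt images, restart
eval $(docker-machine env myapp-production)
docker-compose -f stacks/web.yml -f stacks/web.production.yml -p myapp_production pull
docker-compose -f stacks/web.yml -f stacks/web.production.yml -p myapp_production up -d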
Free alternatives are running your own self-hosted Docker registry and/or possibly GitLab, since it recently released its own Docker image registry: https://about.gitlab.com/2016/05/23/gitlab-container-registry/.
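If you go the self-hosted route, a minimal sketch (registry:2 is the official registry image; the hostname and image name are placeholders, and you'd want TLS and auth before exposing it publicly):

# run a private registry on some host you control
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# tag a locally built image against that registry, push it, then pull from production
docker tag myapp:latest my-registry.example.com:5000/myapp:latest
docker push my-registry.example.com:5000/myapp:latest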
Related
I'm working on a REST API in Python with Celery and a Redis worker. On my local PC I need to run two servers: the Flask/FastAPI server and the Celery worker. My question is whether it is possible to deploy the app on Heroku, and how the Dockerfiles and the docker-compose.yml should be configured. This is the project structure:
Project: https://github.com/SebastianJHM/fastapi_celery_redis
UPDATE:
I created a heroku.yml and tried to deploy:
setup:
  addons:
    - plan: heroku-redis
      as: REDIS
build:
  docker:
    web: app/Dockerfile
    worker: worker_tasks/Dockerfile
The deploy succeeded, but I get this error with the Redis connection:
Of course it is possible to run this setup locally and on Heroku. If I understood you correctly, you want to run Celery as the worker in a container and Redis as the message broker and result backend. Since you did not provide anything about the setup you already have, I can show you a standard setup to get going locally. Add these services to your docker-compose.yml file:
version: '3.7'
services:
  web:
    build: ./app
    ports:
      - "5000:5000"
    depends_on:
      - redis
  worker:
    build: ./worker_tasks
    command: celery -A tasks worker --loglevel=INFO
    depends_on:
      - redis
  redis:
    image: redis
    ports:
      - '6379:6379'
To run everything on Heroku, the Heroku Dev Center has an article that explains it better and in more detail than I ever could.
To deploy the project on Heroku you could create a heroku.yml which is very similar to a docker-compose.yml file; see
How to push Docker containers managed by Docker-compose to Heroku?
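On the Heroku side, a rough sketch of the container-stack workflow with the heroku.yml shown above (the app name is a placeholder):

# tell Heroku to build from heroku.yml instead of a buildpack
heroku stack:set container -a my-fastapi-app

# provision Redis (matches the setup: section of heroku.yml)
heroku addons:create heroku-redis -a my-fastapi-app

# deploy: Heroku builds the web and worker Dockerfiles declared in heroku.yml
git push heroku main

# make sure the worker dyno is actually running
heroku ps:scale worker=1 -a my-fastapi-app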
I have probably missed something. So, I have installed Docker on the production server and I have a running application locally. It starts and runs with docker-compose.
So I feel like I am almost ready for deployment.
Now I'd like to build a Docker image and deploy it to the production server.
But when I try to build the image from the docker-compose files like this:
docker-compose -f docker-compose.base.yml -f docker-compose.production.yml build myapp
I keep getting ERROR: No such service: app
I haven't found any documentation on the Docker site describing how to build images from multiple docker-compose files. Maybe it's there, but I have missed it.
My other problem, i.e. question, is: where would I place the image tar(.gz) files on the target server?
Can I specify the target location for the images in either the /etc/docker/daemon.json or some other configuration file?
I simply don't know where to dump the image file on the production server.
Again, maybe there is some documentation somewhere on the docker web site. But if so, then I've missed that too.
Addendum: source files
I was asked to add my docker-compose files to provide a running example. So, here's the entire code for development and production environments.
For the sake of simplicity, I've kept this example quite basic:
Dockerfile.base
# joint settings for development, production and staging
FROM mariadb:10.3
Dockerfile.production
# syntax = edrevo/dockerfile-plus
INCLUDE+ Dockerfile.base
# add more production specific settings when needed
Dockerfile.development
# syntax = edrevo/dockerfile-plus
INCLUDE+ Dockerfile.base
# add more development-specific settings when needed
docker-compose.base.yml
version: "3.8"
services:
mariadb:
container_name: my_mariadb
build:
context: .
volumes:
- ./../../databases/mysql-my:/var/lib/mysql
- ~/.ssh:/root/.ssh
logging:
driver: local
ports:
- "3306:3306"
restart: on-failure
networks:
- backend
networks:
backend:
driver: $DOCKER_NETWORKS_DRIVER
docker-compose.production.yml
services:
  mariadb:
    build:
      dockerfile: Dockerfile.production
    environment:
      PRODUCTION: "true"
      DEBUG: "true"
      MYSQL_ROOT_PASSWORD: $DBPASSWORD_PRODUCTION
      MYSQL_DATABASE: $DBNAME_PRODUCTION
      MYSQL_USER: $DBUSER_PRODUCTION
      MYSQL_PASSWORD: $DBPASSWORD_PRODUCTION
docker-compose.development.yml
services:
  mariadb:
    build:
      dockerfile: Dockerfile.development
    environment:
      DEVELOPMENT: "true"
      INFO: "true"
      MYSQL_ROOT_PASSWORD: $DBPASSWORD_DEVELOPMENT
      MYSQL_DATABASE: $DBNAME_DEVELOPMENT
      MYSQL_USER: $DBUSER_DEVELOPMENT
      MYSQL_PASSWORD: $DBPASSWORD_DEVELOPMENT
This starts up properly on my development machine when running:
docker-compose -f docker-compose.base.yml \
-f docker-compose.development.yml \
up
But how do I get from here to there, i.e. how can I turn this into a self-contained image which I can upload to my Docker production host?
And where do I put it on the server so the docker production server can find it?
I think I should be able to upload and run the built image without running compose again on the production host, or shouldn't I?
For the time being, the question remains:
Why does this build command
docker-compose -f docker-compose.base.yml \
-f docker-compose.production.yml \
build app
return ERROR: No such service: app?
I see two separate issues in your question:
How to deploy an image to the production server
To "deploy" / start a new image on a production server it needs to be downloaded with docker pull, from a registry where it was uploaded with docker push - i.e. you need a registry that is reachable from the production server and your build environment.
If you do not want to use a registry (public or private), you can use docker save to export an image as a tarball, which you can manually upload to the production server and load there with docker load.
Check docker image ls to see if a specific image is available on your host.
But in your case I think it would be easiest to just upload your docker-compose.yml and related files to the production server and build the images directly there.
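A rough sketch of that build-on-the-server route (the host name and target path are placeholders; include whatever env file defines $DBPASSWORD_PRODUCTION and friends):

# ship the compose files, Dockerfiles and env file to the production host
rsync -av docker-compose.base.yml docker-compose.production.yml Dockerfile.base Dockerfile.production .env \
  user@prod-host:/srv/myapp/

# build and start the stack directly on the server, so no image tarball has to travel
ssh user@prod-host 'cd /srv/myapp && \
  docker-compose -f docker-compose.base.yml -f docker-compose.production.yml up -d --build'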
Why does this build command return ERROR: No such service: app?
docker-compose -f docker-compose.base.yml \
-f docker-compose.production.yml \
build app
Because there is no service app! At least in the files you provided in your question there is only a mariadb service defined and no app service.
But starting / building the services should be the same on your local dev host and the production server.
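So with the files from the question, the working equivalent of your build command is simply:

docker-compose -f docker-compose.base.yml \
               -f docker-compose.production.yml \
               build mariadb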
I have heard it said that
Docker compose is designed for development NOT for production.
But I have seen people use Docker Compose in production with bind mounts: they pull the latest changes from GitHub and the changes appear live in production without the need to rebuild. Others say that you need to COPY . . for production and rebuild.
But how does this work? In docker-compose.yaml you can specify depends_on, which doesn't start one container until the other is running. If I don't use docker-compose in production, what about that? How would I push my docker-compose setup to production (I have 4 services / 4 images that I need to run)? With docker-compose up -d it is so easy.
How do I build each image individually?
How can I copy these images to my production server and run them (in the correct order)? I can't even find the built images on my machine anywhere.
This is my docker-compose.yaml file, which works great for development:
version: '3'
services:
  # Nginx client server
  nginx-client:
    container_name: nginx-client
    build:
      context: .
    restart: always
    stdin_open: true
    environment:
      - CHOKIDAR_USEPOLLING=true
    ports:
      - 28874:3000
    volumes:
      - ./client:/var/www
      - /var/www/node_modules
    networks:
      - app-network
  # MySQL server for the server side app
  mysql-server:
    image: mysql:5.7.22
    container_name: mysql-server
    restart: always
    tty: true
    ports:
      - "16427:3306"
    environment:
      MYSQL_USER: root
      MYSQL_ROOT_PASSWORD: BcGH2Gj41J5VF1
      MYSQL_DATABASE: todo
    volumes:
      - ./docker/mysql-server/my.cnf:/etc/mysql/my.cnf
    networks:
      - app-network
  # Nginx server for the server side app
  nginx-server:
    container_name: nginx-server
    image: nginx:1.17-alpine
    restart: always
    ports:
      - 49691:80
    volumes:
      - ./server:/var/www
      - ./docker/nginx-server/etc/nginx/conf.d:/etc/nginx/conf.d
    depends_on:
      - php-server
      - mysql-server
    networks:
      - app-network
  # PHP server for the server side app
  php-server:
    build:
      context: .
      dockerfile: ./docker/php-server/Dockerfile
    container_name: php-server
    restart: always
    tty: true
    environment:
      SERVICE_NAME: php
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./server:/var/www
      - ./docker/php-server/local.ini:/usr/local/etc/php/conf.d/local.ini
      - /var/www/vendor
    networks:
      - app-network
    depends_on:
      - mysql-server
# Networks
networks:
  app-network:
    driver: bridge
How do you build the Docker images? I assume you don't plan on using a registry, so you'll have to:
give an image name to all services
build the docker images somewhere (a CI/CD server, locally, it does not really matter)
save the images in a file
zip the file
copy the zipped file to the remote server
on the server, unzip and load
I'd create a script for this. Something like this:
#!/bin/bash
set -e
docker-compose build
docker save -o images.tar $(grep "image: .*" docker-compose.yml | awk '{ print $2 }')
gzip images.tar
scp images.tar.gz myserver:~
ssh myserver ./load_images.sh
On myserver, the load_images.sh script would look like this:
#!/bin/bash
if [ ! -f images.tar.gz ] ; then
echo "no file"
exit 1
fi
gunzip images.tar.gz
docker load -i images.tar
Then you'll have to create the docker commands to emulate the docker-compose configuration (I won't go through it all since it's nothing difficult, just boring, and I don't feel like writing it out). How do you simulate depends_on? Well, you'll have to start each container individually, in the right order, so you'll either prepare another script or do it manually.
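For completeness, a rough sketch of what that could look like for your file (the nginx-client and php-server image names are whatever you assigned in the first step above; the remaining env vars and volumes from the compose file would need to be repeated as -e/-v flags):

# recreate the compose network so the containers can resolve each other by name
docker network create app-network

# start the dependencies first, then the services that depend on them
docker run -d --name mysql-server --network app-network \
  -e MYSQL_ROOT_PASSWORD=BcGH2Gj41J5VF1 -e MYSQL_DATABASE=todo \
  mysql:5.7.22
docker run -d --name php-server --network app-network your-registry/php-server:latest
docker run -d --name nginx-server --network app-network -p 49691:80 nginx:1.17-alpine
docker run -d --name nginx-client --network app-network -p 28874:3000 your-registry/nginx-client:latest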
About using docker-compose on production:
There's not really a big issue with using docker-compose in production as long as you do it properly. E.g. some of my production setups tend to look like this:
docker-compose.yml
docker-compose.dev.yml
docker-compose.prd.yml
The devs will use docker-compose -f docker-compose.yml -f docker-compose.dev.yml $cmd while on production you'll use docker-compose -f docker-compose.yml -f docker-compose.prd.yml $cmd.
Taking your file as an example, I'd move all the volumes, ports, tty and stdin_open subsections from docker-compose.yml to docker-compose.dev.yml.
the docker-compose.dev.yml would look like this:
version: '3'
services:
  nginx-client:
    stdin_open: true
    ports:
      - 28874:3000
    volumes:
      - ./client:/var/www
      - /var/www/node_modules
  mysql-server:
    tty: true
    ports:
      - "16427:3306"
    volumes:
      - ./docker/mysql-server/my.cnf:/etc/mysql/my.cnf
  nginx-server:
    ports:
      - 49691:80
    volumes:
      - ./server:/var/www
      - ./docker/nginx-server/etc/nginx/conf.d:/etc/nginx/conf.d
  php-server:
    restart: always
    tty: true
    volumes:
      - ./server:/var/www
      - ./docker/php-server/local.ini:/usr/local/etc/php/conf.d/local.ini
      - /var/www/vendor
On production, the docker-compose.prd.yml will contain only the strictly required ports subsections, define a production environment file where the required passwords are stored (that file lives only on the production server, not in git), and so on.
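One way to wire that in, as a minimal sketch (production.env is just an example name and exists only on the server; --env-file needs docker-compose 1.25+, otherwise reference the file via env_file: inside docker-compose.prd.yml):

# use the server-only env file for variable substitution in the prd overlay
docker-compose --env-file production.env \
  -f docker-compose.yml -f docker-compose.prd.yml up -d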
Actually, there are many different approaches you can take.
Generally, docker-compose is used as a container-orchestration tool in development. There are several production-grade container-orchestration tools available on most of the popular hosting services like GCP and AWS. Kubernetes is by far the most popular and most commonly used.
Based on the services used in your docker-compose file, it is advisable not to use it directly in production. Running a MySQL container can lead to data loss, as containers are meant to be ephemeral. It is better to opt for a managed MySQL service like RDS instead. Similarly, nginx is better replaced by whatever reverse-proxy/load-balancer service your hosting provider offers.
When it comes to building the images, you can use your CI/CD pipeline to build them from their respective Dockerfiles, push them to an image registry of your choice, and let your hosting service pick up the images and deploy them with the container-orchestration tool it provides.
If you need a lightweight production environment, using Compose is probably fine. Other answers here have hinted at heavier tools with advantages like multi-host clusters and zero-downtime deployments, but they are much more involved.
One core piece missing from your description is an image registry. Docker Hub fits this role, if you want to use it; major cloud providers have one; even GitHub has a container registry now (for public repositories); or you can run your own. This addresses a couple of your problems: (2) you docker build the images locally (or on a dedicated continuous-integration system) and docker push them to the registry, then (3) you docker pull the images on the production system, or let Docker do it on its own.
A good practice that goes along with this is to give each build a unique tag, perhaps a date stamp or commit ID. This makes it very easy to upgrade (or downgrade) by changing the tag and re-running docker-compose up.
For this you'd change your docker-compose.yml like this:
services:
  nginx-client:
    # No `build:`
    image: registry.example.com/nginx:${NGINX_TAG:-latest}
  php-server:
    # No `build:`
    image: registry.example.com/php:${PHP_TAG:-latest}
And then you can update things like:
docker build -t registry.example.com/nginx:20201101 ./nginx
docker build -t registry.example.com/php:20201101 ./php
docker push registry.example.com/nginx:20201101
docker push registry.example.com/php:20201101
ssh production-system.example.com \
NGINX_TAG=20201101 PHP_TAG=20201101 docker-compose up -d
You can use multiple docker-compose.yml files to also use docker-compose build and docker-compose push for your custom images, with a development-only overlay file. There is an example in the Docker documentation.
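That pattern is roughly the following (docker-compose.build.yml is an assumed name for the overlay that adds the build: contexts next to the image: names above):

# build and push every service that has both image: and build: set
docker-compose -f docker-compose.yml -f docker-compose.build.yml build
docker-compose -f docker-compose.yml -f docker-compose.build.yml push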
Do not separately copy your code; it's contained in the image. Do not bind-mount local code over the image code. Especially do not use an anonymous volume to hold libraries, since this will completely ignore any updates in the underlying image. These are good practices in development too, since if you replace everything interesting in an image with volume mounts then it doesn't really have any relation to what you're running in production.
You will need to separately copy the configuration files you reference and the docker-compose.yml itself to the target system, and take responsibility for backing up the database data.
Finally, I'd recommend removing unnecessary options from the docker-compose.yml file (don't manually specify container_name:, use the Compose-provided default network, prefer specifying the command: in an image, and so on). That's not essential but it can help trim down the size of the YAML file.
I am very new to K8s, so I have never used it. But I have familiarized myself with the concepts of nodes and pods. I know that minikube is the local K8s engine for debugging etc., and that I should interact with any K8s engine via the kubectl tool. Now my questions are:
Does launching the same configuration on my local minikube instance and on a production AWS/etc. instance guarantee that the result will be identical?
How do I set up continuous deployment for my project? Right now I have configured CI that pushes images of tested code to Docker Hub with the :latest tag. But I want them to be deployed automatically in rolling-update mode without interrupting uptime.
It would be great to get correct configurations, with the steps I should perform, to make this work on any cluster. I don't want to keep the docker-compose notation and use kompose; I want to do it properly in the K8s context.
My current docker-compose.yml is (the django and react images are available from Docker Hub now):
version: "3.5"
services:
nginx:
build:
context: .
dockerfile: Dockerfile.nginx
restart: always
command: bash -c "service nginx start && tail -f /dev/null"
ports:
- 80:80
- 443:443
volumes:
- /mnt/wts_new_data_volume/static:/data/django/static
- /mnt/wts_new_data_volume/media:/data/django/media
- ./certs:/etc/letsencrypt/
- ./misc/ssl/server.crt:/etc/ssl/certs/server.crt
- ./misc/ssl/server.key:/etc/ssl/private/server.key
- ./misc/conf/nginx.conf:/etc/nginx/nginx.conf:ro
- ./misc/conf/passports.htaccess:/etc/passports.htaccess:ro
depends_on:
- react
redis:
restart: always
image: redis:latest
privileged: true
command: redis-server
celery:
build:
context: backend
command: bash -c "celery -A project worker -B -l info"
env_file:
- ./misc/.env
depends_on:
- redis
django:
build:
context: backend
command: bash -c "/code/manage.py collectstatic --no-input && echo donecollectstatic && /code/manage.py migrate && bash /code/run/daphne.sh"
volumes:
- /mnt/wts_new_data_volume/static:/data/django/static
- /mnt/wts_new_data_volume/media:/data/django/media
env_file:
- ./misc/.env
depends_on:
- redis
react:
build:
context: frontend
depends_on:
- django
The short answer is yes, you can replicate what you have with docker-compose with K8s.
It depends on your infrastructure. For example, if you have an external LoadBalancer in your AWS deployment, it won't be the same in your local environment.
You can do rolling updates (this typically works with stateless services). You can also take advantage of a GitOps type of approach.
The docker-compose notation is different from K8s, so yes, you'll have to translate it into Kubernetes objects: Pods, Deployments, Secrets, ConfigMaps, Volumes, etc. For the most part the basic objects will work on any cluster, but there will always be some objects tied to the physical characteristics of your cluster (e.g. storage volumes, load balancers, etc.). The Kubernetes docs are very comprehensive and super helpful.
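For the rolling-update part specifically, a minimal kubectl sketch (the deployment/container names and the tag are placeholders, and it assumes you push unique tags rather than :latest so Kubernetes can detect the change; your CI would run this after pushing a new image):

# point the Deployment at the newly pushed image; pods are replaced gradually
kubectl set image deployment/django django=yourrepo/django:20201101

# wait for the rollout to finish (or fail) so CI can report the result
kubectl rollout status deployment/django

# roll back if something goes wrong
kubectl rollout undo deployment/django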
I have a docker-compose file with this content:
version: '3'
services:
  db:
    image: postgres
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: pass
      POSTGRES_USER: user
    volumes:
      - postgres_data:/var/lib/postgresql/data
  redis:
    image: "redis:alpine"
    ports:
      - "6379:6379"
    volumes:
      - 'redis:/var/lib/redis/data'
  sidekiq:
    build: .
    links:
      - db
      - redis
    command: bundle exec sidekiq
    volumes:
      - '.:/app'
  web:
    image: production_image
    ports:
      - "80:80"
    links:
      - db
      - redis
      - sidekiq
    restart: always
volumes:
  postgres_data:
  redis:
To run Sidekiq in this setup, we run bundle exec sidekiq in the current directory. This works on my local machine in the development environment. But on the AWS EC2 instance, I am sending only my docker-compose.yml file and running docker-compose up, and since the project code is not there, Sidekiq fails. How should I run Sidekiq on the EC2 instance without sending my code there, using only the Docker image of my code in the compose file?
The two important things you need to do are to remove the volumes: declaration that gets the actual application code from your local filesystem, and upload your built Docker image to some registry. Since you're otherwise on AWS, ECR is a ready option; public Docker Hub will work fine too.
Depending on how your Rails app is structured, it might make sense to use the same image with different commands for the main application and the Sidekiq worker(s), and it might work to just make it say
sidekiq:
  image: production_image
  command: bundle exec sidekiq
Since you're looking at AWS anyway, you should also consider the possibility of using hosted services for data storage (RDS for the database, ElastiCache for Redis). The important thing is to include the locations of those data stores as environment variables so that you can change them later (maybe they would default to localhost for developer use, but always be something different when deployed).
You'll also notice that my examples don't have links:. Docker provides an internal DNS service for containers to find each other, and Docker Compose arranges for containers to be found via their service key in the YAML file.
Finally, you should be able to test this setup locally before deploying it to EC2. Run docker build and docker-compose up as needed; debug; and if it works then docker push the image(s) and launch it on Amazon.
version: '3'
volumes: *volumes_from_the_question
services:
  db: *db_from_the_question
  redis: *redis_from_the_question
  sidekiq:
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/sidekiq:1.0
    environment:
      PGHOST: db
      REDIS_HOST: redis
  app:
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/app:1.0
    ports:
      - "80:80"
    environment:
      PGHOST: db
      REDIS_HOST: redis
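As a hedged sketch of the build-and-push step that precedes this file (the account ID, region and tag are the placeholders already used above; AWS CLI v2 syntax):

# log the local Docker client in to ECR
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# build and push the app image; repeat for the sidekiq image,
# or reuse the app image with a different command as suggested above
docker build -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/app:1.0 .
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/app:1.0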