Docker Swarm with multiple hosts (different physical machines) - docker

As a bit of context, I am fairly new to Docker and Docker Compose, and until recently I had never even heard of Docker Swarm. I shouldn't be the one responsible for the task I've been given, but it's not like I can offload it to someone else...
So, the idea is to have two different physical machines to host a web server. One of the machines will run an Express.js server plus a Redis database, while the other machine hosts the system database (a Postgres DB).
Up until now I had a docker-compose.yaml file which created all these services and ran them.
version: '3.8'
services:
  server:
    image: server
    build:
      context: .
      target: build-node
    volumes:
      - ./:/src/app
      - /src/app/node_modules
    container_name: server
    ports:
      - 3000:3000
    depends_on:
      - postgres
      - redis
    entrypoint:
      ['./wait-for-it.sh', '-t', '30', 'postgres:5432', '--', 'yarn', 'dev']
    networks:
      - servernet
  # postgres database
  postgres:
    image: postgres
    user: postgres
    restart: always
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - ./data:/var/lib/postgresql/data # persist data even if container shuts down
      - ./db_scripts/startup.sh:/docker-entrypoint-initdb.d/c_startup.sh
      #- ./db_scripts/db.sql:/docker-entrypoint-initdb.d/a_db.sql
      #- ./db_scripts/db_population.sql:/docker-entrypoint-initdb.d/b_db_population.sql
    ports:
      - '5432:5432'
    networks:
      - servernet
  # pgadmin for managing postgis db (runs at localhost:5050)
  # To add the above postgres server to pgadmin, use hostname as defined by docker: 'postgres'
  pgadmin:
    image: dpage/pgadmin4
    restart: always
    environment:
      - PGADMIN_DEFAULT_EMAIL=${PGADMIN_DEFAULT_EMAIL}
      - PGADMIN_DEFAULT_PASSWORD=${PGADMIN_DEFAULT_PASSWORD}
    depends_on:
      - postgres
    ports:
      - 5050:80
    networks:
      - servernet
  redis:
    image: redis
    networks:
      - servernet
networks:
  servernet:
I would naturally run this with docker-compose up and that was the end of my concerns: everything ran together on localhost. But now, with this two-machine setup, I have no idea what to do. From what I've read, I have to create a swarm, but then how do I go about running everything from the same place (or with one command)? And how do I specify which services are to be executed on which machine?
Additionally, here is my Dockerfile in case it's useful:
FROM node as build-node
WORKDIR /src/app
COPY package.json .
COPY yarn.lock .
COPY wait-for-it.sh .
COPY . .
RUN yarn
RUN yarn style:fix
RUN yarn lint:fix
RUN yarn build
EXPOSE 3000
ENTRYPOINT yarn dev
Is my current docker-compose script even capable of being used with this new setup?
This is really over my head and I've got no idea where to start. The Docker documentation is also a bit confusing since I don't have much knowledge of Docker to begin with...
Thanks in advance!

You first need to learn what Docker Swarm is and how it works.
Docker Swarm is a container orchestration tool, meaning that it allows
the user to manage multiple containers deployed across multiple host
machines.
To answer your questions briefly:
how do I go about running everything from the same place?
You can use the docker stack deploy command to deploy a set of services,
and yes, you run it from one machine; you don't have to run it on the different machines. The machine you run it from is called the manager node.
The good news is that you can still use your docker-compose file, with slight modifications maybe.
So to summarize, the steps that you need to take are the following:
set up Docker Swarm (1 manager and 1 worker, as you have only 2 machines)
make sure it's working fine (communication between the nodes)
prepare your docker-compose file and deploy your stack from the manager node (a rough sketch of these commands is shown below)
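A minimal sketch of those steps, assuming the Express/Redis machine acts as the manager and the Postgres machine as the worker (the IPs, hostnames, token and stack name below are placeholders):
# On machine A (Express + Redis): make it the manager
docker swarm init --advertise-addr <MACHINE_A_IP>

# On machine B (Postgres): join as a worker, using the token printed by the init command
docker swarm join --token <WORKER_TOKEN> <MACHINE_A_IP>:2377

# Back on the manager: label the nodes so services can be pinned to a machine
docker node update --label-add role=app <machine-a-hostname>
docker node update --label-add role=db <machine-b-hostname>

# Deploy the stack from the manager. Note that docker stack deploy ignores build:,
# so the server image has to be built beforehand and be available to both nodes
# (e.g. through a registry).
docker stack deploy -c docker-compose.yml mystack
To pin a service to one of the machines, each service in the compose file can get a deploy.placement.constraints section keyed on those node labels, for example:
  postgres:
    # ...
    deploy:
      placement:
        constraints:
          - node.labels.role == db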

Related

How to bind folders inside docker containers?

I have docker-compose.yml on my local machine like below:
version: "3.3"
services:
api:
build: ./api
volumes:
- ./api:/api
ports:
- 3000:3000
links:
- mysql
depends_on:
- mysql
app:
build: ./app
volumes:
- ./app:/app
ports:
- 80:80
mysql:
image: mysql:8.0.27
volumes:
- ./mysql:/var/lib/mysql
tty: true
restart: always
environment:
MYSQL_DATABASE: db
MYSQL_ROOT_PASSWORD: qwerty
MYSQL_USER: db
MYSQL_PASSWORD: qwerty
ports:
- '3306:3306'
The api is a NestJS app; app and mysql are Angular and MySQL respectively. I need to work with this locally.
How can I make it so that my changes are applied without rebuilding the containers every time?
You don't have to build an image with your sources in it for a development environment. For NestJS, and since you're using Docker (I deliberately specify this because other container runtimes exist), you can simply run a Node.js image from the main Docker registry: https://hub.docker.com/_/node.
You could run it with:
docker run -d -v "$(pwd)/app:/app" node:12-alpine node /app/index.js
N.B.: I chose 12-alpine for the example, and I assume the file that starts your app is index.js; replace it with yours.
Keep in mind that you have to install the Node dependencies yourself, and they must be in the ./app directory.
For docker-compose, it could look like this:
version: "3.3"
services:
app:
image: node:12-alpine
command: /app/index.js
volumes:
- ./app:/app
ports:
- "80:80"
Same way for your API project.
For a production image, it is still suggested to build the image with the sources in it.
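For completeness, a production image in that spirit could be built with a multi-stage Dockerfile along these lines. This is only a sketch: it assumes a standard NestJS layout where yarn build emits dist/main.js, which may not match your project.
# hypothetical production Dockerfile for the api service (layout is assumed)
FROM node:12-alpine AS build
WORKDIR /api
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
COPY . .
RUN yarn build

FROM node:12-alpine
WORKDIR /api
ENV NODE_ENV=production
COPY --from=build /api/dist ./dist
COPY --from=build /api/node_modules ./node_modules
CMD ["node", "dist/main.js"]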
Say you're working on your front-end application (app). This needs to make calls out to the other components, especially api. So you can start the things it depends on, but not the application itself:
docker-compose up -d api
Update your application configuration for this different environment; if you would have proxied to http://api:3000 before, for example, you need to change this to http://localhost:3000 to connect to the container's published ports:.
Now you can develop your application totally normally, without doing anything Docker-specific.
# outside Docker, on your normal development workstation
yarn run dev
$EDITOR src/components/Foo.tsx
You might find it convenient to use environment variables for these settings that will, well, differ per environment. If you're developing the back-end code but want to attach a live UI to it, you'll either need to rebuild the container or update the front-end's back-end URL to point at the host system.
This approach also means you do not need to bind-mount your application's code into the container, and I'd recommend removing those volumes: blocks.
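For instance, if the front end reads its API base URL from an environment variable (the variable name below is hypothetical, not something defined in the question's project), the out-of-Docker run shown above becomes:
# outside Docker, pointing the front end at the containers' published ports
API_URL=http://localhost:3000 yarn run dev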

How to clone a docker stack on the same server

I want to practice using docker-compose. I have a tournament happening over the weekend and I want to set up 10 copies of the same web app on ONE server with urls like:
http://team1.example.com
http://team2.example.com
etc...
http://team10.example.com
There will be 10 teams in the tournament, and they will all go to their respective url http://team<your team number>.example.com via web browser, save some information to a database, and maybe even modify the code on the actual server.
So I built a simple nodejs app that simply writes data to a mongo database. Then I decided to set up two websites http://team1.example.com and http://team2.example.com. So I made this docker compose file:
version: '3'
services:
  api1:
    image: dockerjohn/tournament:latest
    environment:
      - DB=database1
    ports:
      - 80:3000
    networks:
      - net1
  db1:
    image: mongo:4.0.3
    container_name: database1
    networks:
      - net1
  api2:
    image: dockerjohn/tournament:latest
    environment:
      - DB=database2
    ports:
      - 81:3000
    networks:
      - net2
  db2:
    image: mongo:4.0.3
    container_name: database2
    networks:
      - net2
networks:
  net1:
  net2:
Then I installed apache web server to reverse proxy team 1 to port 80 and team 2 to port 81. This all works fine.
To set up the remaining teams 3 to 10, I have to duplicate the entries I have in my docker compose yml file and duplicate virtual host entries in apache.
My question: Is there a docker command that will let me clone each docker stack (team 1, team2, etc...) more easily without all this data entry? Do I need Kubernetes to do this?
Kubernetes would make it way easier to set this up. It can take care of the reverse proxy setup too if you install the nginx ingress controller.
You could create a single Kubernetes manifest containing:
a mongodb deployment, service, persistent volume claim
a nodejs deployment, service
You can then apply this 10 times, each time using a different namespace:
kubectl apply -n team01 -f manifest.yaml
kubectl apply -n team02 -f manifest.yaml
kubectl apply -n team03 -f manifest.yaml
...
Of course, you would need 10 different ingress rules because you want 10 different domains, but that would be the only thing you need to copy-paste.
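For example, a per-team Ingress could look roughly like this (assuming the nginx ingress controller and a Service named nodejs exposing port 3000; both names are placeholders for whatever your manifest actually defines):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tournament
  namespace: team01
spec:
  ingressClassName: nginx
  rules:
    - host: team1.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nodejs
                port:
                  number: 3000
Only the namespace and host would change from team to team.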
I figured it out. Docker has features called swarm and stack that handle this. First, I simplified my docker-compose.yml file to just this:
version: '3'
services:
  api:
    image: dockerjohn/tournament:latest
    environment:
      - DB=$DB
    ports:
      - $WEB_PORT:3000
    networks:
      - mynet
  db:
    image: mongo:4.0.3
    networks:
      - mynet
networks:
  mynet:
Then I ran these commands from the same folder as my docker-compose file, like this:
docker swarm init
DB=team1_db WEB_PORT=81 docker stack deploy -c docker-compose.yml team1
DB=team2_db WEB_PORT=82 docker stack deploy -c docker-compose.yml team2
DB=team3_db WEB_PORT=83 docker stack deploy -c docker-compose.yml team3
DB=team4_db WEB_PORT=84 docker stack deploy -c docker-compose.yml team4
DB=team5_db WEB_PORT=85 docker stack deploy -c docker-compose.yml team5
etc...
You have to structure the DB env variable as <stack name (the last argument of the docker stack deploy command)>_<service name in the docker-compose.yml file>.
Now I just need to find a way to simplify my Apache setup so I don't have to duplicate so many vhost entries. I've heard there's a Docker image called Traefik which can handle this reverse proxying (a rough sketch of that idea is below). Maybe I'll try that out and update my answer afterwards.
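For the record, the Traefik idea could look roughly like the following. This is only a sketch under assumptions (Traefik v2 with its Docker provider in swarm mode, the api service listening on port 3000, a shared overlay network created beforehand) and is not something tested in this answer. Traefik itself would be deployed once, e.g. as its own small stack:
services:
  traefik:
    image: traefik:v2.9
    command:
      - --providers.docker.swarmMode=true
      - --providers.docker.exposedByDefault=false
      - --entrypoints.web.address=:80
    ports:
      - 80:80
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - traefik-public
networks:
  traefik-public:
    external: true   # created once with: docker network create --driver overlay traefik-public
Each team stack would then just attach its api service to that shared network and declare routing labels instead of publishing a port:
  api:
    # ...as before, but without a published port...
    networks:
      - mynet
      - traefik-public
    deploy:
      labels:
        - traefik.enable=true
        # each stack would use its own router name and host here
        - traefik.http.routers.team1.rule=Host(`team1.example.com`)
        - traefik.http.services.team1.loadbalancer.server.port=3000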

How do I ensure docker is running the new code in my containers when starting up?

I am currently writing a webapp (Java backend, React front end) and have been deploying via a Docker Compose file. I've made changes, and when I run the front end via yarn build and start the back end with Maven, the changes appear. However, when running with Docker, the changes aren't there.
I've been using the docker compose up and docker compose down commands, and I even run docker system prune -a after stopping my containers via docker compose down, but my new changes aren't showing. I'd appreciate any guidance on what I'm doing wrong.
I also have Docker Desktop and have manually deleted all of the volumes, containers and images so that they have to be regenerated. Running the build commands with caching disabled didn't help either.
I also deleted the .m2 folder so that it gets regenerated (my understanding is that this is the cache store for the backend). My changes are mainly on the front end, but since my front-end container depends on the back end, I thought regenerating the back-end container might have a knock-on effect that would help.
I would greatly appreciate any help; please let me know if there's anything else I can provide for context. The changes involve removing a search bar and some text (both of which are commented out in the code but still appear), and adding another button, which doesn't show up.
My docker compose file is below as follows:
services:
  mysqldb:
    # image: mysql:5.7
    build: ./Database
    restart: unless-stopped
    env_file: ./.env
    environment:
      - MYSQL_ROOT_PASSWORD=$MYSQLDB_ROOT_PASSWORD
      - MYSQL_DATABASE=$MYSQLDB_DATABASE
    ports:
      - $MYSQLDB_LOCAL_PORT:$MYSQLDB_DOCKER_PORT
    volumes:
      - db:/var/lib/mysql
    networks:
      - backend
  app_backend:
    depends_on:
      - mysqldb
    build: ./
    restart: on-failure
    env_file: ./.env
    ports:
      - $SPRING_LOCAL_PORT:$SPRING_DOCKER_PORT
    environment:
      SPRING_APPLICATION_JSON: '{
        "spring.datasource.url" : "jdbc:mysql://mysqldb:$MYSQLDB_DOCKER_PORT/$MYSQLDB_DATABASE?useSSL=false&allowPublicKeyRetrieval=true&serverTimezone=UTC",
        "spring.datasource.username" : "$MYSQLDB_USER",
        "spring.datasource.password" : "$MYSQLDB_ROOT_PASSWORD",
        "spring.jpa.properties.hibernate.dialect" : "org.hibernate.dialect.MySQL5InnoDBDialect",
        "spring.jpa.hibernate.ddl-auto" : "update"
      }'
    volumes:
      - .m2:/root/.m2
    stdin_open: true
    tty: true
    networks:
      - backend
      - frontend
  app_frontend:
    depends_on:
      - app_backend
    build: ../MyProjectFrontEnd
    restart: on-failure
    ports:
      - 80:80
    networks:
      - frontend
volumes:
  db:
networks:
  backend:
  frontend:
Since the issue is on the front end, I've also attached the dockerfile for the front end below:
FROM node:16.13.0-alpine AS react-build
WORKDIR /MyProjectFrontEnd
RUN yarn cache clean
RUN yarn install
COPY . ./
RUN yarn
RUN yarn build
# Stage 2 - the production environment
FROM nginx:alpine
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY /build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Update: the browser cache was storing some of the old assets (rookie error); however, not all of the changes are being loaded even after clearing it.
If your source code is in the same folder as your Dockerfile (as it usually is), you can be sure that your latest source code will be built and deployed. That behaviour is one of the cornerstones on which Docker is built; if it were failing, it would be the end of the world.
These kinds of errors are not related to the Docker core. Usually it is something at the application level and/or in its development:
a library mistake
a developer mistake
a functional test mistake
a load balancer mistake
Advice
docker-compose and Windows are for the development stage. For deployment to real environments for real users, you should use Linux and a tool like Kubernetes.

How to deploy a docker app to production without using Docker compose?

I have heard it said that
Docker compose is designed for development NOT for production.
But I have seen people use Docker Compose in production with bind mounts: they then pull the latest changes from GitHub and they appear live in production without the need to rebuild. Others say that you need to COPY . . for production and rebuild.
But how does this work? In docker-compose.yaml you can specify depends_on, which doesn't start one container until the other is running. If I don't use docker-compose in production, then what about this? How would I push my docker-compose setup to production (I have 4 services / 4 images that I need to run)? With docker-compose up -d it is so easy.
How do I build each image individually?
How can I copy these images to my production server and run them (in the correct order)? I can't even find the built images on my machine anywhere.
This is my docker-compose.yaml file that works great for development
version: '3'
services:
  # Nginx client server
  nginx-client:
    container_name: nginx-client
    build:
      context: .
    restart: always
    stdin_open: true
    environment:
      - CHOKIDAR_USEPOLLING=true
    ports:
      - 28874:3000
    volumes:
      - ./client:/var/www
      - /var/www/node_modules
    networks:
      - app-network
  # MySQL server for the server side app
  mysql-server:
    image: mysql:5.7.22
    container_name: mysql-server
    restart: always
    tty: true
    ports:
      - "16427:3306"
    environment:
      MYSQL_USER: root
      MYSQL_ROOT_PASSWORD: BcGH2Gj41J5VF1
      MYSQL_DATABASE: todo
    volumes:
      - ./docker/mysql-server/my.cnf:/etc/mysql/my.cnf
    networks:
      - app-network
  # Nginx server for the server side app
  nginx-server:
    container_name: nginx-server
    image: nginx:1.17-alpine
    restart: always
    ports:
      - 49691:80
    volumes:
      - ./server:/var/www
      - ./docker/nginx-server/etc/nginx/conf.d:/etc/nginx/conf.d
    depends_on:
      - php-server
      - mysql-server
    networks:
      - app-network
  # PHP server for the server side app
  php-server:
    build:
      context: .
      dockerfile: ./docker/php-server/Dockerfile
    container_name: php-server
    restart: always
    tty: true
    environment:
      SERVICE_NAME: php
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./server:/var/www
      - ./docker/php-server/local.ini:/usr/local/etc/php/conf.d/local.ini
      - /var/www/vendor
    networks:
      - app-network
    depends_on:
      - mysql-server
# Networks
networks:
  app-network:
    driver: bridge
How do you build the Docker images? I assume you don't plan on using a registry, so you'll have to:
give an image name to all services
build the docker images somewhere (a CI/CD server, locally, it does not really matter)
save the images in a file
zip the file
export the zipped file remotely
on the server, unzip and load
I'd create a script for this. Something like this:
#!/bin/bash
set -e
docker-compose build
docker save -o images.tar $( grep "image: .*" docker-compose.yml | awk '{ print $2 }' )
gzip images.tar
scp images.tar.gz myserver:~
ssh myserver ./load_images.sh
On myserver, the load_images.sh script would look like this:
#!/bin/bash
if [ ! -f images.tar.gz ] ; then
echo "no file"
exit 1
fi
gunzip images.tar.gz
docker load -i images.tar
Then you'll have to create the docker run commands that emulate the docker-compose configuration (I won't go through all of it since it's nothing difficult, just boring). How do you simulate depends_on? Well, you'll have to start each container individually, so you'll either prepare another script or do it manually; a rough sketch of such a script is below.
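A minimal sketch of what that start-up script could look like, assuming the images were loaded on the server with known names; the custom image names, env values and mounted paths here are placeholders, not taken from the original setup:
#!/bin/bash
set -e

# recreate the compose network
docker network create app-network || true

# start containers in dependency order: mysql first, then php, then the nginx services
docker run -d --name mysql-server --network app-network \
  -e MYSQL_ROOT_PASSWORD=changeme -e MYSQL_DATABASE=todo \
  -v "$(pwd)/docker/mysql-server/my.cnf:/etc/mysql/my.cnf" \
  mysql:5.7.22

docker run -d --name php-server --network app-network \
  -v "$(pwd)/server:/var/www" \
  my-registry/php-server:latest

docker run -d --name nginx-server --network app-network -p 49691:80 \
  -v "$(pwd)/server:/var/www" \
  -v "$(pwd)/docker/nginx-server/etc/nginx/conf.d:/etc/nginx/conf.d" \
  nginx:1.17-alpine

docker run -d --name nginx-client --network app-network -p 28874:3000 \
  my-registry/nginx-client:latest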
About using docker-compose on production:
There isn't really a big issue with using docker-compose in production, as long as you do it properly. E.g. some of my production setups tend to look like this:
docker-compose.yml
docker-compose.dev.yml
docker-compose.prd.yml
The devs will use docker-compose -f docker-compose.yml -f docker-compose.dev.yml $cmd while on production you'll use docker-compose -f docker-compose.yml -f docker-compose.prd.yml $cmd.
Taking your file as an example, I'd move all the volumes, ports, tty and stdin_open subsections from docker-compose.yml to docker-compose.dev.yml, e.g.
the docker-compose.dev.yml would look like this:
version: '3'
services:
  nginx-client:
    stdin_open: true
    ports:
      - 28874:3000
    volumes:
      - ./client:/var/www
      - /var/www/node_modules
  mysql-server:
    tty: true
    ports:
      - "16427:3306"
    volumes:
      - ./docker/mysql-server/my.cnf:/etc/mysql/my.cnf
  nginx-server:
    ports:
      - 49691:80
    volumes:
      - ./server:/var/www
      - ./docker/nginx-server/etc/nginx/conf.d:/etc/nginx/conf.d
  php-server:
    restart: always
    tty: true
    volumes:
      - ./server:/var/www
      - ./docker/php-server/local.ini:/usr/local/etc/php/conf.d/local.ini
      - /var/www/vendor
On production, the docker-compose.prd.yml will have only the strictly required ports: subsections, will define a production environment file where the required passwords are stored (a file that lives only on the production server, not in git), and so on; a rough sketch follows.
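Just to illustrate the idea (the file contents below are placeholders, not something taken from the original setup), the production overlay can stay very small:
# hypothetical docker-compose.prd.yml
version: '3'
services:
  mysql-server:
    env_file:
      - ./prod.env          # holds MYSQL_ROOT_PASSWORD etc., kept out of git
    volumes:
      - mysql-data:/var/lib/mysql
  nginx-server:
    ports:
      - 80:80
volumes:
  mysql-data: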
Actually, there are many different approaches you can take.
Generally, docker-compose is used as a container orchestration tool in development. There are several production-grade container orchestration tools available on most of the popular hosting services like GCP and AWS; Kubernetes is by far the most popular and most commonly used.
Based on the services used in your docker-compose file, it is advisable not to use it directly in production. Running a MySQL container can lead to data-loss issues, as containers are meant to be ephemeral; it is better to opt for a managed MySQL service like RDS instead. Similarly, nginx is better replaced with the reverse-proxy/load-balancer services that your hosting provider offers.
When it comes to building the images, you can use your CI/CD pipeline to build them from their respective Dockerfiles, push them to an image registry of your choice, and let your hosting service pick up the images and deploy them with the container orchestration tool it provides.
If you need a lightweight production environment, using Compose is probably fine. Other answers here have hinted at more involved tools, that have advantages like supporting multiple-host clusters and zero-downtime deployments, but they are much more involved.
One core piece missing from your description is an image registry. Docker Hub fits this role, if you want to use it; major cloud providers have one; even GitHub has a container registry now (for public repositories); or you can run your own. This addresses a couple of your problems: (2) you docker build the images locally (or on a dedicated continuous-integration system) and docker push them to the registry, then (3) you docker pull the images on the production system, or let Docker do it on its own.
A good practice that goes along with this is to give each build a unique tag, perhaps a date stamp or commit ID. This makes it very easy to upgrade (or downgrade) by changing the tag and re-running docker-compose up.
For this you'd change your docker-compose.yml like:
services:
  nginx-client:
    # No `build:`
    image: registry.example.com/nginx:${NGINX_TAG:-latest}
  php-server:
    # No `build:`
    image: registry.example.com/php:${PHP_TAG:-latest}
And then you can update things like:
docker build -t registry.example.com/nginx:20201101 ./nginx
docker build -t registry.example.com/php:20201101 ./php
docker push registry.example.com/nginx:20201101 registry.example.com/php:20201101
ssh production-system.example.com \
NGINX_TAG=20201101 PHP_TAG=20201101 docker-compose up -d
You can use multiple docker-compose.yml files to also use docker-compose build and docker-compose push for your custom images, with a development-only overlay file. There is an example in the Docker documentation.
Do not separately copy your code; it's contained in the image. Do not bind-mount local code over the image code. Especially do not use an anonymous volume to hold libraries, since this will completely ignore any updates in the underlying image. These are good practices in development too, since if you replace everything interesting in an image with volume mounts then it doesn't really have any relation to what you're running in production.
You will need to separately copy the configuration files you reference and the docker-compose.yml itself to the target system, and take responsibility for backing up the database data.
Finally, I'd recommend removing unnecessary options from the docker-compose.yml file (don't manually specify container_name:, use the Compose-provided default network, prefer specifying the command: in an image, and so on). That's not essential but it can help trim down the size of the YAML file.

How should I rewrite docker-compose.yaml for Kubernetes?

I am very new to K8s and have never actually used it, but I have familiarized myself with the concepts of nodes and pods. I know that minikube is the local k8s engine for debugging etc., and that I should interact with any k8s cluster via the kubectl tool. Now my questions are:
Does launching the same configuration on my local minikube instance and on a production AWS/etc. instance guarantee that the result will be identical?
How do I set up continuous deployment for my project? Right now I have CI configured to push images of tested code to Docker Hub with the :latest tag, but I want them to be automatically deployed in rolling-update mode without interrupting uptime.
It would be great to get correct configurations along with the steps I should perform to make this work on any cluster. I don't want to keep the docker-compose notation and use kompose; I want to do it properly in the k8s context.
My current docker-compose.yml is (django and react services are available from dockerhub now):
version: "3.5"
services:
nginx:
build:
context: .
dockerfile: Dockerfile.nginx
restart: always
command: bash -c "service nginx start && tail -f /dev/null"
ports:
- 80:80
- 443:443
volumes:
- /mnt/wts_new_data_volume/static:/data/django/static
- /mnt/wts_new_data_volume/media:/data/django/media
- ./certs:/etc/letsencrypt/
- ./misc/ssl/server.crt:/etc/ssl/certs/server.crt
- ./misc/ssl/server.key:/etc/ssl/private/server.key
- ./misc/conf/nginx.conf:/etc/nginx/nginx.conf:ro
- ./misc/conf/passports.htaccess:/etc/passports.htaccess:ro
depends_on:
- react
redis:
restart: always
image: redis:latest
privileged: true
command: redis-server
celery:
build:
context: backend
command: bash -c "celery -A project worker -B -l info"
env_file:
- ./misc/.env
depends_on:
- redis
django:
build:
context: backend
command: bash -c "/code/manage.py collectstatic --no-input && echo donecollectstatic && /code/manage.py migrate && bash /code/run/daphne.sh"
volumes:
- /mnt/wts_new_data_volume/static:/data/django/static
- /mnt/wts_new_data_volume/media:/data/django/media
env_file:
- ./misc/.env
depends_on:
- redis
react:
build:
context: frontend
depends_on:
- django
The short answer is yes, you can replicate what you have with docker-compose with K8s.
It depends on your infrastructure. For example, if you have an external LoadBalancer in your AWS deployment, it won't be the same in your local minikube setup.
You can do rolling updates (this typically works with stateless services). You can also take advantage of a GitOps type of approach.
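For example, once the Deployments exist, a rolling update can be triggered simply by pointing a Deployment at a new image tag; the deployment and container names below are hypothetical:
# assumes a Deployment named django whose container is also named django
kubectl set image deployment/django django=yourrepo/django:<new-tag>
kubectl rollout status deployment/django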
The docker-compose notation is different from K8s, so yes, you'll have to translate it into Kubernetes objects: Pods, Deployments, Secrets, ConfigMaps, Volumes, etc. For the most part the basic objects will work on any cluster, but there will always be some objects tied to the physical characteristics of your cluster (e.g. storage volumes, load balancers). The Kubernetes docs are very comprehensive and super helpful; a minimal translation of one of your services is sketched below.
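As an illustration of that translation, here is roughly what the redis service from the compose file could become as a Deployment plus a Service; the names and labels are illustrative, not taken from an existing manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:latest
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis
  ports:
    - port: 6379
      targetPort: 6379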
