I want to practice using docker-compose. I have a tournament happening over the weekend and I want to set up 10 copies of the same web app on ONE server with urls like:
http://team1.example.com
http://team2.example.com
etc...
http://team10.example.com
There will be 10 teams in the tournament, and they will all go to their respective url http://team<your team number>.example.com via web browser, save some information to a database, and maybe even modify the code on the actual server.
So I built a simple nodejs app that writes data to a mongo database. Then I decided to set up two websites, http://team1.example.com and http://team2.example.com, with this docker compose file:
version: '3'
services:
  api1:
    image: dockerjohn/tournament:latest
    environment:
      - DB=database1
    ports:
      - 80:3000
    networks:
      - net1
  db1:
    image: mongo:4.0.3
    container_name: database1
    networks:
      - net1
  api2:
    image: dockerjohn/tournament:latest
    environment:
      - DB=database2
    ports:
      - 81:3000
    networks:
      - net2
  db2:
    image: mongo:4.0.3
    container_name: database2
    networks:
      - net2
networks:
  net1:
  net2:
Then I installed apache web server to reverse proxy team 1 to port 80 and team 2 to port 81. This all works fine.
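Each vhost is just a proxy to the published port; team 2's looks roughly like this (a mod_proxy sketch, not my exact config):
<VirtualHost *:80>
    ServerName team2.example.com
    ProxyPreserveHost On
    # forward to the api2 container published on host port 81
    ProxyPass / http://127.0.0.1:81/
    ProxyPassReverse / http://127.0.0.1:81/
</VirtualHost>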
To set up the remaining teams 3 to 10, I have to duplicate the entries I have in my docker compose yml file and duplicate virtual host entries in apache.
My question: Is there a docker command that will let me clone each docker stack (team 1, team 2, etc...) more easily, without all this data entry? Do I need Kubernetes to do this?
Kubernetes would make this way easier to set up. It can take care of the reverse proxy setup too if you install the nginx ingress controller.
You could create a single Kubernetes manifest containing:
- a mongodb deployment, service, and persistent volume claim
- a nodejs deployment and service
You can then apply this 10 times, each time using a different namespace (create each namespace first):
kubectl apply -n team01 -f manifest.yaml
kubectl apply -n team02 -f manifest.yaml
kubectl apply -n team03 -f manifest.yaml
...
Of course, you would need 10 different ingress rules because you want 10 different domains, but that would be the only thing you need to copy-paste.
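One of those per-team ingress rules might look something like this (a sketch; the service name, port, and ingress class are assumptions):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  namespace: team01
spec:
  ingressClassName: nginx
  rules:
    - host: team1.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nodejs
                port:
                  number: 3000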
I figured it out. There are options for docker called swarm and stack. First, I simplified my docker-compose.yml file to just this:
version: '3'
services:
  api:
    image: dockerjohn/tournament:latest
    environment:
      - DB=$DB
    ports:
      - $WEB_PORT:3000
    networks:
      - mynet
  db:
    image: mongo:4.0.3
    networks:
      - mynet
networks:
  mynet:
Then I ran these commands from the same folder as my docker-compose file:
docker swarm init
DB=team1_db WEB_PORT=81 docker stack deploy -c docker-compose.yml team1
DB=team2_db WEB_PORT=82 docker stack deploy -c docker-compose.yml team2
DB=team3_db WEB_PORT=83 docker stack deploy -c docker-compose.yml team3
DB=team4_db WEB_PORT=84 docker stack deploy -c docker-compose.yml team4
DB=team5_db WEB_PORT=85 docker stack deploy -c docker-compose.yml team5
etc...
You have to structure the DB env variable as <stack name located at the end of the docker stack deploy command>_<service name in the docker-compose yaml file>.
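You can sanity-check the generated names after deploying:
$ docker stack services team1
# every service in the stack is listed with the team1_ prefix,
# e.g. team1_db, which is exactly the hostname the api container uses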
Now I just need to find a way to simplify my apache setup so I don't have to duplicate so many vhost entries. I've heard there's a docker image called Traefik that can handle this reverse proxying. Maybe I'll try that out and update my answer after.
I have 2 different directories each containing docker containers for different purposes and both spun up with docker compose.
Dir A has Traefik config and container (and other containers) as well as environment variables whereas Dir B is a bunch of containers.
I want to now include Traefik labels in Dir B containers but when I run compose in Dir B, I'm facing:
WARN[0000] The "DOMAIN_NAME" variable is not set. Defaulting to a blank string.
service "[service name]" refers to undefined network traefik_proxy: invalid compose project
I'm guessing this is because services in Dir B can't see traefik_proxy since it's part of a different stack and same with the DOMAIN_NAME variable.
How can I have Dir B 'reach across' to Dir A? Is it even possible with my current config?
If you want to have multiple compose projects share a single Traefik frontend, that's certainly possible, but you need to place Traefik on a shared network (note that the traefik service itself must be attached to that network). For this model, I would suggest starting with a docker-compose.yaml that only deploys Traefik; e.g.:
version: "3"
services:
  traefik:
    image: docker.io/traefik:latest
    command:
      - --api.insecure=true
      - --providers.docker
      - --accesslog=true
      - --accesslog.filepath=/dev/stderr
      - --providers.docker.exposedByDefault=false
    ports:
      - "80:80"
      - "443:443"
      - "127.0.0.2:8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - services
networks:
  services:
    external: true
Start by creating the shared network:
docker network create services
And then starting the Traefik project:
pushd traefik; docker-compose up -d; popd
Now for every project you want to make available via Traefik, put your services on the services network. For example, let's say we have this in app1/docker-compose.yaml:
version: "3"
services:
app1:
image: docker.io/containous/whoami
networks:
- services
labels:
- "traefik.enable=true"
- "traefik.http.routers.app1.rule=PathPrefix(`/app1`)"
networks:
services:
external: true
Then I can run:
pushd app1; docker-compose up -d; popd
And now my app1 service is available at http://localhost/app1/.
We can add as many services as we want like this; the only requirement is that the containers are attached to the services network.
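To sanity-check the routing, a plain curl against Traefik's port 80 should print the request report from the whoami container:
$ curl http://localhost/app1/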
As a bit of context, I am fairly new to Docker and docker-compose, and until recently I'd never even heard of Docker Swarm. I shouldn't be the one responsible for the task I've been given, but it's not like I can offload it to someone else...
So, the idea is to have two different physical machines to host a web server. One of the machines will run an Express.js server plus a Redis database, while the other machine hosts the system database (a Postgres DB).
Up until now I had a docker-compose.yaml file which created all these services and ran them.
version: '3.8'
services:
  server:
    image: server
    build:
      context: .
      target: build-node
    volumes:
      - ./:/src/app
      - /src/app/node_modules
    container_name: server
    ports:
      - 3000:3000
    depends_on:
      - postgres
      - redis
    entrypoint:
      ['./wait-for-it.sh', '-t', '30', 'postgres:5432', '--', 'yarn', 'dev']
    networks:
      - servernet
  # postgres database
  postgres:
    image: postgres
    user: postgres
    restart: always
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - ./data:/var/lib/postgresql/data # persist data even if container shuts down
      - ./db_scripts/startup.sh:/docker-entrypoint-initdb.d/c_startup.sh
      #- ./db_scripts/db.sql:/docker-entrypoint-initdb.d/a_db.sql
      #- ./db_scripts/db_population.sql:/docker-entrypoint-initdb.d/b_db_population.sql
    ports:
      - '5432:5432'
    networks:
      - servernet
  # pgadmin for managing postgis db (runs at localhost:5050)
  # To add the above postgres server to pgadmin, use hostname as defined by docker: 'postgres'
  pgadmin:
    image: dpage/pgadmin4
    restart: always
    environment:
      - PGADMIN_DEFAULT_EMAIL=${PGADMIN_DEFAULT_EMAIL}
      - PGADMIN_DEFAULT_PASSWORD=${PGADMIN_DEFAULT_PASSWORD}
    depends_on:
      - postgres
    ports:
      - 5050:80
    networks:
      - servernet
  redis:
    image: redis
    networks:
      - servernet
networks:
  servernet:
I would naturally run this with docker-compose up and that was the end of my concerns; everything ran together on localhost. But now, with this setup, I have no idea what to do. From what I've read, I have to create a swarm, but then how do I go about running everything from the same place (or with one command)? And how do I specify which services are to be executed on which machine?
Additionally, here is my Dockerfile in case it's useful:
FROM node as build-node
WORKDIR /src/app
COPY package.json .
COPY yarn.lock .
COPY wait-for-it.sh .
COPY . .
RUN yarn
RUN yarn style:fix
RUN yarn lint:fix
RUN yarn build
EXPOSE 3000
ENTRYPOINT yarn dev
Is my current docker-compose script even capable of being used with this new setup?
This is really over my head and I've got no idea where to start. The Docker documentation is also a bit confusing since I don't have much knowledge of Docker to begin with...
Thanks in advance!
You first need to learn what Docker Swarm is and how it works:
Docker swarm is a container orchestration tool, meaning that it allows
the user to manage multiple containers deployed across multiple hosts
machines.
To answer your questions briefly:
how do I go about running everything from the same place?
You can use the docker stack deploy command to deploy a set of services. And yes, you run it from one host machine; you don't have to run it on different machines. That machine is called the master node.
The good news is that you can still use your docker-compose file, with slight modifications maybe.
So, to summarize, the steps you need to take are the following:
1. install docker swarm (1 master and 1 worker, as you have only 2 machines)
2. make sure it's working fine (communication between nodes)
3. prepare your docker-compose file and deploy your stack from the master node, as in the sketch below
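A rough sketch of those steps on the command line (the IPs, hostnames, and the node label are placeholders, not values from your setup):
# on machine 1, which becomes the master (manager) node
docker swarm init --advertise-addr <machine1-ip>
# on machine 2, run the join command that swarm init printed
docker swarm join --token <worker-token> <machine1-ip>:2377
# back on the master: label the worker so services can be pinned to it
docker node update --label-add role=db <machine2-hostname>
# deploy the stack from the master
docker stack deploy -c docker-compose.yaml mystack
To pin a service, e.g. postgres, to the second machine, the compose file would carry a placement constraint like:
    deploy:
      placement:
        constraints:
          - node.labels.role == db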
I am using docker-compose and my configuration file is simply:
version: '3.7'
volumes:
  mongodb_data: {}
services:
  mongodb:
    image: mongo:4.4.3
    restart: always
    ports:
      - "27017:27017"
    volumes:
      - mongodb_data:/data/db
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=super-secure-password
  rocket:
    build:
      context: .
    depends_on:
      - mongodb
    image: rocket:dev
    dns:
      - 1.1.1.1
      - 8.8.8.8
    volumes:
      - .:/var/rocket
    ports:
      - "30301-30309:30300"
I start MongoDB with docker-compose up, and then in new terminal windows I run two Node.js applications, each with all the source code in /var/rocket, with:
# 1st Node.js application
docker-compose run --service-ports rocket
# 2nd Node.js application
docker-compose run --service-ports rocket
The problem is that the 2nd Node.js application service needs to communicate with the 1st Node.js application service on port 30300. I was able to get this working by referencing the 1st Node.js application by the id of the Docker container:
Connect to 1st Node.js application service on: tcp://myapp_myapp_run_837785c85abb:30300 from the 2nd Node.js application service.
Obviously this does not work long term as the container id changes every time I docker-compose up and down. Is there a standard way to do networking when you start multiple of the same container from docker-compose?
You can run the same image multiple times in the same docker-compose.yml file:
version: '3.7'
services:
  mongodb: { ... }
  rocket1:
    build: .
    depends_on:
      - mongodb
    ports:
      - "30301:30300"
  rocket2:
    build: .
    depends_on:
      - mongodb
    ports:
      - "30302:30300"
As described in Networking in Compose, the containers can communicate using their respective service names and their "normal" port numbers, like rocket1:30300; any ports: are ignored for this. You shouldn't need to manually docker-compose run anything.
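If the second copy needs the first one's address, you can hand it over in the same file; a minimal sketch (PEER_URL is a hypothetical variable name your app would have to read):
  rocket2:
    build: .
    depends_on:
      - mongodb
    ports:
      - "30302:30300"
    environment:
      # resolved by Compose's built-in DNS to the rocket1 container
      - PEER_URL=tcp://rocket1:30300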
Well, you could always give specific names to your two Node containers:
$ docker-compose run --name rocket1 --service-ports rocket
$ docker-compose run --name rocket2 --service-ports rocket
And then use:
tcp://rocket1:30300
I have heard it said that
Docker compose is designed for development NOT for production.
But I have seen people use Docker compose in production with bind mounts, then pull the latest changes from github so they appear live in production without the need to rebuild. Others say that you need to COPY . . for production and rebuild.
But how does this work? In docker-compose.yaml you can specify depends_on, which doesn't start one container until the other is running. If I don't use docker-compose in production, then what about this? How would I push my docker-compose setup to production (I have 4 services / 4 images that I need to run)? With docker-compose up -d it is so easy.
How do I build each image individually?
How can I copy these images to my production server to run them (in the correct order)? I can't even find the built images on my machine anywhere.
This is my docker-compose.yaml file that works great for development
version: '3'
services:
  # Nginx client server
  nginx-client:
    container_name: nginx-client
    build:
      context: .
    restart: always
    stdin_open: true
    environment:
      - CHOKIDAR_USEPOLLING=true
    ports:
      - 28874:3000
    volumes:
      - ./client:/var/www
      - /var/www/node_modules
    networks:
      - app-network
  # MySQL server for the server side app
  mysql-server:
    image: mysql:5.7.22
    container_name: mysql-server
    restart: always
    tty: true
    ports:
      - "16427:3306"
    environment:
      MYSQL_USER: root
      MYSQL_ROOT_PASSWORD: BcGH2Gj41J5VF1
      MYSQL_DATABASE: todo
    volumes:
      - ./docker/mysql-server/my.cnf:/etc/mysql/my.cnf
    networks:
      - app-network
  # Nginx server for the server side app
  nginx-server:
    container_name: nginx-server
    image: nginx:1.17-alpine
    restart: always
    ports:
      - 49691:80
    volumes:
      - ./server:/var/www
      - ./docker/nginx-server/etc/nginx/conf.d:/etc/nginx/conf.d
    depends_on:
      - php-server
      - mysql-server
    networks:
      - app-network
  # PHP server for the server side app
  php-server:
    build:
      context: .
      dockerfile: ./docker/php-server/Dockerfile
    container_name: php-server
    restart: always
    tty: true
    environment:
      SERVICE_NAME: php
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./server:/var/www
      - ./docker/php-server/local.ini:/usr/local/etc/php/conf.d/local.ini
      - /var/www/vendor
    networks:
      - app-network
# Networks
networks:
  app-network:
    driver: bridge
How do you build the docker images? I assume you don't plan on using a registry, therefore you'll have to:
- give an image name to all services
- build the docker images somewhere (a CI/CD server, locally, it does not really matter)
- save the images to a file
- zip the file
- copy the zipped file to the remote server
- on the server, unzip and load
I'd create a script for this. Something like this:
#!/bin/bash
set -e
docker-compose build
# the command substitution is deliberately unquoted so that each image
# name becomes a separate argument to docker save
docker save -o images.tar $( grep "image: .*" docker-compose.yml | awk '{ print $2 }' )
gzip images.tar
scp images.tar.gz myserver:~
ssh myserver ./load_images.sh
-----
on myserver, the load_images.sh would look like this:
#!/bin/bash
if [ ! -f images.tar.gz ] ; then
echo "no file"
exit 1
fi
gunzip images.tar.gz
docker load -i images.tar
Then you'll have to create the docker commands to emulate the docker-compose configuration (I won't go there since it's nothing difficult, just boring, and I'm not feeling like writing it all out). How do you simulate the depends_on? Well, you'll have to start each container individually, so you'll either prepare another script or do it manually, as in the sketch below.
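To give an idea, the emulation might look something like this (a sketch only: the php image name is made up, and most options from your compose file are omitted):
#!/bin/bash
set -e
# recreate the compose network
docker network create app-network || true
# start the containers in dependency order, in place of depends_on
docker run -d --name mysql-server --network app-network mysql:5.7.22
docker run -d --name php-server --network app-network myregistry/php-server:latest
docker run -d --name nginx-server --network app-network -p 49691:80 nginx:1.17-alpine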
About using docker-compose on production:
There's not really a big issue with using docker-compose in production as long as you do it properly. E.g., some of my production setups tend to look like this:
docker-compose.yml
docker-compose.dev.yml
docker-compose.prd.yml
The devs will use docker-compose -f docker-compose.yml -f docker-compose.dev.yml $cmd while on production you'll use docker-compose -f docker-compose.yml -f docker-compose.prd.yml $cmd.
Taking your file as an example, I'd move all volumes, ports, tty and stdin_open subsections from docker-compose.yml to docker-compose.dev.yml; e.g. the docker-compose.dev.yml would look like this:
version: '3'
services:
  nginx-client:
    stdin_open: true
    ports:
      - 28874:3000
    volumes:
      - ./client:/var/www
      - /var/www/node_modules
  mysql-server:
    tty: true
    ports:
      - "16427:3306"
    volumes:
      - ./docker/mysql-server/my.cnf:/etc/mysql/my.cnf
  nginx-server:
    ports:
      - 49691:80
    volumes:
      - ./server:/var/www
      - ./docker/nginx-server/etc/nginx/conf.d:/etc/nginx/conf.d
  php-server:
    restart: always
    tty: true
    volumes:
      - ./server:/var/www
      - ./docker/php-server/local.ini:/usr/local/etc/php/conf.d/local.ini
      - /var/www/vendor
On production, the docker-compose.prd.yml will have only the strictly required ports subsections, a production environment file where the required passwords are stored (that file lives only on the production server, not in git), etc. etc.
Actually, you have so many different approaches you can take.
Generally, docker-compose is used as a container-orchestration tool in development. There are several other production-grade container orchestration tools available on most of the popular hosting services like GCP and AWS. Kubernetes is by far the most popular and most commonly used.
Based on the services used in your docker-compose, it is advisable not to use it directly in production. Running a mysql container can lead to issues with data loss, as containers are meant to be temporary. It is better to opt for a managed MySQL service like RDS instead. Similarly, nginx is also better set up with any reverse-proxy/load-balancer services that your hosting service provides.
When it comes to building the images, you can utilise your CI/CD pipeline to build them from their respective Dockerfiles, then push to an image registry of your choice and let your hosting service pick up the image and deploy it with the container-orchestration tool that your hosting service provides.
If you need a lightweight production environment, using Compose is probably fine. Other answers here have hinted at more involved tools, that have advantages like supporting multiple-host clusters and zero-downtime deployments, but they are much more involved.
One core piece missing from your description is an image registry. Docker Hub fits this role, if you want to use it; major cloud providers have one; even GitHub has a container registry now (for public repositories); or you can run your own. This addresses a couple of your problems: (2) you docker build the images locally (or on a dedicated continuous-integration system) and docker push them to the registry, then (3) you docker pull the images on the production system, or let Docker do it on its own.
A good practice that goes along with this is to give each build a unique tag, perhaps a date stamp or commit ID. This makes it very easy to upgrade (or downgrade) by changing the tag and re-running docker-compose up.
For this you'd change your docker-compose.yml like:
services:
  nginx-client:
    # No `build:`
    image: registry.example.com/nginx:${NGINX_TAG:-latest}
  php-server:
    # No `build:`
    image: registry.example.com/php:${PHP_TAG:-latest}
And then you can update things like:
docker build -t registry.example.com/nginx:20201101 ./nginx
docker build -t registry.example.com/php:20201101 ./php
docker push registry.example.com/nginx:20201101
docker push registry.example.com/php:20201101
ssh production-system.example.com \
NGINX_TAG=20201101 PHP_TAG=20201101 docker-compose up -d
You can use multiple docker-compose.yml files to also use docker-compose build and docker-compose push for your custom images, with a development-only overlay file. There is an example in the Docker documentation.
Do not separately copy your code; it's contained in the image. Do not bind-mount local code over the image code. Especially do not use an anonymous volume to hold libraries, since this will completely ignore any updates in the underlying image. These are good practices in development too, since if you replace everything interesting in an image with volume mounts then it doesn't really have any relation to what you're running in production.
You will need to separately copy the configuration files you reference and the docker-compose.yml itself to the target system, and take responsibility for backing up the database data.
Finally, I'd recommend removing unnecessary options from the docker-compose.yml file (don't manually specify container_name:, use the Compose-provided default network, prefer specifying the command: in an image, and so on). That's not essential but it can help trim down the size of the YAML file.
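For instance, the nginx-server service from the file above, trimmed that way, could shrink to something like this (a sketch; it assumes the nginx configuration has been baked into the custom image):
services:
  nginx-server:
    image: registry.example.com/nginx:${NGINX_TAG:-latest}
    restart: always
    ports:
      - 49691:80
    depends_on:
      - php-server
      - mysql-server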
I have a dockerized application with a few services running using docker-compose. I'd like to connect this application with ElasticSearch/Logstash/Kibana (ELK) using another docker-compose application, docker-elk. Both of them are running in the same docker machine in development. In production, that will probably not be the case.
How can I configure my application's docker-compose.yml to link to the ELK stack?
Update Jun 2016
The answer below is outdated starting with docker 1.10. See this other similar answer for the new solution.
https://stackoverflow.com/a/34476794/1556338
Old answer
Create a network:
$ docker network create --driver bridge my-net
Reference that network as an environment variable (${NETWORK}) in the docker-compose.yml files. E.g.:
pg:
  image: postgres:9.4.4
  container_name: pg
  net: ${NETWORK}
  ports:
    - "5432"
myapp:
  image: quay.io/myco/myapp
  container_name: myapp
  environment:
    DATABASE_URL: "http://pg:5432"
  net: ${NETWORK}
  ports:
    - "3000:3000"
Note that pg in http://pg:5432 will resolve to the IP address of the pg service (container). No need to hardcode IP addresses; an entry for pg is automatically added to /etc/hosts of the myapp container.
Call docker-compose, passing it the network you created:
$ NETWORK=my-net docker-compose -f docker-compose.yml -f other-compose.yml up -d
I've created a bridge network above, which only works within one node (host). Good for dev. If you need to get two nodes to talk to each other, you need to create an overlay network instead. Same principle though: you pass the network name to the docker-compose up command.
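A sketch of the overlay variant (the hosts must first be joined into a swarm; --attachable lets regular compose containers join the network):
$ docker swarm init    # on the first node; join the others with the printed token
$ docker network create --driver overlay --attachable my-net
$ NETWORK=my-net docker-compose -f docker-compose.yml up -d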
You could also create a network with docker outside your docker-compose:
docker network create my-shared-network
And in your docker-compose.yml:
version: '2'
services:
  pg:
    image: postgres:9.4.4
    container_name: pg
    expose:
      - "5432"
networks:
  default:
    external:
      name: my-shared-network
And in your second docker-compose.yml:
version: '2'
services:
  myapp:
    image: quay.io/myco/myapp
    container_name: myapp
    environment:
      DATABASE_URL: "http://pg:5432"
    expose:
      - "3000"
networks:
  default:
    external:
      name: my-shared-network
Both instances will be able to see each other without opening ports on the host; you just need to expose the ports, and the containers will reach each other through the my-shared-network network.
If you set a predictable project name for the first composition, you can use external_links to reference external containers by name from a different compose file.
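For example (a sketch; it assumes the first project was started with the project name firstproject and contains a pg service, so its container is named firstproject_pg_1):
myapp:
  image: quay.io/myco/myapp
  external_links:
    - firstproject_pg_1:pg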
In the next docker-compose release (1.6) you will be able to use user defined networks, and have both compositions join the same network.
Take a look at multi-host docker networking
Networking is a feature of Docker Engine that allows you to create
virtual networks and attach containers to them so you can create the
network topology that is right for your application. The networked
containers can even span multiple hosts, so you don’t have to worry
about what host your container lands on. They seamlessly communicate
with each other wherever they are – thus enabling true distributed
applications.
I didn't find any complete answer, so I decided to explain it in a complete and simple way.
To connect two docker-compose projects, you need a network, and you need to put both projects on that network. You can create the network with docker network create name-of-network, or you can simply put the network declaration in the networks option of the docker-compose file; then, when you run docker-compose up, the network will be created automatically.
Put the lines below in both docker-compose files:
networks:
  net-for-alpine:
    name: test-db-net
Note: net-for-alpine is the internal name of the network; it is used inside the docker-compose file and can be different in each file. test-db-net is the external name of the network and must be the same in the two docker-compose files.
Assume we have docker-compose.db.yml and docker-compose.alpine.yml.
docker-compose.alpine.yml would be:
version: '3.8'
services:
  alpine:
    image: alpine:3.14
    container_name: alpine
    networks:
      - net-for-alpine
    # these two options keep the alpine container running
    stdin_open: true # docker run -i
    tty: true # docker run -t
networks:
  net-for-alpine:
    name: test-db-net
docker-compose.db.yml would be:
version: '3.8'
services:
  db:
    image: postgres:13.4-alpine
    container_name: psql
    networks:
      - net-for-db
networks:
  net-for-db:
    name: test-db-net
To test the network, go inside the alpine container:
docker exec -it alpine sh
Then you can check the network with the following commands:
# if it exits with 0 and prints nothing, the network is established
nc -z psql 5432   # psql is the container name
or
ping psql