I have a docker-compose.yml that runs a simple web server. I want to create multiple instances of the container without passing the --scale flag on the start command. This is how I currently start multiple instances: docker-compose up -d --scale appserver=2.
Ideally, I would love to put some kind of instruction in the docker-compose.yml itself to do this. Below is an example of the docker-compose.yml:
version: '3'
services:
  appserver:
    image: nimmis/apache
  haproxy:
    image: eeacms/haproxy
    ports:
      - '80:5000'
      - '1936:1936'
    environment:
      BACKENDS: 'appserver_1:80 appserver_2:80 appserver_3:80'
      DNS_ENABLED: 'true'
      LOG_LEVEL: info
Note here that I am only trying to run multiple instances of the appserver service.
Docker Compose doesn't support the deploy section, but if you switch to a single-node Swarm Mode (as easy as running docker swarm init) you can deploy with:
docker stack deploy -c docker-compose.yml stack_name
using the following yaml:
version: '3'
services:
  appserver:
    image: nimmis/apache
    deploy:
      replicas: 2
  haproxy:
    image: eeacms/haproxy
    ports:
      - '80:5000'
      - '1936:1936'
    environment:
      BACKENDS: 'appserver_1:80 appserver_2:80 appserver_3:80'
      DNS_ENABLED: 'true'
      LOG_LEVEL: info
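Once the stack is deployed you can check that the replicas actually came up with the usual swarm tooling, e.g. (stack_name being whatever name you passed to docker stack deploy):

docker service ls
docker service ps stack_name_appserver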
Related
I want to practice using docker-compose. I have a tournament happening over the weekend and I want to set up 10 copies of the same web app on ONE server with URLs like:
http://team1.example.com
http://team2.example.com
etc...
http://team10.example.com
There will be 10 teams in the tournament, and they will all go to their respective url http://team<your team number>.example.com via web browser, save some information to a database, and maybe even modify the code on the actual server.
So I built a simple Node.js app that writes data to a MongoDB database. Then I decided to set up two websites, http://team1.example.com and http://team2.example.com, so I made this docker-compose file:
version: '3'
services:
  api1:
    image: dockerjohn/tournament:latest
    environment:
      - DB=database1
    ports:
      - 80:3000
    networks:
      - net1
  db1:
    image: mongo:4.0.3
    container_name: database1
    networks:
      - net1
  api2:
    image: dockerjohn/tournament:latest
    environment:
      - DB=database2
    ports:
      - 81:3000
    networks:
      - net2
  db2:
    image: mongo:4.0.3
    container_name: database2
    networks:
      - net2
networks:
  net1:
  net2:
Then I installed the Apache web server to reverse proxy team 1 to port 80 and team 2 to port 81. This all works fine.
To set up the remaining teams 3 to 10, I have to duplicate the entries I have in my docker compose yml file and duplicate virtual host entries in apache.
My question: Is there a docker command that will let me clone each docker stack (team 1, team2, etc...) more easily without all this data entry? Do I need Kubernetes to do this?
Kubernetes would make this way easier to set up. It can take care of the reverse proxy setup too if you install the nginx ingress controller.
You could create a single Kubernetes manifest containing:
a mongodb deployment, service, persistent volume claim
a nodejs deployment, service
You can then apply this 10 times, each time using a different namespace:
kubectl apply -n team01 -f manifest.yaml
kubectl apply -n team02 -f manifest.yaml
kubectl apply -n team03 -f manifest.yaml
...
Of course, you would need 10 different ingress rules because you want 10 different domains, but that would be the only thing you need to copy-paste.
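For illustration, one of those per-team ingress rules might look roughly like the sketch below. This assumes the nginx ingress controller is installed, that the namespace already exists (kubectl create namespace team01), and that the manifest defines a Service named nodejs listening on port 3000; those names are placeholders, not something from the question.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: team01
  namespace: team01
spec:
  ingressClassName: nginx
  rules:
    - host: team1.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nodejs       # placeholder: the nodejs Service from your manifest
                port:
                  number: 3000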
I figured it out. Docker has features called swarm and stack that handle this. First, I simplified my docker-compose.yml file to just this:
version: '3'
services:
  api:
    image: dockerjohn/tournament:latest
    environment:
      - DB=$DB
    ports:
      - $WEB_PORT:3000
    networks:
      - mynet
  db:
    image: mongo:4.0.3
    networks:
      - mynet
networks:
  mynet:
Then I ran these commands from the same folder as my docker-compose file:
docker swarm init
DB=team1_db WEB_PORT=81 docker stack deploy -c docker-compose.yml team1
DB=team2_db WEB_PORT=82 docker stack deploy -c docker-compose.yml team2
DB=team3_db WEB_PORT=83 docker stack deploy -c docker-compose.yml team3
DB=team4_db WEB_PORT=84 docker stack deploy -c docker-compose.yml team4
DB=team5_db WEB_PORT=85 docker stack deploy -c docker-compose.yml team5
etc...
You have to structure the DB env variable as <stack name at the end of the docker stack deploy command>_<service name in the docker-compose.yml file>.
Now I just need to find a way to simplify my Apache setup so I don't have to duplicate so many vhost entries. I've heard there's a Docker image called Traefik that can handle this reverse proxying. Maybe I'll try that out and update my answer afterwards.
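For what it's worth, here is a rough, untested sketch of what a Traefik-based setup could look like; the image tag, entrypoint name and router labels are my assumptions based on the Traefik v2 docs, not something verified for this tournament setup:

reverse-proxy:
  image: traefik:v2.5
  command:
    - --providers.docker=true
    - --providers.docker.swarmMode=true
    - --entrypoints.web.address=:80
  ports:
    - "80:80"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
api:
  image: dockerjohn/tournament:latest
  deploy:
    labels:
      # route team1.example.com to this service instead of a host port + Apache vhost
      - traefik.http.routers.team1.rule=Host(`team1.example.com`)
      - traefik.http.services.team1.loadbalancer.server.port=3000

The proxy and the per-team stacks would also need to share an overlay network so Traefik can reach the api containers.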
I'm using Docker Swarm. When I attempt to deploy my application with this command
docker stack deploy -c docker-compose.yml -c docker-compose.prod.yml chatappapi
it shows the following error: services.chat-app-api Additional property pull_policy is not allowed
Why does this happen?
How do I solve it?
docker-compose.yml
version: "3.9"
services:
nginx:
image: nginx:stable-alpine
ports:
- "5000:80"
volumes:
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
chat-app-api:
build: .
image: username/myapp
pull_policy: always
volumes:
- ./:/app
- /app/node_modules
environment:
- PORT= 5000
- MAIL_USERNAME=${MAIL_USERNAME}
- MAIL_PASSWORD=${MAIL_PASSWORD}
- CLIENT_ID=${CLIENT_ID}
- CLIENT_SECRET=${CLIENT_SECRET}
- REDIRECT_URI=${REDIRECT_URI}
- REFRESH_TOKEN=${REFRESH_TOKEN}
depends_on:
- mongo-db
mongo-db:
image: mongo
environment:
MONGO_INITDB_ROOT_USERNAME: 'username'
MONGO_INITDB_ROOT_PASSWORD: 'password'
ports:
- "27017:27017"
volumes:
- mongo-db:/data/db
volumes:
mongo-db:
docker-compose.prod.yml
version: "3.9"
services:
nginx:
ports:
- "80:80"
chat-app-api:
deploy:
mode: replicated
replicas: 8
restart_policy:
condition: any
update_config:
parallelism: 2
delay: 15s
build:
context: .
args:
NODE_ENV: production
environment:
- NODE_ENV=production
- MONGO_USER=${MONGO_USER}
- MONGO_PASSWORD=${MONGO_PASSWORD}
- MONGO_IP=${MONGO_IP}
command: node index.js
mongo-db:
environment:
MONGO_INITDB_ROOT_USERNAME: ${MONGO_USER}
MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASSWORD}
Information
docker-compose version 1.29.2
Docker version 20.10.8
Ubuntu 20.04.2 LTS
Thanks in advance.
Your problem line is in docker-compose.yml
chat-app-api:
  build: .
  image: username/myapp
  pull_policy: always  # <== this is the bad line, delete it
The docker compose file reference doesn't include a pull_policy option for services because
If the image does not exist, Compose attempts to pull it, unless you have also specified build, in which case it builds it using the specified options and tags it with the specified tag.
I think pull_policy used to be a thing for compose. Maybe keep the latest compose documentation open to refer to and search through while you're developing (things can and do change fairly frequently with compose).
If you want to ensure that the most recent version of an image is pulled onto all servers in a swarm then run docker compose -f ./docker-compose.yml pull on each server in turn (docker stack doesn't have functionality to run this over an entire swarm yet).
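For example, if the swarm nodes are reachable over SSH, a loop like the one below would do it; the node names and project path are made up for illustration:

for node in node1 node2 node3; do
  ssh "$node" "cd /srv/chat-app && docker compose -f docker-compose.yml pull"
done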
As an aside: I wouldn't combine two .yml files with a single docker stack command without a very good reason to do so.
You are mixing up docker-compose and docker swarm ideas in the same files.
It is probably worth breaking your project up into 3 files:
docker-compose.yml
This would contain just the basic service definitions common to both compose and swarm.
docker-compose.override.yml
Conveniently, both docker-compose and docker compose should read this file automatically. This file should contain any "ports:", "depends_on:", "build:" directives, and any convenience volumes used for development.
stack.production.yml
The override file to be used in stack deployments. It should contain everything understood by swarm but not by compose, and everything required for production.
Here you would use configs: or even secrets: rather than volume mappings to local folders to inject content into containers. Rather than relying on ports: directives, you would install an ingress router on the swarm, such as Traefik, and so on.
With this arrangement, docker compose can be used to develop and build your compose stack locally, and docker stack deploy won't have to be exposed to compose syntax it doesn't understand.
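Concretely, with that file layout the day-to-day commands might look something like this (the stack name chatappapi is taken from the question; the rest is the standard compose/stack invocation):

# local development: docker-compose.yml + docker-compose.override.yml are read automatically
docker compose up --build

# production on the swarm: base file plus the swarm-only override
docker stack deploy -c docker-compose.yml -c stack.production.yml chatappapi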
pull_policy is in the latest version of docker-compose.
To upgrade your docker-compose refer to:
https://docs.docker.com/compose/install/
The spec for more info:
https://github.com/compose-spec/compose-spec/blob/master/spec.md#pull_policy
I am using docker-compose and my configuration file is simply:
version: '3.7'
volumes:
  mongodb_data: {}
services:
  mongodb:
    image: mongo:4.4.3
    restart: always
    ports:
      - "27017:27017"
    volumes:
      - mongodb_data:/data/db
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=super-secure-password
  rocket:
    build:
      context: .
    depends_on:
      - mongodb
    image: rocket:dev
    dns:
      - 1.1.1.1
      - 8.8.8.8
    volumes:
      - .:/var/rocket
    ports:
      - "30301-30309:30300"
I start MongoDB with docker-compose up, and then in new terminal windows run two Node.js applications, each with all the source code in /var/rocket, with:
# 1st Node.js application
docker-compose run --service-ports rocket
# 2nd Node.js application
docker-compose run --service-ports rocket
The problem is that the 2nd Node.js application service needs to communicate with the 1st Node.js application service on port 30300. I was able to get this working by referencing the 1st Node.js application by the id of the Docker container:
Connect to 1st Node.js application service on: tcp://myapp_myapp_run_837785c85abb:30300 from the 2nd Node.js application service.
Obviously this does not work long term as the container id changes every time I docker-compose up and down. Is there a standard way to do networking when you start multiple of the same container from docker-compose?
You can run the same image multiple times in the same docker-compose.yml file:
version: '3.7'
services:
  mongodb: { ... }
  rocket1:
    build: .
    depends_on:
      - mongodb
    ports:
      - "30301:30300"
  rocket2:
    build: .
    depends_on:
      - mongodb
    ports:
      - "30302:30300"
As described in Networking in Compose, the containers can communicate using their respective service names and their "normal" port numbers, like rocket1:30300; any ports: are ignored for this. You shouldn't need to manually docker-compose run anything.
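If you'd rather not hard-code the peer address in the application, one option (just a sketch; PEER_URL is a made-up variable name your app would have to read) is to pass it in through the environment:

rocket2:
  build: .
  depends_on:
    - mongodb
  environment:
    - PEER_URL=tcp://rocket1:30300  # resolved via the service name on the compose network
  ports:
    - "30302:30300"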
Well you could always give specific names to your two Node containers:
$ docker-compose run --name rocket1 --service-ports rocket
$ docker-compose run --name rocket2 --service-ports rocket
And then use:
tcp://rocket1:30300
Currently I have a RabbitMQ message broker and multiple Celery workers that need to be containerized. My problem is: how can I fire up containers using different docker-compose.yml files? My goal is to start RabbitMQ once and for all, and never touch it again.
Currently I have a docker-compose.yml for the rabbitmq:
version: '2'
services:
  rabbit:
    hostname: rabbit
    image: rabbitmq:latest
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=mypass
    ports:
      - "5672:5672"
    expose:
      - "5672"
And another docker-compose.yml for celery workers:
version: '2'
services:
  worker:
    build:
      context: .
      dockerfile: dockerfile
    volumes:
      - .:/app
    environment:
      - CELERY_BROKER_URL=amqp://admin:mypass#rabbit:5672
    links:
      - rabbit
However, when I do docker-compose up for celery workers, I keep getting the following error:
ERROR/MainProcess] consumer: Cannot connect to amqp://admin:**#rabbit:5672//: failed to resolve broker hostname.
Can anyone take a look and see if there is anything wrong with my code? Thanks.
The hostname rabbit in your second docker-compose.yml file does not resolve because there is no service with that name in that file.
As stated in the comments, one solution is to put both the rabbit service and the worker service in the same docker-compose.yml file. In such a setup, all containers started for those services would join the same docker network, and the service names could be resolved to the IP addresses of their containers.
Since having a single docker-compose.yml file is not convenient in your case, you have to find another way to have the containers originating from different docker-compose.yml files join the same docker network.
To do so, you need to create a dedicated docker network for that purpose:
docker network create rabbitNetwork
Then, in each docker-compose.yml file, you need to refer to this network in the services definitions:
version: '2'
services:
  rabbit:
    hostname: rabbit
    image: rabbitmq:latest
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=mypass
    # ports:
    #   - "5672:5672"  # there is no need to publish ports on the docker host anymore
    expose:
      - "5672"
    networks:
      - rabbitNet
networks:
  rabbitNet:
    external:
      name: rabbitNetwork
version: '2'
services:
  worker:
    build:
      context: .
      dockerfile: dockerfile
    volumes:
      - .:/app
    environment:
      - CELERY_BROKER_URL=amqp://admin:mypass#rabbit:5672
    networks:
      - rabbitNet
networks:
  rabbitNet:
    external:
      name: rabbitNetwork
You can use any file as the service definition file.
docker-compose.yml is the default file name, but any other name can be passed using the -f argument.
docker-compose -f rabbit-compose.yml COMMAND
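Putting the pieces together, the startup sequence might look like this (the file names rabbit-compose.yml and worker-compose.yml are just examples):

docker network create rabbitNetwork
docker-compose -f rabbit-compose.yml up -d
docker-compose -f worker-compose.yml up -d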
I have two services defined in a docker-compose.yml file and expect two containers to be up and running, however the containers stop immediately. When I use the same compose file on Linux it creates and keeps two containers up and running. Is there any known issue with Windows?
I have tried using docker-compose up as well as docker-compose run -dT <servicename>, with no luck.
My docker-compose.yml file is:
version: '3'
networks:
  default:
    external:
      name: nat
services:
  awi-service:
    env_file:
      - awi-box.env
    image: awi-box:12.0.0
    ports:
      - 8080:8080
    depends_on:
      - ae-service
  ae-service:
    env_file:
      - ae-box.env
    image: ar-box:12.0.0
    ports:
      - 2217:2217
      - 2218:2218