I have just started using Docker. I was able to create a docker-compose file which deploys three components of my application, with the necessary number of replicas, on one host.
I now want to replicate the same thing across multiple hosts.
I have three processes: A [7 copies], B [1 copy], C [1 copy].
I followed the swarm creation tutorial on the Docker website and managed to create a manager and attach two workers to it.
So now when I run my command
docker stack deploy --compose-file docker-compose.yml perf
it does spawn the required number of containers, but all of them on the manager itself.
I would ideally want it to spawn B and C on the manager and all the copies of A distributed between worker 1 and worker 2.
Here is my docker-compose file:
version: '3'
services:
  A:
    image: A:host
    tty: true
    volumes:
      - LogFilesLocationFolder:/jmeter/log
      - AntOutLogFolder:/antout
      - ZipFilesLocationFolder:/zip
    deploy:
      replicas: 7
      placement:
        constraints: [node.role == worker]
    networks:
      - perfhost
  B:
    container_name: s1_perfSqlDB
    restart: always
    tty: true
    image: mysql:5.5
    environment:
      MYSQL_ROOT_PASSWORD: ''
    volumes:
      - mysql:/var/lib/mysql
    ports:
      - "3306:3306"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - perfhost
  C:
    container_name: s1_scheduler
    image: C:host
    tty: true
    volumes:
      - LogFilesLocationFolder:/log
      - ZipFilesLocationFolder:/zip
      - AntOutLogFolder:/antout
    networks:
      - perfhost
    deploy:
      placement:
        constraints: [node.role == manager]
    ports:
      - "7000:7000"
networks:
  perfhost:
volumes:
  mysql:
  LogFilesLocationFolder:
  ZipFilesLocationFolder:
  AntOutLogFolder:
B) And if I do get this working, how do I use volumes to transfer data between the container for service A and the container for service B, given that they are on different host machines?
A few tips and answers:
For service names I don't recommend capital letters. Use valid DNS hostnames (lowercase, no special characters except -).
container_name isn't supported in swarm and shouldn't be needed. It looks like C: should be something like scheduler, etc. Make the service names simple so they are easy to use/remember on their virtual network.
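For example, the services above could be renamed along these lines (just a sketch; the new names are suggestions):

services:
  jmeter:      # was "A"; lowercase, DNS-safe
    image: A:host
  mysql:       # was "B"; container_name removed
    image: mysql:5.5
  scheduler:   # was "C"; container_name removed
    image: C:host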
All services in a single compose file are always on the same docker network in swarm (and in docker-compose for local development), so there is no need for the network assignment or listing.
restart: always isn't needed in swarm. That setting isn't used there, and restarting is the default behavior anyway. If you're using it for docker-compose, it's rarely needed, as you usually don't want apps in a respawn loop during errors, which will often result in CPU race conditions. I recommend leaving it off.
Volumes use a "volume driver". The default is local, just like with normal docker commands. If you have shared storage, you can use a volume driver plugin from store.docker.com to ensure the shared storage is connected to the correct node.
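For example (a sketch; the driver name is a placeholder for whatever plugin you actually install, not a real driver):

volumes:
  LogFilesLocationFolder:
    driver: vendor/shared-storage-plugin  # hypothetical volume driver plugin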
If you're still having issues with worker/manager task assignment, post the output of docker node ls, and maybe docker service ls and docker node ps <managername>, so we can help troubleshoot.
First you should run
docker node ls
And check if all of your nodes are available. If they are, you should check if the workers have the images they need to run the containers.
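If the images come from a private registry, deploying with the registry credentials forwarded to the workers can help (using the stack name from the question):

docker stack deploy --compose-file docker-compose.yml --with-registry-auth perf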
I would also try a constraint using the id of each node instead; you can see the ids with the previous command.
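For example (a sketch; the id is a placeholder for one copied from docker node ls output):

deploy:
  placement:
    constraints: [node.id == abcd1234efgh]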
Run this before docker stack deploy:
mkdir /srv/service/public
docker run --rm -v /srv/service/public:/srv/service/public my-container-with-data cp -R /var/www/app/public/. /srv/service/public/
Then use the directory /srv/service/public as a volume in the containers.
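For example, a service could then bind-mount that directory (a sketch; the service name app is an assumption, the paths come from the commands above):

services:
  app:
    volumes:
      - /srv/service/public:/var/www/app/public  # node-local path mounted into the container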
Related
I have a docker swarm with 2 nodes, and each node runs 2 services in global mode, so each node has 2 services running inside it. My problem is how to force the ubuntu service on node1 to connect only to the mysql service on node1, and not use the round-robin method to select a mysql service.
So when I connect to mysql from ubuntu on node1 with mysql -hmysql -uroot -p, it should select only the mysql on node1.
Here is the docker-compose file which describes my case:
version: '3.8'
services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
    networks:
      app-net: {}
    deploy:
      mode: global
  ubuntu:
    entrypoint: tail -f /dev/null
    deploy:
      mode: global
    image: ubuntu:20.04
    networks:
      app-net: {}
networks:
  app-net: {}
With this docker-compose file, when I try to connect to mysql from inside the ubuntu container, it selects the mysql service on both nodes with a round-robin algorithm.
What I am trying to achieve is to force each service to be visible only to the services inside the same node.
I can't think of an easy way to achieve what you want in swarm with an overlay network. However, you can use a unix socket instead of the network. Just create a volume, mount it into both MySQL and your application, then make MySQL put its socket onto that volume. Docker will create a volume on each node, and thus you'll have your communication closed within the node.
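A minimal sketch of that idea, assuming the official mysql image (whose socket directory is /var/run/mysqld by default; file permissions on the shared volume may need adjusting):

version: '3.8'
services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
    deploy:
      mode: global
    volumes:
      - mysql-sock:/var/run/mysqld  # mysqld.sock lands on this volume
  ubuntu:
    image: ubuntu:20.04
    entrypoint: tail -f /dev/null
    deploy:
      mode: global
    volumes:
      - mysql-sock:/var/run/mysqld  # then: mysql -uroot -p --socket=/var/run/mysqld/mysqld.sock
volumes:
  mysql-sock:  # local driver: each node gets its own copy, keeping traffic node-local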
If you insist on using network communication, you can mount the node's Docker socket into your app container and use it to find the name of the container running MySQL on that node. Once you have the name, you can use it to connect to that particular instance of the service. Now, not only is it hard to do, it is also an anti-pattern and a security threat, so I don't recommend implementing this idea.
Lastly, there is also Kubernetes, where containers inside a pod can communicate with each other via localhost, but I think you won't go that far, will you?
You should have a look at mode=host.
You can bypass the routing mesh, so that when you access the bound port on a given node, you are always accessing the instance of the service running on that node. This is referred to as host mode. There are a few things to keep in mind.
ports:
  - target: 80
    published: 8080
    protocol: tcp
    mode: host
Unless I'm missing something, I would say you should not use global deploy, and instead declare two ubuntu services and two mysql services in the compose file, or deploy two separate stacks; in both cases, use constraints to pin containers to a specific node.
An example for the first case would be something like this:
version: '3.8'
services:
  mysql1:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
    deploy:
      placement:
        constraints: [node.hostname == node1]
  mysql2:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
    deploy:
      placement:
        constraints: [node.hostname == node2]
  ubuntu1:
    entrypoint: tail -f /dev/null
    image: ubuntu:20.04
    deploy:
      placement:
        constraints: [node.hostname == node1]
  ubuntu2:
    entrypoint: tail -f /dev/null
    image: ubuntu:20.04
    deploy:
      placement:
        constraints: [node.hostname == node2]
So I have an issue with compose version 3.6 and deploying swarm services. What I'm trying to achieve is to deploy only service A or service B, depending on the label value applied to the node.
What's happening is that both services are getting deployed (or one is, and the second one fails due to port usage).
serviceA:
  image: serviceA:v1
  deploy:
    mode: global
    restart_policy:
      condition: on-failure
    placement:
      constraints:
        - node.labels.faces == cpu
  networks:
    - mynet
  ports:
    - "8888:8888"
serviceB:
  image: serviceB:v1
  deploy:
    mode: global
    restart_policy:
      condition: on-failure
    placement:
      constraints:
        - node.labels.faces == gpu
  networks:
    - mynet
  ports:
    - "8888:8888"
On my single node I have defined a label as follows:
# docker node inspect swarm-manager --pretty
ID: 0cpco8658ap5xxvxxblpqggpq
Labels:
- faces=cpu
Hostname: swarm-manager
Is this configuration even possible? Only the service which has a matching label should be instantiated.
I want to use global instead of replicated because we add additional nodes automatically without going to the manager, but I read in another forum that the two might not be compatible.
However, if I create everything manually using the CLI, it works as expected:
docker node update --label-add faces=cpu swarm-manager
docker service create -d --name serviceA --constraint node.labels.faces==cpu -p 8888 --mode global serviceA:v1
docker service create -d --name serviceB --constraint node.labels.faces==gpu -p 8888 --mode global serviceB:v1
# docker service ls | grep service
c30y50ez605p serviceA global 1/1 serviceA:v1 *:30009->8888/tcp
uxjw41v42vzh serviceB global 0/0 serviceB:v1 *:30010->8888/tcp
You can see that the service created with the CPU constraint worked, and the service with the GPU constraint was not instantiated (it is in pending state).
I'm running docker for a production PHP-FPM/Nginx application. I want to use docker-stack.yml and deploy to a swarm cluster. Here's my file:
version: "3"
services:
app:
image: <MYREGISTRY>/app
volumes:
- app-data:/var/www/app
deploy:
mode: global
php:
image: <MYREGISTRY>/php
volumes:
- app-data:/var/www/app
deploy:
replicas: 2
nginx:
image: <MYREGISTRY>/nginx
depends_on:
- php
volumes:
- app-data:/var/www/app
deploy:
replicas: 2
ports:
- "80:80"
volumes:
app-data:
My code is in the app container, with an image from my registry.
I want to update my code with docker service update --image <MYREGISTRY>/app:latest, but it's not working: the code is not changed.
I guess it uses the local volume app-data instead.
Is it normal that the new container's data doesn't override the volume data?
Yes, this is the expected behavior. Named volumes are only initialized to the image contents when they are empty (the default state when first created). Updating the volume any time after that point would risk data loss from overwriting or deleting volume data that you explicitly asked to be preserved.
If you need the files to be updated with every new image, then perhaps they shouldn't be in a volume? If you do need these inside a volume, then you may need to create a procedure to update the volumes from the image, e.g. if this were a docker run, you could do:
docker run -v app-data:/target --rm <your_registry>/app cp -a /var/www/app/. /target/.
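In swarm mode, a rough equivalent is a one-shot service that copies and exits (a sketch; it assumes the stack name prefixes the volume name, e.g. mystack_app-data):

docker service create --detach --name app-data-refresh \
  --restart-condition none \
  --mount type=volume,source=mystack_app-data,target=/target \
  <your_registry>/app cp -a /var/www/app/. /target/.

The helper service can be removed afterwards with docker service rm app-data-refresh.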
Otherwise, you can delete the volume, or simply remove all files from the volume, and restart your stack to populate it again.
I was having the same issue, with app and nginx containers sharing the same volume. My current solution is a deploy script which runs
docker service update --mount-add <mount-spec> <service>
for app and nginx after docker stack deploy. It forces the volume to be updated for the app and nginx containers.
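In concrete terms, that could look like this (a sketch; the volume and service names are assumptions based on the stack above):

docker service update \
  --mount-add type=volume,source=mystack_app-data,target=/var/www/app \
  mystack_app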
I have a docker-compose file that deploys 8 different docker services on the same host. Is it possible to deploy them on different hosts? I would like to deploy some services on one host and the others on another host, remotely. Would I need to use docker swarm, or is there an easier way to do it?
I have read that it could be done using DOCKER_HOST, but if I configure /etc/default/docker with this variable, all the services would be run on the remote host, and what I need is some services on one remote host and the other services on another remote host.
We can do this with docker compose v3 now.
https://docs.docker.com/engine/swarm/#feature-highlights
https://docs.docker.com/compose/compose-file/
You have to initialize the swarm cluster using the command
$ docker swarm init
You can add more nodes as workers or managers -
https://docs.docker.com/engine/swarm/join-nodes/
Once you have both nodes added to the cluster, pass your compose v3 deployment file to create a stack. The compose file should contain only prebuilt images; you can't give a Dockerfile for deployment in swarm mode.
$ docker stack deploy -c dev-compose-deploy.yml --with-registry-auth PL
View your stack services status -
$ docker stack services PL
Try to use Labels & Placement constraints to put services on different nodes.
Example "dev-compose-deploy.yml" file for your reference -
version: "3"
services:
nginx:
image: nexus.example.com/pl/nginx-dev:latest
extra_hosts:
- "dev-pldocker-01:10.2.0.42”
- "int-pldocker-01:10.2.100.62”
- "prd-plwebassets-01:10.2.0.62”
ports:
- "80:8003"
- "443:443"
volumes:
- logs:/app/out/
networks:
- pl
deploy:
replicas: 3
labels:
feature.description: “Frontend”
update_config:
parallelism: 1
delay: 10s
restart_policy:
condition: any
placement:
constraints: [node.role == worker]
command: "/usr/sbin/nginx"
viz:
image: dockersamples/visualizer
ports:
- "8085:8080"
networks:
- pl
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
deploy:
replicas: 1
labels:
feature.description: "Visualizer"
restart_policy:
condition: any
placement:
constraints: [node.role == manager]
networks:
pl:
volumes:
logs:
With docker swarm mode, you can deploy a version 3 compose yml file using:
docker stack deploy -c docker-compose.yml $your_stack_name
The v3 syntax removes a few features that do not apply to swarm mode, like links and dependencies. You should also note that volumes are stored local to a node by default. Otherwise the v3 syntax is very similar to the v2 syntax you may already be using. See the following for more details:
https://docs.docker.com/compose/compose-file/
https://docs.docker.com/engine/swarm/
[ Original answer before v3 of the docker-compose.yml ]
For a single docker-compose.yml to deploy to multiple hosts, you need to use the standalone swarm (not the newer swarm mode, yet; this is rapidly changing). Spin up a swarm manager that has each host defined as a member of its swarm, and then you can use constraints inside your docker-compose.yml to define which services run on which hosts.
You can also split up your docker-compose.yml into several files, one for each host, and then run multiple docker-compose up commands, with a different DOCKER_HOST value defined for each.
In both cases, you'll need to configure your docker installs to listen on the network, which should be done by configuring TLS on those sockets. This documentation describes what you need to do for that.
You can use docker compose version 3, which provides the ability to do multi-host deployment without using multiple compose files. All you need is to define a label for each node in the cluster and use the label name in a placement constraint.
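For example (a sketch; the label name/value and the node name are placeholders):

docker node update --label-add tier=frontend node-1

and then, in the compose file:

deploy:
  placement:
    constraints: [node.labels.tier == frontend]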
You may also want to consider the Overnode tool - it is a container orchestration tool on top of automated multi-host docker-compose. It is the easiest transition from a single-host docker-compose deployment. (Disclaimer: I am the author, and it was published recently.)
I have a dockerized application with a few services running using docker-compose. I'd like to connect this application with ElasticSearch/Logstash/Kibana (ELK) using another docker-compose application, docker-elk. Both of them are running in the same docker machine in development. In production, that will probably not be the case.
How can I configure my application's docker-compose.yml to link to the ELK stack?
Update Jun 2016
The answer below is outdated starting with docker 1.10. See this other similar answer for the new solution.
https://stackoverflow.com/a/34476794/1556338
Old answer
Create a network:
$ docker network create --driver bridge my-net
Reference that network as an environment variable (${NETWORK}) in the docker-compose.yml files. E.g.:
pg:
  image: postgres:9.4.4
  container_name: pg
  net: ${NETWORK}
  ports:
    - "5432"
myapp:
  image: quay.io/myco/myapp
  container_name: myapp
  environment:
    DATABASE_URL: "http://pg:5432"
  net: ${NETWORK}
  ports:
    - "3000:3000"
Note that pg in http://pg:5432 will resolve to the IP address of the pg service (container). No need to hardcode IP addresses; an entry for pg is automatically added to the /etc/hosts of the myapp container.
Call docker-compose, passing it the network you created:
$ NETWORK=my-net docker-compose -f docker-compose.yml -f other-compose.yml up -d
I've created a bridge network above, which only works within one node (host). Good for dev. If you need to get two nodes to talk to each other, you need to create an overlay network. Same principle though: you pass the network name to the docker-compose up command.
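For example (an overlay network needs a swarm or, on older Docker versions, an external key-value store):

$ docker network create --driver overlay my-overlay-net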
You could also create a network with docker outside your docker-compose:
docker network create my-shared-network
And in your docker-compose.yml:
version: '2'
services:
  pg:
    image: postgres:9.4.4
    container_name: pg
    expose:
      - "5432"
networks:
  default:
    external:
      name: my-shared-network
And in your second docker-compose.yml:
version: '2'
services:
  myapp:
    image: quay.io/myco/myapp
    container_name: myapp
    environment:
      DATABASE_URL: "http://pg:5432"
    expose:
      - "3000"
networks:
  default:
    external:
      name: my-shared-network
And both instances will be able to see each other without opening ports on the host; you just need to expose ports, and they will see each other through the network "my-shared-network".
If you set a predictable project name for the first composition, you can use external_links to reference external containers by name from a different compose file.
In the next docker-compose release (1.6) you will be able to use user-defined networks, and have both compositions join the same network.
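For example (a sketch; it assumes the first composition was started with the project name firstproject, so its pg container is named firstproject_pg_1):

myapp:
  image: quay.io/myco/myapp
  external_links:
    - firstproject_pg_1:pg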
Take a look at multi-host docker networking
Networking is a feature of Docker Engine that allows you to create virtual networks and attach containers to them so you can create the network topology that is right for your application. The networked containers can even span multiple hosts, so you don't have to worry about what host your container lands on. They seamlessly communicate with each other wherever they are – thus enabling true distributed applications.
I didn't find any complete answer, so I decided to explain it in a complete and simple way.
To connect two docker-compose projects you need a network, and you must put both docker-compose files on that network.
You could create the network with docker network create name-of-network,
or you could simply put a network declaration in the networks option of the docker-compose file, and when you run docker-compose (docker-compose up) the network will be created automatically.
Put the lines below in both docker-compose files:
networks:
  net-for-alpine:
    name: test-db-net
Note: net-for-alpine is the internal name of the network; it is used inside the docker-compose file and can be different in each file,
while test-db-net is the external name of the network and must be the same in the two docker-compose files.
Assume we have docker-compose.db.yml and docker-compose.alpine.yml.
docker-compose.alpine.yml would be:
version: '3.8'
services:
  alpine:
    image: alpine:3.14
    container_name: alpine
    networks:
      - net-for-alpine
    # these two options keep the alpine container running
    stdin_open: true # docker run -i
    tty: true # docker run -t
networks:
  net-for-alpine:
    name: test-db-net
docker-compose.db.yml would be:
version: '3.8'
services:
  db:
    image: postgres:13.4-alpine
    container_name: psql
    networks:
      - net-for-db
networks:
  net-for-db:
    name: test-db-net
To test the network, go inside the alpine container:
docker exec -it alpine sh
Then with the following commands you can check the network:
# exit status 0 with no output means the network is established
nc -z psql 5432   # psql is the container name; 5432 is the postgres port
or
ping psql