I want to use docker-compose with Docker Swarm (I use docker version 1.13 and compose with version: '3' syntax).
Is each service reachable as a "single" service to the other services? Here is a simplified example to be clear:
version: '3'
services:
  nodejs:
    image: mynodeapp
    container_name: my_app
    ports:
      - "80:8080"
    environment:
      - REDIS_HOST=my_redis
      - REDIS_PORT=6379
    deploy:
      mode: replicated
      replicas: 3
    networks:
      - my_net
    command: npm start
  redis:
    image: redis
    container_name: my_redis
    restart: always
    expose:
      - 6379
    deploy:
      mode: replicated
      replicas: 2
    networks:
      - my_net
networks:
  my_net:
    external: true
Let's say I have 3 VMs which are configured as a swarm. So there is one nodejs container running on each VM, but there are only two redis containers.
On the VM where no redis is running: will my nodejs container know about redis?
Additional questions:
When I set replicas: 4 for my redis, I will have two redis containers on one VM: will this be a problem for my nodejs app?
Last question:
When I set replicas: 4 for my nodeapp: will this even work, given that port 80 is now exposed twice?
The services have to be stateless. In the case of databases it is necessary to configure cluster mode in each instance, since they are stateful.
In the same order you asked:
One service does not see another service as a set of replicas. Nodejs will see a single Redis service with one (virtual) IP, no matter on which nodes its replicas are located. That's the beauty of Swarm.
Yes, you can have Nodejs on one node and Redis on another node and they will be visible to each other. That's what the manager does: it makes the containers "believe" they are running on the same machine.
Also, you can have many replicas on the same node without a problem; they will be perceived as a whole. In fact, they use the same volume.
And last, as an implication of (1), there will be no problem, because you are not actually exposing port 80 twice. Even with 20 replicas, you have a single entry point to your service: one particular IP:PORT address.
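As a quick illustration (a sketch, not from the original answer: it assumes the stack is deployed with docker stack deploy, which ignores container_name, so the service is addressed by its service name redis rather than my_redis, and that getent is available in the image):

docker exec -it <id-of-any-nodejs-task> getent hosts redis
# prints a single virtual IP, e.g. 10.0.0.5, however many replicas exist

Every nodejs replica talks to that one VIP, and Swarm's internal load balancer spreads the connections across the redis replicas.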
I have this compose file:
version: "3.3"
services:
icecc-scheduler:
build: services/icecc-scheduler
restart: unless-stopped
network_mode: host
icecc-daemon:
build: services/icecc-daemon
restart: unless-stopped
network_mode: host
I then have a docker swarm configured with 5 machines, the one I'm on is the manager. When I deploy my stack I want the icecc-daemon container to be deployed to all nodes in the swarm while the icecc-scheduler is only deployed once (preferably to the swarm manager). Is there any way to have this level of control with docker compose/stack/swarm?
Inside docker swarm, you can achieve the desired behaviour by using placement constraints.
To make sure a service is deployed only to the manager node, the constraint should be: - "node.role == manager"
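In a compose/stack file this goes under the service's deploy section, for example:

deploy:
  placement:
    constraints:
      - "node.role == manager"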
To make sure a service is deployed only once, you can use a
deploy:
  mode: replicated
  replicas: 1
section. This will make your service run as a single replica in the whole swarm cluster.
To make sure a service is deployed as exactly one container per swarm node, you can use:
deploy:
  mode: global
More information on these parameters can be found in the official docs.
I have 5 microservices which I intend to deploy over a docker swarm cluster consisting of 3 nodes.
I also have a postgresql service running on one of the 3 servers (not dockerized, but installed directly on the server). I assigned the "host" network to all of the services, but they simply refuse to start, and no logs are generated.
version: '3.8'
services:
  frontend-client:
    image: xxx:10
    container_name: frontend
    restart: on-failure
    deploy:
      mode: replicated
      replicas: 3
    networks:
      - "host"
    ports:
      - "xxxx:3000"
networks:
  host:
    name: host
    external: true
I also tried starting a centos container on a server that does not have postgres installed, and I was able to both ping and telnet the postgresql port using the host network assigned to it.
Can someone please help me narrow down the issue, or point out what I might be missing?
Docker swarm doesn't currently support the "host" network_mode, so your best bet (and best practice) is to pass your postgresql host's IP address as an environment variable to the services that use it.
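A sketch of what that could look like (the variable names POSTGRES_HOST/POSTGRES_PORT and the placeholder address are assumptions; your application would read them instead of relying on host networking):

version: '3.8'
services:
  frontend-client:
    image: xxx:10
    restart: on-failure
    environment:
      - POSTGRES_HOST=<ip-or-dns-name-of-the-postgres-server>
      - POSTGRES_PORT=5432
    deploy:
      mode: replicated
      replicas: 3
    ports:
      - "xxxx:3000"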
If you are using docker-compose instead of docker swarm, you can set network_mode to host:
version: '3.8'
services:
  frontend-client:
    image: xxx:10
    container_name: frontend
    restart: on-failure
    deploy:
      mode: replicated
      replicas: 3
    network_mode: "host"
    # Port mappings are not allowed together with host networking,
    # since the container already shares the host's network stack:
    # ports:
    #   - "xxxx:3000"
Notice I've removed the networks part of your compose snippet and replaced it with network_mode.
I have a docker swarm with 2 nodes, and each node runs 2 services in global mode, so each node has the same 2 services running inside it. My problem is how to force the ubuntu service on node1 to connect only to the mysql service on node1, instead of using the round-robin method to pick a mysql instance.
So when I connect to mysql from ubuntu on node1 with mysql -hmysql -uroot -p, it should select only the mysql on node1.
Here is the docker-compose file that describes my case:
version: '3.8'
services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
    networks:
      app-net: {}
    deploy:
      mode: global
  ubuntu:
    entrypoint: tail -f /dev/null
    deploy:
      mode: global
    image: ubuntu:20.04
    networks:
      app-net: {}
networks:
  app-net: {}
With this docker-compose file, when I try to connect to mysql from inside the ubuntu container, it picks the mysql service on either node using the round-robin algorithm.
What I am trying to achieve is to make each service visible only to the services running on the same node.
I can't think of an easy way to achieve what you want in swarm with an overlay network. However, you can use a unix socket instead of the network. Just create a volume, mount it into both MySQL and your application, then make MySQL put its socket onto that volume. Docker will create the volume on each node, so your communication stays within the node.
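A sketch of that idea (the volume name mysql-sock is an assumption; the official mysql image writes its socket to /var/run/mysqld by default, and a mysql client still has to be installed in the ubuntu image):

version: '3.8'
services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
    volumes:
      - mysql-sock:/var/run/mysqld
    deploy:
      mode: global
  ubuntu:
    image: ubuntu:20.04
    entrypoint: tail -f /dev/null
    volumes:
      - mysql-sock:/var/run/mysqld
    deploy:
      mode: global
volumes:
  mysql-sock: {}

Inside the ubuntu container you would then connect with mysql --socket=/var/run/mysqld/mysqld.sock -uroot -p. Because the named volume uses the default local driver, each node gets its own copy, so the connection always stays on that node.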
If you insist on using network communication, you can mount the node's Docker socket into your app container and use it to find the name of the container running MySQL on that node. Once you have the name, you can use it to connect to that particular instance of the service. However, not only is this hard to do, it is also an anti-pattern and a security threat, so I don't recommend implementing this idea.
Lastly, there is also Kubernetes, where containers inside a pod can communicate with each other via localhost, but I think you won't go that far, will you?
You should have a look at mode: host for published ports.
You can bypass the routing mesh, so that when you access the bound port on a given node, you are always accessing the instance of the service running on that node. This is referred to as host mode. There are a few things to keep in mind.
ports:
  - target: 80
    published: 8080
    protocol: tcp
    mode: host
Unless I'm missing something, I would say you should not use global deploy mode. Instead, declare 2 ubuntu services and 2 mysql services in the compose file (or deploy 2 separate stacks), and in both cases use placement constraints to pin the containers to specific nodes.
An example for the first case would be something like this:
version: '3.8'
services:
  mysql1:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
    deploy:
      placement:
        constraints: [node.hostname == node1]
  mysql2:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
    deploy:
      placement:
        constraints: [node.hostname == node2]
  ubuntu1:
    entrypoint: tail -f /dev/null
    image: ubuntu:20.04
    deploy:
      placement:
        constraints: [node.hostname == node1]
  ubuntu2:
    entrypoint: tail -f /dev/null
    image: ubuntu:20.04
    deploy:
      placement:
        constraints: [node.hostname == node2]
I am trying to use docker swarm to create a simple nodejs service that sits behind HAProxy and connects to mysql. So, I created this docker compose file:
And I have several issues:
The backend service can't connect to the database using localhost or 127.0.0.1, so I managed to connect to the database using the private IP (10.0.1.4) of the database container.
The backend tries to connect to the database too soon even though it depends on it.
The application can't be reached from outside.
version: '3'
services:
  db:
    image: test_db:01
    ports:
      - 3306
    networks:
      - db
  test:
    image: test-back:01
    ports:
      - 3000
    environment:
      - SERVICE_PORTS=3000
      - DATABASE_HOST=localhost
      - NODE_ENV=development
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 5s
      restart_policy:
        condition: on-failure
        max_attempts: 3
        window: 60s
    networks:
      - web
      - db
    depends_on:
      - db
    extra_hosts:
      - db:10.0.1.4
  proxy:
    image: dockercloud/haproxy
    depends_on:
      - test
    environment:
      - BALANCE=leastconn
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 80:80
    networks:
      - web
    deploy:
      placement:
        constraints: [node.role == manager]
networks:
  web:
    driver: overlay
  db:
    driver: bridge
I am running the following:
docker stack deploy --compose-file=docker-compose.yml prod
All the services are running.
curl http://localhost/api/test <-- Not working
But, as I mentioned above, I still have those issues.
Docker version 18.03.1-ce, build 9ee9f40
docker-compose version 1.18.0, build 8dd22a9
What am I missing?
The backend service can't connect to the database using localhost or 127.0.0.1, so I managed to connect to the database using the private IP (10.0.1.4) of the database container.
Don't use IP addresses for the connection; just use the DNS name.
So you must change the connection to DATABASE_HOST=db, because that is the service name you've defined.
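In your compose file that would look like this (and you would also drop the extra_hosts: - db:10.0.1.4 entry, since a hard-coded /etc/hosts entry would override the service-name resolution):

environment:
  - SERVICE_PORTS=3000
  - DATABASE_HOST=db
  - NODE_ENV=development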
localhost is wrong, because the service is running in a different container than your test service.
The backend tries to connect to the database too soon even though it depends on it.
depends_on does not work as you expected. Please read https://docs.docker.com/compose/compose-file/#depends_on and the info box "There are several things to be aware of when using depends_on:"
TL;DR: the depends_on option is ignored when deploying a stack in swarm mode with a version 3 Compose file.
The application can't be reached from outside.
Where is your haproxy configuration that makes it forward requests to http://test:3000 when something requests /api/test on haproxy?
Regarding DATABASE_HOST=localhost: the word localhost means "my local container". You need to use the name of the service where the db is hosted. localhost is a special DNS name that always points to the application's own host; when using containers, that is the container itself. In cloud development you need to forget about using localhost (it will point to the container) or IPs (they can change every time you run the container, and then you can't use load balancing), and simply use service names.
As for readiness: docker has no way of knowing whether the application you started in a container is ready. You need to make the service tolerant of database unavailability and code/script some polling/fault-tolerance mechanism.
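One minimal way to do that (a sketch, assuming a POSIX shell and the nc tool inside your test-back image; the db service listens on 3306 as in your compose file, and the final command is whatever actually starts your app):

#!/bin/sh
# wait-for-db.sh: poll the database before starting the application
until nc -z db 3306; do
  echo "database not ready, retrying in 2s..."
  sleep 2
done
exec npm start   # replace with your app's real start command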
Markus is correct, so follow his advice.
Here is a compose/stack file that should work, assuming your app listens on port 3000 in the container and db is set up with the proper password, database, etc. (you usually set these things as environment vars in compose based on the image's Docker Hub readme).
Your app should be designed to crash/restart/wait if it can't find the DB. That's the nature of all distributed computing: anything "remote" (another container, host, etc.) can't be assumed to always be available. If your app just crashes, that's fine and a normal process for Docker, which will re-create the Swarm Service task each time.
If you could attempt to make this with public Docker Hub images, I can try to test for you.
Note that in Swarm it's likely easier to use Traefik for the proxy (see the Traefik on Swarm Mode Guide), which will auto-update and route incoming requests to the correct container based on the hostname you give in the labels. But you should test just the app and db first, and once you know that works, add in the proxy layer.
Also, in Swarm all your networks should be overlay, and you don't need to specify that, as it is the default in stacks.
Below is a sample using traefik with your settings above. I didn't give the test service a specific traefik hostname, so it should accept all traffic coming in on 80 and forward it to 3000 on the test service.
version: '3'
services:
  db:
    image: test_db:01
    networks:
      - db
  test:
    image: test-back:01
    environment:
      - SERVICE_PORTS=3000
      - DATABASE_HOST=db
      - NODE_ENV=development
    networks:
      - web
      - db
    deploy:
      labels:
        - traefik.port=3000
        - traefik.docker.network=web
  proxy:
    image: traefik
    networks:
      - web
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "80:80"
      - "8080:8080" # traefik dashboard
    command:
      - --docker
      - --docker.swarmMode
      - --docker.domain=traefik
      - --docker.watch
      - --api
    deploy:
      placement:
        constraints: [node.role == manager]
networks:
  web:
  db:
I suppose it's a stupid question, but I have no idea where to find the answer. I've checked so many resources and I still don't get it.
I have a docker-compose.yml file. Is it possible to use an AWS ECS cluster to run a new instance (t2.micro, for example) for each service (eurekaserver, configserver, zuulserver, database)? I've only seen examples with one big instance.
version: '2'
services:
  eurekaserver:
    image: maxb/tracker-eurekasvr:tracker-eurekasvr
    ports:
      - "8761:8761"
  configserver:
    image: maxb/tracker-confsvr:tracker-confsvr
    ports:
      - "8888:8888"
    environment:
      EUREKASERVER_URI: "http://eurekaserver:8761/eureka/"
      EUREKASERVER_PORT: "8761"
      ENCRYPT_KEY: "IMSYMMETRIC"
  zuulserver:
    image: maxb/tracker-zuulsvr:tracker-zuulsvr
    ports:
      - "5555:5555"
    environment:
      PROFILE: "default"
      SERVER_PORT: "5555"
      CONFIGSERVER_URI: "http://configserver:8888"
      EUREKASERVER_URI: "http://eurekaserver:8761/eureka/"
      DATABASESERVER_PORT: "27017"
      EUREKASERVER_PORT: "8761"
      CONFIGSERVER_PORT: "8888"
  database:
    image: mongo
    container_name: tracker-mongo
    volumes:
      - $HOME/tracker-data:/data/db
      - $HOME/tracker-datacd:/data/bkp
    restart: always
AWS ECS has Task Definitions, but I'm not sure if they can help.
I am assuming you want to run these services 24x7 and not on demand. With container orchestration this is possible. One way of doing it with Rancher is as follows:
Create 5 micro instances: 4 for the services and 1 for Rancher, and put all 5 in one VPC. Now install Rancher on the 5th instance and add the other 4 hosts to Rancher, so that all 4 hosts show up in Rancher's infrastructure.
Now label each of the 4 hosts in Rancher uniquely, for example: 'zuulserver', 'database', 'configserver', 'eurekaserver'.
Now edit your docker compose file to add the matching Rancher host label to each of your services, using a label of the form:
io.rancher.scheduler.affinity:host_label: key1=value1
wordpress:
  labels:
    # Make wordpress a global service
    io.rancher.scheduler.global: 'true'
    # Make wordpress only run containers on hosts with a key1=value1 label
    io.rancher.scheduler.affinity:host_label: key1=value1
    # Make wordpress only run on hosts that do not have a key2=value2 label
    io.rancher.scheduler.affinity:host_label_ne: key2=value2
  image: wordpress
  links:
    - db:mysql
  stdin_open: true
In Rancher, create a stack with your docker compose file and start the stack.
Rancher will deploy the services to the corresponding hosts according to the host affinity labels.
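Applied to your services, and assuming the hosts were labeled with key/value pairs such as svc=eurekaserver, svc=configserver, etc. (the label names here are only examples), the compose entries would gain labels like:

eurekaserver:
  image: maxb/tracker-eurekasvr:tracker-eurekasvr
  ports:
    - "8761:8761"
  labels:
    io.rancher.scheduler.affinity:host_label: svc=eurekaserver
configserver:
  image: maxb/tracker-confsvr:tracker-confsvr
  ports:
    - "8888:8888"
  labels:
    io.rancher.scheduler.affinity:host_label: svc=configserver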
https://rancher.com/docs/rancher/v1.1/en/cattle/scheduling
https://rancher.com/docs/rancher/v1.2/en/hosts/