I'm trying to use Docker Swarm and docker stack to deploy my docker-compose file. After deploying the stack successfully, I can't access the web page; the response just times out with the error "ERR_CONNECTION_TIMED_OUT". I have also opened TCP and UDP ports 7946 and 4789. Does anyone know what went wrong?
Here's my docker-compose file:
version: "3.3"
services:
  mysql:
    image: mariadb:latest
    environment:
      MYSQL_ROOT_PASSWORD: dbRoot
      MYSQL_DATABASE: cloud
      MYSQL_USER: php
      MYSQL_PASSWORD: php
    networks:
      - mynet
  myphp:
    image: php:7.4-apache
    depends_on:
      - mysql
    ports:
      - "9000:9000"
    volumes:
      - ./src:/var/www/html
    deploy:
      placement:
        constraints:
          - node.role == manager
    networks:
      - mynet
  mynginx:
    image: nginx:latest
    depends_on:
      - myphp
    ports:
      - "80:80"
    deploy:
      mode: replicated
      replicas: 2
      placement:
        constraints:
          - node.role == manager
    networks:
      - mynet
  visualizer:
    image: dockersamples/visualizer:latest
    ports:
      - "8080:80"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
  phpMyAdmin:
    image: phpmyadmin
    environment:
      PMA_HOST: mysql
    ports:
      - "8082:80"
    networks:
      - mynet
volumes:
  src:
networks:
  mynet:
    driver: overlay
ID NAME MODE REPLICAS IMAGE PORTS
9ef6kj1wois8 test_mysql replicated 1/1 mariadb:latest
jjcmp9lrr35f test_mynginx replicated 2/2 nginx:latest *:80->80/tcp
oogi9emcjo0j test_myphp replicated 1/1 php:7.4-apache *:9000->9000/tcp
re8wnkvcxgo2 test_visualizer replicated 0/1 dockersamples/visualizer:latest *:8080->80/tcp
y9i19mxjp69q test_phpMyAdmin replicated 1/1 phpmyadmin:latest *:8082->80/tcp
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
c7b6bv6qgqbn test_visualizer.1 dockersamples/visualizer:latest manager Running Starting 11 seconds ago
xu9cvjjp4xqt test_mynginx.1 nginx:latest manager Running Running 10 seconds ago
klf8nimlljbp test_myphp.1 php:7.4-apache manager Running Running 10 seconds ago
0epkhwsmub8c test_mysql.1 mariadb:latest w1 Running Running 14 seconds ago
zlluox3ga6fw test_phpMyAdmin.1 phpmyadmin:latest manager Running Running 12 seconds ago
knmvjwmcslsj test_mynginx.2 nginx:latest manager Running Running 10 seconds ago
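For reference, the port list mentioned above is incomplete for a swarm: besides 7946 (tcp/udp) and 4789 (udp), manager nodes also need 2377/tcp for cluster management, and the published service ports (80, 8080, 8082, 9000 in this stack) must be open on the host as well. A sketch, assuming ufw is the firewall in use:

```shell
# Swarm control/data plane (assuming ufw; adapt for firewalld/iptables)
sudo ufw allow 2377/tcp   # cluster management traffic (manager nodes)
sudo ufw allow 7946/tcp   # node-to-node communication
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp   # overlay network (VXLAN) data traffic

# Published service ports from the stack above
sudo ufw allow 80/tcp
sudo ufw allow 8080/tcp
sudo ufw allow 8082/tcp
sudo ufw allow 9000/tcp
```

These rules need to be in place on every node in the swarm, since swarm's routing mesh can route a published port through any node.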
Related
I'm trying to deploy a compose project to a swarm, but after I deploy it, not all of the services start and some of them keep restarting.
I have the following compose file
version: "3.3"
volumes:
  jenkins_home:
    external: false
    driver: local
    driver_opts:
      type: none
      o: 'bind'
      device: '/var/jenkins_home'
  docker_certs:
    external: false
    driver: local
    driver_opts:
      type: none
      o: 'bind'
      device: '/etc/certs'
services:
  docker:
    image: docker:dind
    restart: unless-stopped
    privileged: true
    volumes:
      - jenkins_home:/var/jenkins_home
      - docker_certs:/certs/client
    ports:
      - "2376:2376"
    environment:
      DOCKER_TLS_CERTDIR: /certs
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]
  jenkins:
    image: git.example.com:8444/devops/docker-services/jenkins
    build:
      context: ./
      dockerfile: services/jenkins.dockerfile
    restart: unless-stopped
    depends_on:
      - "docker"
    volumes:
      - jenkins_home:/var/jenkins_home
      - docker_certs:/certs/client
    ports:
      - "636:636"
      - "8443:8443"
      - "3268:3268"
      - "50000:50000"
    environment:
      DOCKER_HOST: tcp://docker:2376
      DOCKER_CERT_PATH: /certs/client
      DOCKER_TLS_VERIFY: 1
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]
  icecc-scheduler:
    image: git.example.com:8444/devops/docker-services/icecc-scheduler
    build:
      context: ./
      dockerfile: services/icecc-scheduler.dockerfile
    restart: unless-stopped
    ports:
      - "8765:8765"
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]
  icecc-daemon:
    image: git.example.com:8444/devops/docker-services/icecc-daemon
    build:
      context: ./
      dockerfile: services/icecc-daemon.dockerfile
    restart: unless-stopped
    ports:
      - "8766:8766"
      - "10245:10245"
    depends_on:
      - "icecc-scheduler"
    deploy:
      mode: global
and a swarm with two nodes (docker node ls):
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
i6edk9ny6z38krv6m5738uzwu st12873 Ready Active 20.10.12
phnvvy2139wft9innou0uermq * st12874 Ready Active Leader 20.10.12
I have all the images built and pushed to the docker registry
When I run docker stack deploy -c docker-compose.yml build-farm it says it deploys successfully, but when I then list the services:
docker stack services build-farm
ID NAME MODE REPLICAS IMAGE PORTS
4z6w98jmswav build-farm_docker replicated 0/1 docker:dind *:2376->2376/tcp
r7xuq4vgc92i build-farm_icecc-daemon global 0/2 git.example.com:8444/devops/docker-services/icecc-daemon:latest *:8766->8766/tcp, *:10245->10245/tcp
20ukipii7wli build-farm_icecc-scheduler replicated 0/1 git.example.com:8444/devops/docker-services/icecc-scheduler:latest *:8765->8765/tcp
37r4pm7jgku5 build-farm_jenkins replicated 1/1 git.example.com:8444/devops/docker-services/jenkins:latest *:636->636/tcp, *:3268->3268/tcp, *:8443->8443/tcp, *:50000->50000/tcp
The icecc scheduler and daemon never start, and the docker:dind service keeps starting and stopping.
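One thing worth noting for the repeatedly restarting docker:dind service: docker stack deploy ignores the compose-level restart: unless-stopped key (it applies to docker-compose only); in swarm mode restart behavior comes from deploy.restart_policy. A minimal sketch of the swarm-mode equivalent (the delay value is an arbitrary example):

```yaml
services:
  docker:
    image: docker:dind
    deploy:
      restart_policy:
        condition: any   # closest swarm-mode analogue of `restart: unless-stopped`
        delay: 5s        # wait between restart attempts
```

This doesn't by itself explain why the container exits, but it makes the restart loop visible and tunable via docker service ps instead of being silently driven by an ignored key.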
I have this compose file
version: "3.3"
volumes:
  jenkins_home:
    external: false
    driver: local
    driver_opts:
      type: none
      o: 'bind'
      device: '/var/jenkins_home'
  certs:
    external: false
    driver: local
    driver_opts:
      type: none
      o: 'bind'
      device: '/etc/certs'
services:
  docker:
    image: docker:dind
    restart: unless-stopped
    privileged: true
    volumes:
      - jenkins_home:/var/jenkins_home
      - certs:/certs/client
    ports:
      - "2376:2376"
    environment:
      DOCKER_TLS_CERTDIR: /certs
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]
  jenkins:
    image: git.example.com:8444/devops/docker-services/jenkins
    build:
      context: services/jenkins
      args:
        ssl_pass: changeit
    restart: unless-stopped
    depends_on:
      - "docker"
    volumes:
      - jenkins_home:/var/jenkins_home
      - certs:/certs/client
    ports:
      - "8080:8080"
      - "8443:8443"
      - "3268:3268"
      - "50000:50000"
    environment:
      DOCKER_HOST: tcp://docker:2376
      DOCKER_CERT_PATH: /certs/client
      DOCKER_TLS_VERIFY: 1
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]
  icecc-scheduler:
    image: git.example.com:8444/devops/docker-services/icecc-scheduler
    build: services/icecc-scheduler
    restart: unless-stopped
    network_mode: host
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]
  icecc-daemon:
    image: git.example.com:8444/devops/docker-services/icecc-daemon
    build: services/icecc-daemon
    restart: unless-stopped
    network_mode: host
    deploy:
      mode: global
When I run docker stack deploy --compose-file docker-compose.yml build_farm, it claims to start everything successfully. But running docker stack services build_farm I get:
ID NAME MODE REPLICAS IMAGE PORTS
tap0zlw086wm build_farm_docker replicated 0/1 docker:dind *:2376->2376/tcp
n13pcmy8zpip build_farm_icecc-daemon global 0/1 git.example.com:8444/devops/docker-services/icecc-daemon:latest
ofpsosrhrzoq build_farm_icecc-scheduler replicated 0/1 git.example.com:8444/devops/docker-services/icecc-scheduler:latest
b9llhoe97vwz build_farm_jenkins replicated 0/1 git.example.com:8444/devops/docker-services/jenkins:latest *:3268->3268/tcp, *:8080->8080/tcp, *:8443->8443/tcp, *:50000->50000/tcp
This seems to mean none of the services actually started, and the fact that I can't access any of them seems to confirm it.
The second issue is that the icecc-daemon service only has one replica, despite being started in global mode on a swarm with 2 nodes:
docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
rc6aajdnwnis4dvn4um7qcwk9 ex12873 Ready Active 20.10.12
phnvvy2139wft9innou0uermq * ex12874 Ready Active Leader 20.10.12
I have a Swarm cluster with a Manager and a Worker node.
All the containers running on the manager are accessible through Traefik and working fine.
I just deployed a new Worker node and joined my swarm on the node.
Now I start scaling some services and realized they were timing out on the worker node.
So I set up a simple example using the whoami container, and cannot figure out why I cannot access it. Here are my configs (all deployed on the MANAGER node):
version: '3.6'
networks:
  traefik-net:
    driver: overlay
    attachable: true
    external: true
services:
  whoami:
    image: jwilder/whoami
    networks:
      - traefik-net
    deploy:
      labels:
        - "traefik.port=8000"
        - "traefik.frontend.rule=Host:whoami.myhost.com"
        - "traefik.docker.network=traefik-net"
      replicas: 2
      placement:
        constraints: [node.role != manager]
My traefik:
version: '3.6'
networks:
  traefik-net:
    driver: overlay
    attachable: true
    external: true
services:
  reverse-proxy:
    image: traefik  # The official Traefik docker image
    command: --docker --docker.swarmmode --docker.domain=myhost.com --docker.watch --api
    ports:
      - "80:80"    # The HTTP port
      # - "8080:8080"  # The Web UI (enabled by --api)
      - "443:443"
    networks:
      - traefik-net
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # So that Traefik can listen
      - /home/ubuntu/docker-configs/traefik/traefik.toml:/traefik.toml
      - /home/ubuntu/docker-configs/traefik/acme.json:/acme.json
    deploy:
      labels:
        traefik.port: 8080
        traefik.frontend.rule: "Host:traefik.myhost.com"
        traefik.docker.network: traefik-net
      replicas: 1
      placement:
        constraints: [node.role == manager]
My worker docker ps output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b825f95b0366 jwilder/whoami:latest "/app/http" 4 hours ago Up 4 hours 8000/tcp whoami_whoami.2.tqbh4csbqxvsu6z5i7vizc312
50cc04b7f0f4 jwilder/whoami:latest "/app/http" 4 hours ago Up 4 hours 8000/tcp whoami_whoami.1.rapnozs650mxtyu970isda3y4
I tried opening firewall ports, then disabling the firewall completely; nothing seems to work. Any help is appreciated.
I had to use --advertise-addr y.y.y.y to make it work
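To expand on that: --advertise-addr is an option of docker swarm init (and docker swarm join) that pins the address other nodes use to reach this node, which matters when a host has several interfaces and the overlay network would otherwise bind to the wrong one. A sketch, with y.y.y.y standing in for the manager's actually-reachable IP:

```shell
# Initialize (or re-initialize) the swarm on the manager, advertising
# an address the worker nodes can reach:
docker swarm init --advertise-addr y.y.y.y

# Workers then join via that address; the token is printed by the init command:
docker swarm join --token <worker-token> y.y.y.y:2377
```

If the swarm was already created with the wrong address, the nodes generally have to leave (docker swarm leave) and rejoin after re-initializing.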
I'm following the Docker Compose tutorial here
https://docs.docker.com/get-started/part5/#recap-optional
version: "3"
services:
  web:
    image: example/get-started:part-1
    deploy:
      replicas: 10
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet
  redis:
    image: redis
    ports:
      - "6379:6379"
    volumes:
      - ./data:/data
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet
  driver:
    build: .
    links:
      - redis
networks:
  webnet:
and while Redis seems to be running on myvm1, the app is unable to connect to it and gives an error.
This is the app code in case it matters:
from flask import Flask
from redis import Redis, RedisError
import os
import socket

redis = Redis(host="redis", db=0, socket_connect_timeout=0, socket_timeout=0)

app = Flask(__name__)

@app.route("/")
def hello():
    try:
        visits = redis.incr("counter")
    except RedisError:
        visits = "<i>cannot connect to redis. Counter disabled</i>"
    html = "<h3>Hello {name}!</h3>" \
           "<b>Hostname:</b> {hostname}<br/>" \
           "<b>Visits:</b> {visits}"
    return html.format(name=os.getenv("NAME", "World"), hostname=socket.gethostname(), visits=visits)

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)
VM IPs:
myvm1 - virtualbox Running tcp://192.168.99.101:2376 v17.07.0-ce
myvm2 - virtualbox Running tcp://192.168.99.102:2376 v17.07.0-ce
Redis is running without errors on VM.
Any idea? There are many similar discussions online, but none helped yet.
If Redis is running on the VM, the binding might not be right. Can you check whether it is binding on 0.0.0.0? If not, you need to edit the Redis config to bind on 0.0.0.0 and open the port, so external services can connect to it.
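For reference, these are the relevant redis.conf directives (by default Redis binds to 127.0.0.1 only, so nothing outside the host or container can reach it):

```
# redis.conf
bind 0.0.0.0        # listen on all interfaces instead of only 127.0.0.1
protected-mode no   # required when exposing Redis without a password (use with care)
```

The same effect can be had on the command line with redis-server --bind 0.0.0.0 --protected-mode no; either way, exposing Redis like this without authentication should only be done on a trusted network.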
simplified swarm:
manager1 node
- consul-agent
worker1 node
- consul-client1
- web-app:80
- web-network:9000
database1 node
- consul-client2
- redis:6379
- mongo:27017
The web-app and web-network services can connect to redis and mongo through their service names correctly, e.g redis.createClient('6379', 'redis') and mongoose.connect('mongodb://mongo').
However, container web-app cannot connect to web-network, I'm trying to make a request like so:
request('http://web-network:9000')
But get the error:
errorno: ECONNREFUSED
address: 10.0.1.9
port: 9000
Request to web-network using a private IP does work:
request('http://11.22.33.44:9000')
What am I missing? Why can they connect to redis and mongo but not to each other? When I move redis/mongo to the same node as web-app, it still works, so I don't think the problem is services being unable to talk to a service on the same node.
Can we make docker network use private IP instead of the pre-configured subnet?
docker stack deploy file
version: '3'
services:
  web-app:
    image: private-repo/private-image
    networks:
      - swarm-network
    ports:
      - "80:8080"
    deploy:
      placement:
        constraints:
          - node.role==worker
  web-network:
    image: private-repo/private-image2
    networks:
      - swarm-network
    ports:
      - "9000:8080"
    deploy:
      placement:
        constraints:
          - node.role==worker
  redis:
    image: redis:latest
    networks:
      - swarm-network
    ports:
      - "6739:6739"
    deploy:
      placement:
        constraints:
          - engine.labels.purpose==database
  mongo:
    image: mongo:latest
    networks:
      - swarm-network
    ports:
      - "27017:27017"
    deploy:
      placement:
        constraints:
          - engine.labels.purpose==database
networks:
  swarm-network:
    driver: overlay
docker stack deploy app -c docker-compose.yml
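A quick way to narrow down the ECONNREFUSED above is to test raw TCP reachability from inside the web-app container, separating DNS/overlay-routing problems from the target service simply not listening on that port. A minimal sketch using only the Python standard library (the host/port values are this question's, not anything special):

```python
import socket


def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a plain TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused, unreachable, timeout, DNS failure
        return False


# Run inside the web-app container:
#   can_connect("web-network", 9000)  -> service reachable via overlay DNS?
#   can_connect("10.0.1.9", 9000)     -> reachable via the VIP directly?
```

If the service name resolves but the connection is refused, a common cause is the target process binding to 127.0.0.1 inside its container instead of 0.0.0.0, in which case only connections from within that same container succeed.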