Container on worker node is not accessible using Swarm Mode - docker

I have a Swarm cluster with a Manager and a Worker node.
All the containers running on the manager are accessible through Traefik and working fine.
I just deployed a new worker node and joined it to my swarm.
Then I started scaling some services and realized they were timing out on the worker node.
So I set up a simple example using the whoami container, and I cannot figure out why I cannot access it. Here are my configs (all deployed on the MANAGER node):
version: '3.6'

networks:
  traefik-net:
    driver: overlay
    attachable: true
    external: true

services:
  whoami:
    image: jwilder/whoami
    networks:
      - traefik-net
    deploy:
      labels:
        - "traefik.port=8000"
        - "traefik.frontend.rule=Host:whoami.myhost.com"
        - "traefik.docker.network=traefik-net"
      replicas: 2
      placement:
        constraints: [node.role != manager]
My traefik:
version: '3.6'

networks:
  traefik-net:
    driver: overlay
    attachable: true
    external: true

services:
  reverse-proxy:
    image: traefik  # The official Traefik docker image
    command: --docker --docker.swarmmode --docker.domain=myhost.com --docker.watch --api
    ports:
      - "80:80"        # The HTTP port
      # - "8080:8080"  # The Web UI (enabled by --api)
      - "443:443"
    networks:
      - traefik-net
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # So that Traefik can listen to the Docker events
      - /home/ubuntu/docker-configs/traefik/traefik.toml:/traefik.toml
      - /home/ubuntu/docker-configs/traefik/acme.json:/acme.json
    deploy:
      labels:
        traefik.port: 8080
        traefik.frontend.rule: "Host:traefik.myhost.com"
        traefik.docker.network: traefik-net
      replicas: 1
      placement:
        constraints: [node.role == manager]
My worker's docker ps output:

CONTAINER ID   IMAGE                   COMMAND       CREATED       STATUS       PORTS      NAMES
b825f95b0366   jwilder/whoami:latest   "/app/http"   4 hours ago   Up 4 hours   8000/tcp   whoami_whoami.2.tqbh4csbqxvsu6z5i7vizc312
50cc04b7f0f4   jwilder/whoami:latest   "/app/http"   4 hours ago   Up 4 hours   8000/tcp   whoami_whoami.1.rapnozs650mxtyu970isda3y4
I tried opening firewall ports, and even disabling the firewall completely, but nothing seems to work. Any help is appreciated.

I had to use --advertise-addr y.y.y.y (the manager's reachable IP) when initializing the swarm to make it work.
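For reference, a minimal sketch of how that flag is applied (y.y.y.y is kept as a placeholder for the manager's routable IP, as in the answer above). Since a firewall was suspected, note that Swarm's overlay networking also needs 2377/tcp, 7946/tcp+udp and 4789/udp open between the nodes:

# Initialize the swarm so the manager advertises an address the workers can actually reach
docker swarm init --advertise-addr y.y.y.y

# Workers then join via that address
docker swarm join --token <worker-token> y.y.y.y:2377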

Related

Running Services on Specific Nodes with Docker Swarm

I'm new to docker swarm and looking to set containers to run on a specific node in the swarm.
For example, I have the following nodes:
Manager
Worker1
Worker2
And I have a couple services listed in a compose yml similar to:
services:
  my_service:
    image: my_image
    container_name: my_container_name
    networks:
      - my_network

  my_service2:
    image: my_image2
    container_name: my_container_name2
    networks:
      - my_network
How can I make it so that my_service only runs on Worker1 and my_service2 only runs on Worker2?
UPDATE:
I managed to find the solution. You can specify placement constraints under the deploy key, as shown below.
my_service:
  image: my_image
  container_name: my_container_name
  networks:
    - my_network
  deploy:
    placement:
      constraints:
        - node.hostname == Worker1

my_service2:
  image: my_image2
  container_name: my_container_name2
  networks:
    - my_network
  deploy:
    placement:
      constraints:
        - node.hostname == Worker2
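After deploying, a quick way to confirm that the placement took effect (a sketch; my_stack is a hypothetical stack name):

docker stack deploy -c docker-compose.yml my_stack
docker service ps my_stack_my_service    # the NODE column should show Worker1
docker service ps my_stack_my_service2   # the NODE column should show Worker2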

Exposing a Docker database service only on the internal network with Traefik

Let's say I defined two services, "frontend" and "db", in my docker-compose.yml, which are deployed to a Docker swarm; i.e. they may also run in different stacks. With this setup Traefik automatically generates the frontend and backend for each stack, which is fine.
Now I have another Docker container, running temporarily in a Jenkins pipeline, which needs to access the db service of a specific stack. My first idea was to expose the db service by adding it to the cluster-global-net network so that Traefik can generate a frontend route to the backend. This basically works.
But I'd like to hide the database service from "the public" while still being able to connect another Docker container to it via its stack or service name using the internal "default" network.
Can this be done somehow?
version: '3.6'

networks:
  default: {}
  cluster-global-net:
    external: true

services:
  frontend:
    image: frontend_image
    ports:
      - 8080
    networks:
      - cluster-global-net
      - default
    deploy:
      labels:
        traefik.port: 8080
        traefik.docker.network: cluster-global-net
        traefik.backend.loadbalancer.swarm: 'true'
        traefik.backend.loadbalancer.stickiness: 'true'
      replicas: 1
      restart_policy:
        condition: any

  db:
    image: db_image
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=false
      - MYSQL_DATABASE=db_schema
      - MYSQL_USER=db_user
      - MYSQL_PASSWORD=db_pass
    ports:
      - 3306
    volumes:
      - db_volume:/var/lib/mysql
    networks:
      - default
    restart: on-failure
    deploy:
      labels:
        traefik.port: 3306
        traefik.docker.network: default
What you need is a network that both of them are attached to, but that is not visible to anyone else.
To do that, create a network, add it to your db and frontend services, and also to your temporary service. Then remove the Traefik labels on db, because they are no longer needed there.
E.g.:
...
networks:
  default: {}
  cluster-global-net:
    external: true
  db-net:
    external: true

services:
  frontend:
    image: frontend_image
    networks:
      - cluster-global-net
      - default
      - db-net
    deploy:
      ...

  db:
    image: db_image
    ...
    networks:
      - default
      - db-net
    restart: on-failure
    # no labels

docker network create --driver overlay db-net
docker stack deploy -c <mycompose.yml> <myfront>
docker service create --network db-net <myTemporaryImage> <temporaryService>
Then the temporaryService, as well as the frontend, can reach the db through db:3306.
BTW: you don't need to publish the port for the frontend, since Traefik accesses it internally (traefik.port).
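For example, from inside the temporary service the database should then be reachable by service name (a sketch; it assumes db_image is MySQL-compatible, as the MYSQL_* variables suggest, and that a mysql client is available):

mysql -h db -P 3306 -u db_user -pdb_pass db_schema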
EDIT: a new example, with the network created from the compose file.
...
networks:
  default: {}
  cluster-global-net:
    external: true
  db-net: {}

services:
  frontend:
    image: frontend_image
    networks:
      - cluster-global-net
      - default
      - db-net
    deploy:
      ...

  db:
    image: db_image
    ...
    networks:
      - default
      - db-net
    restart: on-failure
    # no labels
docker stack deploy -c <mycompose.yml> someStackName
docker service create --network someStackName_db-net <myTemporaryImage> <temporaryService>

Session Persistence using Traefik on Docker Swarm Replicas

I am trying to implement sticky sessions on Docker Swarm with Traefik, but I could not achieve session persistence across two replicas on the same machine.
In my docker-compose.yml I have added labels for Traefik and added the load balancer as well. Below is my docker-compose.yml:
version: '3'

services:
  web:
    image: php:7.2.11-apache-stretch
    ports:
      - "8080:80"
    volumes:
      - ./code/:/var/www/html/hello/
    stdin_open: true
    tty: true
    deploy:
      mode: replicated
      replicas: 2
      restart_policy:
        condition: any
      update_config:
        delay: 2s
      labels:
        - "traefik.docker.network=docker-test_privnet"
        - "traefik.port=80"
        - "traefik.backend.loadbalancer.sticky=true"
        - "traefik.frontend.rule=PathPrefix:/hello"
    networks:
      - privnet

  loadbalancer:
    image: traefik
    command:
      --docker \
      --docker.swarmmode \
      --docker.watch \
      --web \
      --loglevel=DEBUG
    ports:
      - 80:80
      - 9090:8080
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      restart_policy:
        condition: any
      mode: replicated
      replicas: 1
      update_config:
        delay: 2s
      placement:
        constraints: [node.role == manager]
    networks:
      - privnet

networks:
  privnet:
    external: true
Am I missing anything?
A few things.
.sticky is deprecated in favor of traefik.backend.loadbalancer.stickiness=true.
I don't think you need to set the network with traefik.docker.network when you only have a single network connected to that service.
Make sure you're testing with a tool that uses cookies, which is how sticky sessions stay sticky. If you're using curl, be sure to use -c and -b to store and replay the cookie, as in the sketch below.
I used the voting app sample from my test Swarm setup, added sticky sessions to the "vote" service, and it worked for me on a single node. If you're using a multi-node swarm, the load balancer in front of the swarm nodes will also need sticky sessions enabled.
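For instance, the corrected label on the web service would read (a sketch based on the deprecation note above; the other labels stay as they are):

labels:
  - "traefik.backend.loadbalancer.stickiness=true"

And a cookie-aware test from the command line, where -c writes the cookie Traefik sets on the first response and -b sends it back, so repeated requests should land on the same replica (the URL matches the PathPrefix rule above):

curl -c cookies.txt http://localhost/hello
curl -b cookies.txt http://localhost/hello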

Docker Compose: Cannot connect to Redis

I'm following the Docker Compose tutorial here
https://docs.docker.com/get-started/part5/#recap-optional
version: "3"
services:
web:
image: example/get-started:part-1
deploy:
replicas: 10
resources:
limits:
cpus: "0.1"
memory: 50M
restart_policy:
condition: on-failure
ports:
- "80:80"
networks:
- webnet
visualizer:
image: dockersamples/visualizer:stable
ports:
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
deploy:
placement:
constraints: [node.role == manager]
networks:
- webnet
redis:
image: redis
ports:
- "6379:6379"
volumes:
- ./data:/data
deploy:
placement:
constraints: [node.role == manager]
networks:
- webnet
driver:
build: .
links:
- redis
networks:
webnet:
While Redis seems to be running on myvm1, the app is unable to connect to it and shows an error.
This is the app code in case it matters:
from flask import Flask
from redis import Redis, RedisError
import os
import socket

redis = Redis(host="redis", db=0, socket_connect_timeout=0, socket_timeout=0)
app = Flask(__name__)

@app.route("/")
def hello():
    try:
        visits = redis.incr("counter")
    except RedisError:
        visits = "<i>cannot connect to redis. Counter disabled</i>"

    html = "<h3>Hello {name}!</h3>" \
           "<b>Hostname:</b> {hostname}<br/>" \
           "<b>Visits:</b> {visits}"
    return html.format(name=os.getenv("NAME", "World"), hostname=socket.gethostname(), visits=visits)

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)
VM IPs:
myvm1   -   virtualbox   Running   tcp://192.168.99.101:2376   v17.07.0-ce
myvm2   -   virtualbox   Running   tcp://192.168.99.102:2376   v17.07.0-ce
Redis is running without errors on the VM.
Any ideas? There are many similar discussions online, but none has helped yet.
If Redis is running on the VM itself, the binding might not be right. Please check whether it is binding on 0.0.0.0; if not, you need to edit the Redis config to bind on 0.0.0.0 and the expected port so that external services can connect to it.
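A way to check and fix this on the VM (a sketch; /etc/redis/redis.conf is the usual Debian/Ubuntu path and may differ on your image):

# Show which address redis-server is bound to
ss -tlnp | grep 6379

# In /etc/redis/redis.conf, change
#   bind 127.0.0.1
# to
#   bind 0.0.0.0
# then restart Redis
sudo systemctl restart redis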

Docker Swarm connection between containers refused for some containers

simplified swarm:
manager1 node
  - consul-agent
worker1 node
  - consul-client1
  - web-app:80
  - web-network:9000
database1 node
  - consul-client2
  - redis:6379
  - mongo:27017
The web-app and web-network services can connect to redis and mongo through their service names correctly, e.g. redis.createClient('6379', 'redis') and mongoose.connect('mongodb://mongo').
However, the web-app container cannot connect to web-network. I'm making a request like so:
request('http://web-network:9000')
But I get the error:
errorno: ECONNREFUSED
address: 10.0.1.9
port: 9000
A request to web-network using a private IP does work:
request('http://11.22.33.44:9000')
What am I missing? Why can they connect to redis and mongo but not to each other? When I move redis/mongo to the same node as web-app it still works, so I don't think the issue is that services cannot talk to a service on the same node.
Can we make docker network use private IP instead of the pre-configured subnet?
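One way to narrow this down is to check what the service name resolves to from inside the overlay network (a sketch; the container name is a placeholder for whatever docker ps shows for the web-app task, and it assumes nslookup exists in the image):

docker exec -it <web-app-container> sh
nslookup web-network          # should return the service VIP (the 10.0.1.9 above)
nslookup tasks.web-network    # should return the individual task IPs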
My docker stack deploy file:
version: '3'

services:
  web-app:
    image: private-repo/private-image
    networks:
      - swarm-network
    ports:
      - "80:8080"
    deploy:
      placement:
        constraints:
          - node.role==worker

  web-network:
    image: private-repo/private-image2
    networks:
      - swarm-network
    ports:
      - "9000:8080"
    deploy:
      placement:
        constraints:
          - node.role==worker

  redis:
    image: redis:latest
    networks:
      - swarm-network
    ports:
      - "6739:6739"
    deploy:
      placement:
        constraints:
          - engine.labels.purpose==database

  mongo:
    image: mongo:latest
    networks:
      - swarm-network
    ports:
      - "27017:27017"
    deploy:
      placement:
        constraints:
          - engine.labels.purpose==database

networks:
  swarm-network:
    driver: overlay
docker stack deploy app -c docker-compose.yml
