Docker Swarm connection between containers refused for some containers - docker

simplified swarm:
manager1 node
  - consul-agent
worker1 node
  - consul-client1
  - web-app:80
  - web-network:9000
database1 node
  - consul-client2
  - redis:6379
  - mongo:27017
The web-app and web-network services can connect to redis and mongo through their service names correctly, e.g. redis.createClient('6379', 'redis') and mongoose.connect('mongodb://mongo').
However, the web-app container cannot connect to web-network. I'm trying to make a request like so:
request('http://web-network:9000')
But get the error:
errno: ECONNREFUSED
address: 10.0.1.9
port: 9000
Request to web-network using a private IP does work:
request('http://11.22.33.44:9000')
What am I missing? Why can they connect to redis and mongo but not to each other? When I move redis/mongo to the same node as web-app, everything still works, so I don't think the problem is that services cannot talk to another service on the same node.
Can we make the docker network use a private IP instead of the pre-configured subnet?
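One thing worth checking as a diagnostic (an assumption, not a confirmed answer): published ports such as 9000 live on the node IPs / routing mesh, while the service name resolves to an overlay VIP that forwards to the container's target port (8080 here). A rough way to verify from the worker node, assuming the stack was deployed as "app" and that nslookup/wget are available in the image:

# run on worker1; the task name is prefixed with the stack name ("app")
docker exec -it $(docker ps -qf name=app_web-app) nslookup web-network
# try the target port (8080) instead of the published port (9000)
docker exec -it $(docker ps -qf name=app_web-app) wget -qO- http://web-network:8080
# inspect how the service's ports and VIPs are actually set up
docker service inspect app_web-network --format '{{json .Endpoint}}'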
docker stack deploy file
version: '3'
services:
  web-app:
    image: private-repo/private-image
    networks:
      - swarm-network
    ports:
      - "80:8080"
    deploy:
      placement:
        constraints:
          - node.role==worker
  web-network:
    image: private-repo/private-image2
    networks:
      - swarm-network
    ports:
      - "9000:8080"
    deploy:
      placement:
        constraints:
          - node.role==worker
  redis:
    image: redis:latest
    networks:
      - swarm-network
    ports:
      - "6379:6379"
    deploy:
      placement:
        constraints:
          - engine.labels.purpose==database
  mongo:
    image: mongo:latest
    networks:
      - swarm-network
    ports:
      - "27017:27017"
    deploy:
      placement:
        constraints:
          - engine.labels.purpose==database
networks:
  swarm-network:
    driver: overlay
docker stack deploy app -c docker-compose.yml

Related

Docker swarm with reverse proxy, run requests based on request uri path to certain node

I have the following nodes with hostnames docker-php-pos-web-1, docker-php-pos-web-2, docker-php-pos-web-3, and docker-php-pos-web-4 in a Docker swarm cluster, with Caddy proxy configured in distributed mode.
I want requests with "cron" anywhere in the URL path to run on docker-php-pos-web-4. An example request would be demo.phppointofsale.com/index.php/ecommerce/cron. If "cron" is not in the URL, it should route as normal.
I want to avoid having 2 copies of production_php_point_of_sale_app just for this.
I am already routing to docker-php-pos-web-4 from my load balancer when "cron" is in the request path, BUT in docker swarm the mesh network can decide which node actually runs the task. I always want docker-php-pos-web-4 to run these tasks.
Below is my docker-compose.yml file
version: '3.9'
services:
  production_php_point_of_sale_app:
    logging:
      driver: "local"
    deploy:
      restart_policy:
        condition: any
      mode: global
      labels:
        caddy: "http://*.phppointofsale.com, http://*.phppos.com"
        caddy.reverse_proxy.trusted_proxies: "private_ranges"
        caddy.reverse_proxy: "{{upstreams}}"
    image: phppointofsale/production-app
    build:
      context: "production_php_point_of_sale_app"
    restart: always
    env_file:
      - production_php_point_of_sale_app/.env
      - .env
    networks:
      - app_network
      - mail
  caddy_server:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    ports:
      - 80:80
    networks:
      - caddy_controller
      - app_network
    environment:
      - CADDY_DOCKER_MODE=server
      - CADDY_CONTROLLER_NETWORK=10.200.200.0/24
    volumes:
      - caddy_data:/data
    deploy:
      restart_policy:
        condition: any
      mode: global
      labels:
        caddy_controlled_server:
  caddy_controller:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    networks:
      - caddy_controller
      - app_network
    environment:
      - CADDY_DOCKER_MODE=controller
      - CADDY_CONTROLLER_NETWORK=10.200.200.0/24
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      restart_policy:
        condition: any
      placement:
        constraints: [node.role == manager]
networks:
  caddy_controller:
    driver: overlay
    ipam:
      driver: default
      config:
        - subnet: "10.200.200.0/24"
  app_network:
    driver: overlay
  mail:
    driver: overlay
volumes:
  caddy_data: {}
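The thread doesn't include an answer, but for illustration only, one common pattern (an assumption, not the poster's solution) is to add a second service entry that reuses the same image and env but is pinned to the cron node with a placement constraint, and to let the proxy send only cron paths to it. It does mean a second service definition, which the poster wanted to avoid, though not a second copy of the code. The caddy.@cron named-matcher labels below follow caddy-docker-proxy's label convention but should be verified against its docs:

  # hypothetical extra service, same image/env, pinned to the cron node
  production_php_point_of_sale_cron:
    image: phppointofsale/production-app
    env_file:
      - production_php_point_of_sale_app/.env
      - .env
    networks:
      - app_network
    deploy:
      placement:
        constraints:
          - node.hostname == docker-php-pos-web-4
      labels:
        caddy: "http://*.phppointofsale.com, http://*.phppos.com"
        # named matcher that routes only cron paths to this service (syntax to verify)
        caddy.@cron.path: "*cron*"
        caddy.reverse_proxy: "@cron {{upstreams}}"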

Docker Compose - Changing the network mode to "host" results in error: Error response from daemon: failed to add interface veth701c890 to sandbox

Following this question, I edited my gateway container to use the host network mode:
services:
  gateway:
    ...
    network_mode: "host"
and then docker compose up -d gives me this:
Error response from daemon: failed to add interface veth701c890 to sandbox: error setting interface "veth701c890" IP to 172.26.0.11/16: cannot program address 172.26.0.11/16 in sandbox interface because it conflicts with existing route {Ifindex: 4 Dst: 172.26.0.0/16 Src: 172.26.0.1 Gw: Flags: [] Table: 254}
I restarted Docker and even the server. No luck.
The docker-compose.yml looks like this (only the gateway container has published ports):
version: '3.4'
services:
  gateway:
    image: <ms-yarp>
    environment:
      - ASPNETCORE_URLS=https://+:443;http://+:80
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./tls/:/tls/
    networks:
      - mynet
    restart: on-failure
  orders:
    image: <registry>/orders
    environment:
      - ASPNETCORE_URLS=http://+:80
    networks:
      - mynet
    restart: on-failure
  users:
    image: <registry>/users
    environment:
      - ASPNETCORE_URLS=http://+:80
    networks:
      - mynet
    restart: on-failure
  smssender:
    image: <registry>/smssender
    environment:
      - ASPNETCORE_URLS=http://+:80
    networks:
      - mynet
    restart: on-failure
  logger:
    image: <registry>/logger
    environment:
      - ASPNETCORE_URLS=http://+:80
    networks:
      - mynet
    restart: on-failure
  notifications:
    image: <registry>/notifications
    environment:
      - ASPNETCORE_URLS=http://+:80
    networks:
      - mynet
    restart: on-failure
  cacheserver:
    image: <registry>/redis
    networks:
      - mynet
    restart: on-failure
  ...
networks:
  mynet:
You can't combine host networking with any other Docker networking option. At least some versions of Compose have given warnings if you combine network_mode: host with other networks: or ports: options.
The other thing host networking means in this particular setup is that the one container that's using it is "outside Docker" for purposes of connecting to other containers. It works exactly the same way a non-container process would. That means the other containers need to publish ports: to be reachable from the gateway, and in turn the gateway configuration needs to use localhost and the published port numbers to reach the other containers.
version: '3.8'
services:
  gateway:
    image: <ms-yarp>
    network_mode: host
  orders:
    image: <registry>/orders
    ports:
      - '8001:80'
    networks:
      - mynet

{
  "ReverseProxy": {
    "Clusters": {
      "cluster": {
        "Destinations": {
          "orders": {
            "Address": "http://localhost:8001"
          }
        }
      }
    }
  }
}
Something like this (it doesn't work with Docker Desktop on Windows under WSL2; at least I couldn't even run the nginx example from the docs):
version: '3.4'
services:
  gateway:
    image: <ms-yarp>
    environment:
      - ASPNETCORE_URLS=https://+:443;http://+:80
    network_mode: host
    volumes:
      - ./tls/:/tls/
    restart: on-failure
  orders:
    image: <registry>/orders
    environment:
      - ASPNETCORE_URLS=http://+:80
    ports:
      - 8080:80
    networks:
      - mynet
    restart: on-failure
  users:
    image: <registry>/users
    environment:
      - ASPNETCORE_URLS=http://+:80
    ports:
      - 8081:80
    networks:
      - mynet
    restart: on-failure
  smssender:
    image: <registry>/smssender
    environment:
      - ASPNETCORE_URLS=http://+:80
    ports:
      - 8082:80
    networks:
      - mynet
    restart: on-failure
  logger:
    image: <registry>/logger
    environment:
      - ASPNETCORE_URLS=http://+:80
    ports:
      - 8083:80
    networks:
      - mynet
    restart: on-failure
  notifications:
    image: <registry>/notifications
    environment:
      - ASPNETCORE_URLS=http://+:80
    ports:
      - 8084:80
    networks:
      - mynet
    restart: on-failure
  cacheserver:
    image: <registry>/redis
    restart: on-failure
    networks:
      - mynet
Also, in your gateway service configuration you will need to change:
http://orders:80 to http://localhost:8080
http://users:80 to http://localhost:8081
and so on.
Also restrict ports 8080 through 8084 on the Docker host so they are accessible only from localhost and not from the internet.
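One way to restrict those ports without host firewall rules (a sketch, assuming the backend services run on the same host as the gateway) is to bind each published port to the loopback interface in the compose file:

  orders:
    image: <registry>/orders
    ports:
      # published only on the host's loopback interface: reachable by the
      # host-networked gateway via localhost:8080, but not from outside
      - "127.0.0.1:8080:80"
    networks:
      - mynet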
You could even put all the containers (except the gateway) on a different Docker host that is accessible only from the Docker host where the gateway container is running, and change the config in the gateway from http://orders:80 to http://otherdockerhost:80 and so on.
But docker compose will not be viable for this; you would need to create the containers "manually" with docker run commands (or keep 2 separate compose projects, one for the gateway and one for the rest of the services). This is where more serious container orchestration tools like Kubernetes come in (you could try Docker Swarm, Nomad, or another orchestrator, but these are less popular, so if you are new to all of them you are better off starting with Kubernetes; you will reap the benefits in the long run, both for this project and for your career).

docker compose: restrict internet access

I want to run a container that is a copy of a production container, so I want to restrict its access to the internet to prevent it from calling the production servers.
But I still need to access the container from the host machine, which has internet access.
This is what I am trying to do:
version: '2.1'
services:
  proxy:
    image: traefik
    command: --api.insecure=true --providers.docker
    networks:
      - no-internet
      - internet
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  prod-service:
    image: ....
    depends_on:
      - db
    ports:
      - "8094:8094"
    labels:
      - "traefik.http.routers.blog.rule=Host(`localhost`)"
      - "traefik.port=8094"
    networks:
      - no-internet
  db:
    container_name: db
    image: postgres:11
    hostname: ap-db
    expose:
      - 5433
    ports:
      - 5433:5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    networks:
      - no-internet
      - internet
networks:
  internet:
    driver: bridge
  no-internet:
    internal: true
    driver: bridge
But the Traefik configuration is not working for me.
What is the best option to do this?
The answers I found do not take access from the host machine into account; with them, the container without internet ends up completely isolated.
I appreciate any advice
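For what it's worth, a quick way to sanity-check this kind of split (internal-only network for the copy, published proxy port for host access) once the stack is up, assuming ping and curl are available in the images:

# services attached only to the internal network should have no route out
docker compose exec prod-service ping -c 1 -W 2 8.8.8.8   # expected to fail
# the proxy is on both networks, so it can still reach the outside
docker compose exec proxy ping -c 1 -W 2 8.8.8.8          # expected to succeed
# from the host, requests reach prod-service through the published traefik port
curl -H 'Host: localhost' http://localhost/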

Exposing a Docker database service only on the internal network with Traefik

Let's say I defined two services "frontend" and "db" in my docker-compose.yml which are deployed to a Docker swarm, i.e. they may also run in different stacks. With this setup Traefik automatically generates the frontend and backend for each stack which is fine.
Now I have another Docker container running temporarily in a Jenkins pipeline which shall be able to access the db service in a specific stack. My first idea was to expose the db service by adding it to the cluster-global-net network so that Traefik can generate a frontend route to the bakend. This basically works.
But I'd like to hide the database service from "the public" while still being able to connect another Docker container to it via its stack or service name using the internal "default" network.
Can this be done somehow?
version: '3.6'
networks:
  default: {}
  cluster-global-net:
    external: true
services:
  frontend:
    image: frontend_image
    ports:
      - 8080
    networks:
      - cluster-global-net
      - default
    deploy:
      labels:
        traefik.port: 8080
        traefik.docker.network: cluster-global-net
        traefik.backend.loadbalancer.swarm: 'true'
        traefik.backend.loadbalancer.stickiness: 'true'
      replicas: 1
      restart_policy:
        condition: any
  db:
    image: db_image
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=false
      - MYSQL_DATABASE=db_schema
      - MYSQL_USER=db_user
      - MYSQL_PASSWORD=db_pass
    ports:
      - 3306
    volumes:
      - db_volume:/var/lib/mysql
    networks:
      - default
    restart: on-failure
    deploy:
      labels:
        traefik.port: 3306
        traefik.docker.network: default
What you need is a network on which both of them are deployed, but which is not visible to anyone else.
To do that, create a network, add it to your db service and frontend, and also to your temporary service. And indeed, remove the traefik labels on db because they are not needed here anymore.
E.g.:
...
networks:
  default: {}
  cluster-global-net:
    external: true
  db-net:
    external: true
services:
  frontend:
    image: frontend_image
    networks:
      - cluster-global-net
      - default
      - db-net
    deploy:
      ...
  db:
    image: db_image
    ...
    networks:
      - default
      - db-net
    restart: on-failure
    # no labels

docker network create db-net
docker stack deploy -c <mycompose.yml> <myfront>
docker service create --network db-net <myTemporaryImage> <temporaryService>
Then the temporaryService, as well as the frontend, can reach the db through db:3306.
BTW: you don't need to open the port for the frontend, since traefik will access it internally (traefik.port).
EDIT: a new example with the network created from the compose file.
...
networks:
  default: {}
  cluster-global-net:
    external: true
  db-net: {}
services:
  frontend:
    image: frontend_image
    networks:
      - cluster-global-net
      - default
      - db-net
    deploy:
      ...
  db:
    image: db_image
    ...
    networks:
      - default
      - db-net
    restart: on-failure
    # no labels

docker stack deploy -c <mycompose.yml> someStackName
docker service create --network someStackName_db-net <myTemporaryImage> <temporaryService>

Netdata in a docker swarm environment

I'm quite new to Netdata and also to Docker Swarm. I ran Netdata for a while on single hosts, but I'm now trying to stream Netdata from the workers to a manager node in a swarm environment, where the manager also acts as the central Netdata instance. I'm aiming to only monitor the data from the manager.
Here's my compose file for the stack:
version: '3.2'
services:
  netdata-client:
    image: titpetric/netdata
    hostname: "{{.Node.Hostname}}"
    cap_add:
      - SYS_PTRACE
    security_opt:
      - apparmor:unconfined
    environment:
      - NETDATA_STREAM_DESTINATION=control:19999
      - NETDATA_STREAM_API_KEY=1x214ch15h3at1289y
      - PGID=999
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - netdata
    deploy:
      mode: global
      placement:
        constraints: [node.role == worker]
  netdata-central:
    image: titpetric/netdata
    hostname: control
    cap_add:
      - SYS_PTRACE
    security_opt:
      - apparmor:unconfined
    environment:
      - NETDATA_API_KEY_ENABLE_1x214ch15h3at1289y=1
    ports:
      - '19999:19999'
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - netdata
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]
networks:
  netdata:
    driver: overlay
    attachable: true
Netdata on the manager works fine, and the container runs on the one worker node I'm testing on. According to the log output it seems to run well and gathers the names of the running docker containers, as it does in a local environment.
The problem is that it can't connect to the netdata-central service running on the manager.
This is the error message:
2019-01-04 08:35:28: netdata INFO : STREAM_SENDER[7] : STREAM 7 [send to control:19999]: connecting...,
2019-01-04 08:35:28: netdata ERROR : STREAM_SENDER[7] : Cannot resolve host 'control', port '19999': Name or service not known,
Not sure why it can't resolve the hostname; I thought it should work that way on the overlay network. Maybe there's a better way to connect that doesn't rely on the hostname?
Any help is appreciated.
EDIT: as this question might come up - the firewall (ufw) on the control host is inactive; also, I think the error message clearly points to a problem with name resolution.
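One detail worth checking here (an aside, not from the original thread): swarm's embedded DNS registers service names and network aliases on the overlay network, while hostname: only sets the container's own hostname, so "control" may simply not be a resolvable name from the client tasks. A rough check, assuming the stack was deployed as "netdata" and getent is available in the image:

# run on a worker node where a client task is scheduled
docker exec -it $(docker ps -qf name=netdata_netdata-client) getent hosts control
docker exec -it $(docker ps -qf name=netdata_netdata-client) getent hosts netdata-central
docker exec -it $(docker ps -qf name=netdata_netdata-client) getent hosts netdata_netdata-central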
Your API key is in the wrong format; it has to be a GUID. You can generate one with the "uuidgen" command:
https://github.com/netdata/netdata/blob/63c96aa96f96f3aea10bdcd2ecd92c889f26b3af/conf.d/stream.conf#L7
In the latest image the environment variables do not work.
The solution is to create a configuration file for the stream.
My working compose file is:
version: '3.7'
configs:
  netdata_stream_master:
    file: $PWD/stream-master.conf
  netdata_stream_client:
    file: $PWD/stream-client.conf
services:
  netdata-client:
    image: netdata/netdata:v1.21.1
    hostname: "{{.Node.Hostname}}"
    depends_on:
      - netdata-central
    configs:
      - mode: 444
        source: netdata_stream_client
        target: /etc/netdata/stream.conf
    security_opt:
      - apparmor:unconfined
    environment:
      - PGID=999
    volumes:
      - /proc:/host/proc:ro
      - /etc/passwd:/host/etc/passwd:ro
      - /etc/group:/host/etc/group:ro
      - /sys:/host/sys:ro
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      mode: global
  netdata-central:
    image: netdata/netdata:v1.21.1
    hostname: control
    configs:
      - mode: 444
        source: netdata_stream_master
        target: /etc/netdata/stream.conf
    security_opt:
      - apparmor:unconfined
    environment:
      - PGID=999
    ports:
      - '19999:19999'
    volumes:
      - /etc/passwd:/host/etc/passwd:ro
      - /etc/group:/host/etc/group:ro
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]
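The compose file above mounts stream-master.conf and stream-client.conf but doesn't show their contents. As a rough sketch of what they might contain, following Netdata's stream.conf format, with a placeholder GUID and the destination pointed at the central service by its service name (both are assumptions to adapt):

# stream-client.conf (the sending side, mounted into netdata-client)
[stream]
    enabled = yes
    destination = netdata-central:19999
    api key = 11111111-2222-3333-4444-555555555555

# stream-master.conf (the receiving side, mounted into netdata-central)
[11111111-2222-3333-4444-555555555555]
    enabled = yes
    default history = 3600
    default memory mode = ram
    health enabled by default = auto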
