I'm trying to use Traefik to load-balance my web apps via Docker swarm.
I have deployed a sample application (Joomla) in swarm mode behind Traefik. Joomla works fine when the application lands on the same node as Traefik (i.e., the manager), and I can access it in the browser by hitting the manager node's IP. But if the service gets deployed on the worker node, with no container on the manager node, the service is up and running without any issue, yet I cannot see anything in the browser (hitting either the manager or the worker IP).
My traefik.toml file:
```toml
defaultEntryPoints = ["http"]
loglevel = "INFO"
sendAnonymousUsage = true

[docker]
endpoint = "unix:///var/run/docker.sock"
exposedByDefault = false

[api]
dashboard = true
entrypoint = "dashboard"

[entryPoints]
  [entryPoints.http]
  address = ":80"
  [entryPoints.dashboard]
  address = ":8080"
```
--------------------------------
My traefik.yml file:
```yaml
version: '3'
services:
  traefik:
    image: traefik:v1.7 # The official Traefik docker image
    restart: always
    ports:
      - 80:80     # The HTTP port
      - 9090:8080 # The Web UI (enabled by --api)
    labels:
      - traefik.frontend.rule=Host:traefik.dpaas1.pune.cdac.in
      - traefik.port=8080
      - traefik.enable=true
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock # So that Traefik can listen to the Docker events
      - ${PWD}/traefik.toml:/etc/traefik/traefik.toml
    deploy:
      mode: replicated
      replicas: 1
      restart_policy:
        condition: on-failure
        max_attempts: 3
      placement:
        constraints: [node.role == manager]
      update_config:
        delay: 2s
    networks:
      - net
networks:
  net:
    external: true
```
My joomla.yml file:
```yaml
version: '3'
services:
  joomla:
    image: joomla
    restart: always
    links:
      - joomladb:mysql
    volumes:
      - joomla1-www:/var/www/html
    deploy:
      mode: replicated
      replicas: 3
      restart_policy:
        condition: on-failure
        max_attempts: 3
      placement:
        constraints: [node.role == manager]
      update_config:
        delay: 2s
    labels:
      - traefik.frontend.rule=Host:joomla1.dpaas1.pune.cdac.in
      - traefik.port=80
      - traefik.enable=true
      - traefik.backend.loadbalancer.sticky=true
    environment:
      JOOMLA_DB_HOST: 10.208.26.162
      JOOMLA_DB_PASSWORD: root
    tty: true
    networks:
      - net
networks:
  net:
    external: true
volumes:
  joomla1-www:
```
My traefik Dashboard:
[![Traefik logs and dashboard][1]][1]
[1]: https://i.stack.imgur.com/tcoGu.png
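A note for anyone hitting the same symptom: Traefik 1.7's Docker provider, as configured in the traefik.toml above, watches local containers through the mounted socket, which only covers tasks running on the same node as Traefik. To discover services scheduled on other nodes it has to run with swarm mode enabled and read the labels from the service definition (the deploy.labels section of the stack file). Below is a minimal sketch of what that provider section might look like, assuming the external net overlay network from the stack files is the one Traefik should use to reach the tasks:

```toml
[docker]
endpoint = "unix:///var/run/docker.sock"
# Read labels from swarm services (deploy.labels) instead of local containers
swarmMode = true
watch = true
exposedByDefault = false
# Overlay network Traefik should use when connecting to the service tasks
network = "net"
```

With swarm mode enabled, Traefik reaches the tasks over the shared overlay network rather than over the node-local container IPs, which is why the network option matters here.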
Related
I have the following nodes, with hostnames docker-php-pos-web-1, docker-php-pos-web-2, docker-php-pos-web-3, and docker-php-pos-web-4, in a Docker swarm cluster with Caddy proxy configured in distributed mode.
I want requests with "cron" anywhere in the URL path to run on docker-php-pos-web-4. An example request would be demo.phppointofsale.com/index.php/ecommerce/cron. If "cron" is not in the URL, it should route as normal.
I want to avoid having 2 copies of production_php_point_of_sale_app just for this.
I am already routing to docker-php-pos-web-4 from my load balancer when "cron" is in the request path, BUT in Docker swarm the mesh network can still decide which node actually "runs" it. I always want docker-php-pos-web-4 to run these tasks.
Below is my docker-compose.yml file:
```yaml
version: '3.9'
services:
  production_php_point_of_sale_app:
    logging:
      driver: "local"
    deploy:
      restart_policy:
        condition: any
      mode: global
      labels:
        caddy: "http://*.phppointofsale.com, http://*.phppos.com"
        caddy.reverse_proxy.trusted_proxies: "private_ranges"
        caddy.reverse_proxy: "{{upstreams}}"
    image: phppointofsale/production-app
    build:
      context: "production_php_point_of_sale_app"
    restart: always
    env_file:
      - production_php_point_of_sale_app/.env
      - .env
    networks:
      - app_network
      - mail
  caddy_server:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    ports:
      - 80:80
    networks:
      - caddy_controller
      - app_network
    environment:
      - CADDY_DOCKER_MODE=server
      - CADDY_CONTROLLER_NETWORK=10.200.200.0/24
    volumes:
      - caddy_data:/data
    deploy:
      restart_policy:
        condition: any
      mode: global
      labels:
        caddy_controlled_server:
  caddy_controller:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    networks:
      - caddy_controller
      - app_network
    environment:
      - CADDY_DOCKER_MODE=controller
      - CADDY_CONTROLLER_NETWORK=10.200.200.0/24
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      restart_policy:
        condition: any
      placement:
        constraints: [node.role == manager]
networks:
  caddy_controller:
    driver: overlay
    ipam:
      driver: default
      config:
        - subnet: "10.200.200.0/24"
  app_network:
    driver: overlay
  mail:
    driver: overlay
volumes:
  caddy_data: {}
```
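Since the mesh (or the proxy) can hand a request to a task on any node, one way to see which candidate tasks exist and where they actually run is to list the tasks of the global service; the stack prefix production_ below is just an assumption for illustration, not taken from the question:

```shell
# One task per node for a global service; the NODE column shows placement
docker service ps production_production_php_point_of_sale_app \
  --format 'table {{.Name}}\t{{.Node}}\t{{.CurrentState}}'
```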
Hi, I run a Docker container with nginx and got the following error:
```
2019/08/09 11:37:18 [emerg] 1#1: invalid number of arguments in "upstream" directive in /etc/nginx/conf.d/default.conf:61
nginx: [emerg] invalid number of arguments in "upstream" directive in /etc/nginx/conf.d/default.conf:61
```
My docker compose looks like this:
```yaml
# #version 2018-01-15
# #author -----
version: "3.7"

networks:
  proxy:
    external: true

volumes:
  # curl https://raw.githubusercontent.com/jwilder/nginx-proxy/master/nginx.tmpl > /var/lib/docker/volumes/proxy_tmpl/_data/nginx.tmpl
  conf:
  vhost:
  certs:
  html:
  tmpl:

services:
  # Nginx proxy
  nginx:
    image: nginx
    networks:
      - proxy
    ports:
      - 80:80
      - 443:443
    volumes:
      - conf:/etc/nginx/conf.d                    # nginx config
      - vhost:/etc/nginx/vhost.d                  # changed configuration of vhosts (needed by Let's Encrypt)
      - html:/usr/share/nginx/html                # challenge files
      - certs:/etc/nginx/certs:ro                 # Let's Encrypt certificates
      - /var/run/docker.sock:/tmp/docker.sock:ro  # docker service
    environment:
      - ENABLE_IPV6=true
    deploy:
      mode: global
      placement:
        constraints: [node.role == manager]
      restart_policy:
        condition: on-failure
        max_attempts: 5
        window: 120s
      resources:
        limits:
          memory: 256M
        reservations:
          memory: 32M
    labels:
      de.blubbbbb.meta.description: "Nginx"
      de.blubbbbb.meta.maintainer: "-----"
      de.blubbbbb.meta.version: "2018-01-15"
      com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: ""
    # see also: https://wiki.ssdt-ohio.org/display/rtd/Adjusting+nginx-proxy+Timeout+Configuration

  # Docker-gen
  dockergen:
    # https://hub.docker.com/r/helder/docker-gen
    image: helder/docker-gen
    networks:
      - proxy
    volumes:
      - conf:/etc/nginx/conf.d
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - certs:/etc/nginx/certs:ro
      - tmpl:/etc/docker-gen/templates:ro         # docker-gen templates
      - /var/run/docker.sock:/tmp/docker.sock:ro  # docker service
    environment:
      ENABLE_IPV6: ""
    command: -notify "docker-label-sighup com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy" -watch -wait 10s:30s /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
    deploy:
      mode: global
      placement:
        constraints: [node.role == manager]
      restart_policy:
        condition: on-failure
        max_attempts: 5
        window: 120s
      resources:
        limits:
          memory: 256M
        reservations:
          memory: 32M
    labels:
      de.blubbbbb.meta.description: "Docker-gen"
      de.blubbbbb.meta.maintainer: "-----"
      de.blubbbbb.meta.version: "2018-01-15"
      com.github.jrcs.letsencrypt_nginx_proxy_companion.docker_gen: ""

  # Lets Encrypt
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    networks:
      - proxy
    volumes:
      - conf:/etc/nginx/conf.d
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro
    deploy:
      mode: global
      placement:
        constraints: [node.role == manager]
      restart_policy:
        condition: on-failure
        max_attempts: 5
        window: 120s
      resources:
        limits:
          memory: 256M
        reservations:
          memory: 32M
    labels:
      de.blubbbbb.meta.description: "Letsencrypt Nginx Proxy Companion"
      de.blubbbbb.meta.maintainer: "-----"
      de.blubbbbb.meta.version: "2018-01-15"
```
I run it like this:
```shell
docker stack deploy proxy -c docker-compose.yml
```
What could be the issue? Thanks in advance.
The upstream part of the generated conf:
```nginx
upstream  {
    # Cannot connect to network of this container
    server 127.0.0.1 down;
    # Cannot connect to network of this container
    server 127.0.0.1 down;
    # Cannot connect to network of this container
    server 127.0.0.1 down;
    ## Can be connected with "proxy" network
    # tools_adminer.1.n1j3poc9mo507somuhyf7adrd
    server 10.0.35.3:8080;
    # Cannot connect to network of this container
    server 127.0.0.1 down;
    # Cannot connect to network of this container
    server 127.0.0.1 down;
}
```
In my case there was an Adminer container that was blocking nginx.
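A note on the error itself: in the generated config above the upstream block has no name (`upstream {`), and nginx requires exactly one argument (the name) for that directive, which is what "invalid number of arguments" refers to. With the jwilder template the name is normally derived from the proxied container's VIRTUAL_HOST, so a container that docker-gen picks up without a usable host value (the Adminer container in this case) can produce exactly this kind of broken block. For comparison, a well-formed block looks roughly like this; the host name and address are made up for illustration:

```nginx
# Sketch of what docker-gen normally emits per VIRTUAL_HOST
upstream example.com {
    # task reachable over the shared "proxy" network
    server 10.0.35.3:8080;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://example.com;
    }
}
```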
I have a Swarm cluster with a Manager and a Worker node.
All the containers running on the manager are accessible through Traefik and working fine.
I just deployed a new worker node and joined it to the swarm.
Now I started scaling some services and realized they were timing out on the worker node.
So I set up a simple example using the whoami container, and I cannot figure out why I cannot access it. Here are my configs (all deployed on the MANAGER node):
```yaml
version: '3.6'
networks:
  traefik-net:
    driver: overlay
    attachable: true
    external: true
services:
  whoami:
    image: jwilder/whoami
    networks:
      - traefik-net
    deploy:
      labels:
        - "traefik.port=8000"
        - "traefik.frontend.rule=Host:whoami.myhost.com"
        - "traefik.docker.network=traefik-net"
      replicas: 2
      placement:
        constraints: [node.role != manager]
```
My traefik:
```yaml
version: '3.6'
networks:
  traefik-net:
    driver: overlay
    attachable: true
    external: true
services:
  reverse-proxy:
    image: traefik # The official Traefik docker image
    command: --docker --docker.swarmmode --docker.domain=myhost.com --docker.watch --api
    ports:
      - "80:80" # The HTTP port
      # - "8080:8080" # The Web UI (enabled by --api)
      - "443:443"
    networks:
      - traefik-net
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock # So that Traefik can listen
      - /home/ubuntu/docker-configs/traefik/traefik.toml:/traefik.toml
      - /home/ubuntu/docker-configs/traefik/acme.json:/acme.json
    deploy:
      labels:
        traefik.port: 8080
        traefik.frontend.rule: "Host:traefik.myhost.com"
        traefik.docker.network: traefik-net
      replicas: 1
      placement:
        constraints: [node.role == manager]
```
My worker docker ps output:
```
CONTAINER ID   IMAGE                   COMMAND       CREATED       STATUS       PORTS      NAMES
b825f95b0366   jwilder/whoami:latest   "/app/http"   4 hours ago   Up 4 hours   8000/tcp   whoami_whoami.2.tqbh4csbqxvsu6z5i7vizc312
50cc04b7f0f4   jwilder/whoami:latest   "/app/http"   4 hours ago   Up 4 hours   8000/tcp   whoami_whoami.1.rapnozs650mxtyu970isda3y4
```
I tried opening firewall ports, and even disabling the firewall completely, but nothing seems to work. Any help is appreciated.
I had to use `--advertise-addr y.y.y.y` to make it work.
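In case it helps anyone else, a sketch of what that looks like (the addresses and token are placeholders): the manager advertises an address the other nodes can actually reach, the workers join against it, and the swarm overlay ports have to be open between the nodes.

```shell
# On the manager: advertise the address the other nodes can reach
docker swarm init --advertise-addr y.y.y.y

# On each worker: join using the token printed by the command above
docker swarm join --token <worker-token> y.y.y.y:2377

# Ports that must be open between swarm nodes for overlay/ingress networking:
#   2377/tcp  cluster management
#   7946/tcp and 7946/udp  node-to-node gossip
#   4789/udp  VXLAN overlay data traffic
```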
I have a few simple REST API services implemented in Express. These services run in Docker containers in swarm mode. I am also trying to use Express Gateway with these services. The Express Gateway also runs in a container as part of the Docker swarm. Following is the docker-compose file:
version: "3"
services:
firstService:
image: chaitanyaw/firstrepository:first
deploy:
replicas: 3
restart_policy:
condition: on-failure
ports:
- '3000:3000'
networks:
- webnet
apiGateway:
image: firstgateway:latest
deploy:
restart_policy:
condition: on-failure
ports:
- '80:80'
networks:
- webnet
visualizer:
image: dockersamples/visualizer:latest
ports:
- 8080:8080
volumes:
- /var/run/docker.sock:/var/run/docker.sock
deploy:
placement:
constraints: [node.role == manager]
networks:
- webnet
networks:
webnet:
Also, following is the gateway.config file:
```yaml
http:
  port: 80
admin:
  port: 9876
  hostname: localhost
apiEndpoints:
  api:
    host: 192.168.99.100
    paths: '/ip'
  localApi:
    host: 192.168.99.100
    paths: '/'
serviceEndpoints:
  httpbin:
    url: 'https://httpbin.org'
  services:
    urls:
      - 'http://192.168.99.100:3000/serviceonerequest'
      - 'http://192.168.99.100:3000/servicetwo'
policies:
  - basic-auth
  - cors
  - expression
  - key-auth
  - log
  - oauth2
  - proxy
  - rate-limit
pipelines:
  default:
    apiEndpoints:
      - api
    policies:
      # Uncomment `key-auth:` when instructed to in the Getting Started guide.
      # - key-auth:
      - proxy:
          - action:
              serviceEndpoint: httpbin
              changeOrigin: true
  customPipeline:
    apiEndpoints:
      - localApi
    policies:
      - proxy:
          - action:
              serviceEndpoint: services
              changeOrigin: true
```
The IP 192.168.99.100 is the docker-machine IP.
If I run the stack using the above gateway.config file, everything works fine.
However, if I replace
```yaml
services:
  urls:
    - 'http://192.168.99.100:3000/serviceonerequest'
    - 'http://192.168.99.100:3000/servicetwo'
```
with
```yaml
services:
  urls:
    - 'http://services_firstService:3000/serviceonerequest'
    - 'http://services_firstService:3000/servicetwo'
```
I get a bad gateway!
This Docker swarm runs an overlay network "webnet". So, according to this link, I should be able to use the service name as the hostname in the gateway.config file.
In the working case above, using the IP makes the services available 'outside' of the gateway, which I do not want.
What is wrong?
So I have found where I was going wrong! Look at the compose file above: the service name is firstService (with a capital S!).
Now, if I used
```yaml
urls:
  - 'http://services_firstService:3000/serviceonerequest'
  - 'http://services_firstService:3000/servicetwo'
```
the API gateway would keep looking for services_firstservice (note the lowercase 's'), which obviously is not present in the overlay network!
Now I have changed the names to lowercase and it works just as expected!
Here is the new docker-compose file:
version: "3"
services:
firstservice:
image: chaitanyaw/firstrepository:first
deploy:
replicas: 3
restart_policy:
condition: on-failure
networks:
- webnet
apigateway:
image: firstgateway:latest
deploy:
restart_policy:
condition: on-failure
ports:
- '80:80'
networks:
- webnet
visualizer:
image: dockersamples/visualizer:latest
ports:
- 8080:8080
volumes:
- /var/run/docker.sock:/var/run/docker.sock
deploy:
placement:
constraints: [node.role == manager]
networks:
- webnet
networks:
webnet:
Also, here is the gateway.config file:
```yaml
http:
  port: 80
admin:
  port: 9876
  hostname: localhost
apiEndpoints:
  api:
    host: 192.168.99.100
    paths: '/ip'
  localApi:
    host: 192.168.99.100
    paths: '/'
serviceEndpoints:
  httpbin:
    url: 'https://httpbin.org'
  services:
    urls:
      - 'http://services_firstservice:3000/serviceonerequest'
      - 'http://services_firstservice:3000/servicetwo'
policies:
  - basic-auth
  - cors
  - expression
  - key-auth
  - log
  - oauth2
  - proxy
  - rate-limit
pipelines:
  default:
    apiEndpoints:
      - api
    policies:
      # Uncomment `key-auth:` when instructed to in the Getting Started guide.
      # - key-auth:
      - proxy:
          - action:
              serviceEndpoint: httpbin
              changeOrigin: true
  customPipeline:
    apiEndpoints:
      - localApi
    policies:
      - proxy:
          - action:
              serviceEndpoint: services
              changeOrigin: true
```
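A quick way to double-check the exact names a stack registers (assuming it was deployed as `services`, which the `services_` prefix suggests) is to list the stack's services and networks from the host; the NAME column shows the `<stack>_<service>` names that are resolvable on the overlay network:

```shell
# Exact service names as registered by the stack
docker stack services services

# The stack-scoped network gets the same prefix
docker network ls --filter name=webnet
```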
From a Gateway perspective, there's nothing wrong with the configuration. As long as the DNS name can be resolved correctly, the request should go through with no problem.
What I would try to do is log in to the Gateway container and try to ping the two services. If something goes wrong, then most likely the docker-compose.yml file has some problem.
I'd try to remove the networks key, for example. The network is implicitly created, so you do not really need that part.
Also, I do not think that the DNS name for the service is services_firstService; I'd try with firstService directly.
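Something along these lines can be used to test name resolution from inside the gateway container (the container ID is whatever `docker ps` shows for the gateway task on that node; if ping is not available in the image, `getent hosts <name>` works as well):

```shell
# Find the gateway task's container on the node where it runs
docker ps --filter name=apigateway

# Check that the service names resolve on the overlay network
docker exec -it <gateway-container-id> ping -c 3 firstservice
docker exec -it <gateway-container-id> ping -c 3 services_firstservice
```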
Cheers,
V.
I'm quite new to the Docker world.
I have a local VirtualBox setup:
vm1 = swarm manager (mysql, visualizer), IP: 192.168.99.100
vm2 = wordpress service, IP: 192.168.99.101
I can reach the application on both IPs (.100/.101). But I would also like to use localhost, in order to port-forward localhost to the NAT network, since the 192.168.99.0 subnet is host-only.
In VirtualBox I have port forwarding set up like this for the NAT interface on the machine where Apache runs:
HOST PORT 8888 / GUEST PORT 8888
Currently the YAML looks like this:
```yaml
version: '3.4'
services:
  wordpress:
    image: wordpress
    depends_on:
      - mysql
      - wordpress
    deploy:
      placement:
        constraints: [node.labels.application==true]
      mode: replicated
      replicas: 1
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    environment:
      WORDPRESS_DB_PASSWORD: "12345"
    networks:
      - wordpress_net
  mysql:
    image: mysql:latest
    volumes:
      - "/mnt/sda1/var/lib/docker/volumes/mysql_data/_data:/var/lib/mysql"
    deploy:
      placement:
        constraints: [node.role == manager]
    environment:
      MYSQL_ROOT_PASSWORD: "12345"
    networks:
      - wordpress_net
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - wordpress_net
networks:
  wordpress_net:
```
How can I attach the eth0 interface to the container, so that both the swarm network and the NAT-ed network are reachable?
I was trying something like this, but without success:
```yaml
services:
  wordpress:
    image: wordpress
    depends_on:
      - mysql
      - wordpress
    deploy:
      placement:
        constraints: [node.labels.application==true]
      mode: replicated
      replicas: 1
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
    ports:
      - target: 80
        published: 80
        protocol: tcp
        mode: ingress
      - target: 80
        published: 8888
        protocol: tcp
        mode: host
    environment:
      WORDPRESS_DB_PASSWORD: "12345"
    networks:
      - wordpress_net
```
Thanks!
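For what it's worth, a port published in the default ingress mode is bound by the swarm routing mesh on every node and on all of that node's interfaces, including the NAT-ed eth0. So one option, sketched below under the assumption that the goal is simply to reach WordPress through the VirtualBox forward of host port 8888, is to publish 8888 via the routing mesh and point the VirtualBox rule at guest port 8888 (or keep the existing 80 publication and forward host 8888 to guest 80 instead):

```yaml
services:
  wordpress:
    # ...rest of the service definition as above...
    ports:
      # Routing mesh: reachable on every node IP and interface, so the
      # VirtualBox NAT rule host 8888 -> guest 8888 lands here as well
      - target: 80
        published: 8888
        protocol: tcp
        mode: ingress
```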