I use Traefik on my server to load-balance my apps, with a Docker backend.
I started Rancher (1.6.14) through Docker to launch other apps easily.
I managed to access Rancher through Traefik. But when I start an app through Rancher, the containers don't have an IP, so Traefik can't contact them. In the Traefik backend I see http://:8000 for my app, with this stack:
docker-compose.yml:
version: '2'
services:
  app:
    image: mykiwi/ttrss
    labels:
      traefik.port: 8000
      traefik.protocol: http
      traefik.frontend.entryPoints: https
      traefik.frontend.rule: Host:foo.bar
  database:
    image: postgres:10-alpine
    environment:
      - POSTGRES_USER=ttrss
      - POSTGRES_PASSWORD=ttrss
    volumes:
      - database:/var/lib/postgresql/data
volumes:
  database: ~
Any idea why, and how to fix this?
I also tried to add this (inspired by the wekan config):
rancher-compose.yml:
version: '2'
services:
  app:
    scale: 1
    retain_ip: true
    start_on_create: true
  database:
    scale: 1
    start_on_create: true
Same result.
When containers are launched via Rancher, they are part of Rancher's "managed" network. The containers do get an IP address, but it's from a different network (default: 10.42.0.0/16), not the Docker bridge network (172.17.0.0/16).
Rancher also has a load balancer service that can take care of this need. Please check https://rancher.com/docs/rancher/v1.6/en/cattle/adding-load-balancers/#adding-a-load-balancer-in-the-ui for more information.
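For completeness, a rough sketch of what such a balancer could look like in compose form; the image name and the lb_config/port_rules keys here are written from memory for Rancher 1.6 and should be verified against the linked docs:

```yaml
# docker-compose.yml -- hypothetical sketch, not a verified config
lb:
  image: rancher/lb-service-haproxy
  ports:
    - 80:80

# rancher-compose.yml -- route incoming port 80 to the app service's port 8000
lb:
  scale: 1
  lb_config:
    port_rules:
      - source_port: 80
        target_port: 8000
        service: app
```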
I have 5 microservices which I intend to deploy over a docker swarm cluster consisting of 3 nodes.
I also have a postgresql service running on one of the 3 servers (not dockerized, but installed directly on the server). I assigned the "host" network to all of the services, but they simply refuse to start, with no logs being generated.
version: '3.8'
services:
  frontend-client:
    image: xxx:10
    container_name: frontend
    restart: on-failure
    deploy:
      mode: replicated
      replicas: 3
    networks:
      - "host"
    ports:
      - "xxxx:3000"
networks:
  host:
    name: host
    external: true
I also tried starting a centos container on a server which does not have postgres installed, and with the host network assigned to it I was able to both ping the postgres host and telnet to the postgresql port.
Can someone please help me narrow down the issue, or point out what I might be missing?
Docker swarm doesn't currently support the "host" network_mode, so your best bet (and best practice) is to pass your postgresql host's IP address to the services that use it as an environment variable.
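For example, a minimal sketch of the environment-variable approach; POSTGRES_HOST and POSTGRES_PORT are hypothetical variable names, and 192.168.1.10 stands in for your postgresql server's address:

```yaml
version: '3.8'
services:
  frontend-client:
    image: xxx:10
    deploy:
      replicas: 3
    environment:
      - POSTGRES_HOST=192.168.1.10   # example IP of the host running postgres
      - POSTGRES_PORT=5432
    ports:
      - "3000:3000"
```

The application then reads these variables instead of relying on host networking to reach the database.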
If you are using docker-compose instead of docker swarm, you can set network_mode to host:
version: '3.8'
services:
  frontend-client:
    image: xxx:10
    container_name: frontend
    restart: on-failure
    deploy:
      mode: replicated
      replicas: 3
    network_mode: "host"
    ports:
      - "xxxx:3000"
Notice I've removed the networks part of your compose snippet and replaced it with network_mode.
I have a Docker Swarm web application that can deploy fine and can be reached from outside by a browser.
But it was not passing the client IP address through to the HTTP service.
So I decided to add a Traefik service to the Docker Compose file to expose the client IP to the HTTP service.
I use mode: host and driver: overlay for this reason.
The complete configuration is described in two Docker Compose files that I run in sequence.
First I run the docker stack deploy --compose-file docker-compose-dev.yml common command on the file:
version: "3.9"
services:
  traefik:
    image: traefik:v2.5
    networks:
      common:
    ports:
      - target: 80
        published: 80
        mode: host
      - target: 443
        published: 443
        mode: host
    command:
      - "--providers.docker.endpoint=unix:///var/run/docker.sock"
      - "--providers.docker.swarmMode=true"
      - "--providers.docker.exposedbydefault=false"
      - "--providers.docker.network=common"
      - "--entrypoints.web.address=:80"
      # Set a debug level custom log file
      - "--log.level=DEBUG"
      - "--log.filePath=/var/log/traefik.log"
      - "--accessLog.filePath=/var/log/access.log"
      # Enable the Traefik dashboard
      - "--api.dashboard=true"
    deploy:
      placement:
        constraints:
          - node.role == manager
      labels:
        # Expose the Traefik dashboard
        - "traefik.enable=true"
        - "traefik.http.routers.dashboard.service=api@internal"
        - "traefik.http.services.traefik.loadbalancer.server.port=888" # A port number required by Docker Swarm but not actually used
        - "traefik.http.routers.dashboard.rule=Host(`traefik.learnintouch.com`)"
        - "traefik.http.routers.traefik.entrypoints=web"
        # Basic HTTP authentication to secure the dashboard access
        - "traefik.http.routers.traefik.middlewares=traefik-auth"
        - "traefik.http.middlewares.traefik-auth.basicauth.users=stephane:$$apr1$$m72sBfSg$$7.NRvy75AZXAMtH3C2YTz/"
    volumes:
      # So that Traefik can listen to the Docker events
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "~/dev/docker/projects/common/volumes/logs/traefik.service.log:/var/log/traefik.log"
      - "~/dev/docker/projects/common/volumes/logs/traefik.access.log:/var/log/access.log"
networks:
  common:
    name: common
    driver: overlay
Then I run the docker stack deploy --compose-file docker-compose.yml www_learnintouch command on the file:
version: "3.9"
services:
  www:
    image: localhost:5000/www.learnintouch
    networks:
      common:
    volumes:
      - "~/dev/docker/projects/learnintouch/volumes/www.learnintouch/account/data:/usr/local/learnintouch/www/learnintouch.com/account/data"
      - "~/dev/docker/projects/learnintouch/volumes/www.learnintouch/account/backup:/usr/local/learnintouch/www/learnintouch.com/account/backup"
      - "~/dev/docker/projects/learnintouch/volumes/engine:/usr/local/learnintouch/engine"
      - "~/dev/docker/projects/common/volumes/letsencrypt/certbot/conf/live/thalasoft.com:/usr/local/learnintouch/letsencrypt"
      - "~/dev/docker/projects/common/volumes/logs:/usr/local/apache/logs"
      - "~/dev/docker/projects/common/volumes/logs:/usr/local/learnintouch/logs"
    user: "${CURRENT_UID}:${CURRENT_GID}"
    deploy:
      replicas: 1
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
        window: 10s
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.www.rule=Host(`dev.learnintouch.com`)"
        - "traefik.http.routers.www.entrypoints=web"
        - "traefik.http.services.www.loadbalancer.server.port=80"
    healthcheck:
      test: curl --fail http://127.0.0.1:80/engine/ping.php || exit 1
      interval: 10s
      timeout: 3s
      retries: 3
networks:
  common:
    external: true
    name: common
Here are the networks:
stephane#stephane-pc:~$ docker network ls
NETWORK ID NAME DRIVER SCOPE
6beaf0c3a518 bridge bridge local
ouffqdmdesuy common overlay swarm
17et43c5tuf0 docker-registry_default overlay swarm
1ae825c8c821 docker_gwbridge bridge local
7e6b4b7733ca host host local
2ui8s1yomngt ingress overlay swarm
460aad21ada9 none null local
tc846a14ftz5 verdaccio overlay swarm
The docker ps command shows that all containers are healthy.
But a request to http://dev.learnintouch.com/ responds with a Bad Gateway error most of the time; only rarely does it succeed and the web application display fine.
As a side note, I would like any unhealthy service to be restarted and then picked up again by Traefik, just like Docker Swarm restarts unhealthy services.
The service log:
{"level":"debug","msg":"Configuration received from provider docker: {\"http\":{\"routers\":{\"dashboard\":{\"service\":\"api@internal\",\"rule\":\"Host(`traefik.learnintouch.com`)\"},\"nodejs\":{\"entryPoints\":[\"web\"],\"service\":\"nodejs\",\"rule\":\"Host(`dev.learnintouch.com`)\"},\"traefik\":{\"entryPoints\":[\"web\"],\"middlewares\":[\"traefik-auth\"],\"service\":\"traefik\",\"rule\":\"Host(`common-reverse-proxy`)\"},\"www\":{\"entryPoints\":[\"web\"],\"service\":\"www\",\"rule\":\"Host(`dev.learnintouch.com`)\"}},\"services\":{\"nodejs\":{\"loadBalancer\":{\"servers\":[{\"url\":\"http://10.0.14.17:9001\"}],\"passHostHeader\":true}},\"traefik\":{\"loadBalancer\":{\"servers\":[{\"url\":\"http://10.0.14.8:888\"}],\"passHostHeader\":true}},\"www\":{\"loadBalancer\":{\"servers\":[{\"url\":\"http://10.0.14.18:80\"}],\"passHostHeader\":true}}},\"middlewares\":{\"traefik-auth\":{\"basicAuth\":{\"users\":[\"stephane:$apr1$m72sBfSg$7.NRvy75AZXAMtH3C2YTz/\"]}}}},\"tcp\":{},\"udp\":{}}","providerName":"docker","time":"2021-07-04T10:25:01Z"}
{"level":"info","msg":"Skipping same configuration","providerName":"docker","time":"2021-07-04T10:25:01Z"}
I also tried to have Docker Swarm doing the load balancing with adding the - "traefik.docker.lbswarm=true" property to my service but the Bad Gateway error remained.
I also restarted the Swarm manager:
docker swarm leave --force
docker swarm init
but the Bad Gateway error remained.
I also added the two labels:
- "traefik.backend.loadbalancer.sticky=true"
- "traefik.backend.loadbalancer.stickiness=true"
but the Bad Gateway error remained.
It feels like Traefik hits the web service before it has had a chance to become ready. Is there any way to tell Traefik to wait a given number of seconds before hitting the web service?
UPDATE: I found, not a solution, but a workaround for the issue, by splitting the first common stack file above into two files, one of them dedicated to the traefik stack. I could then start the 3 stacks in the following order: common, www_learnintouch and traefik.
The important thing was to start the traefik stack after the others. If I then had to remove and restart the www_learnintouch stack, for example, I had to follow that by removing and restarting the traefik stack.
Likewise, if I removed the www_learnintouch container with a docker rm -f CONTAINER_ID command, I also needed to remove and restart the traefik stack.
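In command form, the workaround amounts to something along these lines (the names of the split files are hypothetical):

```yaml
# Deploy in this order: common first, traefik last
# docker stack deploy --compose-file docker-compose-common.yml common
# docker stack deploy --compose-file docker-compose.yml www_learnintouch
# docker stack deploy --compose-file docker-compose-traefik.yml traefik
#
# After removing/redeploying www_learnintouch, redeploy traefik as well:
# docker stack rm traefik
# docker stack deploy --compose-file docker-compose-traefik.yml traefik
```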
Unable to connect to containers running on separate docker hosts
I've got 2 docker Tomcat containers running on 2 different Ubuntu VMs. System-A runs a web service and System-B runs a database. I haven't been able to figure out how to connect the application running on System-A to the database running on System-B. When I run the database on System-A, the application (also running on System-A) can connect to it. I'm using docker-compose to set up the network, which works fine when both containers are running on the same VM. I've exec'd into the application container on System-A and looked at its /etc/hosts file, and I think what's missing is the IP address of System-B.
services:
  db:
    image: mydb
    hostname: mydbName
    ports:
      - "8012:8012"
    networks:
      data:
        aliases:
          - mydbName
  api:
    image: myApi
    hostname: myApiName
    ports:
      - "8810:8810"
    networks:
      data:
networks:
  data:
You would configure this exactly the same way you would if Docker weren't involved: configure the Tomcat instance with the DNS name or IP address of the other server. You need to make sure the service is published outside of Docker space using a ports: directive.
On server-a.example.com you could run this docker-compose.yml file:
version: '3'
services:
  api:
    image: myApi
    ports:
      - "8810:8810"
    environment:
      DATABASE_URL: "http://server-b.example.com:8012"
And on server-b.example.com:
version: '3'
services:
  db:
    image: mydb
    ports:
      - "8012:8012"
In principle it would be possible to set up an overlay network connecting the two hosts, but this is a significantly more complicated setup.
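For reference, a rough outline of that overlay setup, assuming both hosts can join a swarm (the <token> placeholder stays as printed by the manager):

```yaml
# On server-a.example.com (becomes the manager):
#   docker swarm init
# On server-b.example.com, using the join token printed by the manager:
#   docker swarm join --token <token> server-a.example.com:2377
# Back on the manager, create an attachable overlay network:
#   docker network create --driver overlay --attachable shared-net
# Then run each container attached to it, e.g.:
#   docker run -d --network shared-net --name db mydb
```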
(You definitely don't want to use docker exec to modify /etc/hosts in a container: you'll have to repeat this step every time you delete and recreate the container, and manually maintaining hosts files is tedious and error-prone, particularly if you're moving containers between hosts. Consul could work as a service-discovery system that provides a DNS service.)
I'm trying to start Concourse CI with a custom docker-compose file:
version: '2'
services:
  concourse-web:
    image: concourse/concourse
    container_name: concourse-web
    command: web
    network_mode: host
    volumes: ["./keys/web:/concourse-keys"]
    environment:
      CONCOURSE_BASIC_AUTH_USERNAME: concourse
      CONCOURSE_BASIC_AUTH_PASSWORD: changeme
      CONCOURSE_EXTERNAL_URL: http://my.internal.ip:8092
      CONCOURSE_BIND_PORT: 8092
      CONCOURSE_POSTGRES_DATA_SOURCE: |-
        postgres://odoo:odoo@localhost:5432/concourse?sslmode=disable
  concourse-worker:
    image: concourse/concourse
    container_name: concourse-worker
    network_mode: host
    privileged: true
    command: worker
    volumes: ["./keys/worker:/concourse-keys"]
    environment:
      CONCOURSE_BIND_PORT: 8092
And the worker can't connect to the web part.
Can you please help me with this?
P.S. PostgreSQL is started on port 5432 on the host machine, and the connection to it is fine.
Worker errors:
{"timestamp":"1487953300.400844336","source":"tsa","message":"tsa.connection.channel.forward-worker.register.failed-to-fetch-containers","log_level":2,"data":{"error":"invalid character '\u003c' looking for beginning of value","remote":"127.0.0.1:57960","session":"4.1.1.582"}}
You need to set CONCOURSE_TSA_HOST: concourse-web as an environment variable on the worker so that it knows which host to connect to. Right now it is trying to connect to the web part on localhost, which is incorrect.
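Following that suggestion, the worker's section might look like this (a sketch only; CONCOURSE_TSA_HOST tells the worker where to reach the web node's TSA component):

```yaml
concourse-worker:
  image: concourse/concourse
  container_name: concourse-worker
  network_mode: host
  privileged: true
  command: worker
  volumes: ["./keys/worker:/concourse-keys"]
  environment:
    CONCOURSE_BIND_PORT: 8092
    # As suggested above; note that with network_mode: host the service name
    # may not resolve, in which case use the web host's IP address instead.
    CONCOURSE_TSA_HOST: concourse-web
```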
Another issue with your configuration is that you're trying to connect to Postgres through localhost:
CONCOURSE_POSTGRES_DATA_SOURCE: |-
postgres://odoo:odoo@localhost:5432/concourse?sslmode=disable
but your Postgres instance is running on the host machine. The host machine is not reachable as localhost inside a docker container, since a docker container has its own private network. It should instead be:
CONCOURSE_POSTGRES_DATA_SOURCE: |-
postgres://odoo:odoo@my.internal.ip:5432/concourse?sslmode=disable
The value
|-
postgres://odoo:odoo@localhost:5432/concourse?sslmode=disable
should have that entire |- prefix removed. Replace it with:
CONCOURSE_POSTGRES_DATA_SOURCE: postgres://odoo:odoo@localhost:5432/concourse?sslmode=disable
I want to use docker-compose with Docker Swarm (I use docker version 1.13 and compose with version: '3' syntax).
Is each service reachable as a "single" service to the other services? Here is an simplified example to be clear:
version: '3'
services:
  nodejs:
    image: mynodeapp
    container_name: my_app
    ports:
      - "80:8080"
    environment:
      - REDIS_HOST=my_redis
      - REDIS_PORT=6379
    deploy:
      mode: replicated
      replicas: 3
    networks:
      - my_net
    command: npm start
  redis:
    image: redis
    container_name: my_redis
    restart: always
    expose:
      - 6379
    deploy:
      mode: replicated
      replicas: 2
    networks:
      - my_net
networks:
  my_net:
    external: true
Let's say I have 3 VMs configured as a swarm. So there is one nodejs container running on each VM, but only two redis containers.
On the VM where no redis is running: will my nodejs container know about the redis?
Additional question:
When I set replicas: 4 for my redis, I will have two redis containers on one VM: will this be a problem for my nodejs app?
Last question:
When I set replicas: 4 for my nodeapp: will this even work, given that I have now exposed port 80 twice?
The services have to be stateless. Databases are stateful, so for them it is necessary to configure cluster mode in each instance.
In the same order you asked:
One service does not see another service as a set of replicas. Nodejs will see a single Redis, with one IP, no matter on which nodes its replicas are located. That's the beauty of Swarm.
Yes, you can have Nodejs on one node and Redis on another node and they will be visible to each other. That's what the manager does: it makes the containers "believe" they are running on the same machine.
Also, you can have many replicas on the same node without a problem; they will be perceived as a whole. In fact, they use the same volume.
And last, as an implication of (1), there will be no problem, because you are not actually exposing port 80 twice. Even with 20 replicas, you have a single entry point to your service: a particular IP:PORT address.
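As a small illustration of point (1): in swarm mode, Swarm's built-in DNS resolves the service name to a virtual IP that balances across the replicas; container_name is ignored by docker stack deploy, so the app should point at the service name rather than my_redis. A sketch under that assumption:

```yaml
services:
  nodejs:
    image: mynodeapp
    environment:
      - REDIS_HOST=redis   # the service name resolves to a VIP spanning both replicas
      - REDIS_PORT=6379
```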