Traefik Version: 1.6.4
My company uses Docker Swarm to present applications as services, using Traefik for routing. All has been working fine so far, but we're having trouble implementing backend health checks in Traefik.
All of our applications expose a health check endpoint, which works fine and returns 200 when hit with a simple curl or from a web browser. We've applied Docker labels to our swarm services to reflect these health checks. As an example:
traefik.backend.healthcheck.path=/health
traefik.backend.healthcheck.interval=30s
The Traefik service logs report the following:
{"level":"warning","msg":"Health check still failing. Backend: \"backend-for-my-app\" URL: \"https://10.0.0.x:8080\" Reason: received non-200 status code: 406","time":"2018-07-10T19:41:25Z"}
Within the app containers we have Apache running with ModSecurity. ModSecurity is blocking the request because the Host header is a numeric IP address. I shelled into the Traefik container and ran a couple of curls against the app container to test:
curl -k https://10.0.0.x:8080/health <-- ModSecurity blocks this, returns a 406
curl -k -H "Host: myapp.company.com" https://10.0.0.x:8080/health <-- works fine, returns a 200
TL;DR: I need a way to set a Host header for the Traefik backend health check. I don't see a way of doing this in the Traefik docs. Has anyone run into this issue and/or know of a solution?
In Traefik 1.7, there is an option to add custom headers and to define a host header for the backend health check:
https://docs.traefik.io/v1.7/configuration/backends/docker/
For example traefik.backend.healthcheck.hostname=foobar.com
Please note that 1.7 is still in RC, but you can test it out to see if it resolves your issue. If you need this in production, you will have to wait for 1.7 to reach stable status.
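For reference, a minimal sketch of how that label could sit alongside the existing ones in a swarm stack file (the service name, image, and hostname are placeholders; assumes a Traefik 1.7 image):

my-app:
  image: myorg/my-app:latest
  deploy:
    labels:
      - "traefik.backend.healthcheck.path=/health"
      - "traefik.backend.healthcheck.interval=30s"
      # 1.7+: Host header to send with the health check request
      - "traefik.backend.healthcheck.hostname=myapp.company.com"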
Related
I use Traefik as a proxy and also have a React frontend and a FastAPI backend. My backend is publicly exposed. The problem is that the Uvicorn server redirects from the non-trailing-slash URL to the URL with a slash (https://api.mydomain.com/posts -> http://api.mydomain.com/posts/), but it doesn't work for SSL, so I'm getting errors in the frontend about CORS and mixed content. Based on the topic FastAPI redirection for trailing slash returns non-ssl link, I added --forwarded-allow-ips="*" to the Uvicorn server and now the SSL redirection works, but as I understand it, that's not secure. I tried --forwarded-allow-ips="mydomain.com" but it doesn't work; I have no idea why, as "mydomain.com" resolves to the IP of the server and therefore of my proxy. I assume that's because my API sees the proxy's IP from the Docker network, but I don't know how to solve this.
If you're positive that the Uvicorn/Gunicorn server is only ever accessible via an internal IP (is completely firewalled from the external network such that all connections come from a trusted proxy [ref]), then it would be okay to use --forwarded-allow-ips="*".
However, I believe the answer to your original question would be to inspect your Docker container(s) and grab their IPs. Since traefik is routing/proxying requests, you probably just need to grab the traefik container ID and run an inspect:
~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ids_here traefik:v2.3 "/entrypoint.sh --pr…" 1 second ago Up 1 second 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp traefik
~$ docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ids_here
172.18.0.1
Try whitelisting that IP.
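As a rough sketch of where that IP would go (the service, image, and module names are hypothetical; --proxy-headers and --forwarded-allow-ips are standard Uvicorn flags):

api:
  image: myorg/fastapi-app
  # only trust X-Forwarded-* headers coming from the Traefik container's IP
  command: uvicorn app.main:app --host 0.0.0.0 --port 8000 --proxy-headers --forwarded-allow-ips="172.18.0.1"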
From what I've read, it is not possible to whitelist a subnet inside --forwarded-allow-ips="x". If your server restarts, your containers' internal IPs are likely to change. A hack-ish fix for this is to force the container start order using depends_on; the depends_on container will start first and get the lower/first IP assignment, as in the sketch below.
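A minimal compose sketch of that ordering hack (service and image names are hypothetical):

services:
  traefik:
    image: traefik:v2.3
  api:
    image: myorg/fastapi-app
    # traefik starts first, so it should keep the first/lower IP on the network
    depends_on:
      - traefik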
Whitelisting a subnet is possible in Docker with Traefik's ipWhiteList middleware, however, like this [ref]:
labels:
  - "traefik.http.middlewares.testIPwhitelist.ipwhitelist.sourcerange=127.0.0.1/32, 192.168.1.7"
  - "traefik.http.middlewares.testIPwhitelist.ipwhitelist.ipstrategy.depth=2"
So I have a setup with a React service running as a docker-compose service, on a network in that compose file. For that React service I use http-proxy-middleware so that I can use relative endpoints (/api/... instead of localhost:xxxx/api/...) both in development and in production, and also because one of the libraries I depend on requires it (for the same reason).
I also have a Python Flask backend that I want to run on the localhost network, to avoid restarting the entire docker-compose stack on every change.
Currently, the proxy (as expected, I suppose) gives an "ECONNREFUSED" error when used, as it cannot connect to the backend.
Does anyone have an idea of how I could get the proxy to be able to access the backend without having to run the backend in the docker-compose?
Thanks in advance, Vidar
So I finally got it working, with help from #Hikash, by setting my frontend proxy to connect to the host through the IP I get from ip -4 addr show docker0 | grep -Po 'inet \K[\d.]+'.
I am not sure if this is an nginx config issue, a traefik config issue, or a general docker networking issue.
I assume there is a simple setting somewhere that will make it possible.
I have etopian/alpine-php-wordpress running great behind traefik.
Simply Static is a WordPress plugin that crawls the site and adapts the results into a static site with relative paths. To do that, WordPress needs to be able to "make requests to itself", and the Simply Static Diagnostics page gives me a red X because it can't.
I tried some command-line wgets from within the container:
bash-4.3# wget http://edit.example.com
Connecting to edit.example.com (172.24.x.y:80)
wget: error getting response: Invalid argument
bash-4.3# wget https://edit.example.com
Connecting to edit.example.com (172.24.x.y:443)
wget: can't connect to remote host (172.24.x.y): Connection refused
bash-4.3# wget https://edit.example.com:80
Connecting to edit.example.com:80 (172.24.x.y:80)
wget: can't execute 'ssl_helper': No such file or directory
wget: error getting response: Connection reset by peer
I also tried adding an extra host to docker-compose:
extra_hosts:
  - "edit.example.com:{{actual.ip.add.ress}}"
It still fails, but the IP address shown in the Simply Static Diagnostics changes to the actual external IP of the machine (hardcoded where I put {{actual.ip.add.ress}} above).
These results make me lean towards an nginx config fix, as it seems that edit.example.com is correctly resolved to the internal (or external) IP of the Docker container, and nginx is not allowing the connection.
Any help?
I found a solution that feels quite hackish:
extra_hosts:
  - "edit.example.com:172.trae.fik.ip"
Using the full internal ip of the traefik container on the docker network.
I think the problem was that Simply Static is trying to make requests via HTTPS, which is usually handled by Traefik, and the internal nginx within the WordPress container is not listening on 443... So sending those requests around to the front door lets Traefik handle the SSL side, and the requests work.
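For context, this is roughly what the service definition looks like with that entry (the names and the IP are placeholders taken from above; the IP is whatever the Traefik container has on the shared Docker network, so it can change when containers are recreated):

wordpress:
  image: etopian/alpine-php-wordpress
  extra_hosts:
    # point the site's own hostname back at the Traefik container so Traefik terminates TLS
    - "edit.example.com:172.trae.fik.ip"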
Curious if there are other solutions...
My app integrates with a web service that supports a proxy server. So I need to have integration tests that prove that works.
So I wanted to use Docker to create a local proxy server against which I can run real integration tests, to verify that my web service can be called through the proxy interface without errors.
So I tried https://github.com/jwilder/nginx-proxy
I started up the container with:
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
When I use it, I get a 503 error: 503 Service Temporarily Unavailable
Am I misunderstanding what this proxy does?
Although this has been resolved in the comments, I'll try to answer the following question:
Am I misunderstanding what this proxy does?
Yes. What your project requires is a forward proxy, and what you are trying to use is a reverse proxy. This will become clearer once you go through the top-rated answers at Difference between proxy server and reverse proxy server.
TL;DR: a forward proxy sits in front of clients and relays their outbound requests to arbitrary servers; a reverse proxy sits in front of your servers and relays incoming requests to them.
There are many forward-proxy packages available, and you could choose any one of them for your project; see the sketch after this list for how one might be run locally. Some of them are:
Squid
Polipo
Apache Traffic Server
Privoxy
TinyProxy
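For example, a rough sketch of standing up one of these as a local forward proxy for the integration tests (the image name/tag is an assumption, so check Docker Hub for whichever proxy you pick; 3128 is Squid's default proxy port):

# run a Squid forward proxy locally
docker run -d --name forward-proxy -p 3128:3128 ubuntu/squid
# then point the client/tests at it, e.g.:
curl -x http://localhost:3128 https://example.com/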
I've been working on using Rancher to manage our dashboard applications. Part of this has involved exposing multiple Kibana containers from the same port, and one Kibana 3 container exposing on port 80.
I therefore want to send requests on specific ports (5602, 5603, 5604) to specific containers, so I set up the following docker-compose.yml config:
kibana:
  image: rancher/load-balancer-service
  ports:
    - 5602:5602
    - 5603:5603
    - 5604:5604
  links:
    - kibana3:kibana3
    - kibana4-logging:kibana4-logging
    - kibana4-metrics:kibana4-metrics
  labels:
    io.rancher.loadbalancer.target.kibana3: 5602=80
    io.rancher.loadbalancer.target.kibana4-logging: 5603=5601
    io.rancher.loadbalancer.target.kibana4-metrics: 5604=5601
Everything works as expected, but I get sporadic 503s. When I go into the container and look at the haproxy.cfg, I see:
frontend d898fb95-ec51-4c73-bdaa-cc0435d8572a_5603_frontend
    bind *:5603
    mode http
    default_backend d898fb95-ec51-4c73-bdaa-cc0435d8572a_5603_2_backend

backend d898fb95-ec51-4c73-bdaa-cc0435d8572a_5603_2_backend
    mode http
    timeout check 2000
    option httpchk GET /status HTTP/1.1
    server cbc23ed9-a13a-4546-9001-a82220221513 10.42.60.179:5603 check port 5601 inter 2000 rise 2 fall 3
    server 851bdb7d-1f6b-4f61-b454-1e910d5d1490 10.42.113.167:5603
    server 215403bb-8cbb-4ff0-b868-6586a8941267 10.42.85.7:5601
The IPs listed are all three Kibana containers. The first container has a health check on it, but none of the others do (kibana3/kibana4.1 don't have a status endpoint). My understanding of the docker-compose config is that it should have only one server per backend, but all three appear to be listed. I assume this is partly the cause of the sporadic 503s, and removing the extra servers manually and restarting the haproxy service does seem to solve the problem.
Am I configuring the load balancer incorrectly, or is this worth raising as a GitHub issue with Rancher?
I posted on the Rancher forums, as that was suggested by Rancher Labs on Twitter: https://forums.rancher.com/t/load-balancer-sporadic-503s-with-multiple-port-bindings/2358
Someone from Rancher posted a link to a GitHub issue that was similar to what I was experiencing: https://github.com/rancher/rancher/issues/2475
In summary, the load balancers will rotate through all matching backends. There is a workaround involving "dummy" domains, which I've confirmed does work with my configuration, even if it is slightly inelegant.
labels:
  # Create a rule that forces all traffic to redis at port 3000 to have a hostname of bogus.com
  # This eliminates any traffic from port 3000 to be directed to redis
  io.rancher.loadbalancer.target.conf/redis: bogus.com:3000
  # Create a rule that forces all traffic to api at port 6379 to have a hostname of bogus.com
  # This eliminates any traffic from port 6379 to be directed to api
  io.rancher.loadbalancer.target.conf/api: bogus.com:6379
(^^ Copied from rancher github issue, not my workaround)
I'm going to see how easy it would be to route via port, and raise a PR/GitHub issue, as I think it's a valid use case for an LB in this scenario.
Make sure that you are using the port initially exposed on the Docker container. For some reason, if you bind it to a different port, HAProxy fails to work. If you are using a container from Docker Hub that uses a port already taken on your system, you may have to rebuild that container to use a different port by routing it through a proxy like nginx.