Sporadic 503s from specified ports - docker

I've been working on using Rancher to manage our dashboard applications. Part of this has involved exposing multiple Kibana containers, each on its own port, plus one Kibana 3 container that serves on port 80.
I therefore want to send requests on specific ports (5602, 5603, 5604) to specific containers, so I set up the following docker-compose.yml config:
kibana:
  image: rancher/load-balancer-service
  ports:
    - 5602:5602
    - 5603:5603
    - 5604:5604
  links:
    - kibana3:kibana3
    - kibana4-logging:kibana4-logging
    - kibana4-metrics:kibana4-metrics
  labels:
    io.rancher.loadbalancer.target.kibana3: 5602=80
    io.rancher.loadbalancer.target.kibana4-logging: 5603=5601
    io.rancher.loadbalancer.target.kibana4-metrics: 5604=5601
Everything works as expected, but I get sporadic 503s. When I go into the container and look at the haproxy.cfg, I see:
frontend d898fb95-ec51-4c73-bdaa-cc0435d8572a_5603_frontend
    bind *:5603
    mode http
    default_backend d898fb95-ec51-4c73-bdaa-cc0435d8572a_5603_2_backend

backend d898fb95-ec51-4c73-bdaa-cc0435d8572a_5603_2_backend
    mode http
    timeout check 2000
    option httpchk GET /status HTTP/1.1
    server cbc23ed9-a13a-4546-9001-a82220221513 10.42.60.179:5603 check port 5601 inter 2000 rise 2 fall 3
    server 851bdb7d-1f6b-4f61-b454-1e910d5d1490 10.42.113.167:5603
    server 215403bb-8cbb-4ff0-b868-6586a8941267 10.42.85.7:5601
The IPs listed belong to all three Kibana containers: the first server has a health check on it, but none of the others do (kibana3/kibana4.1 don't have a /status endpoint). My understanding of the docker-compose config is that each backend should contain only one server, yet all three appear. I assume this is partly behind the sporadic 503s; removing the extra servers manually and restarting the haproxy service does seem to solve the problem.
Am I configuring the load balancer incorrectly, or is this worth raising as a GitHub issue with Rancher?

I posted on the Rancher forums, as suggested by Rancher Labs on Twitter: https://forums.rancher.com/t/load-balancer-sporadic-503s-with-multiple-port-bindings/2358
Someone from Rancher posted a link to a GitHub issue describing something similar to what I was experiencing: https://github.com/rancher/rancher/issues/2475
In summary, the load balancer will rotate through all matching backends. There is a workaround involving "dummy" domains, which I've confirmed works with my configuration, even if it is slightly inelegant:
labels:
  # Create a rule that forces all traffic to redis at port 3000 to have a hostname of bogus.com
  # This eliminates any traffic from port 3000 to be directed to redis
  io.rancher.loadbalancer.target.conf/redis: bogus.com:3000
  # Create a rule that forces all traffic to api at port 6379 to have a hostname of bogus.com
  # This eliminates any traffic from port 6379 to be directed to api
  io.rancher.loadbalancer.target.conf/api: bogus.com:6379
(^^ Copied from the Rancher GitHub issue; not my workaround.)
I'm going to see how easy it would be to route purely by port, and raise a PR/GitHub issue, as I think it's a valid use case for a load balancer in this scenario.

Make sure that you are using the port originally exposed by the Docker container. For some reason, if you bind it to a different port, HAProxy fails to work. If you are using a container from Docker Hub that uses a port already taken on your system, you may have to rebuild that container to use a different port by routing it through a proxy such as nginx, as sketched below.
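A rough sketch of that nginx-in-front idea (everything here is hypothetical and illustrative, not taken from the setup above): an app image that can only listen on an already-taken port gets an nginx container in front of it on a free port.

app:
  image: example/dashboard            # hypothetical image, hard-wired to listen on 8080
nginx-front:
  image: nginx
  links:
    - app:app
  ports:
    - 8081:80                         # the host/load balancer now targets 8081 instead
  volumes:
    # default.conf contains a server block that proxy_passes to http://app:8080
    - ./nginx.conf:/etc/nginx/conf.d/default.conf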

Related

Nginx Proxy Manager can't connect to Docker containers

My setup: I have a Raspberry Pi at home connected to my Fritzbox 6660 Cable over LAN. The Pi is running Docker with Portainer. While playing around and learning, I was able to deploy numerous different containers with different programs. Now I would like to be able to connect to those containers from outside my home network. In this example I will describe my problem with my Grafana container (but I tried other containers as well).
Currently running are Grafana, InfluxDB (to feed Grafana) and Nginx Proxy Manager.
I set up Nginx Proxy Manager with the Docker Compose file from its quick-start page:
version: '3'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
Once Nginx was running, I made sure that Grafana and Nginx were on the same Docker network (nginx_default in this case).
For my custom domain I signed up for a DuckDNS account and created the domain "http://example.duckdns.org".
I used DuckDNS's install instructions to configure the DynDNS settings in my Fritzbox:
Update-URL: http://www.duckdns.org/update?domains=example&token=xxxxxxx-680f-4c66-a982-60d7e2f56911&ip=
Domain name: example.duckdns.org
Username: none (as stated on the DuckDNS install page)
Password: xxxxxxxx-680f-4c66-a982-60d7e2f56911
Don't worry, the "xxxxxx" is actually different in my case.
Further, I enabled port forwarding to the static IP address of my Raspberry Pi on ports 80 and 443, since those are the ones Nginx needs.
Then I went to the Nginx Proxy Manager web UI on port 81 and set up a proxy host like so:
Domain names: grafana.example.duckdns.org (I also tried without "grafana" at the beginning, same result)
Scheme: http
Forward Hostname: the Raspberry Pi's IP
Forward Port: 3000, because that's where I can reach Grafana
I also enabled "Block common exploits" and websockets support. I know I should enable SSL, but won't for this example. (A sketch of the Grafana side of this setup follows below.)
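For reference, a minimal sketch of how the Grafana service could be attached to that shared network so Nginx Proxy Manager can forward to it by container name (the service name, image and external-network declaration are assumptions; only the nginx_default network and port 3000 come from the description above):

version: '3'
services:
  grafana:
    image: grafana/grafana
    container_name: grafana
    restart: unless-stopped
    networks:
      - nginx_default          # same network as Nginx Proxy Manager
networks:
  nginx_default:
    external: true             # reuse the network created by the NPM stack
# In the NPM proxy host, "Forward Hostname" could then be "grafana" and "Forward Port" 3000.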
Nginx Proxy Manager now says this proxy host is online, but I still can't connect; the browser reports a timeout.
I have had this Raspberry Pi for two weeks now and have spent more than one of them just trying to figure out how to reach it over the web. I even tried Traefik at some point, but also without success.
I have watched dozens of tutorials and reconstructed more than one documentation example. But every time those tutorials reach the point where they show their container's webpage from outside the home network, my browsers just give me "ERR_CONNECTION_TIMED_OUT".
I also tried No-IP and ddnss.
So please, if anyone has suggestions, I would highly appreciate them.
I am curious whether you could solve this problem, because I get a similar error and have tried every possible IP combination in Nginx. I can reach the "Congratulations! You've successfully started the Nginx Proxy Manager." page from outside, but the redirection to the Docker container does not work.
Regards

Bind incoming docker connection to specific hostname inside docker container

I'm trying to migrate some Webpack-based projects to run inside Docker containers and am having some issues configuring networking.
Our Webpack devServer is configured in the following way:
{
  host: 'dev.ng.com',
  port: 4000,
  compress: true,
  disableHostCheck: true
}
In the /etc/hosts file we have the following record:
127.0.0.1 dev.ng.com
and everything works fine.
When I ran it inside Docker I was getting an EADDRNOTAVAIL error, until I added the following lines to my docker-compose.yml:
extra_hosts:
  - "dev.ng.com:127.0.0.1"
But now my app inside the Docker container is not reachable from the host.
The relevant docker-compose.yml part is the following:
gui-client:
  image: "gui-client"
  ports:
    - "4000:4000"
  extra_hosts:
    - "dev.ng.com:127.0.0.1"
If I change host: 'dev.ng.com' to host: '0.0.0.0' in my Webpack config it works fine, but I would prefer not to change the Webpack config and to run it as is.
My knowledge of Docker networking internals is limited, but I guess that all inbound connections to the Docker container from the host should be redirected to dev.ng.com:4000, while now they are redirected to 0.0.0.0:4000. Can this be achieved?
Yes, 127.0.0.1 is normally reachable only from localhost itself. Containers behave as if they were virtual machines in this respect.
You need to configure it to listen everywhere, so very likely "dev.ng.com:0.0.0.0" is what you want in extra_hosts. Such things should be used carefully in normal circumstances, because usually we do not want to expose internal services to the internet. But here it only serves to make your configuration independent of the IP/netmask that Docker gives your container app.
Besides that, you need to forward incoming connections on the host to your container. This can be done with a
ports:
  - "0.0.0.0:4000:4000"
in your docker-compose.yml.
Possibly you will also want to make port 4000 of the host reachable from the external world; this can be done with your firewall rules.
In professional configurations there is typically some frontend (to provide encryption/security/load balancing), but if you only want to show your work to your boss, http://a.b.c.d:4000 is quite enough.
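Putting the two suggestions together, the gui-client service from the question might look roughly like this (a sketch following the answer's suggestion, assuming the Webpack config stays untouched with host: 'dev.ng.com'):

gui-client:
  image: "gui-client"
  ports:
    - "0.0.0.0:4000:4000"      # forward host port 4000 to the container
  extra_hosts:
    # dev.ng.com now resolves to 0.0.0.0 inside the container,
    # so the devServer binds to all interfaces instead of loopback
    - "dev.ng.com:0.0.0.0"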

Docker networking with Traefik: is publishing ports necessary in some cases?

I run Traefik as a proxy for many containers successfully.
I also run a Dockerized Prosody (XMPP server) with published ports (5222 and 5269) in the Traefik network.
As I want it to work with strict TLS and no STARTTLS, I use a HostSNI rule and the tcp instead of the http protocol in the Traefik labels. I pass TLS through because it is terminated in the XMPP instance itself (not in Traefik); I place the certificates there. It works.
Soon I realized I might have done it all wrong: since I want to compare it with a second XMPP service (ejabberd) behind Traefik, a port conflict (5222 and 5269 already in use) makes me wonder why I mapped the ports at all on a bridged network with Traefik in the first place.
So I disabled them for Prosody (no ports, no expose in Prosody's docker-compose file). But now it can't be reached anymore, although I have set the labels and the service for the Traefik rules accordingly. (Portainer > Inspect > NetworkSettings > Ports shows the ports are recognized and available in the image.)
First conclusion: I think traffic is not going over Traefik at all (as local DNS points to the Docker host with the published ports). But if I stop Traefik, XMPP clients won't connect via the published ports either, so it must still be going through the Traefik proxy, right? How can one actually check/probe that? Why do I need the ports exposed/published in Docker at all (I tried every combination) in a use case that is supposed to work solely through Traefik?
I decided to leave Prosody with strict TLS (as is, including the mapped ports, which still makes no sense to me) and to enable STARTTLS with ejabberd, adding new entrypoints for ports 5221 and 5268 and mapping them in Traefik. But connections still fail; what am I missing in the concept? (A standalone ejabberd container works well.)
labels:
  - "traefik.enable=true"
  - "traefik.tcp.routers.xmpp2-c2c.entrypoints=xmpp2"
  - "traefik.tcp.routers.xmpp2-c2c.rule=Host(`local.lan`)"
  #- "traefik.tcp.routers.xmpp2-c2c.rule=HostSNI(`local.lan`)"
  #- "traefik.tcp.routers.xmpp2-c2c.tls=true"
  #- "traefik.tcp.routers.xmpp2-c2c.tls.passthrough=true"
  - "traefik.tcp.routers.xmpp2-c2c.service=xmpp2-c2c-service"
  - "traefik.tcp.services.xmpp2-c2c-service.loadbalancer.server.port=5222"
Why should I use Traefik at all then? Maybe I should not put the XMPP servers behind a reverse proxy, since the setup misses the primary goal of serving different services on the same domain (TLS termination). But for the sake of comprehension I still expect it to work with Traefik - or is there anything else I forgot to think about?
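For context, the xmpp2 entrypoint referenced in the labels above would normally be declared on the Traefik container itself, with the port published there rather than on the XMPP container. A sketch, assuming Traefik v2 static configuration via command-line flags (only the port number 5221 comes from the question):

traefik:
  image: traefik:v2.9
  command:
    - "--providers.docker=true"
    - "--entrypoints.xmpp2.address=:5221"    # entrypoint used by the xmpp2-c2c router
  ports:
    - "5221:5221"                            # published on Traefik, not on the XMPP container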

Make a request from one container to a second container using localhost

I have two docker-compose files set up - one for the frontend application, and one for the backend.
The frontend runs on port 3000 and is exposed on port 80: 0.0.0.0:80:3000
The backend runs on port 3001 and is exposed publicly on the same port: 0.0.0.0:3001:3001
From the host machine, I can easily make a request to the backend:
$ curl 127.0.0.1:3001
But I cannot do it from the frontend container - nothing is listening on that port there, because these are two different containers on different networks.
I tried connecting both of them to one network - then I can use the backend container's IP, or a hostname, to make a valid request. But it's still not localhost. How can I solve this?
When using Docker, localhost points to the container itself, not to your computer. There are a few ways to do what you want, but none of them will work with localhost from inside a container.
The cleanest way to do it is to set up hostnames for your services in the YAML and configure your applications to look for those hostnames instead of localhost.
Let me know if you need examples and I will look for them at home and post them here for you.
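A minimal sketch of that hostname approach (the service names, images and API_URL variable are assumptions; the port mappings come from the question): with both services in one compose file they share the default network, and each service name resolves as a hostname.

version: '3'
services:
  frontend:
    image: example/frontend      # hypothetical image
    ports:
      - "0.0.0.0:80:3000"
    environment:
      # the frontend calls http://backend:3001 instead of http://localhost:3001
      - API_URL=http://backend:3001
  backend:
    image: example/backend       # hypothetical image
    ports:
      - "0.0.0.0:3001:3001"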

Two Nginx instances, listening to different ports, only one reachable by domain

I use docker-compose stacks to run things on my personal VPS. Right now, I have a stack that's composed of:
Nginx (exposed port 443)
Ghost (blogging)
MySQL (for Ghost)
Adminer (for MySQL, exposed port 8080)
I wanted to try out Matomo analytics software, but I didn't want to add that to my current stack until I was happy with it, so I decided to create a second docker-compose stack for just the Matomo analytics:
Nginx (exposed port 444)
Matomo
MariaDB (for Matomo)
Adminer (for MariaDB, exposed port 8081)
With both stacks running, I can access everything at its appropriate port, but only by IP address. If I try to use my domain, it can only connect to the first Nginx, the one exposing port 443. If I try https://www.example.com:444 in my browser, it isn't able to connect. If I try https://myip:444 in my browser, it connects to the second Nginx instance exposing port 444, warning me that the SSL certificate has issues (since I'm connecting to my IP, not my domain), and then lets me through.
I was wondering if anyone knew why this behavior was happening. I'm admittedly new to setting up VPSs, using Nginx to proxy to other hosted services, etc. If it turns out Nginx cannot be used this way, I'd love to hear recommendations on how else I could arrange this. (Do I have to have only one Nginx instance running at a time and proxy through it to everything else? etc.)
Thank you!
I was able to fix this by troubleshooting my Cloudflare setup. I posted this question while waiting for my domain to switch to my VPS's name servers instead of Cloudflare's. When this was finished, I tested it and the request did get through at https://example.com:444. This proved it was Cloudflare blocking me.
I found this page, which explained that the free Cloudflare plan only supports a few ports, and port 444 is not among them. If I upgraded to a Pro plan, I would have that option.
Therefore, I can conclude that the solution to my problem is to either upgrade my Cloudflare plan or merge the two docker-compose stacks so that I can accept requests for everything on just port 443.
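If the merge route were taken, a rough sketch of the single-stack idea (service names, hostnames and the proxy details are assumptions, not the actual configuration): one Nginx listening on 443 and using name-based virtual hosts to decide which backend to proxy to.

version: '3'
services:
  nginx:
    image: nginx
    ports:
      - "443:443"
    volumes:
      # one server block per hostname, e.g. blog.example.com -> ghost:2368,
      # stats.example.com -> matomo:80 (hostnames here are illustrative)
      - ./conf.d:/etc/nginx/conf.d
      - ./certs:/etc/nginx/certs
  ghost:
    image: ghost
  matomo:
    image: matomo
  # ...plus the MySQL/MariaDB and Adminer services from the two original stacks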
