Traefik causing very slow LAN speeds and router crash - docker

I've recently been trying to migrate my home server to a Docker microservice-style setup. I've installed a fresh Ubuntu Server 18.04 and set up a Traefik container and a Nextcloud container, but am experiencing a peculiar issue.
When I access Nextcloud over the internet it works fine. On the LAN, however, I can connect to the website, but when I attempt to download a file the download is extremely slow for a few seconds before making my router reboot itself. I have tried a Jellyfin container as well and the behavior is the same, so it is not an issue with Nextcloud. I have also tried exposing the service containers' ports directly, which resolves the issue, so the problem is most likely with Traefik.
Here's my traefik.toml, docker-compose.yml, and Traefik container configuration.
I'd greatly appreciate any help, as I would like to use Traefik as a reverse proxy rather than expose any ports directly. :-)
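For context (the asker's actual files aren't reproduced here), a minimal sketch of what such a setup typically looks like with Traefik 1.7 and a toml config; the hostname, image tags and service names below are illustrative placeholders, not the asker's real values:

    version: "3"
    services:
      traefik:
        image: traefik:1.7
        ports:
          - "80:80"
          - "443:443"
        volumes:
          # Traefik watches the Docker socket to discover services
          - /var/run/docker.sock:/var/run/docker.sock:ro
          - ./traefik.toml:/traefik.toml
      nextcloud:
        image: nextcloud
        # Note: no ports: section; Traefik routes to the container over the Docker network
        labels:
          - "traefik.enable=true"
          - "traefik.frontend.rule=Host:nextcloud.example.com"
          - "traefik.port=80"

The point of this layout is that only Traefik publishes ports on the host; the service containers stay reachable only through the proxy.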

Related

Docker containers cannot be accessed from the internet but work when accessing from local network

First of all, sorry if I am not following the correct format for Stack Overflow; this is my first time asking something here.
I am running Docker in an Ubuntu LXC on Proxmox, but the Docker containers cannot be accessed from the internet. I am using Nginx Proxy Manager. Surprisingly, the containers worked well when I was running Docker Desktop on Windows 11. I switched to Ubuntu to try to make things easier and it didn't work, so I tried Proxmox, which I had used before, and it is not working either. I have NGINX set up with Cloudflare, and when I try to access, for example, my Nextcloud container from the internet, I get a "Web Server is down" error (code 521).
Everything works fine when I access from the local network. I can ping websites from both inside the container and the host with no lost packets, so I know the containers have internet access.
I forgot to add that I have opened all the ports necessary for my Nextcloud container to work, and I checked online with ismyportopen.com; it looks like the ports I need are open.
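A 521 from Cloudflare means Cloudflare itself cannot reach the origin, so one way to narrow this down is to test each hop separately; YOUR_PUBLIC_IP below is a placeholder for your WAN address:

    # Inside the LXC: confirm Nginx Proxy Manager is actually listening
    ss -tlnp | grep -E ':(80|443)'

    # From a machine OUTSIDE your LAN: hit the origin directly, bypassing Cloudflare
    curl -v http://YOUR_PUBLIC_IP/

If the direct request times out even though the ports show as open, the usual suspects are the Proxmox/LXC firewall or a port forward pointing at the Proxmox host's IP instead of the LXC's IP.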

Unable to make Docker container use OpenConnect VPN connection

I have a VM running Ubuntu 16.04, on which I want to deploy an application packaged as a Docker container. The application needs to be able to perform an HTTP request towards a server behind the VPN (e.g. server1.vpn-remote.com).
I successfully configured the host VM to connect to the VPN through openconnect; I can turn this connection on and off using a systemd service.
Unfortunately, when I run docker run mycontainer, neither the host nor the container is able to reach server1.vpn-remote.com. Weirdly enough, there is no error displayed in the VPN connection service logs, which are stuck at the openconnect messages confirming a successful connection.
If I restart the VPN connection after starting mycontainer, the host machine is able to access server1.vpn-remote.com, but not the container. Moreover, if I issue any command like docker run/start/stop/restart on mycontainer or any other container, the connection gets broken again even for the host machine.
NOTE: I already checked on the ip routes and there seems to be no conflict between Docker and VPN subnets.
NOTE: running the container with --net="host" results in both host and container being able to access the VPN but I would like to avoid this option as I will eventually make a docker compose deployment which requires all containers to run in bridge mode.
Thanks in advance for your help
EDIT: I figured out it is a DNS issue, as I'm able to ping the IP corresponding to server1.vpn-remote.com even when the VPN connection seems to be failing. I'm going through the documentation on DNS management with Docker and Docker Compose and their usage of the host's /etc/resolv.conf file.
I hope you don't still need help six months later! Some of the details are different, but this sounds a bit like a problem I had. In my case the solution was a bit disappointing: after you've connected to your VPN, restart the docker daemon:
sudo systemctl restart docker
I'm making some inferences here, but it seems that, when the daemon starts, it makes some decisions/configs based on the state of the network at that time. In my case, the daemon starts when I boot up. Unsurprisingly, when I boot up, I haven't had a chance to connect to the VPN yet. As a result, my container traffic, including DNS lookups, goes through my network directly.
Hat tip to this answer for guiding me down the correct path.
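Since the asker traced this to DNS, another option besides restarting the daemon after every VPN connect is to pin the resolver containers should use, so they don't inherit whatever the daemon saw at boot. A sketch, where 10.0.0.2 stands in for your VPN's DNS server:

    # /etc/docker/daemon.json - applies to all containers (restart dockerd once after editing)
    {
      "dns": ["10.0.0.2", "8.8.8.8"]
    }

Or per service in docker-compose.yml:

    services:
      app:
        image: mycontainer
        dns:
          - 10.0.0.2

The trade-off is that the hardcoded resolver must be reachable whenever the containers run, VPN up or not, which is why a public fallback is listed second.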

Stop nginx service that is started from docker container

I was watching an nginx tutorial and, to follow along, I created an Ubuntu 18.04 docker container. I installed and started the nginx service as shown in the tutorial and everything was going well. Then I removed both the docker image and the container I was working on. Despite the removal of the container and image, the address http://104.200.23.232/ still returns the nginx welcome page on my machine. I think this indicates that the nginx service is still up and running. My question is: how can I stop it and disable autostart of the nginx service now?
Note: My host machine operating system is Windows 10 and restarting computer did not help to solve this problem.
Yeah, well, blind guessing here, but the IP 104.200.23.232 is registered to Linode, which is a cloud hosting/VPS provider, right? So it is probably not the IP of your local computer. Where did you get the reference for this IP? Did you try to install something on a cloud server? I am pretty sure some things are just mixed up here.
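To verify who an address belongs to, a quick lookup along these lines should show the owning organization (field names vary by registry):

    whois 104.200.23.232 | grep -iE 'orgname|netname'

If that comes back with Linode, the welcome page is being served by a remote machine, not by anything left over from the removed container.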
Use "prune" to start fresh:
docker system prune
Try this to stop nginx inside the docker container:
service nginx stop

Docker, Haproxy, RabbitMQ

I have a docker instance of haproxy in front of a 3-node rabbitmq cluster.
In the same Docker swarm I have a Spring Boot microservice that accesses the queue through the proxy.
If I let everything come up on its own the microservice keeps trying to connect to rabbitmq and cannot.
If I restart the haproxy docker container, when it comes up everything is fine.
This makes it look like either
1) if Haproxy cannot connect to the rabbitmq servers because they are not up, it does NOT eventually connect to them when they are up
or
2) after trying to connect through haproxy and failing, a restart of haproxy makes them try again and succeed.
Neither makes sense to me. Surely, if haproxy is looking for 3 servers and one goes down, it will eventually pull it back into the round robin when it comes back up?
Can anyone explain what (might be) happening?
Found this was the issue:
https://discourse.haproxy.org/t/haproxy-fails-to-start-if-backend-server-names-dont-resolve/322/20
It seems that because haproxy is unable to resolve the DNS name, it disables the server. The problem is that it doesn't automatically re-enable the server once it is up again.
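For anyone hitting the same thing: the usual fix (HAProxy 1.8+) is to point HAProxy at Docker's embedded DNS and let it re-resolve server names at runtime. A sketch, with the backend and server names as placeholders:

    resolvers docker_dns
        nameserver dns1 127.0.0.11:53
        resolve_retries 3
        timeout resolve 1s
        timeout retry   1s
        hold valid     10s

    backend rabbitmq_nodes
        balance roundrobin
        # init-addr none lets haproxy start even when a name doesn't resolve yet;
        # resolvers makes it re-resolve (and re-enable the server) at runtime
        server rabbit1 rabbit1:5672 check resolvers docker_dns init-addr none
        server rabbit2 rabbit2:5672 check resolvers docker_dns init-addr none
        server rabbit3 rabbit3:5672 check resolvers docker_dns init-addr none

127.0.0.11 is the address Docker's internal DNS listens on inside user-defined networks.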

Can not reach Kibana remotely using ELK Docker images

I have a remote Ubuntu 14.04 machine. I downloaded and ran a couple of ELK Docker images, but I seem to be getting the same behavior in all of them. I tried the images in these two repositories: spujadas/elk-docker and deviantony/docker-elk. The problem is, in both images, Elasticsearch, Logstash and Kibana all work perfectly locally; however, when I try to reach Kibana from a remote computer using http://host-ip:5601, I get a connection timeout. Meanwhile, I can reach Elasticsearch at http://host-ip:9200. As both repositories suggest, I injected some data into Logstash, but that didn't work either. Is there some tweak I need to make in order to reach Kibana remotely?
EDIT: I tried opening up port 5601 as suggested here, but that didn't work either.
As @Rawkode suggested in the comments, the problem was the firewall. The VM I'm working on was created on Azure, and I had to create an inbound security rule to allow Kibana to be accessed on port 5601. More on this subject can be read here.
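If you manage the VM with the Azure CLI, the equivalent rule can be created with something like this (the resource group and VM name are placeholders):

    az vm open-port --resource-group myResourceGroup --name myVM --port 5601

This adds an allow rule to the network security group attached to the VM's network interface.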
