I have a docker instance of haproxy in front of a 3 node rabbitmq cluster.
In the same Docker swarm I have a Spring Boot microservice that accesses the queue through the proxy.
If I let everything come up on its own the microservice keeps trying to connect to rabbitmq and cannot.
If I restart the haproxy docker container, when it comes up everything is fine.
This makes it look like either
1) if HAProxy cannot connect to the RabbitMQ servers because they are not up, it does NOT eventually connect to them once they come up,
or
2) after trying to connect through HAProxy and failing, a restart of HAProxy makes the clients try again and succeed.
Neither makes sense to me. Surely if HAProxy is load balancing 3 servers and one goes down, it will eventually pull it back into the round robin when it comes back up?
Can anyone explain what (might be) happening?
Found this was the issue:
https://discourse.haproxy.org/t/haproxy-fails-to-start-if-backend-server-names-dont-resolve/322/20
It seems that because HAProxy is unable to resolve the DNS name at startup, it disables the server. The problem is that it doesn't automatically re-enable the server once it is up.
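For anyone hitting this later: a sketch of the workaround, assuming HAProxy 1.8+ and Docker's embedded DNS at 127.0.0.11 (server names and ports are placeholders, not from my actual config). A resolvers section makes HAProxy re-resolve names at runtime, and init-addr with a "none" fallback lets it start even while a name doesn't resolve yet:

```
resolvers docker
    nameserver dns1 127.0.0.11:53   # Docker's embedded DNS
    resolve_retries 3
    timeout resolve 1s
    hold valid 10s

backend rabbitmq
    balance roundrobin
    # start even if DNS fails at boot; re-enable once resolution succeeds
    default-server init-addr last,libc,none
    server rmq1 rabbit1:5672 check resolvers docker
    server rmq2 rabbit2:5672 check resolvers docker
    server rmq3 rabbit3:5672 check resolvers docker
```

With this in place, a node that was unresolvable at startup gets pulled into the rotation as soon as its name resolves and the health check passes, instead of staying disabled forever.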
Related
I have a VM running Ubuntu 16.04, on which I want to deploy an application packaged as a Docker container. The application needs to be able to perform an HTTP request towards a server under VPN (e.g. server1.vpn-remote.com)
I successfully configured the host VM in order to connect to the VPN through openconnect, I can turn this connection on/off using a systemd service.
Unfortunately, when I run docker run mycontainer, neither the host nor the container is able to reach server1.vpn-remote.com. Weirdly enough, there is no error displayed in the VPN connection service logs, which still show only the openconnect messages confirming a successful connection.
If I restart the VPN connection after starting mycontainer, the host machine is able to access server1.vpn-remote.com, but not the container. Moreover, if I issue any command like docker run/start/stop/restart on mycontainer or any other container, the connection gets broken again even for the host machine.
NOTE: I already checked on the ip routes and there seems to be no conflict between Docker and VPN subnets.
NOTE: running the container with --net="host" results in both host and container being able to access the VPN but I would like to avoid this option as I will eventually make a docker compose deployment which requires all containers to run in bridge mode.
Thanks in advance for your help
EDIT: I figured out it is a DNS issue, as I'm able to ping the IP corresponding to server1.vpn-remote.com even after the VPN connection seemed to be failing. I'm going through documentation regarding DNS management with Docker and Docker Compose and their usage of the host's /etc/resolv.conf file.
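In case it helps others going down the same DNS path: one documented knob is the daemon-wide DNS setting in /etc/docker/daemon.json, which overrides the resolv.conf copy that containers get at start. The addresses below are placeholders (the VPN's resolver plus a public fallback), not values from my setup:

```
{
  "dns": ["10.8.0.1", "8.8.8.8"]
}
```

The daemon has to be restarted after editing this file for new containers to pick it up.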
I hope you don't still need help six months later! Some of the details are different, but this sounds a bit like a problem I had. In my case the solution was a bit disappointing: after you've connected to your VPN, restart the docker daemon:
sudo systemctl restart docker
I'm making some inferences here, but it seems that, when the daemon starts, it makes some decisions/configs based on the state of the network at that time. In my case, the daemon starts when I boot up. Unsurprisingly, when I boot up, I haven't had a chance to connect to the VPN yet. As a result, my container traffic, including DNS lookups, goes through my network directly.
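If the root cause really is ordering, an alternative to restarting by hand is a systemd drop-in that delays the docker daemon until the VPN unit is up. This is a sketch; openconnect-vpn.service is a placeholder for whatever your VPN systemd service is actually called:

```
# /etc/systemd/system/docker.service.d/10-after-vpn.conf
[Unit]
After=openconnect-vpn.service
Wants=openconnect-vpn.service
```

Run systemctl daemon-reload afterwards so systemd picks up the drop-in on the next boot.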
Hat tip to this answer for guiding me down the correct path.
I've recently been trying to migrate my home server to a Docker microservice style setup. I've installed fresh Ubuntu Server 18.04, set up Traefik container and Nextcloud container, but am experiencing a peculiar issue.
When I access Nextcloud over the internet it works OK. On the LAN, however, I can connect to the website, but when I attempt to download a file the download is extremely slow for a few seconds and then my router reboots itself. I have tried a Jellyfin container as well and the behavior is the same, so it's not an issue with Nextcloud. If I expose the service containers' ports directly the problem disappears, so the issue is most probably with Traefik.
Here's my traefik.toml, docker-compose.yml, and Traefik container configuration.
I'd greatly appreciate any help, as I would like to use Traefik as a reverse proxy, not directly expose any ports. :-)
I have a Dockerfile that builds a container running a Tomcat 7 web server with two REST APIs, a PostgreSQL database, and a Django site on Apache. (I know that best practices suggest running these as separate containers, but I wanted to package the whole system as one container for usability by non-devs.)
One of my REST APIs calls the other through the endpoint http://localhost:8080/rest2/insert. However, in bridge mode in Docker, localhost refers to the host, not the container.
I have tried hardcoding the container IP 172.17.0.2, but I still get a Connection Refused error.
I assume this will be a problem for the localhost PostgreSQL connection from rest1 as well. What are my options?
Any help is much appreciated!
If you do put them together in the same container, then localhost should be correct.
Use ps -e to check whether the services have actually started.
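A quick way to verify both points from outside the container (the container name and endpoint here are assumptions based on the question):

```
# are all the processes (Tomcat, PostgreSQL, Apache) actually running?
docker exec mycontainer ps -e
# can rest2 be reached on localhost from inside the same container?
docker exec mycontainer curl -v http://localhost:8080/rest2/insert
```

Since everything shares one container, it also shares one network namespace, so a Connection Refused on localhost means the target process itself isn't listening rather than a Docker networking problem.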
For whatever reason, the Connection Refused error was caused by Docker running out of memory. I increased the memory limit and everything was fine. Hopefully this will help someone struggling in the future!
In order to debug and setup a pair of docker stacks (one is a client and other a server along with their own private services they each require) using docker compose, I'm running them locally to make sure they're functioning correctly.
They will eventually communicate across the internet, with an nginx server on the server side acting as a reverse proxy. But for now, I'm pointing the client at the server container's 172.19.0.3:1234 address.
I'm able to curl/ping both the client container and server container from the host machine, but running an interactive session and trying to curl the server's 172.19.0.3:1234 address just times out.
I feel the 172.x address is being used incorrectly here. Is there some obvious issue with what I've described so far? What is the better approach for what I'm trying to do?
Seems that after doing some searching, I am in a similar situation to this question: Communicating between Docker containers in different networks on the same host.
I've decided to use docker network connect to connect the client to the server's network for my purposes.
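For reference, the commands I mean are roughly the following (the network and container names are placeholders; yours will depend on your compose project name):

```
# attach the client container to the server stack's network
docker network connect server_default client
# the client can now reach the server by service name instead of 172.19.0.3
docker exec client curl http://server:1234/
```

Using the service name also survives container restarts, whereas the 172.19.0.3 address can change every time the stack is recreated.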
Note: Question is related to Bluemix docker support.
I am trying to connect two different Docker Containers deployed in Bluemix. I am getting the exception:
java.net.NoRouteToHostException: No route to host
when I try the connection (a Java EE app running on Liberty trying to access MySQL). I tried using both the private and public IPs of the MySQL Docker container.
The point is that I am able to access the MySQL Docker container from outside Bluemix. So the IP, port, and MySQL itself are OK.
It seems to be something related to the internal networking of the Docker container support within Bluemix: if I try to access it from inside Bluemix it fails, while from outside it works. Any help?
UPDATE: I continued investigating, as you can see in the comments, and it seems to be a timing issue. That is, once the containers are up and running, there still seems to be some connectivity work undone. If I wait around 1 minute before trying the connection, it works.
60 seconds seems to be the rule of thumb for the networking to start working after container creation.
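A client-side way to absorb that window, instead of hard-coding a 60-second sleep, is to retry the connection with a delay until it succeeds. A minimal sketch in plain Java (the names and limits are illustrative, not Bluemix-specific):

```java
import java.util.concurrent.Callable;

public class Retry {
    // Retry an operation with a fixed delay until it succeeds or attempts run out.
    public static <T> T withRetry(Callable<T> op, int maxAttempts, long delayMs) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                last = e;          // remember the failure and wait before retrying
                Thread.sleep(delayMs);
            }
        }
        throw last;                // give up after maxAttempts failures
    }

    public static void main(String[] args) throws Exception {
        // Simulate a service whose networking only comes up on the third attempt.
        final int[] calls = {0};
        String result = withRetry(() -> {
            calls[0]++;
            if (calls[0] < 3) throw new java.net.NoRouteToHostException("not ready yet");
            return "connected";
        }, 10, 10);
        System.out.println(result); // prints "connected" after two failures
    }
}
```

In the real app the Callable would open the JDBC connection to MySQL; main just simulates an operation that fails twice before succeeding, which is exactly what the NoRouteToHostException during the first minute looks like from the client's side.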