How to handle host network changes with Docker? - docker

So I'm running Docker on a managed host, and after about a week of running, the containers are no longer accessible from the outside, and all requests from inside the containers to the outside fail with getaddrinfo EAI_AGAIN errors. Restarting the Docker service fixes it. I suspect it's caused by a network change on the host - for example, restarting the csf firewall also triggers these errors, and everything works again after restarting the Docker service.
What is the correct way to handle this?
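One workaround worth considering (inferred from the symptoms, not a confirmed fix): csf supports a post-hook script, /etc/csf/csfpost.sh, which it runs after (re)loading its rules. Restarting Docker from that hook rebuilds the iptables chains the firewall reload just flushed. A minimal sketch, writing the hook to the current directory for illustration (on a real host the path is /etc/csf/csfpost.sh):

```shell
# Sketch: install a csf post-hook that restarts Docker after every
# firewall reload, so Docker's NAT/forwarding rules are re-created.
# Writing to ./csfpost.sh here; the real path is /etc/csf/csfpost.sh.
cat > csfpost.sh <<'EOF'
#!/bin/sh
# csf has just flushed and reloaded iptables; Docker's chains are gone.
# Restarting the daemon re-creates them.
systemctl restart docker
EOF
chmod +x csfpost.sh
```

This avoids waiting for containers to break before noticing the firewall restarted.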

Related

Docker on several hosts (Windows and Ubuntu)

I have 2 hosts with Docker: one Ubuntu and one Windows (Docker Desktop). I want to connect a container running Adminer (Windows) to a container running MariaDB (Ubuntu). Is that possible?
My hosts are on the same local network, and the containers are isolated from each other (of course). I tried creating an overlay network and joining the containers to it, but the worker's message was: "docker: Error response from daemon: attaching to network failed, make sure your network options are correct and check manager logs: context deadline exceeded." My goal is to enable communication between containers on different hosts. Please, can you help me with my problem? :)
Important: Make sure your host machines can communicate with each other over the network, and there are no issues due to network firewalls, etc.
It looks like you need to bind your container's port to the host machine.
When you run your Docker containers, use the -p option. You can find plenty of examples at https://docs.docker.com/config/containers/container-networking/.
Once you have published the container's port, you can access it at <host-machine-ip>:<host-machine-port-you-used-to-bind-container>
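For example (image name and port numbers are illustrative, not from the question): publishing MariaDB's port on the Ubuntu host makes it reachable from the Windows host over the LAN, with no overlay network needed:

```shell
# On the Ubuntu host: publish the container's port 3306 on host port 3306.
docker run -d --name mariadb -p 3306:3306 mariadb:latest

# From the Windows host, point Adminer at <ubuntu-host-ip>:3306.
```

This works because the published port is bound on the host's network interface, which both machines can already reach.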

Unable to make Docker container use OpenConnect VPN connection

I have a VM running Ubuntu 16.04, on which I want to deploy an application packaged as a Docker container. The application needs to be able to make HTTP requests to a server behind the VPN (e.g. server1.vpn-remote.com).
I successfully configured the host VM in order to connect to the VPN through openconnect, I can turn this connection on/off using a systemd service.
Unfortunately, when I run docker run mycontainer, neither the host nor the container is able to reach server1.vpn-remote.com. Weirdly enough, there is no error in the VPN connection service logs, which are stuck at the openconnect messages confirming a successful connection.
If I restart the VPN connection after starting mycontainer, the host machine is able to access server1.vpn-remote.com, but not the container. Moreover, if I issue any command like docker run/start/stop/restart on mycontainer or any other container, the connection gets broken again even for the host machine.
NOTE: I already checked the IP routes and there seems to be no conflict between the Docker and VPN subnets.
NOTE: running the container with --net="host" results in both host and container being able to access the VPN, but I would like to avoid this option, as I will eventually deploy with Docker Compose, which requires all containers to run in bridge mode.
Thanks in advance for your help
EDIT: I figured out it is a DNS issue, as I'm able to ping the IP corresponding to server1.vpn-remote.com even after the VPN connection seemed to be failing. I'm going through documentation regarding DNS management with Docker and Docker Compose and their usage of the host's /etc/resolv.conf file.
I hope you don't still need help six months later! Some of the details are different, but this sounds a bit like a problem I had. In my case the solution was a bit disappointing: after you've connected to your VPN, restart the docker daemon:
sudo systemctl restart docker
I'm making some inferences here, but it seems that when the daemon starts, it makes some decisions/configurations based on the state of the network at that time. In my case the daemon starts at boot, and unsurprisingly, at boot I haven't had a chance to connect to the VPN yet. As a result, my container traffic, including DNS lookups, goes through my network directly.
Hat tip to this answer for guiding me down the correct path.
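A less manual alternative, assuming the VPN connection is managed by a systemd unit (the unit name openconnect.service below is an assumption, substitute your own): order the Docker daemon after the VPN, so that on boot dockerd reads the post-VPN network state. A sketch of a drop-in file:

```ini
# /etc/systemd/system/docker.service.d/after-vpn.conf (hypothetical path)
[Unit]
# Start dockerd only once the VPN is up, so it picks up the VPN's
# DNS/routing state instead of the pre-VPN one.
After=openconnect.service
Wants=openconnect.service
```

After adding the drop-in, run `sudo systemctl daemon-reload` so systemd picks it up.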

Ports stuck after Docker hangs and restarts

We work with several external containers that we build extensions on top of, and we usually run a docker-compose file with all the necessary services.
At times, Docker hangs and stops executing commands altogether. We can then force-restart Docker, and once it is back up we can run docker-compose down. However, when we run docker-compose up again, we usually hit a port conflict:
Error starting userland proxy: listen tcp 0.0.0.0:9081: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.
Completely stopping Docker or resetting the network doesn't work either.
The only solution that works is restarting the PC. Is there any command or tool we can use to verify/free those ports?
(Note: This is Docker for Windows, which might very well be a reason for these issues.)
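Before resorting to a reboot, it may be worth checking what is actually holding the port. On Windows, something along these lines (the port comes from the error above; <pid-from-netstat> is a placeholder you fill in):

```shell
REM Find the process listening on the conflicting port (elevated prompt):
netstat -ano | findstr :9081

REM The last column is the PID; identify the process:
tasklist /FI "PID eq <pid-from-netstat>"

REM If it is a leftover Docker backend process, terminate it:
taskkill /PID <pid-from-netstat> /F
```

If the port shows up with no live owning process, it is usually Docker's own userland proxy that died uncleanly, and restarting the Docker Desktop backend (rather than the whole PC) may be enough.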

Error: getaddrinfo EAI_AGAIN (docker, nginx)

I know this error relates to a DNS lookup timing out, meaning it is a network connectivity or proxy-related error.
However, I do not know how to fix it.
I use a docker-compose.yml with 3 containers inside. This is my docker-compose.yml (attached as a link).
I created 2 networks to separate external and internal traffic. All requests from clients go through Nginx on port 8090, which is the only port exposed to the internet.
The issue is that I get the error message "getaddrinfo EAI_AGAIN exampleAuth.auth0.com:443" when I send a request to verify users from the API container (internal network).
Here is what I have tried so far:
I tried adding DNS 8.8.8.8 to the Docker daemon
ping 8.8.8.8 from the API container (does not work)
ping 8.8.8.8 from the Nginx container (does work)
ping between the internal and default networks works fine
Do you guys have any idea?
Changing alpine to stretch-slim (Debian) in my Dockerfile solved a similar issue for me.
I've experienced the same issue from within an Alpine container when running npm install. In my case the network had changed; stopping and restarting the container solved the issue.
docker-compose down
docker-compose up
Source: https://github.com/moby/moby/issues/32106
I was facing the same issue. The solution is to add DNS servers to daemon.json.
This change won't take effect until you restart Docker on the machine, so restarting is essential for resolving this issue.
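For reference, a minimal sketch of that change (the DNS servers below are public resolvers chosen for illustration; the file is written to the current directory here, while the real path is /etc/docker/daemon.json, merged with any existing keys):

```shell
# Sketch: give the Docker daemon explicit upstream DNS servers.
# Real path: /etc/docker/daemon.json.
cat > daemon.json <<'EOF'
{
  "dns": ["8.8.8.8", "1.1.1.1"]
}
EOF

# The setting only applies after a daemon restart:
# sudo systemctl restart docker
```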

"java.net.NoRouteToHostException: No route to host" between two Docker Containers

Note: Question is related to Bluemix docker support.
I am trying to connect two different Docker Containers deployed in Bluemix. I am getting the exception:
java.net.NoRouteToHostException: No route to host
when I attempt the connection (a Java EE app running on Liberty trying to access MySQL). I tried using both the private and the public IP of the MySQL Docker container.
The point is that I am able to access MySQL Docker Container from outside Bluemix. So the IP, port, and MySQL itself are ok.
It seems to be something related to the internal networking of the Docker container support within Bluemix: if I try to access it from inside Bluemix it fails, while from outside it works. Any help?
UPDATE: I continued investigating, as you can see in the comments, and it seems to be a timing issue. Once the containers are up and running, there apparently is some connectivity work still pending; if I wait around 1 minute before trying the connection, it works.
As a rule of thumb, allow around 60 seconds after container creation for the networking to start working.
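Rather than hard-coding a 60-second sleep, the app container can poll until the database port actually answers before starting. A hedged sketch (host/port are placeholders; it uses bash's /dev/tcp feature, so it assumes bash is available in the image):

```shell
# Poll host:port until a TCP connect succeeds, or give up after N tries.
wait_for() {
  host=$1; port=$2; tries=${3:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    # bash-only: opening /dev/tcp/<host>/<port> attempts a TCP connection
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      echo "up"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out"
  return 1
}

# Example: wait_for <mysql-container-ip> 3306 60 before launching Liberty.
```

This adapts to however long the platform's networking actually takes, instead of guessing a fixed delay.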
