Unable to make Docker container use OpenConnect VPN connection

I have a VM running Ubuntu 16.04, on which I want to deploy an application packaged as a Docker container. The application needs to be able to perform HTTP requests to a server behind the VPN (e.g. server1.vpn-remote.com).
I successfully configured the host VM to connect to the VPN through openconnect; I can turn this connection on and off using a systemd service.
Unfortunately, when I run docker run mycontainer, neither the host nor the container is able to reach server1.vpn-remote.com. Weirdly enough, there is no error in the VPN connection service logs, which still show only the openconnect messages confirming a successful connection.
If I restart the VPN connection after starting mycontainer, the host machine is able to access server1.vpn-remote.com, but not the container. Moreover, if I issue any command like docker run/start/stop/restart on mycontainer or any other container, the connection gets broken again even for the host machine.
NOTE: I already checked the IP routes and there seems to be no conflict between the Docker and VPN subnets.
NOTE: running the container with --net="host" results in both host and container being able to access the VPN, but I would like to avoid this option, as I will eventually move to a docker-compose deployment, which requires all containers to run in bridge mode.
Thanks in advance for your help
EDIT: I figured out it is a DNS issue: I can still ping the IP corresponding to server1.vpn-remote.com even after the VPN connection appears broken. I'm going through the documentation on DNS management in Docker and Docker Compose and their use of the host's /etc/resolv.conf file.
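One quick way to test the DNS theory (a sketch; 10.0.0.2 is a hypothetical address standing in for whatever resolver the VPN pushes into the host's /etc/resolv.conf) is to hand that resolver to the container explicitly:
docker run --dns 10.0.0.2 --dns-search vpn-remote.com mycontainer
If the container can then resolve server1.vpn-remote.com, the bridge network's DNS handling is the culprit rather than the routes.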

I hope you don't still need help six months later! Some of the details are different, but this sounds a bit like a problem I had. In my case the solution was a bit disappointing: after you've connected to your VPN, restart the docker daemon:
sudo systemctl restart docker
I'm making some inferences here, but it seems that, when the daemon starts, it makes some decisions/configs based on the state of the network at that time. In my case, the daemon starts when I boot up. Unsurprisingly, when I boot up, I haven't had a chance to connect to the VPN yet. As a result, my container traffic, including DNS lookups, goes through my network directly.
Hat tip to this answer for guiding me down the correct path.
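If restarting by hand gets tedious, one option is a systemd drop-in that orders Docker after the VPN unit. A minimal sketch, assuming the VPN unit is named openconnect.service (substitute your actual unit name, and note the drop-in file name is hypothetical):
# /etc/systemd/system/docker.service.d/after-vpn.conf
[Unit]
After=openconnect.service
Wants=openconnect.service
Run sudo systemctl daemon-reload afterwards so the drop-in takes effect.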

Related

Docker container extremely slow when network mode host

I had a working setup with docker-compose, in particular a WildFly image running in network mode = host.
Since the company cut off the internet connection, the startup of the container is extremely slow and ends with a timeout.
I found out that when I run the container in network mode = bridge, it works normally.
I tried with an empty WildFly image from Docker Hub to be sure the issue is not on my side, and it's the same problem.
It starts in 5s in bridge mode, and 33s in host…
I use the command:
docker run --network host jboss/wildfly:18.0.1.Final
to start the container in network host mode.
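For comparison, the bridge-mode run that behaves normally is the same command on the default network (a sketch based on the description above):
docker run --network bridge jboss/wildfly:18.0.1.Final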
My Docker version is 19.03.15 and it's running in a VM in bridge mode.
I need network mode host because we access the containers from outside the VM and they need to communicate with each other.
I can't use the internet anymore on the VM or on the host machine because of the company's security policy.
So I'm looking for a way to keep using network mode host without this inexplicable slowness…
I'm not sure whether it's coming from WildFly or from Docker itself.
Thanks in advance,
Loïc.

Updated windows docker and now it doesn't work outside localhost

I have been running a media cluster for some time without any issues. I have everything networked into two different Docker networks: the first network just bridges the Docker instance to the local machine, and the second network is a Docker VPN container that I use for the other media services (an earlier version of what I am working on can be found here: https://github.com/Xander-Rudolph/MediaDocker)
The strangest thing happened today though. I ran the Docker update for Windows, and now Docker spools up without any errors or issues, but none of the services work outside of the machine running Docker. Usually I have port forwards in my router for a couple of the services (namely WordPress/Joomla, which are on the bridge) and they work from outside my local network, but none of them are working anymore. I was able to confirm it's not the DNS A record because I'm able to use the RDP ports I have mapped in my router, and when I test from another machine on the same network, it can't access the services via the internal IP (but it can RDP).
Anyone have any idea what could have changed to break this? I've already updated all my Docker images and even rebuilt my VPN container (before I realized it's a networking issue). What are some steps I can take to troubleshoot what is going wrong in Docker that prevents access from outside localhost?
Update
I've been able to rule out the Docker update as the root cause... I upgraded Docker on my laptop (which was previously running the same version as my desktop) and it's not having the same issue... this problem must be localized to this desktop... No idea what the issue is... Will try a Linux VM on the desktop instead of Docker for Windows...
Update 2
After a lot of screwing around in both a VM and in WSL, I'm still only able to access the Docker services from localhost, but not from a different machine on my network or via the IP on the host machine (perhaps something similar to this: Can't access localhost via IP address). RDP does work, so the computer is accessible, but the services are not.
I'm not sure if this is a result of a Docker networking config or a Windows network config (I'm using WSL with Docker installed on Ubuntu 20.04), but I'm not seeing anything stick out. I'm going to remove the tag for Docker on Windows, but this is definitely an issue with networking, and I suspect it has something to do with the fact that the containers are running behind a VPN... although I don't know why I would be able to access them on localhost but not via the IP from another VM...
When I run
netstat -a -o
in WSL I can see the established connections on localhost, e.g.:
tcp 0 0 localhost:7878 localhost:37520 ESTABLISHED
but when I look on the host machine (for WSL), I don't see the connection. I tried to use netsh to create a firewall rule to see if that would help:
netsh advfirewall firewall add rule name="TCP Port 7878" dir=in localport=7878 protocol=TCP action=allow
but it didn't have any effect.
Any suggestions for ways to trace the network to see where/how it's failing or getting blocked would be extremely helpful.
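One quick probe along those lines (a sketch; 192.168.1.50 is a placeholder for the Docker host's LAN IP) is to test the published port from another machine with PowerShell:
Test-NetConnection -ComputerName 192.168.1.50 -Port 7878
The TcpTestSucceeded field in the output shows whether anything is reachable on that port before Docker or the firewall gets the blame.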
Your question: "...What are some steps I can do to try to troubleshoot what is going wrong in docker to prevent access outside of localhost?..."
Some troubleshooting help for you. First, do you have multiple network adapters (Ethernet, Wi-Fi, etc.) present on the host? The priority of these adapters needs to be configured in the correct order so the Windows networking stack can choose gateway routes correctly.
To fix this, set your primary internet-connected network adapter to have the lowest InterfaceMetric value. You can use these PowerShell commands from an elevated console:
Get-NetIPInterface -AddressFamily IPv4 | Sort-Object -Property InterfaceMetric -Descending
Check the output and confirm that the host's primary internet-connected network adapter has the lowest InterfaceMetric value. If it doesn't, change it:
# Use this command to make the change; e.g., let's say your
# primary adapter's InterfaceAlias is 'Wi-Fi'
Set-NetIPInterface -InterfaceAlias 'Wi-Fi' -InterfaceMetric 3
Now step two: if your host's primary network adapter is bridged because you have an External virtual switch set up in Hyper-V, then set that external virtual switch to have the lowest InterfaceMetric value instead.
Lastly, verify your routing tables. When you run this, the last line should show the primary adapter's gateway address along with its ifMetric value:
Get-NetRoute -AddressFamily IPv4
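To narrow that output to just the default routes (one line per gateway), something like this should work:
Get-NetRoute -AddressFamily IPv4 | Where-Object { $_.DestinationPrefix -eq '0.0.0.0/0' }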
If you’re using Docker Toolbox then any port you publish with docker run -p will be published on the Toolbox VM’s private IP address.
docker-machine ip will tell you.
It is frequently 192.168.99.100.
Taken from: https://forums.docker.com/t/cant-connect-to-container-on-localhost-with-port-mapping/52716/25
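A quick way to check (assuming the machine has the default name 'default'):
docker-machine ip default
Then use that address in place of localhost when connecting to your published ports.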
After several attempts using the references below, I was still not getting anywhere. The recommendation by #derple didn't get me anywhere (since I was in WSL), but in the article he linked, someone said they switched to Linux and uninstalled and reinstalled Docker Desktop... and for some stupid reason that works.
These are the exact steps I took to fix it:
Uninstall Docker Desktop
Install WSL and Docker inside an Ubuntu 18.04 instance in WSL
Test Docker in WSL with localhost (still worked only on localhost)
Uninstall WSL using Windows Add/Remove Features
Reinstall Docker Desktop
Oddly, the Get-NetIPInterface and Get-NetRoute output looks exactly the same as it did before the uninstall and reinstall, but things seem to be working now... I have no idea why the above worked...

docker on windows 10 can't mount volumes when VPN enabled

I'm seeing problems mounting local volumes when running docker on Windows 10. The problems only appear when I have my company VPN enabled.
Without the VPN connected, this works:
C:\Users\matt> docker run --rm -v d:/tmp:/data alpine ls /data
my_local_test_file.txt
When connected to the VPN, I get this:
C:\Users\matt> docker run --rm -v d:/tmp:/data alpine ls /data
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: error while creating mount source path '/host_mnt/d/tmp': mkdir /host_mnt/d: file exists.
Docker version is 17.12.0-ce-win47
I believe the problem is that docker uses the network when mounting local volumes, and the VPN routes ALL network traffic via the VPN gateway, so docker can't see the local drive.
Is there a workaround for this?
I'm aware I could run Docker within a Linux VM, or use Docker Toolbox. Neither of those is particularly appealing.
Is there another possible workaround?
the VPN routes ALL network traffic via the VPN gateway
You're probably right, in which case all traffic from the Docker client to the Docker daemon will also be routed through the VPN. When you use the Docker CLI on Windows, it connects to the Docker daemon over the network, so using a VPN may disrupt this mechanism.
I think what's happening is:
When the VPN is disabled, you use the Docker daemon on your own machine and everything works
When the VPN is enabled, another Docker daemon may be used, because your VPN redirects traffic addressed to your Docker host (127.0.0.1 by default, or whatever is set via the -H flag or the DOCKER_HOST env variable). This means that somehow this IP or host exists on your VPN network and there is a Docker daemon listening on it (which is kind of odd, admittedly; it may be risky to use that daemon)
If that's really happening, you'll certainly see different output from docker ps -a, docker images, etc., because you are connecting to different daemons. (The daemon accessible through your VPN is actually owned by someone else; you'd better not use it!)
What you can do:
Do not route 127.0.0.1 (or whatever is configured as Docker host) through your VPN
What to do will depend on the VPN software you are using, or you can add a route directly on your Windows machine (here is a good article on the subject)
Find out your IP when the VPN is enabled and configure the daemon to listen on that IP
When your VPN is enabled, run ipconfig /all and find the interface used by your VPN and its IP address, for example 10.142.0.12 (you can compare the output before/after enabling the VPN to identify which one it is)
Configure your Docker daemon to listen on this IP address and restart it. Either use the UI, or edit the config file, which on Windows is located at %programdata%\docker\config\daemon.json by default; you need to specify, for example, "hosts": ["tcp://10.142.0.12:2375", "npipe://"] (see docs for details, and the sketch after the security note below)
Configure the Docker host to this address when the VPN is enabled, either by setting the environment variable DOCKER_HOST=tcp://10.142.0.12:2375 or per command with docker -H tcp://10.142.0.12:2375 <cmd>
/!\ Security note: this may present a security issue as anyone knowing your IP on the VPN network will be able to use the Daemon on your machine
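For reference, a minimal daemon.json for the example above might look like this (a sketch, assuming the conventional unencrypted port 2375; the npipe:// entry keeps the local named-pipe endpoint working on Windows):
{
  "hosts": ["tcp://10.142.0.12:2375", "npipe://"]
}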
Hope this helps. I am not a Windows expert, so I was not able to give details on the Windows-related issues, but feel free to ask for details if needed.

Docker for Windows swarm overlay networking, connecting to the swarm from outside or localhost

I cannot connect to the published port on a swarm that uses overlay networking. I am using Docker for Windows with Windows containers. Both Windows and Docker are fully upgraded. After Windows' 1709 update, I was hoping this issue would be resolved. I looked for information on the Internet to see if I was doing something wrong, to no avail. I would like to know if anyone has successfully gotten this working.
On a side note, when I publish a port on my machine with docker run -p 80:80 without using swarm, "localhost" does not work either. I think this is a known limitation, though. Both setups work when I switch to Linux containers.
Expected behavior
I am running a dotnet kestrel web server service. I should be able to connect to my service using the published port.
Actual behavior
Firefox gives me a timeout; Opera straight away returns connection refused. I cannot telnet into it either. The container IPs assigned by the overlay network do not work either.
Information
docker service ls gives me this (screenshot omitted):
Ports cannot be seen there; is it because publish mode is host? The port information is available in the output of docker service ps.
When I change the publish mode, I can scale the service as well, and the port information is shown in docker service ls (screenshot omitted), although I still cannot connect; that run was without the publish mode=host parameter.
For more info, the output of docker network ls (screenshot omitted) makes me wonder if I need some sort of bridge network like in Linux.
Steps to reproduce the behavior
Initialise swarm
Start the service, in my case a simple web service built using the aspnetcore:latest image. I tried different parameters, and even used a docker-stack.yml (see the sketch after these steps):
docker service create --name=web --publish mode=host,published=80,target=80 web:aspnetcorelatest
(in this case I was unable to scale it on the same machine, which is normal I guess)
docker service create --name=web --publish published=85,target=80 web:aspnetcorelatest
Try to connect using http://localhost or another IP. I tried connecting over VPN, from another machine, as well as via the Internet IP.
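For reference, a docker-stack.yml roughly equivalent to the second command might look like this (a sketch; the asker's actual file is not shown):
version: "3.4"
services:
  web:
    image: web:aspnetcorelatest
    ports:
      - target: 80
        published: 85
    deploy:
      replicas: 1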

Docker: able to telnet to remote machines from host but not from container

We have a couple of Docker containers deployed on ECS. The application inside the containers uses remote services, so it needs to access them via their 10.X.X.X private IPs.
We are using Docker 1.13 with CentOS 7 and docker/alpine as our base image. We are also using networkMode: host for our containers. The problem is that we can successfully run telnet 10.X.X.X 9999 from the host machine, but if we run the same command from inside the container, it just hangs and is not able to connect.
In addition, we have net.ipv4.ip_forward enabled on the host machines (where the containers run) but disabled on the remote machine.
Not sure what could be the issue, maybe iptables?
I have spent the day with the same problem (tried with both network mode 'bridge' and 'host'), and it looks like an issue with using busybox's telnet inside ECS - Alpine's telnet is a symlink to busybox. I don't know enough about busybox/networking to suggest what the root cause is, but I was able to prove the network path was clear by using other tools.
My 'go to' for testing a network path is netcat, as follows. The 'success' or 'failure' message varies from version to version, but a refusal or a timeout (-w#) is pretty obvious. All netcat does here is request a socket; it doesn't actually talk to the listening application, so you need something else to test that.
nc -vz -w2 HOST PORT
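For the case in the question, that would be something like this (with a made-up address standing in for the redacted private IP):
nc -vz -w2 10.0.12.34 9999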
My problem today was troubleshooting an app's mongo connection. nc showed the path was clear, but telnet had the same issue as you reported. I ended up installing the mongo client and checking with that, and I could connect properly.
If you need to actually run commands over telnet from inside your ECS container, perhaps try installing a different telnet tool and avoiding the busybox inbuilt one.
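On Alpine, that could be something like the following (assuming the inetutils-telnet package is available in your release's repositories):
apk add --no-cache inetutils-telnet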
