BIG-IP removes TCP/IP routes and blocks communication to Docker container - docker

Short Version:
Why does BIG-IP delete some routes when establishing a VPN connection?
This impacts Docker Desktop for Windows by blocking all communication with Docker containers, because the TCP/IP route used to reach the container is deleted by BIG-IP.
Long Version:
Context: Docker is used to run an application (Microsoft SQL Server) in a container. Communication with the container goes through the NAT interface created by Docker.
Issue description: Unable to connect to my Docker container when BIG-IP is running.
Overview: When I start a new Docker container that contains SQL Server, I can connect to it and execute SQL queries… but as soon as I start BIG-IP to connect to ICN, no connection to the Docker container running SQL Server is possible, even though the container (and SQL Server inside it) is still running.
Root cause: the TCP/IP route to my Docker container is deleted by BIG-IP.
Step by step to reproduce
Step 1: Start my Docker container
docker run -e "ACCEPT_EULA=Y" --name MyLocalServer -p 1433:1433 -e "SA_PASSWORD=XXXXX" -d microsoft/mssql-server-windows-developer
Step 2: Connect to SQL Server inside the Docker container
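A connection test from the host might look like this (a sketch, assuming the sqlcmd client is installed; the sa password is the placeholder from the run command above):
sqlcmd -S localhost,1433 -U sa -P XXXXX -Q "SELECT @@VERSION"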
Step 3: Docker network details
This returns technical information about the network subnet for my Docker container.
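A sketch of that inspection, assuming the container sits on Docker's default nat network for Windows containers:
docker network inspect nat
:: the Subnet field in the IPAM section shows the container range (here 172.29.48.0/20)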
 
Step 4: Route table before the VPN connection
We can see the route for my container in the route table.
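On Windows the route can be checked like this (a sketch, filtering on the subnet reported by docker network inspect):
route print -4 | findstr 172.29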
 
Step 5: When I connect the VPN, BIG-IP removes the route for my Docker container
BIG-IP log:
Step 6: The route table looks like this after the VPN connection is established
Note: the route for 172.29.48.0/20 has disappeared
Step 7: Now I am unable to connect to the SQL container
I get the following error: "A network-related or instance-specific error has occurred while establishing a connection to SQL Server."
Step 8: When I disconnect the VPN, the deleted routes are restored by BIG-IP
Step 9: And now access to my SQL Server is possible again
Conclusion
BIG-IP removes the routes that allow communication with the Docker container.
I have tried the following:
#1: Add the route manually after the BIG-IP connection with the following command:
route add 172.29.48.0 mask 255.255.240.0 0.0.0.0 METRIC 10 IF 34
… but BIG-IP automatically removes the new entry from the routing table, just as seen when it connects.
#2: Change the IP range used by Docker for container access to 192.168.1.x (previously 172.29.48.0).
But as before, BIG-IP removes the route for this range too.
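For reference, changing that range on Docker Desktop for Windows is typically done through the daemon configuration (Settings > Docker Engine); a minimal sketch, assuming the fixed-cidr key applies to the Windows NAT network:
{
  "fixed-cidr": "192.168.1.0/24"
}
Docker has to be restarted for the new subnet to take effect, and as noted above it does not help here, since BIG-IP removes the route for the new range as well.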

This question is for your network administrator, who is probably just following the security policy of the company giving you the VPN access.
Based on K49720803: BIG-IP Edge Client operations guide | Chapter 3: Common approaches to configuring VPN, you could ask them to disable the Prohibit routing table changes option, or maybe try adding a second network card dedicated to your Docker traffic, in the hope that it would not be managed by the VPN client at all - but I didn't try that.

Related

Docker - Two containers on same network can't connect over web socket

Disclaimer
This is only happening on my machine. I tested the exact same code and procedure on my colleague's machine and it's working fine.
Problem
Hello, I have a fairly weird problem at hand.
I am running two Docker containers: one is a Crossbar server instance, and the other is an application that uses WAMP (Web Application Messaging Protocol) and registers with the running Crossbar server.
Nothing crazy.
I run these two applications in two different Docker containers that share the same network:
docker network create poc-bridge
docker run --net=poc-bridge -d --name cross my-crossbar-image
docker run --net=poc-bridge --name app my-app-image
Here is the Dockerfile I used to build the image my-crossbar-image:
FROM crossbario/crossbar
EXPOSE 8080
USER root
COPY deployment/crossbar/.crossbar /node/.crossbar
It simply exposes the port and copies some config files.
The other image, for the app that needs to register with the Crossbar server, is not relevant.
Once I run my app in its container and it tries to register something with the Crossbar server using the WebSocket address ws://cross:8080/ws, I get: OSError: [Errno 113] Connect call failed ('172.24.0.2', 8080)
What I tried
I checked that the two containers are actually on the same network (they are; see the sketch below)
I could ping container cross from my container app with docker exec app ping cross -c2 (weird)
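One way to double-check that attachment, a sketch using docker network inspect with a Go template:
docker network inspect poc-bridge --format '{{range .Containers}}{{.Name}} {{end}}'
# should list both cross and app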
What can it be???
The reason for the problem was not clear. However, it disappeared. All I had to do was:
Stop/remove all the created containers
Remove all the created images
Remove all the created networks
Re-build everything again (sketched below)
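A minimal sketch of that cleanup, using the container, image, and network names from the question:
# stop and remove both containers
docker stop cross app
docker rm cross app
# remove the images and the shared network
docker rmi my-crossbar-image my-app-image
docker network rm poc-bridge
# then re-create the network, rebuild the images, and run both containers again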
Now the services can communicate with each other.

Unable to make Docker container use OpenConnect VPN connection

I have a VM running Ubuntu 16.04, on which I want to deploy an application packaged as a Docker container. The application needs to be able to perform an HTTP request to a server behind the VPN (e.g. server1.vpn-remote.com).
I successfully configured the host VM to connect to the VPN through openconnect; I can turn this connection on/off using a systemd service.
Unfortunately, when I run docker run mycontainer, neither the host nor the container is able to reach server1.vpn-remote.com. Weirdly enough, there is no error displayed in the VPN connection service logs, which are stuck at the openconnect messages confirming a successful connection.
If I restart the VPN connection after starting mycontainer, the host machine is able to access server1.vpn-remote.com, but not the container. Moreover, if I issue any command like docker run/start/stop/restart on mycontainer or any other container, the connection breaks again even for the host machine.
NOTE: I already checked the ip routes and there seems to be no conflict between the Docker and VPN subnets.
NOTE: running the container with --net="host" results in both host and container being able to access the VPN but I would like to avoid this option as I will eventually make a docker compose deployment which requires all containers to run in bridge mode.
Thanks in advance for your help
EDIT: I figured out it is a DNS issue, as I'm able to ping the IP corresponding to server1.vpn-remote.com even when the VPN connection seemed to be failing. I'm going through the documentation regarding DNS management with Docker and Docker Compose and their usage of the host's /etc/resolv.conf file.
I hope you don't still need help six months later! Some of the details are different, but this sounds a bit like a problem I had. In my case the solution was a bit disappointing: after you've connected to your VPN, restart the docker daemon:
sudo systemctl restart docker
I'm making some inferences here, but it seems that, when the daemon starts, it makes some decisions/configs based on the state of the network at that time. In my case, the daemon starts when I boot up. Unsurprisingly, when I boot up, I haven't had a chance to connect to the VPN yet. As a result, my container traffic, including DNS lookups, goes through my network directly.
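A quick way to compare the DNS configuration the host uses with the one a fresh container picks up, a sketch assuming a busybox image is available:
# DNS config on the host (after connecting the VPN)
cat /etc/resolv.conf
# DNS config a new container on the default bridge inherits from the daemon
docker run --rm busybox cat /etc/resolv.conf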
Hat tip to this answer for guiding me down the correct path.

Adding NS record to docker net's DNS server

When running a Docker container inside a user-created Docker network (i.e. docker network create $DOCKERNETNAME and then using --net=$DOCKERNETNAME when running the container), the network provides an embedded DNS server at 127.0.0.11.
I want to create an NS record inside this DNS server (the one running at 127.0.0.11), so I can have a separate DNS server inside the Docker network for some fake domain. How can I do that?
Please note that all this is being done for educational purposes and has no other goal.
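For reference, that embedded resolver is visible from inside any container on the user-defined network; a sketch, assuming a busybox image and the network name from the question:
docker run --rm --net=$DOCKERNETNAME busybox cat /etc/resolv.conf
# prints "nameserver 127.0.0.11", Docker's embedded DNS for user-defined networks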

Failing to connect to localhost from inside a container: Connection refused

I'm currently testing an Ansible role using Molecule.
Basically, Molecule launches a container that is Ansible compliant and runs the role on it.
In order to test the container, Molecule also embeds unit tests using Testinfra. The Python unit tests are run from within the container so you can check the compliance of the role.
As I'm working on an Nginx-based role, one of the unit tests simply issues a curl http://localhost:80
I do get the below error message in response:
curl: (7) Failed to connect to localhost port 80: Connection refused
When I:
launch a Vagrant machine
apply the role with Ansible
connect via vagrant ssh
issue a curl http://localhost command
nginx answers correctly.
Therefore, I believe that:
the role is working properly and Nginx is installed correctly
Docker has a different way to set up the network. In a way, localhost and 127.0.0.1 are not the same anymore.
My questions are the following:
Am I correct?
Can this difference be overcome so the curl would work?
Docker containers start in their own network namespace by default. This namespace includes a separate loopback interface (127.0.0.1) that is distinct from the same interface on the host and any other containers. If you want to access an application from another container or via a published port on the host, you need to listen on all interfaces (0.0.0.0) rather than the loopback interface.
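In the Nginx case that is a one-line difference in the server block; a sketch, not taken from the actual role:
server {
    # 0.0.0.0 makes Nginx reachable through published ports and from other containers;
    # listen 127.0.0.1:80; would only accept connections from inside the container's own namespace
    listen 0.0.0.0:80;
    server_name _;
    root /usr/share/nginx/html;
}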
One other issue I often see is that at some layer in the connection (the host, or inside of a container), the "localhost" name is mapped to the IPv6 value ::1 in the /etc/hosts file, while somewhere in that connection only the IPv4 value is valid (either where the port was published, where the application is listening, or because IPv6 isn't enabled on the host or Docker engine). Therefore, make sure to try connecting to the IPv4 address directly, 127.0.0.1, to eliminate any potential IPv6 issues.
Regarding the curl command and how to correct it, I cannot answer that without more details on how you are running the curl (is it in a separate container), how you are running your application, and how the two are joined on the network (did you create a new network in docker for your application and unit tests to run). The typical solution is to create a new network in docker, run both containers on that network, and connect via docker's included DNS to the container or service name of the destination, e.g. curl http://my_app/.
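A sketch of that typical setup, with hypothetical network and container names:
# create a shared network and attach the application to it
docker network create testnet
docker run -d --net testnet --name my_app my-app-image
# curl from a second container on the same network; Docker's DNS resolves my_app
docker run --rm --net testnet curlimages/curl http://my_app/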
Edit: based on the comments, if your application and curl command are both running inside the same container, then curl http://127.0.0.1/ should work. There's no change I'm aware of needed to curl to make it work inside of a container vs on a VM. The error you are seeing is likely from the application not starting and listening on the port as expected, possibly a race condition where the curl command is run too soon, or the base assumptions of how the tool works are incorrect. Start by changing the unit test to verify the application is up and running and listening on the port with commands like ps -ef and ss -lt.
It actually has nothing to do with the differences between Docker and Vagrant (i.e. containers vs VMs).
The Testinfra code is actually run from outside the container / VM, hence the fact that subprocess.call(['curl', 'http://localhost']) is failing.
In order to run the command from inside the container / VM, I should use:
host.check_output('curl http://localhost')

Connecting to Docker container connection refused - but container is running

I am running 2 Spring Boot applications: a client and a rest-api. The client communicates with the rest-api, which communicates with a MongoDB database. All 3 tiers run inside Docker containers.
I launch the containers normally, specifying the exposed ports in the Dockerfile and mapping them to a port on the host machine, such as -p 7070:7070, where 7070 is a port exposed in the Dockerfile.
When I run the applications through the java -jar [application_name.war] command, the applications work fine and they can all communicate.
However, when I run the applications in Docker containers, I get a connection refused error, for example when the client tries to connect to the rest-api at http://localhost:7070.
But the docker ps command shows that the containers are all running and listening on the exposed and mapped ports.
I have no clue why the containers aren't recognizing that the other containers are running and listening on their ports.
Does this have anything to do with iptables?
Any help is appreciated.
Thanks
EDIT 1: The applications, when run inside containers, work fine on my machine and don't throw any connection refused errors. The error only happens on that particular other machine.
I used container linking to solve this problem. Make sure you add --link <name>:<alias> at run time to the container you want linked. <name> is the name of the container you want to link to and <alias> will be the host/domain of an entry in Spring's application.properties file.
Example:
spring.data.mongodb.host=mongodb works if the alias supplied at run time is 'mongodb':
--link myContainerName:mongodb
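Put together, a minimal sketch with hypothetical names (note that --link is a legacy feature; on current Docker versions a user-defined network gives the same name resolution via container names):
# start MongoDB first, then link the rest-api container to it under the alias "mongodb"
docker run -d --name myContainerName mongo
docker run -d --name rest-api --link myContainerName:mongodb -p 7070:7070 rest-api-image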
