Docker for Windows Host Network Stack Use

Currently, I'm working on a project where we are using ROS, bebop_autonomy, and OpenCV to control a Parrot Bebop 2 autonomously. The machines we use in the workspace run Ubuntu 14.04.5, and I can start a container from an image
I created with "docker run -it --network=host username/image". After configuring everything inside the container, the bebop_autonomy node runs fine and can communicate on the Bebop's network perfectly. Running ip addr in both the container and the host machine shows the same address, as you'd expect.
However, when I try to run it on my Windows machine, the container's IP is different from the host machine's, and I never receive any ACK packets when I try to communicate with the Bebop. I'm assuming this is because the packets aren't being sent to the right IP, or aren't being forwarded correctly.
I have tried creating my own network with "docker network create", setting the IP manually, and passing it to the run command as an argument, but I can't get it to work at all. I've also tried creating different switches in Hyper-V Manager, but nothing I've read in the last few days has helped me figure this out.
I've got a good handle on how Docker works, but most of the reference material I find assumes a host that is already running Linux. If I can't figure this out, it's almost useless for us to continue with Docker in the first place.
Is there any way to configure Docker for Windows to work in the same way that Docker works on Linux when providing --network=host?

I ended up achieving what I wanted by creating a separate network in Hyper-V Manager, setting that network to use only an external Wi-Fi adapter, and running the container on that network. There has to be a better way, though.
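For anyone trying the same thing, this is roughly what the Hyper-V side of that workaround looks like. It is only a sketch run from an elevated PowerShell: the adapter alias 'Wi-Fi', the switch name, and the Docker VM name are placeholders that will differ per setup.
# Create an external virtual switch bound only to the Wi-Fi adapter
New-VMSwitch -Name "BebopExternal" -NetAdapterName "Wi-Fi" -AllowManagementOS $true
# Attach the Docker VM (its name varies by Docker for Windows version, e.g. MobyLinuxVM) to that switch, then restart Docker
Connect-VMNetworkAdapter -VMName "MobyLinuxVM" -SwitchName "BebopExternal"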

Related

Updated Windows Docker and now it doesn't work outside localhost

I have been running a media cluster for some time without any issues. I have everything split across two different Docker networks... the first network just bridges the Docker instance to the local machine, and the second network is a Docker VPN container that I use for the other media services (an earlier version of what I am working on can be found here: https://github.com/Xander-Rudolph/MediaDocker)
The strangest thing happened today, though. I ran the Docker update for Windows and now Docker spools up without any errors or issues, but none of the services work outside of the machine running Docker. Usually I have port forwards in my router for a couple of the services (namely WordPress/Joomla, which are on the bridge) and they work outside of my local network, but none of them are working anymore. I was able to confirm it's not the DNS A record because I'm able to use the RDP ports I have mapped in my router, and when I test on another machine in the same network, it can't access the services via the internal IP (but it can RDP).
Does anyone have any idea what could have changed to break this? I've already updated all my Docker images and even rebuilt my VPN container (before I realized it's a networking issue). What are some steps I can take to troubleshoot why Docker is only accessible from localhost?
Update
I've been able to rule out the Docker update as the root cause... I upgraded Docker on my laptop (which was previously running the same version as my desktop) and it's not having the same issue... so this must be something specific to this desktop... No idea what the issue is... I will try a Linux VM on the desktop instead of Docker for Windows...
Update 2
After a lot of screwing around in both a VM and in WSL, I'm still only able to access the Docker services from localhost, not from a different machine on my network or via the IP on the host machine (perhaps something similar to this: Can't access localhost via IP address). RDP does work, so the computer is accessible, but the services are not.
I'm not sure if this is a result of a Docker networking config or a Windows networking config (I'm using WSL with Docker installed on Ubuntu 20.04), but I'm not seeing anything that sticks out. I'm going to remove the Docker-for-Windows tag, but this is definitely a networking issue, and I suspect it has something to do with the fact that the containers are running behind a VPN... although I don't know why I would then be able to access them on localhost but not via the IP from another VM...
When I run
netstat -a -o
in WSL, I can see the established connections on localhost, e.g.:
tcp 0 0 localhost:7878 localhost:37520 ESTABLISHED
but when I look on the host machine (for WSL) I don't see the connection. I tried using netsh to create a firewall rule to see if that would help:
netsh advfirewall firewall add rule name="TCP Port 7878" dir=in localport=7878 protocol=TCP action=allow
but it didn't have any effect.
Any suggestions for ways to trace the network to see where/how it's failing or getting blocked would be extremely helpful.
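To be concrete, the sort of check I'm after is something like this (a rough sketch; 7878 is the port from above, and ss may need the iproute2 package inside WSL):
# On the Windows host: is anything listening on, or forwarded to, the port?
netstat -ano | findstr :7878
# Inside WSL: is the service bound to 0.0.0.0 (reachable externally) or only to 127.0.0.1?
ss -tlnp | grep 7878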
Your question: "...What are some steps I can take to troubleshoot why Docker is only accessible from localhost?..."
To start troubleshooting: do you have multiple network adapters (Ethernet, Wi-Fi, etc.) present on the host? If so, the priority of these adapters needs to be configured in the correct order so that the Windows networking stack can choose gateway routes correctly.
To fix this, make sure your primary internet-connected network adapter has the lowest InterfaceMetric value. You can check and change this with these PowerShell commands from an elevated console:
Get-NetIPInterface -AddressFamily IPv4 | Sort-Object -Property InterfaceMetric -Descending
In the output, verify that the host's primary internet-connected network adapter has the lowest InterfaceMetric value; if not, change it:
# Use this command to make the change; for example, if your
# primary adapter's InterfaceAlias is 'Wi-Fi':
Set-NetIPInterface -InterfaceAlias 'Wi-Fi' -InterfaceMetric 3
Step two: if your host's primary network adapter is bridged because you have an External virtual switch set up in Hyper-V, then the External virtual switch should be the one with the lowest InterfaceMetric value instead.
Lastly, verify your routing tables. When you run the command below, the last line should show the primary adapter's gateway address along with its ifMetric value:
Get-NetRoute -AddressFamily IPv4
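If the full table is noisy, a narrower check like this (just a sketch) lists only the IPv4 default routes and which interface and gateway they use:
# Show only default routes, lowest metric first
Get-NetRoute -AddressFamily IPv4 -DestinationPrefix "0.0.0.0/0" | Sort-Object -Property RouteMetric | Format-Table InterfaceAlias, NextHop, RouteMetric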
If you’re using Docker Toolbox then any port you publish with docker run -p will be published on the Toolbox VM’s private IP address.
docker-machine ip will tell you.
It is frequently
192.168.99.100
Taken from: https://forums.docker.com/t/cant-connect-to-container-on-localhost-with-port-mapping/52716/25
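As a hedged illustration of that (the machine name 'default' and the port 7878 are just examples):
# Find the Toolbox VM's address, then test a published port against that address instead of localhost
docker-machine ip default
curl http://192.168.99.100:7878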
After several attempts using the references below, I was still not getting anywhere. The recommendation by @derple didn't get me anywhere (since I was in WSL), but in the article he linked, someone said they switched to Linux and then uninstalled and reinstalled Docker Desktop... and for some stupid reason that works.
These are the exact steps I took to fix it:
Uninstall Docker Desktop
Install WSL and Docker inside an Ubuntu 18.04 instance in WSL
Test Docker in WSL with localhost (still worked only on localhost)
Uninstall WSL using Windows Add/Remove Features
Reinstall Docker Desktop
Oddly, Get-NetIPInterface and Get-NetRoute look exactly the same as they did before the uninstall and reinstall, but things seem to be working now... I have no idea why the above worked...

Docker containers on WSL2 don't get added to the bridge network

Issue: My containers (all of which are web servers) can't communicate with each other by container name (the DNS lookup fails). I can make them communicate by creating a new network and adding each created container to that network, but I'd prefer not to have to do this manually.
Details: According to the docs, all new containers should automatically get added to the bridge network and be able to communicate with each other simply via container_name:port. However, on WSL2, even though the bridge network exists, the containers don't seem to be added to it, because they can't communicate with each other by name.
Workarounds that I've tried:
I am making it work right now by creating a network and adding containers to that network. However, this is cumbersome and not feasible once I eventually have a large number of containers.
docker-compose is an idea, but my integration test suite creates containers from inside it, so all my integration tests would stop working (and I'd have to switch to a new integration test suite entirely).
Is there a way that I can make new containers automatically join the bridge network (or my own network) without using docker-compose?
Docker Desktop version: 3.2.2 (61853)
Windows 10; Build 19042.928
Turns out my containers WERE getting added to the default bridge network. However, their not being able to communicate with each other by name is by design: containers on the default bridge network can't reach each other by container name; they must use IPs to communicate.
docker run --network="bridge" <mycontainer>
You can check exactly what is going on inside with
docker inspect <containerID>
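For comparison, a minimal sketch of the two behaviours (the network, container, and image names below are just placeholders):
# On a user-defined network, name resolution works out of the box
docker network create mynet
docker run -d --name api --network mynet nginx:alpine
docker run --rm --network mynet alpine ping -c 1 api
# On the default bridge you would have to look up the container's IP and use that instead
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' api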
I would go through these checks to isolate the issue (example commands for each are sketched after this list):
1- Check that the bridge network itself works on the WSL system; WSL is still fairly new and has some issues.
2- Check the container itself; if it starts correctly, Docker is creating containers fine.
3- Try reaching the other container by IP; if the IP works but the name doesn't, it is purely a DNS issue.
4- Following on from point 3, check whether DNS resolution inside the containers is functioning correctly.
If possible, could you share the exact error and the DNS status?
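A rough sketch of commands for each of those checks (the container name and the IP 172.17.0.2 are placeholders):
# 1 - is the default bridge there, and which containers are attached to it?
docker network inspect bridge
# 2 - do containers start and get an address at all?
docker run --rm alpine ip addr
# 3 - does the other container respond by IP, and does its name resolve?
docker run --rm alpine ping -c 1 172.17.0.2
docker run --rm alpine nslookup othercontainer
# 4 - which DNS server are containers actually using?
docker run --rm alpine cat /etc/resolv.conf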

Why can't I properly set up a docker swarm when my docker is set up to use Linux containers?

I've been trying to set up a Docker swarm between two laptops that run Windows 10 and are on the same network. I've opened the necessary ports and have even tried disabling the firewall on both. However, every time I try to connect to the manager (I tried each of them as the manager), I get either a timeout error or a "connection is unavailable" error.
I tried reinstalling Docker on both machines and, while doing so, noticed the checkbox 'Use Windows Containers'. I checked it and, after completing the installation, was able to set up the swarm properly.
I tried changing it back to use Linux containers but encountered the same problems again.
Can anyone explain why this is so? I assumed that using either Windows or Linux containers wouldn't really matter when setting up a swarm. I want to understand so I can check whether I can still create a swarm with Linux containers.
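For reference, the "necessary ports" for swarm mode are the ones Docker documents (2377/tcp, 7946/tcp+udp, 4789/udp); opening them looks roughly like this (a sketch using netsh from an elevated prompt; the rule names are arbitrary):
netsh advfirewall firewall add rule name="Swarm management" dir=in action=allow protocol=TCP localport=2377
netsh advfirewall firewall add rule name="Swarm node TCP" dir=in action=allow protocol=TCP localport=7946
netsh advfirewall firewall add rule name="Swarm node UDP" dir=in action=allow protocol=UDP localport=7946
netsh advfirewall firewall add rule name="Swarm overlay VXLAN" dir=in action=allow protocol=UDP localport=4789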

Docker for Windows swarm overlay networking, connecting to the swarm from outside or localhost

I cannot connect to the published port on a swarm that uses overlay networking. I am using Docker for Windows with Windows containers. Both Windows and Docker are fully upgraded. After Windows' 1709 update, I was hoping this issue would be resolved. I looked for information on the Internet to see if I was doing something wrong, to no avail. I would like to know if anyone has successfully been able to get this working.
On a side note, when I publish the port on my machine with docker run -p 80:80 without using swarm, "localhost" does not work either. I think this is a known limitation, though. Both cases work when I switch to Linux containers.
Expected behavior
I am running a dotnet kestrel web server service. I should be able to connect to my service using the published port.
Actual behavior
Firefox gives me timeout, opera straight away returns connection refused. Cannot telnet into it either. Container IP's assigned by the overlay network do not work either.
Information
docker service ls gives me the following (output screenshot not reproduced here):
Ports cannot be seen there; is it because the publish mode is host? The port information is available in the output of docker service ps.
When I change the publish mode, I can scale it as well and the port information is shown in docker service ls, although I still cannot connect. The run below is without the publish mode=host parameter (output likewise not reproduced):
For more info, this is the output of docker network ls; I wonder if I need some sort of bridge network like on Linux.
Steps to reproduce the behavior
Initialise swarm
Start the service; in my case, a simple web service built using the aspnetcore:latest image. I tried different parameters and even used a docker-stack.yml:
docker service create --name=web --publish mode=host,published=80,target=80 web:aspnetcorelatest (in this case I was unable to scale it on the same machine, which is normal, I guess)
docker service create --name=web --publish published=85,target=80 web:aspnetcorelatest
Try to connect using http://localhost or another IP (sketched below). I tried connecting over VPN, from another machine, and via the Internet-facing IP.
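The connectivity check in that last step looks something like this (a sketch; the host name and the published port 85 are placeholders):
# From another machine on the network, and from the swarm host itself
Test-NetConnection -ComputerName swarm-host -Port 85
Test-NetConnection -ComputerName localhost -Port 85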

Easy, straightforward, robust way to make host port available to Docker container?

It is really easy to mount directories into a Docker container. How can I just as easily "mount a port into" a Docker container?
Example:
I have a MySQL server running on my local machine. To connect to it from a Docker container, I can mount the mysql.sock socket file into the container. But let's say that for some reason (like intending to run a MySQL slave instance) I cannot use mysql.sock to connect and need to use TCP.
How can I accomplish this most easily?
Things to consider:
I may be running Docker natively if I'm using Linux, but I may also be running it in a VM if I'm on Mac or Windows, through Docker Machine or Docker for Mac/Windows (Beta). The answer should handle both scenarios seamlessly, without me as the user having to decide which solution is right depending on my specific Docker setup.
Simply assigning the container to the host network is often not an option, so that's unfortunately not a proper solution.
Potential solution directions:
1) I understand that setting up proper local DNS and making the Docker container (network) talk to it might be a proper, robust solution. If there is such a DNS service that can be set up with 1, max 2 commands and then "just work", that might be something.
2) Essentially what's needed here is something that listens on a port inside the container and, like a sort of proxy, routes traffic between the TCP/IP participants. There's been discussion on this closed Docker GH issue that shows some ip route command-line magic, but that's a bit too much of a requirement for many people, myself included. But if there were something akin to this that was fully automated while understanding Docker and, again, possible to get up and running with one or two commands, that would be an acceptable solution.
I think you can run your container with the --net=host option. In that case the container binds to the host's network and can access all the ports on your local machine.
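A hedged illustration of that, plus the Docker Desktop alternative for the Mac/Windows case the question mentions (the mysql client invocation and the port are only examples):
# Native Linux: with host networking, 127.0.0.1:3306 inside the container is the host's MySQL
docker run --rm -it --network=host mysql:8 mysql -h 127.0.0.1 -P 3306 -u root -p
# Docker for Mac/Windows: --net=host is not effective there, but the special name
# host.docker.internal resolves to the host machine instead
docker run --rm -it mysql:8 mysql -h host.docker.internal -P 3306 -u root -p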
