WSL2 LAN redirection issue when Docker is installed

My current setup is my laptop and desktop connected to the same network. I do web dev, so I code on my laptop and view the results in my desktop's browser. After installing Docker I can no longer directly access my laptop's web servers that run inside WSL2; I can only reach them if they're running inside a Docker container.
This issue wasn't present in WSL1. What happens is this:
If you install Docker Desktop on your Windows machine and enable the new WSL2 integration, it modifies your Windows 'hosts' file (found at %SYSTEMROOT%\System32\drivers\etc\hosts).
Docker automatically adds the following:
# Added by Docker Desktop
192.168.1.77 host.docker.internal
192.168.1.77 gateway.docker.internal
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
Note that 192.168.1.77 is my laptop's static IP, which I set in my router.
My guess is that these entries are redirecting my desktop's requests to Docker, completely breaking direct access.
This is not ideal, because it means I'd have to build a container just to code a simple React app, which adds a lot of useless complexity.
PS: I have tried the following entries in the hosts file, with no success:
192.168.1.77 localhost
also:
172.22.3.92 ubuntu.wsl # managed by wsl2-host (service that creates a hostname for WSL2 ip)
192.168.1.77 ubuntu.wsl
The only workaround is to disable Docker and clean up the hosts file, which isn't really how it's supposed to work.

Add this to /etc/wsl.conf inside the WSL distro (generateHosts is a per-distro setting, not one for the Windows-side ~/.wslconfig):
[network]
generateHosts = false
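A minimal sketch of applying this from inside the WSL distro (the heredoc approach and the restart step are just one way to do it):
# inside the WSL2 distro: append the setting to /etc/wsl.conf
sudo tee -a /etc/wsl.conf <<'EOF'
[network]
generateHosts = false
EOF
# then, from a Windows prompt, restart WSL so the change takes effect
wsl --shutdown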


Docker nodejs server running in VirtualBox guest OS (Ubuntu Server 20.04): access via host (Win10) localhost

I'm using docker inside VirtualBox (Ubuntu Server 20.04), since I cannot use Docker Desktop in the host (Windows 10).
I have a Docker Node.js container on port 3000. On my host I can access it through 192.168.56.110:3000, where 192.168.56.110 is the IP address of the VM, but I need to access it through localhost:3000.
The setup is of course similar to a docker-toolbox installation (as in this question), for which I've found that, to put it simply, localhost on the host also does not work.
I've tried to map localhost in Windows 10, as suggested in a few answers like this one, by adding to C:\Windows\System32\Drivers\etc\hosts:
192.168.56.110 localhost
192.168.56.110 dev.com
But while dev.com works right away as expected, localhost doesn't.
Is there a workaround?
When I run an Angular development server in the guest, I can access it via localhost:4200 from the host, but it is not completely clear to me what happens behind the scenes. Side question: how does ng serve lead to this "redirection" of localhost in the host OS?
Run a Windows command prompt as administrator and enter:
netsh interface portproxy add v4tov4 listenaddress=127.0.0.1 listenport=3000 connectaddress=<replace with docker ip address> connectport=3000
To confirm, run
netsh interface portproxy show all
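With the VM address from the question, the commands would presumably be:
netsh interface portproxy add v4tov4 listenaddress=127.0.0.1 listenport=3000 connectaddress=192.168.56.110 connectport=3000
netsh interface portproxy show all
After that, http://localhost:3000 on the Windows host should be forwarded to the container published on the VM.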
Good luck.

Unable to connect with Xdebug from a Docker container inside a WSL2 instance

I have to do PHP development, and for this I was given a Windows 10 machine; this is something I cannot change. So I use the WSL2 feature to set up a development server using Ubuntu 20.04.
First attempt:
Used Windows with Docker Desktop. I configured the environment but had issues: the mapping of the project volume into Docker caused important processes like Composer, git, etc. to be very slow, so this is unworkable for me.
Second attempt:
Set up a development environment directly in the WSL2 instance. This works; I'm able to connect with the Xdebug debugger using PhpStorm. But again, the rest of the operations are very slow, and this too is unworkable for me.
Third attempt:
I was advised to do the following: create a WSL2 Ubuntu 20.04 instance, install Docker on it, and store the project folder directly in \\wsl$. In this WSL2 instance I run a Docker webserver container, and the webserver becomes accessible via localhost.
This seems to work very well, though I'm not sure why... The websites running on the Docker webserver are very fast, and executing git or composer commands is fast. I open the project folder directly from the \\wsl$ location with PhpStorm.
The only issue I'm having is that I'm unable to start an Xdebug session using PhpStorm.
My question is: How to configure the development environment so I can use Xdebug?
Facts & specs
Windows 10 as host machine.
WSL instance: Ubuntu 20.04
Docker webserver instance: Ubuntu 20.04 (php7.4-fpm and apache2, Xdebug 3.0.3, port 9000)
The docker webserver container can access the host network (192.x.x.x)
The docker webserver container can access the WSL network (172.20.x.x)
I use the following xdebug settings:
xdebug.mode = debug
xdebug.client_host = host.docker.internal (this resolves to the 192.x.x.x address)
Any advice on how to make Xdebug work in this setup?
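For reference, the settings above would roughly correspond to an Xdebug 3 ini like the one below; the values come from the question, while start_with_request and the file path are assumptions about the container's PHP setup:
; e.g. /etc/php/7.4/fpm/conf.d/20-xdebug.ini inside the container (path is an assumption)
xdebug.mode = debug
xdebug.start_with_request = yes      ; assumption: start a debug session on every request
xdebug.client_host = host.docker.internal
xdebug.client_port = 9000            ; the question uses 9000 rather than Xdebug 3's default 9003
PhpStorm would still need to listen on the same port and have a server/path mapping for the project in \\wsl$.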

How can I access a service running on WSL2 from inside a Docker container?

I am using Windows 10 1909 and have installed WSL2 with Ubuntu 20.04 and Docker 19.03.13-beta2, installed via the Docker for Windows Edge release with the WSL2 option. The integration is working pretty well, but I have one issue which I cannot solve.
On the WSL2 instance, there are services running, exposing some ports (3000, 3001, 3002,...). From one of the docker containers, I need to access the services for a specific development scenario (API Gateway), and this I cannot get to work.
I have tried using the WSL2 IP address directly, but then the connection just times out. I have also tried using host.docker.internal, which resolves to something other than the WSL2 IP address, but it still doesn't work.
Is there a special trick I need to pull, or is this kind of routing currently not supported, but will be, or is this for some other reason not possible?
The other routings work, i.e. I can access all the service ports of the node.js processes inside WSL2 from the Windows browser, and I can also access the exposed service ports of the containers both from inside WSL2 and from Windows. It's just this one missing link I cannot make work.
What you need to do on the Windows machine is port-forward the port your service is listening on in the WSL machine; this script forwards port 4000:
netsh interface portproxy delete v4tov4 listenport="4000" # Delete any existing port 4000 forwarding
$wslIp=(wsl -d Ubuntu -e sh -c "ip addr show eth0 | grep 'inet\b' | awk '{print `$2}' | cut -d/ -f1") # Get the private IP of the WSL2 instance
netsh interface portproxy add v4tov4 listenport="4000" connectaddress="$wslIp" connectport="4000"
And to the container's docker run command you have to add:
--add-host=host.docker.internal:host-gateway
or if you are using docker-compose:
extra_hosts:
  - "host.docker.internal:host-gateway"
Then inside the container you should be able to run:
curl host.docker.internal:4000
and get a response!
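Put together as a docker-compose sketch (the service and image names are placeholders, not from the question):
services:
  api-gateway:
    image: my-gateway-image        # placeholder
    extra_hosts:
      - "host.docker.internal:host-gateway"
With the portproxy rule from the script above in place, curl host.docker.internal:4000 from inside this service should then behave as described.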
For what it's worth: This scenario is working if you use the WSL2 subsystem IP address.
It does not work if you use host.docker.internal - this DNS alias is defined in the containers, but it maps to the IP address of the Windows host, not of the WSL2 host, and that routing back inside the WSL2 host does not work.
The reason why this (probably temporarily) did not work is somewhat unclear - I will revisit this answer if the problem should reappear and I manage to track down what the actual problem may have been.
I ran into this problem with the latest Docker Desktop. I rolled it back to 4.2 and it worked.
Docker Desktop 4.2
Windows 19044.1466
Ubuntu 20.04
I have a Java service running on the Linux local host (I get its IP address with the ifconfig command), and my other containers, running on Docker Desktop with the WSL2 based engine, can communicate with my Java service using that IP address.
This sounds like the issue which is discussed here. For me the only thing that worked was running the docker container with --net=host and then using [::1] instead of localhost in the container to access other containers running in WSL.
So for example, container1 is started with docker run --net=host and then calls container2 like this: http://[::1]:8000/container2 (adjust port and path to your specific application)
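A minimal sketch of that, with hypothetical image and container names and the port/path from the example above:
# start container1 on the host network
docker run --net=host --name container1 my-image-1
# inside container1, reach container2 via the IPv6 loopback instead of localhost
curl http://[::1]:8000/container2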

Access Docker daemon on Host without knowing Host OS

I use docker-compose to spin up a few containers as part of an application I'm developing. One of the containers needs to start a docker swarm service on the host machine. On Docker for Windows and Docker for Mac, I can connect to the host docker daemon via the REST API using the "host.docker.internal" DNS name, and this works great. However, if I run the same compose file on Linux, "host.docker.internal" does not work (yet; it seems it may be coming in the next version of Docker). To make matters worse, on Linux I can use the "host" network mode to work around the issue, but that isn't supported on Windows or Mac.
How can I either:
Create a docker-compose file or structure a containerized application to be slightly different based on the host platform (windows|mac|linux) without having to create multiple docker-compose.yml files or different application code?
Access the host docker daemon in a consistent way regardless of the host OS?
If it matters, the container that is accessing the docker daemon of the host is using the docker python sdk and making api calls to docker over tcp without TLS (this is used for development only).
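For context, that kind of client would look roughly like this with the Python SDK; host.docker.internal and port 2375 are the conventional unencrypted-daemon values for a dev setup, not details taken from the question:
import docker  # the docker python sdk mentioned above

# connect to the host's daemon over plain TCP without TLS (development only)
client = docker.DockerClient(base_url="tcp://host.docker.internal:2375")
print(client.info().get("Name"))  # quick check that the daemon is reachable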
Update w/ Solution Detail
For a little more background, there's a web application (aspnet core/C#) that allows users to upload a zip file. The zip file contains, among other things, an exported docker image file. There's also an nginx container in front of all of this to allow for ssl termination and load balancing. The web application pulls out the docker image, then using the docker daemon's http api, loads the image, re-tags the image, then pushes it to a private docker repository (which is running somewhere on the developer's network, external to docker). After that, it posts a message to a message queue where a separate python application uses the python docker library to deploy the docker image to a docker swarm.
For development purposes, the applications all run as containers and thus need to interact with Docker running on the host machine as a standalone swarm node. SoftwareEngineer's answer led me down the right path. I mapped the docker socket from the host into the web application container at first, but ran into a limitation of .NET Core that won't be resolved until .NET 5: there's no clean way of doing HTTP over a unix socket.
I worked around that issue by eventually realizing that nginx can reverse proxy http traffic to a unix socket. I setup all containers (including the dynamically loaded swarm service from the zips) to be part of an overlay network to give them all access to each other and allowing me to hit an http endpoint to control the host machine's docker/swarm daemon over http.
The last hurdle I ran into was that nginx couldn't write to the mapped-in /var/run/docker.sock file, so I modified nginx.conf to allow it to run as root within the container.
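A minimal sketch of the nginx piece described above; the listen port and the user directive are assumptions based on the description, not the author's actual config:
user root;                      # lets the workers read/write the mounted socket
events {}
http {
  server {
    listen 2375;                # port is an assumption
    location / {
      # reverse proxy HTTP requests to the docker socket mounted into the nginx container
      proxy_pass http://unix:/var/run/docker.sock:/;
    }
  }
}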
As far as I can tell, the docker socket is available at the path /var/run/docker.sock on all systems. I have personally verified this with a recent Linux distro (Ubuntu), Windows 10 Pro running Docker for Windows (2.2.0) with both WSL2 (Ubuntu and Alpine) and the windows cmd (cli) and powershell. From memory, it works with OSX too, and I used to do the same thing in WSL1.
Mapping this into a container is achieved on any terminal with the -v, --volume, or --mount flags. So,
docker container run -v /var/run/docker.sock:/var/run/docker.sock <image>
This mounts the socket into an identical path within the container, which means you can access the socket using the standard docker client (docker) from within the container with no extra configuration. Using this path inside a Linux container is recommended because it is the standard location and is likely to be less confusing to anyone maintaining your code in the future (including yourself).
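In a compose file, the same mapping would presumably be a volume entry like this (service and image names are placeholders):
services:
  deployer:
    image: my-deployer-image     # placeholder
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock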

Docker Desktop for Windows: Cannot ping google.com from windows containers

I was creating a container using microsoft/windowsservercore image. And then when I tried to ping google.com from inside the container, I got this error:
Ping request could not find host www.google.com. Please check the name
and try again.
Then I switched to Linux container mode in Docker for Windows and tried the same in an Ubuntu container; this time it worked fine. Then, when I switched back to Windows container mode and tried the same thing again, it worked as well. Although my issue was resolved, I still don't understand what caused it in the first place.
Docker for Windows and Docker for Linux have different default network settings.
Typically, the default for Linux is bridged mode, while on Windows you have NAT.
You can alter your configuration with the Network Connection Settings for Windows.
See: https://docs.docker.com/docker-for-windows/#network
The first option for me is always to look at the network section when executing docker inspect *containername*. This command gives you information about your network settings for the container. Other options are to check your firewall settings.
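For example, to dump just that section (replace containername with the name from docker ps):
docker inspect --format '{{json .NetworkSettings.Networks}}' containername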
In general I usually use ping 8.8.8.8 since www.google.com cannot be pinged even from my standard windows machine.
