Can't connect to WSL2 localhost server from WSL2 docker container - docker

I am running a simple web server on https://0.0.0.0:4000 on my WSL2 Ubuntu (also reachable as https://local.phx-cd.shoepping.at:4000 through a mapping to 127.0.0.1 in the Ubuntu hosts file). I can connect to it from both Ubuntu and the Windows host - so far so good. But additionally, in my Docker for Windows with WSL2 integration, I run a Selenium Chrome container (on a bridge network) which connects to and tests things on that web server, and it can't connect to it!
I connected to the container and tried to curl the web server - connection refused. Since I dual boot on this computer, I switched to my Linux distro, ran the web server there and Selenium in Docker on Linux, and the connection to the local web server worked. So I think it has something to do with WSL2.
My docker-compose.yaml (I left out my selenium hub config)
selenium-chrome-local:
  image: selenium/node-chrome-debug:3.141.59
  restart: always
  ports:
    - 5901-5902:5900
  volumes:
    - /dev/shm:/dev/shm
    - ../../temp:/home/seluser/Downloads
  depends_on:
    - selenium-hub-local
  environment:
    - SCREEN_WIDTH=1920
    - SCREEN_HEIGHT=1080
  extra_hosts:
    - "local.phx-cd.shoepping.at:10.99.99.1"
  networks:
    - selgrid
    - dockerhost
networks:
  selgrid:
  dockerhost:
    driver: bridge
    ipam:
      config:
        - subnet: 10.99.99.0/24
Let me know if you need more config. Thanks.

Are you sure that the Ubuntu WSL2 instance is running bridged? By default, WSL2 instances run NAT'd (whereas WSL1 instances ran bridged). So, while yes, the Docker network is bridged, it still can't access the NAT'd WSL2 VM without some extra work.
I'm fairly sure that you are running into the root problem described in WSL issue #4150. If so, here are some things to try ...
Option #1 - Port forwarding to the WSL2 instance
There are several workarounds suggested in that GitHub issue, but the basics that would work for your case boil down to forwarding port 4000 from the Windows host interface to the WSL2 instance's private IP address. In PowerShell:
netsh interface portproxy delete v4tov4 listenport="4000" # Delete any existing port 4000 forwarding
$wslIp=(wsl -d Ubuntu -e sh -c "ip addr show eth0 | grep 'inet\b' | awk '{print `$2}' | cut -d/ -f1") # Get the private IP of the WSL2 instance
netsh interface portproxy add v4tov4 listenport="4000" connectaddress="$wslIp" connectport="4000"
Note that you'll need to do this after each reboot, or else set up a script that runs at logon as described in the GitHub issue (see this comment).
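If the container still can't get through after setting this up, it may also be worth checking that the proxy rule is in place and that Windows Defender Firewall allows inbound connections on the port. A quick sketch (the rule name is just a placeholder):
netsh interface portproxy show v4tov4   # list the active port forwarding rules
netsh advfirewall firewall add rule name="WSL2 web server 4000" dir=in action=allow protocol=TCP localport=4000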
Option #2 - WSL1
Assuming it fits your workflow and your web app runs on it, you could also simply use WSL1 instead of WSL2. You can try this out by (a concrete example is sketched below):
Backing up your existing distro (from PowerShell or cmd, use wsl --export <DistroName> <FileName>)
Importing the backup into a new WSL1 instance with wsl --import <NewDistroName> <InstallLocation> <FileNameOfBackup> --version 1
It's possible to simply change versions in place, but I like to have a backup before doing it anyway, and as long as you are backing up, you may as well leave the original in place.
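For example, with hypothetical names and paths (adjust the distro name, backup location, and install directory to your setup):
wsl --export Ubuntu C:\wsl-backups\ubuntu.tar
wsl --import Ubuntu-WSL1 C:\wsl\Ubuntu-WSL1 C:\wsl-backups\ubuntu.tar --version 1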
Possible Option #3 - socat forwarding or tunnel
While I haven't tested your particular use case directly, I have played around with socat in WSL2 with success. From the looks of it, socat could be used for port forwarding from WSL2 to (at least) the Windows host (which would then be accessible to the Docker container). See this comment for an example on GitHub of a use case similar to yours.
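As a rough, untested sketch of the idea (the relay has to run somewhere the container can already reach, e.g. a WSL1 distro, and the WSL2 address below is a placeholder):
# look up the WSL2 address beforehand, e.g. with: ip -4 addr show eth0 (inside WSL2)
WSL2_IP=172.28.123.45   # placeholder, substitute the real address
socat TCP-LISTEN:4000,fork,reuseaddr TCP:$WSL2_IP:4000   # relay connections on port 4000 to the WSL2 web server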
Possible Option #4 - WSL2 in bridge mode
The GitHub thread referenced above also has some details on how to enable bridge-mode on the WSL2 interface using Hyper-V. I believe this requires Windows 10 Professional or Enterprise. It also has to be done after each reboot, as with Option 1. Again, probably overkill for this case, if port forwarding or WSL1 can accomplish what you need.

Run this command in PowerShell as Administrator:
Replace {#requiredWindowsPort} with the port that will be used in the browser.
Replace {#requiredWSL2Port} with the port running in WSL2 that you want to connect to.
netsh interface portproxy add v4tov4 listenport={#requiredWindowsPort} listenaddress=0.0.0.0 connectport={#requiredWSL2Port} connectaddress=$($(wsl hostname -I).Trim())
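As a concrete example, forwarding port 4000 (the web server port from the question at the top) on both sides would look like this:
netsh interface portproxy add v4tov4 listenport=4000 listenaddress=0.0.0.0 connectport=4000 connectaddress=$($(wsl hostname -I).Trim())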

Related

Xdebug inside Colima docker container doesn't connect to PhpStorm debugger on Mac

I am trying to use Colima to run an apache-php docker container. My uni provides docker images derived from upstream ones configured for our course using docker-compose.
The container works as it should but I can't get its Xdebug to connect to my PhpStorm.
This is what it says in the Xdebug log:
Creating socket for 'host.docker.internal:9003', poll success, but error: Operation now in progress (29).
This tells me absolutely nothing.
The setup is admittedly quite complex (x86 Apache run via QEMU in Docker in a Linux VM in macOS on an ARM CPU), but I can do nc host.docker.internal 9003 from any docker container, so I have no idea why Xdebug isn't able to reach my host. (It only works when the IDE is running and on no other ports, so it's definitely connecting to PhpStorm.)
Any idea what could be going on here?
On Colima, the IP address is hard-coded to "192.168.5.2", so setting xdebug.client_host=192.168.5.2 should do the trick. There is now also an alias for it, called host.lima.internal.
As per this documentation page.
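For reference, a minimal Xdebug 3 configuration sketch matching that suggestion (the exact ini file location depends on the image, and xdebug.start_with_request is my addition):
xdebug.mode=debug
xdebug.start_with_request=yes
xdebug.client_host=192.168.5.2   ; or host.lima.internal
xdebug.client_port=9003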
The problem is the uni's docker-compose.yml which configured the container with:
extra_hosts:
  - "host.docker.internal:host-gateway"
and apparently that can break host.docker.internal in some situations: https://github.com/docker/for-linux/issues/264#issuecomment-759737542
The solution is to remove those two lines.

Can't connect to localhost of the host machine from inside of my Docker container

The question is basic: how do I connect to the localhost of the host machine from inside a Docker container?
I tried answers from this post, using --add-host host.docker.internal:host-gateway or --network=host when running my container, but none of these methods seem to work.
I have a simple hello world webserver up on my machine, and I can see its contents with curl localhost:8000 from my host, but I can't curl it from inside the container. I tried curl host.docker.internal:8000, curl localhost:8000, and curl 127.0.0.1:8000 from inside the container (depending on the solution I used to make localhost available there), but none of them work and I get a Connection refused error every time (see the sketch below).
I asked somebody else to try this out for me on their own machine and it worked for them, so I don't think I'm doing anything wrong.
Does anybody have any idea what is wrong with my containers?
Host machine: Ubuntu 20.04.01
Docker version: 20.10.7
Used image: Ubuntu 20.04 (and i386/ubuntu18.04)
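For context, the attempts described above correspond roughly to commands like these (a hypothetical reproduction, not taken from the original post):
docker run --rm --add-host=host.docker.internal:host-gateway alpine:3 sh -c "apk add --no-cache curl && curl -v http://host.docker.internal:8000"
docker run --rm --network=host alpine:3 sh -c "apk add --no-cache curl && curl -v http://localhost:8000"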
Temporary solution
This does not completely solve the problem for production purposes, but at least to get localhost working, adding these lines to docker-compose.yml solved my issue for now (source):
services:
  my-service:
    network_mode: host
I am using Apache NiFi to expose Java REST endpoints, with the same Ubuntu and Docker versions, so in my case it looks like this:
services:
  nifi:
    network_mode: host
After changing docker-compose.yml, I recommend stopping the containers, removing them (docker-compose rm - don't use this if you need to keep some containers; remove individual ones with docker container rm container_id instead) and building again with docker-compose up --build.
In this case, I needed to use a different localhost IP to reach my service in the browser (NiFi started on another IP - 127.0.1.1 - but works fine as well).
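The rebuild sequence described above, as a sketch (assuming all containers of the project can be thrown away):
docker-compose stop
docker-compose rm -f   # remove all stopped service containers
# or remove only specific containers instead: docker container rm <container_id>
docker-compose up --build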
Searching for the problem / deeper into ubuntu-docker networking
First, I will write down some commands that may be useful for finding a solution to the docker-ubuntu networking issue:
ip a - show all routing, network devices, interfaces and tunnels (mainly I can observe state DOWN with docker0)
ifconfig - list all interfaces
brctl show - ethernet bridge administration (docker0 has no attached interface / veth pair)
docker network ls - list docker networks - names, drivers, scope...
docker network inspect bridge - I can see that the docker0 bridge has no attached docker containers - an empty and unused bridge
(useful link for ubuntu-docker networking explanation)
I guess that the problem lies with the veth pair (see the link above), because when docker-compose runs, a new bridge is created (not docker0) that is connected to a veth pair in my case, and docker0 is not used. My guess is that if docker0 were used, then host.docker.internal:host-gateway would work. Somehow in this Ubuntu networking setup docker0 is not used as the default bridge, and this maybe should be changed.
I don't have much time left for this right now, but I suppose someone can use this information and resolve the core of the problem later on.

Correct IP address to access web application in apache docker container

I have an easy apache docker setup defined in a docker-compose.yml:
services:
  apache:
    image: php:7.4-apache
    command: /bin/bash -c "/var/www/html/startup.sh && exec 'apache2-foreground'"
    volumes:
      - ./:/var/www/html
      - /c/Windows/System32/drivers/etc/hosts:/tmp/hostsfile
    ports:
      - "80:80"
From the startup.sh script I want to modify the host OS's hosts file through the volume, dynamically adding an entry that resolves the hostname test.local to the IP address of the dockerized web application, like so:
<ip-address> test.local
This way I should be able to open the application with the specified hostname http://test.local in my local browser.
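A hypothetical sketch of what that startup.sh step could look like (the IP lookup and the grep guard are my assumptions, not from the question):
#!/bin/bash
# append this container's IP to the mounted Windows hosts file, if no entry exists yet
CONTAINER_IP=$(hostname -i | awk '{print $1}')
grep -q "test.local" /tmp/hostsfile || echo "$CONTAINER_IP test.local" >> /tmp/hostsfile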
Before writing the startup.sh script I wanted to try to manually open the application at http://172.19.0.2, using the container's IP address that I got from
docker inspect apache_test
But the page won't open: ERR_CONNECTION_TIMED_OUT
Shouldn't I be able to access the application from that IP? What am I missing? Do I use the wrong IP address?
BTW: I am using Docker Desktop for Windows with the Hyper-V backend
Edit: I am able to access the application via http://localhost but since I want to add a new entry to the hosts file this is not the solution.
Apparently in newer versions of Docker Desktop for Windows you can't access (ping) your Linux containers by IP.
Now there is a really dirty workaround for this that involves changing a .ps1 file of your Docker installation to get back the DockerNAT interface on Windows. After that you need to add a new route to your Windows routing table as described here:
route /P add <container-ip> MASK 255.255.0.0 <ip-found-in-docker-desktop-settings>
Then you might be able to ping your Docker containers from the Windows host. I didn't test it, though...
I found a solution to my original issue (resolving test.local to the container IP via the hosts file) while reading one of the threads linked above, here.
It involves binding the container to a free loopback IP in the 127.0.0.0/8 range in the ports section of docker-compose.yml:
ports:
  - "127.55.0.1:80:80"
After that you add the following to your hosts file:
127.55.0.1 test.local
And you can open your application at http://test.local
To do the same for other dockerized applications, just choose another free loopback address.
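For example, a second application could be bound to 127.55.0.2 (a hypothetical choice) in its own compose file:
ports:
  - "127.55.0.2:80:80"
with a matching hosts entry:
127.55.0.2 other-app.local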

Considering URL redirections: How to use GUIs of web applications running in different containers within the same docker network on a remote server?

I have the feeling that I am overlooking something obvious as my solutions/ideas so far seem too cumbersome. I have searched intensively for a good solution, but so far without success - probably because I do not know what to look for.
Question:
How do you interact with the graphical interfaces of web servers running in different containers (within the same Docker Network) on a remote server, given URL redirections between these containers?
Initial situation:
I have two containers (a Flask web application and a Tomcat server with OpenAM running on it) running on my docker host (Azure-VM).
On the VM I can access the content of both containers via the ports that I have opened.
Using ssh port forwarding I can interact with the graphical components of both containers on my local machine.
Both containers were created with the same docker-compose and can reach each other via their domain names without additional network settings.
So far I have configured OpenAM on my local machine using ssh port forwarding.
Problem:
The Flask web app references OpenAM by the domain name defined in docker-compose, and vice versa. I forward the port of the Flask container to my local machine. The Flask application is running and I can interact with it in my browser.
The system fails as soon as I am redirected from Flask to OpenAM on my local machine because the reference to the OpenAM container used by Flask is specific to the Docker network. Also, the port of the OpenAM Container is different.
In other words, the routing between the two networks is nonexistent.
Solutions Ideas:
Execute the requests on the VM using command-line tools.
Use a container with a headless browser that automatically executes the requests.
Use Network Setting 'Host' and execute the headless browser on the VM instead.
Route all requests through a single container (similar to a VPN) and use ssh port forwarding.
Simplified docker-compose:
version: "3.4"
services:
openam:
image: openidentityplatform/openam
ports:
- 5001:8080
command: /usr/local/tomcat/bin/catalina.sh run
flask:
build: ./SimpleHTTPServer
ports:
- 5002:8000
command: python -m http.server 8000
Route all requests through a single container - This is the correct approach.
See API gateway pattern
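As a rough illustration of that pattern (not part of the original answer), an nginx reverse proxy could be added to the compose file as the single published entry point, proxying to the internal service names:
  gateway:
    image: nginx:alpine
    ports:
      - 8080:80
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
with a minimal, hypothetical nginx.conf routing by path:
server {
  listen 80;
  location /openam/ { proxy_pass http://openam:8080/openam/; }
  location /        { proxy_pass http://flask:8000/; }
}
Only the gateway's port is published; the other services are then reached through it.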
This is the best solution that I could find so far. It is not suitable for production, but for prototyping, or if you are simply trying to emulate a server structure by using containers, it is an easy setup.
General Idea:
Deploy a third VNC container running a web browser and forward the port of this third container to your local machine. As the third container is part of the Docker network, it can naturally resolve the internal domain names, and the VNC installation on your local machine enables you to interact with the GUIs.
Approach
Add the VNC to the docker-compose of the original question.
Enable X11 forwarding on the server and client-side.
Forward the port of the VNC container using ssh.
Install VNC on the client, start a new session, and enter the predefined password.
Try it out.
Step by Step
Add the VNC container (inspired by creack's post on stackoverflow) to the docker-compose file from the original question:
version: "3.4"
services:
openam:
image: openidentityplatform/openam
ports:
- 5001:8080
command: /usr/local/tomcat/bin/catalina.sh run
flask:
build: ./SimpleHTTPServer
ports:
- 5002:8000
command: python -m http.server 8000
firefoxVnc:
container_name: firefoxVnc
image: creack/firefox-vnc
ports:
- 5900:5900
environment:
- HOME=/
command: x11vnc -forever -usepw -create
Run the docker-compose: docker-compose up
Enable X11 forwarding on the server and client-side.
On the client side, run $ vim ~/.ssh/config and add the following lines:
Host *
    ForwardAgent yes
    ForwardX11 yes
On the server side, run $ vim /etc/ssh/sshd_config and edit the following lines:
X11Forwarding yes
X11DisplayOffset 10
Forward the port of the VNC container using ssh
ssh -v -X -L 5900:localhost:5900 gw.example.com
Make sure to include the -X flag for X11. The -v flag is just for debugging.
Install VNC on the client, start a new session and enter the predefined password.
Install VNC viewer on your local machine
Open the installed viewer and start a new session using the forwarded address localhost:5900
When prompted, type in the password 1234, which was set in the original Dockerfile of the VNC docker image (see creack's post linked above).
You can now either go to openam:8080/openam/ or flask:8000 within the browser of the VNC localhost:5900 session.
An even better solution that is clean, straightforward, and also works perfectly when running parts of the application on different virtual machines.
Setup and Use an SSH SOCKS Tunnel
For Google Chrome and macOS:
Set your container's network mode to host in the docker-compose file (or when running the container).
Start an SSH tunnel:
$ ssh -N -D 9090 [USER]@[SERVER_IP]
Add the SwitchyOmega proxy addon to your Chrome browser.
Configure SwitchyOmega by going to New Profile > Proxy Profile, clicking create, entering the same server IP as for the ssh command and the port 9090.
Open a new terminal tab and run:
"/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" \
--user-data-dir="$HOME/proxy-profile" \
--proxy-server="socks5://localhost:9090"
A new Chrome session will open up in which you can simply browse your docker applications.
Reference | When running Linux or Windows | Using Firefox (no addon needed)
The guide How to Set up SSH SOCKS Tunnel for Private Browsing explains how to set up an SSH SOCKS tunnel on macOS, Windows, or Linux, using Google Chrome or Firefox. I simply referenced the setup for macOS and Chrome in case the link should die.

How can I access a service running on WSL2 from inside a Docker container?

I am using Windows 10 1909 and have installed WSL2 with Ubuntu 20.04 and Docker 19.03.13-beta2, having installed the Docker for Windows Edge version with the WSL2 option. The integration is working pretty great, but I have one issue which I cannot solve.
On the WSL2 instance, there are services running, exposing some ports (3000, 3001, 3002,...). From one of the docker containers, I need to access the services for a specific development scenario (API Gateway), and this I cannot get to work.
I have tried using the WSL2 IP address directly, but then the connect just times out. I have also tried using host.docker.internal, which resolves to something else than the WSL2 IP address, but it still doesn't work.
Is there a special trick I need to pull, or is this kind of routing currently not supported, but will be, or is this for some other reason not possible?
This illustrates what I am trying to achieve:
The other routings work - i.e. I can access all the service ports coming from the node.js processes inside WSL2 from the Windows browser, and also I can access the exposed service ports from the containers both from inside WSL2 and from Windows. It's just this missing link I cannot make work.
So what you need to do on the Windows machine is forward the port that you are running on the WSL machine; this script forwards port 4000:
netsh interface portproxy delete v4tov4 listenport="4000" # Delete any existing port 4000 forwarding
$wslIp=(wsl -d Ubuntu -e sh -c "ip addr show eth0 | grep 'inet\b' | awk '{print `$2}' | cut -d/ -f1") # Get the private IP of the WSL2 instance
netsh interface portproxy add v4tov4 listenport="4000" connectaddress="$wslIp" connectport="4000"
And to the docker run command for the container you have to add
--add-host=host.docker.internal:host-gateway
or, if you are using docker-compose:
extra_hosts:
  - "host.docker.internal:host-gateway"
Then inside the container you should be able to curl to
curl host.docker.internal:4000
and get a response!
For what it's worth: this scenario works if you use the WSL2 subsystem IP address.
It does not work if you use host.docker.internal - this DNS alias is defined in the containers, but it maps to the IP address of the Windows host, not of the WSL2 host, and that routing back inside the WSL2 host does not work.
The reason why this (probably temporarily) did not work is somewhat unclear - I will revisit this answer if the problem reappears and I manage to track down what the actual problem was.
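For reference, the WSL2 address mentioned here can be looked up like this (eth0 is the usual default interface name inside WSL2):
# inside the WSL2 distro
ip -4 addr show eth0
# or, from PowerShell on the Windows host
wsl hostname -I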
I ran into this problem with the latest Docker Desktop. I rolled it back to 4.2 and it worked.
Docker Desktop 4.2
Windows 19044.1466
Ubuntu 20.04
I have a Java service running on the local Linux host (I found its IP address using the ifconfig command), and my other containers run on Docker Desktop using the WSL2-based engine; they can communicate with my Java service using that IP address.
This sounds like the issue which is discussed here. For me the only thing that worked was running the docker container with --net=host and then using [::1] instead of localhost in the container to access other containers running in WSL.
So for example, container1 is started with docker run --net=host and then calls container2 like this: http://[::1]:8000/container2 (adjust port and path to your specific application)
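A minimal sketch of that workaround, assuming a container2 service listening on port 8000 as in the example above:
docker run --rm --net=host alpine:3 sh -c "apk add --no-cache curl && curl -v 'http://[::1]:8000/container2'"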
