Considering URL redirections: How to use GUIs of web applications running in different containers within the same docker network on a remote server? - docker

I have the feeling that I am overlooking something obvious as my solutions/ideas so far seem too cumbersome. I have searched intensively for a good solution, but so far without success - probably because I do not know what to look for.
Question:
How do you interact with the graphical interfaces of web servers running in different containers (within the same Docker Network) on a remote server, given URL redirections between these containers?
Initial situation:
I have two containers (a Flask web application and a Tomcat server with OpenAM running on it) on my Docker host (an Azure VM).
On the VM I can reach the content of both containers via the ports that I have published.
Using ssh port forwarding I can interact with the graphical components of both containers on my local machine.
Both containers were created with the same docker-compose file and can reach each other via their service names without additional network settings.
So far I have configured OpenAM on my local machine using ssh port forwarding.
Problem:
The Flask web app references OpenAM by the service name defined in docker-compose, and vice versa. I forward the port of the Flask container to my local machine. The Flask application runs and I can interact with it in my browser.
The system fails as soon as Flask redirects me to OpenAM on my local machine, because the reference to the OpenAM container used by Flask is specific to the Docker network. The port of the OpenAM container is also different.
In other words, there is no routing between the Docker network and my local machine.
Solution ideas:
Execute the requests on the VM using command-line tools.
Use a container with a headless browser that automatically executes the requests.
Use Network Setting 'Host' and execute the headless browser on the VM instead.
Route all requests through a single container (similar to a VPN) and use ssh port forwarding.
Simplified docker-compose:
version: "3.4"
services:
openam:
image: openidentityplatform/openam
ports:
- 5001:8080
command: /usr/local/tomcat/bin/catalina.sh run
flask:
build: ./SimpleHTTPServer
ports:
- 5002:8000
command: python -m http.server 8000

Route all requests through a single container - This is the correct approach.
See API gateway pattern
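As a rough illustration of that pattern (not part of the original answer), a reverse-proxy service could be added to the compose file from the question as the single entry point. The gateway service, its port 5000, and gateway.conf are assumptions for this sketch; openam and flask are the service names from the compose file above:

  gateway:
    image: nginx:alpine
    ports:
      - 5000:80
    volumes:
      - ./gateway.conf:/etc/nginx/conf.d/default.conf

with a gateway.conf along these lines:

server {
    listen 80;
    # forward OpenAM traffic to the openam service
    location /openam/ {
        proxy_pass http://openam:8080/openam/;
    }
    # everything else goes to the Flask app
    location / {
        proxy_pass http://flask:8000/;
    }
}

You would then only forward port 5000 via ssh. Note that OpenAM may still issue redirects to its own host name, so this is a sketch rather than a drop-in solution.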

The best solution that I could find so far. It is not suitable for production. However, for prototyping, or if you simply want to emulate a server structure using containers, it is an easy setup.
General Idea:
Deploy a third VNC container running a web browser and forward the port of this third container to your local machine. As the third container is part of the Docker network it can naturally resolve the internal domain names, and a VNC viewer on your local machine lets you interact with the GUIs.
Approach
Add the VNC container to the docker-compose file of the original question.
Enable X11 forwarding on the server and client-side.
Forward the port of the VNC container using ssh.
Install a VNC viewer on the client, start a new session, and enter the predefined password.
Try it out.
Step by Step
Add the VNC container (inspired by creack's post on stackoverflow) to the docker-compose file from the original question:
version: "3.4"
services:
openam:
image: openidentityplatform/openam
ports:
- 5001:8080
command: /usr/local/tomcat/bin/catalina.sh run
flask:
build: ./SimpleHTTPServer
ports:
- 5002:8000
command: python -m http.server 8000
firefoxVnc:
container_name: firefoxVnc
image: creack/firefox-vnc
ports:
- 5900:5900
environment:
- HOME=/
command: x11vnc -forever -usepw -create
Run the docker-compose: docker-compose up
Enable X11 forwarding on the server and client-side.
On the client side, run $ vim ~/.ssh/config and add the following lines:
Host *
    ForwardAgent yes
    ForwardX11 yes
On the server side, run $ vim /etc/ssh/sshd_config and edit the following lines:
X11Forwarding yes
X11DisplayOffset 10
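After editing sshd_config, the SSH daemon usually needs to be restarted for the change to take effect. On a systemd-based server (an assumption, not part of the original steps) that would be:

sudo systemctl restart sshd   # or 'ssh', depending on the distribution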
Forward the port of the VNC container using ssh
ssh -v -X -L 5900:localhost:5900 gw.example.com
Make sure to include the -X flag for X11. The -v flag is just for debugging.
Install a VNC viewer on the client, start a new session, and enter the predefined password.
Install a VNC viewer on your local machine.
Open the installed viewer and start a new session using the forwarded address localhost:5900.
When prompted, type in the password 1234, which was set in the original Dockerfile of the VNC docker image (see creack's post linked above).
You can now go to openam:8080/openam/ or flask:8000 within the browser of the VNC localhost:5900 session.

An even better solution that is clean, straightforward, and also works perfectly when running parts of the application on different virtual machines.
Setup and Use an SSH SOCKS Tunnel
For Google Chrome and macOS:
Set the network mode to host in your docker-compose file (or docker run command).
Start an SSH tunnel:
$ ssh -N -D 9090 [USER]@[SERVER_IP]
Add the SwitchyOmega proxy addon to your Chrome browser.
Configure SwitchyOmega by going to New Profile > Proxy Profile, clicking create, selecting the SOCKS5 protocol, and entering localhost (the SOCKS proxy opened by the ssh command listens on your local machine) and the port 9090.
Open a new terminal tab and run:
"/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" \
--user-data-dir="$HOME/proxy-profile" \
--proxy-server="socks5://localhost:9090"
A new Chrome session will open up in which you can simply browse your docker applications.
Reference | When running Linux or Windows | Using Firefox (no addon needed)
The guide How to Set up SSH SOCKS Tunnel for Private Browsing explains how to set up an SSH SOCKS tunnel on Mac, Windows, or Linux using Google Chrome or Firefox. I only reproduced the setup for macOS and Chrome here in case the link dies.
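For Firefox, from memory of its settings (menu names may vary slightly between versions): open Settings > General > Network Settings > Settings..., choose Manual proxy configuration, set SOCKS Host to localhost and Port to 9090, select SOCKS v5, and optionally enable "Proxy DNS when using SOCKS v5" so host name lookups also go through the tunnel.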

Related

Export docker container through cloudflared

I have a NAS where I am running various web apps in docker containers through docker-compose. I want some of these web apps to be accessible through the internet, not only when I am connected to my home network.
The problem I'm currently facing is that while cloudflare is able to expose the default web apps (default NAS management 192.168.1.135:80 can be mapped to subdomain.domain.com, for instance), it is unable to expose any docker container I try to run (192.168.1.135:4444 cannot be mapped to subdomain2.domain.com), and I receive a 502 bad gateway error with every app I have tried so far.
The configuration shouldn't be the issue, and it's definitely not the NoTLSVerify flag because the apps run on HTTP and I have configured it that way, so I am out of options to know what is going on and how to solve it.
Looks like the apps you're running on your NAS are proxied through the docker runtime. Consequently, the IP:port you need to add to the cloudflare tunnel config is the one that is reachable from the Host (not the IP of the host itself).
If the host is 192.168.1.135, you need to find the IP (internal to the docker network) of the app that you want to access from the outside, typically in the 172.0.0.0/24 range.
Example: If the containers running the apps you want to access are running on 172.0.0.2:4444 for app1 and 172.0.0.3:5555 for app2, the cloudflare config would look like this:
tunnel: the_ID_of_the_tunnel
credentials-file: /root/.cloudflared/the_ID_of_the_tunnel.json
ingress:
  - hostname: yourapp1.example.com
    service: http://172.0.0.2:4444
  - hostname: yourapp2.example.com
    service: http://172.0.0.3:5555
  - service: http_status:404
See more details and a video here: How to redirect subdomain to port (docker)
Turns out the problem is due to how docker works with networks, not with how Cloudflare accesses them. I first had to create a network that connected both containers, since adding cloudflare to my docker-compose file didn't work for some reason.
Create a docker network: docker network create tunnel
Run the cloudflared container without specifying the network: docker run -d --name cloudflare cloudflare/cloudflared:latest tunnel --no-autoupdate run --token
Connect the cloudflared container to the network: docker network connect tunnel cloudflare
Run your container (note that the container should use, as you specified, the network name created earlier, but cloudflare should not be in your docker-compose file): docker-compose up
In the cloudflare tunnel config, specify the Docker-internal address of your container (as #lu4t suggested). You can identify the address with docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container
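Side note, not part of the original answer but standard Docker behaviour: because the cloudflared container sits on the same user-defined network, Docker's embedded DNS can usually resolve the other containers by name, so the ingress rules can also reference a container name instead of its IP (app1 below is a hypothetical container name):

ingress:
  - hostname: yourapp1.example.com
    service: http://app1:4444
  - service: http_status:404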

NGINX Reverse Proxy with Docker Host Mode for Local Development

Most of the things I'm finding online are about using docker-compose and the like to create a reverse proxy for local development of dockerized applications. This is for my local development environment.
I have a need to create an nginx reverse proxy that can route requests to applications on my local computer that are not running in docker containers (non-dockerized).
Example:
I start up a web app A (not in docker) running on http://localhost:8822
I start up another web app B (not in docker) running on https://localhost:44320
I have an already running publicly available api on https://public-url-for-api-app-a.net
I also have a public A record set up in my DNS for *.mydomain.local.com -> 127.0.0.1
I am trying to figure out how to use an nginx:mainline-alpine container in host mode to allow me to do the following (a sketch of a possible config follows the list below):
I type http://web-app-a.mydomain.local.com -> reverse proxy to http://localhost:8822
I type http://web-app-b.mydomain.local.com -> reverse proxy to https://localhost:44320
I type http://api-app-a.mydomain.local.com -> reverse proxy to https://public-url-for-api-app-a.net
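A minimal sketch of what such an nginx config might look like, using the host names and ports from the list above. This is an assumption on my part, not something from the original post, and it presumes nginx can actually reach localhost on the host, which is exactly what host network mode is supposed to provide:

server {
    listen 80;
    server_name web-app-a.mydomain.local.com;
    location / {
        proxy_pass http://localhost:8822;
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name web-app-b.mydomain.local.com;
    location / {
        proxy_pass https://localhost:44320;
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name api-app-a.mydomain.local.com;
    location / {
        proxy_pass https://public-url-for-api-app-a.net;
    }
}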
Ideally, this "solution" would run on both Windows and Mac but I am currently falling short in my attempts at this on my Windows machine.
Some stuff I've tried:
Following this tutorial, start up my nginx docker container in "host" mode via:
docker run --rm -d --network host --name my_nginx nginx:mainline-alpine
I'm unable to get it to load on http://localhost:80. I'm wondering if I'm hitting some limitation of Docker and Windows? I receive a "The site can't be reached" error here.
Custom building my own docker image with nginx configs and exposed ports (before trying host network mode)
Other relevant information:
Docker-Desktop on Windows version: 4.4.4 (73704)
Nginx Container via nginx:mainline-alpine tag.
Web App A = Front End Vue App
Web App B = Front End .NET Framework App
Web App C = Backend .NET Framework App
At this point I've read so many posts that my brain is mush, so it could very well be something obvious I'm missing. I'm beginning to think it may be better to simply run nginx.exe locally, but that's not ideal because I don't want to have to check binaries into my source in order for this setup to work.

Which Linux capability to use to properly run "sysctl -w net.ipv4.conf.tun0.route_localnet=1" in a Docker container?

I'm using an OpenVPN server in a Docker container for multiple client connections.
This container is located in a specific Docker network in which I have a web server as client target.
I want to publish the host name of my web server to clients so that they won't need to know its IP address in order to reach it.
To do so, I want to expose Docker's native DNS server to the OpenVPN clients and push the OpenVPN server's IP to them as their DNS server.
However, the Docker DNS server resides in the OpenVPN container, listening on 127.0.0.11 (with iptables internal redirections but that's another story).
Thus, in the OpenVPN server container, I need to add an iptables rule in order to forward a DNS request coming from the external OpenVPN IP to the internal 127.0.0.11 one.
But such an internal forward requires me to execute the following command:
sysctl -w net.ipv4.conf.tun0.route_localnet=1
Using only the NET_ADMIN capability when running docker run (--cap-add=NET_ADMIN), I get the following error message:
sysctl: error setting key 'net.ipv4.conf.tun0.route_localnet': Read-only file system
This works perfectly with the --privileged flag, but that flag is too permissive.
Is there any Linux capability that can do the trick without using the --privileged flag?
I couldn't find the answer in the Linux capabilities manual.
I found a solution using docker run's --sysctl option.
Solution in docker-compose.yml:
sysctls:
  - net.ipv4.conf.tun0.route_localnet=1    # Doesn't work, as tun0 doesn't exist yet at container start time
  - net.ipv4.conf.default.route_localnet=1 # Workaround
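A docker run equivalent would be something along these lines (a sketch, not from the original answer; the image name is a placeholder):

docker run --cap-add=NET_ADMIN \
  --sysctl net.ipv4.conf.default.route_localnet=1 \
  my-openvpn-image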

Correct IP address to access web application in apache docker container

I have an easy apache docker setup defined in a docker-compose.yml:
services:
  apache:
    image: php:7.4-apache
    command: /bin/bash -c "/var/www/html/startup.sh && exec 'apache2-foreground'"
    volumes:
      - ./:/var/www/html
      - /c/Windows/System32/drivers/etc/hosts:/tmp/hostsfile
    ports:
      - "80:80"
In the startup.sh script I want to modify the host OS's hosts file through the volume. There I want to dynamically add an entry that resolves the hostname test.local to the IP address of the docker web application, like so:
<ip-address> test.local
This way I should be able to open the application with the specified hostname http://test.local in my local browser.
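A minimal sketch of what such a startup.sh could look like (hypothetical; the question does not include the actual script, and it assumes hostname -i returns the container's IP):

#!/bin/bash
# Append this container's IP for test.local to the mounted Windows hosts file, if not already present
CONTAINER_IP=$(hostname -i)
grep -q "test.local" /tmp/hostsfile || echo "$CONTAINER_IP test.local" >> /tmp/hostsfile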
Before writing the startup.sh script I wanted to try to manually open the application at http://172.19.0.2, using the container's IP address I got from
docker inspect apache_test
But the page won't open: ERR_CONNECTION_TIMED_OUT
Shouldn't I be able to access the application from that IP? What am I missing? Am I using the wrong IP address?
BTW: I am using Docker Desktop for Windows with the Hyper-V backend
Edit: I am able to access the application via http://localhost but since I want to add a new entry to the hosts file this is not the solution.
Apparently in newer versions of Docker Desktop for Windows you can't access (ping) your Linux containers by IP.
There is a really dirty workaround for this that involves changing a .ps1 file of your Docker installation to get back the DockerNAT interface on Windows. After that you need to add a new route to your Windows routing table as described here:
route /P add <container-ip> MASK 255.255.0.0 <ip-found-in-docker-desktop-settings>
Then you might be able to ping your docker container from the Windows host. I didn't test it though...
I found a solution to my original issue (resolution of test.local to container IP via hosts file) while reading one of the threads linked above here
That involves binding the published port to a free loopback IP in the 127.0.0.0/8 range in the ports section of the docker-compose.yml:
ports:
  - "127.55.0.1:80:80"
After that you add the following to your hosts file:
127.55.0.1 test.local
And you can open your application at http://test.local
To do the same for other dockerized applications, just choose another free loopback address.
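For illustration (not from the original answer; the second host name is made up), a second application could be bound to the next loopback address:

ports:
  - "127.55.0.2:80:80"

with a matching hosts entry:

127.55.0.2 other-app.local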

Can't connect to WSL2 localhost server from WSL2 docker container

I am running a simple web server on https://0.0.0.0:4000 (accessible also as https://local.phx-cd.shoepping.at:4000 with mapping to 127.0.0.1 in Ubuntu hosts file) on my WSL2 Ubuntu. I can connect to it from both Ubuntu and Windows host - so far so good. But additionally, in my Docker for Win with WSL2 integration, I run a selenium chrome container which is connecting and testing stuff on that web server (using bridge), but it can't connect to it!
I connected to the container and tried to curl to the web server - connection refused. Since I have dual boot on my computer, I tried to switch to my Linux distro, run web server there and selenium in Linux Docker and connection to the local web server worked. So I think it has something to do with the WSL2.
My docker-compose.yaml (I left out my selenium hub config)
selenium-chrome-local:
  image: selenium/node-chrome-debug:3.141.59
  restart: always
  ports:
    - 5901-5902:5900
  volumes:
    - /dev/shm:/dev/shm
    - ../../temp:/home/seluser/Downloads
  depends_on:
    - selenium-hub-local
  environment:
    - SCREEN_WIDTH=1920
    - SCREEN_HEIGHT=1080
  extra_hosts:
    - "local.phx-cd.shoepping.at:10.99.99.1"
  networks:
    - selgrid
    - dockerhost

networks:
  selgrid:
  dockerhost:
    driver: bridge
    ipam:
      config:
        - subnet: 10.99.99.0/24
Let me know if you need more config. Thanks.
Are you sure that the Ubuntu WSL2 instance is running bridged? By default, WSL2 instances run NAT'd (whereas WSL1 instances ran bridged). So, while yes, the Docker network is bridged, it still can't access the NAT'd WSL2 VM without some extra work.
I'm fairly sure that you are running into the root problem described in WSL issue #4150. If so, here are some things to try ...
Option #1 - Port forwarding to the WSL2 instance
There are several workarounds suggested in that GitHub issue, but the basics that would work for your case boil down to forwarding port 4000 from the Windows host interface to the WSL2 instance's private IP address. In PowerShell:
netsh interface portproxy delete v4tov4 listenport="4000" # Delete any existing port 4000 forwarding
$wslIp=(wsl -d Ubuntu -e sh -c "ip addr show eth0 | grep 'inet\b' | awk '{print `$2}' | cut -d/ -f1") # Get the private IP of the WSL2 instance
netsh interface portproxy add v4tov4 listenport="4000" connectaddress="$wslIp" connectport="4000"
Note that you'll need to do this after each reboot, or else set up a script that runs at logon as described in the GitHub issue (see this comment).
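To check that the forwarding rule is in place, you can list the current portproxy rules (standard netsh usage, not part of the original answer):

netsh interface portproxy show v4tov4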
Option #2 - WSL1
I would also propose that, assuming it fits your workflow and your web app runs on it, you simply use WSL1 instead of WSL2. You can try this out by:
Backing up your existing distro (from PowerShell or cmd, use wsl --export <DistroName> <FileName>)
Importing the backup into a new WSL1 instance with wsl --import <NewDistroName> <InstallLocation> <FileNameOfBackup> --version 1
It's possible to simply change versions in place, but I tend to like to have a backup anyway before doing it, and as long as you are backing up, you may as well leave the original in place.
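For reference, the in-place conversion mentioned above is done with (substituting your distro name):

wsl --set-version <DistroName> 1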
Possible Option #3 - socat forwarding or tunnel
While I haven't tested your particular use case directly, I have played around with socat in WSL2 with success. From the looks of it, socat could be used for port forwarding from WSL2 to (at the least) the Windows host (which would be accessible to the Docker container). See this comment for an example on GitHub about a use case similar to yours.
Possible Option #4 - WSL2 in bridge mode
The GitHub thread referenced above also has some details on how to enable bridge-mode on the WSL2 interface using Hyper-V. I believe this requires Windows 10 Professional or Enterprise. It also has to be done after each reboot, as with Option 1. Again, probably overkill for this case, if port forwarding or WSL1 can accomplish what you need.
Run this command in PowerShell as Administrator, replacing {#requiredWindowsPort} with the port that will be used in the browser and {#requiredWSL2Port} with the port running in WSL2 that you want to connect to:
netsh interface portproxy add v4tov4 listenport={#requiredWindowsPort} listenaddress=0.0.0.0 connectport={#requiredWSL2Port} connectaddress=$($(wsl hostname -I).Trim());
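For the web server on port 4000 from the question, for example, the filled-in command would be:

netsh interface portproxy add v4tov4 listenport=4000 listenaddress=0.0.0.0 connectport=4000 connectaddress=$($(wsl hostname -I).Trim());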
