Correct IP address to access web application in apache docker container - docker

I have a simple Apache Docker setup defined in a docker-compose.yml:
services:
  apache:
    image: php:7.4-apache
    command: /bin/bash -c "/var/www/html/startup.sh && exec 'apache2-foreground'"
    volumes:
      - ./:/var/www/html
      - /c/Windows/System32/drivers/etc/hosts:/tmp/hostsfile
    ports:
      - "80:80"
From the startup.sh script I want to modify the host OS's hosts file through the volume. There I want to dynamically add an entry that resolves the hostname test.local to the IP address of the dockerized web application, like so:
<ip-address> test.local
This way I should be able to open the application at http://test.local in my local browser.
Before writing the startup.sh script I wanted to try opening the application manually at http://172.19.0.2, the container's IP address I got from
docker inspect apache_test
But the page won't open: ERR_CONNECTION_TIMED_OUT
Shouldn't I be able to access the application from that IP? What am I missing? Am I using the wrong IP address?
BTW: I am using Docker Desktop for Windows with the Hyper-V backend.
Edit: I am able to access the application via http://localhost, but since I want to add a new entry to the hosts file, this is not a solution.

Apparently, in newer versions of Docker Desktop for Windows you can't access (ping) your Linux containers by IP.
There is a really dirty workaround for this that involves changing a .ps1 file of your Docker installation to get back the DockerNAT interface on Windows. After that you need to add a new route to the Windows routing table, as described here:
route /P add <container-ip> MASK 255.255.0.0 <ip-found-in-docker-desktop-settings>
Then you might be able to ping your Docker containers from the Windows host. I didn't test it, though.
I found a solution to my original issue (resolving test.local to the container IP via the hosts file) while reading one of the threads linked above.
It involves binding the published port to a free loopback IP in the 127.0.0.0/8 range in the ports section of your docker-compose.yml:
ports:
  - "127.55.0.1:80:80"
After that you add the following to your hosts file:
127.55.0.1 test.local
And you can open your application at http://test.local.
To do this for other dockerized applications as well, just choose another free loopback address.
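For example, a second dockerized application could be bound to the next free loopback address (the service name, image, and hostname below are made up for illustration):

```yaml
# docker-compose.yml of a hypothetical second application
services:
  app2:
    image: php:7.4-apache
    ports:
      - "127.55.0.2:80:80"   # next free loopback address
# ...with the matching entry in the Windows hosts file:
#   127.55.0.2 test2.local
```

Each application gets its own stable loopback IP, so the hosts-file entries never collide even though every app publishes port 80.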

Related

Docker app to host mysql connection not working (host os & image = windows)

My app, running in a container, wants to access MySQL on the host machine but is not able to connect. I googled a lot and tried many solutions but could not figure out the error; could you please help me with this?
It is a Windows image.
The IIS website works.
Website pages that use the DB connection do not work.
The MySQL DB is installed on the local machine (the same PC where Docker Desktop is installed).
The connection string in the app uses host.docker.internal with port 3306.
I tried a Docker uninstall and reinstall, image prune, container prune, WSL stop and start, and commenting out the following lines in the hosts file:
192.168.1.8 host.docker.internal
192.168.1.8 gateway.docker.internal
Below is the ipconfig output from the container.
nslookup and ping commands:
docker network ls:
Docker Compose:
version: "3.9"
services:
  web:
    container_name: dinesh_server_container
    image: dinesh_server:1
    build: .
    ports:
      - "8000:80"
      - "8001:81"
    volumes:
      - .\rowowcf:c:\rowowcf
      - .\rowowcf_supportfiles:c:\rowowcf_supportfiles
      - .\rowocollectionsite:c:\rowocollectionsite
    environment:
      TZ: Asia/Calcutta
The build image uses: FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8
Host OS: Windows 10 Pro 10.0.19043 Build 19043
Hyper-V is enabled too.
I also tried the following:
extra_hosts:
  - "host.docker.internal:host-gateway"
Since it is on Windows, host network mode is not supported (per my research).
EDIT:
The MySQL bind address is 0.0.0.0:
Maybe the problem is not directly related to Docker but to MySQL.
Please try making your local MySQL database listen on all network interfaces of your host: by default it only listens on 127.0.0.1, and for this reason Docker may be unable to connect to it.
I am not sure how to do it on Windows, but typically you provide the value 0.0.0.0 for the bind-address configuration option in mysqld.cnf:
bind-address = 0.0.0.0
Please consider reviewing this article or this related Server Fault question as well.
From a totally different point of view, I found this and this other related open issue in the Docker for Windows repository that in a certain way resemble your problem: especially the first one provides some workarounds, like shutting down the WSL backend and restarting the Docker daemon. It doesn't look like a solution to me, but perhaps it can help.
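As a rough sketch of what that change could look like on Windows (the file path, database name, and user are assumptions; MySQL on Windows usually reads my.ini rather than mysqld.cnf):

```ini
# my.ini (path varies, e.g. C:\ProgramData\MySQL\MySQL Server 8.0\my.ini)
[mysqld]
bind-address = 0.0.0.0

# Note: bind-address alone is not enough if the MySQL account is restricted
# to localhost; an account reachable from the container's address is also
# needed, e.g. (hypothetical user and password):
#   CREATE USER 'app'@'%' IDENTIFIED BY 'secret';
#   GRANT ALL PRIVILEGES ON mydb.* TO 'app'@'%';
```

Restart the MySQL Windows service after editing the file so the new bind address takes effect.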

Considering URL redirections: How to use GUIs of web applications running in different containers within the same docker network on a remote server?

I have the feeling that I am overlooking something obvious as my solutions/ideas so far seem too cumbersome. I have searched intensively for a good solution, but so far without success - probably because I do not know what to look for.
Question:
How do you interact with the graphical interfaces of web servers running in different containers (within the same Docker Network) on a remote server, given URL redirections between these containers?
Initial situation:
I have two containers (a Flask web application and a Tomcat server with OpenAM running on it) running on my docker host (Azure-VM).
On the VM I can reach the content of both containers via the ports that I have opened.
Using SSH port forwarding I can interact with the graphical components of both containers on my local machine.
Both containers were created with the same docker-compose file and can reach each other via their service names without additional network settings.
So far I have configured OpenAM on my local machine using ssh port forwarding.
Problem:
The Flask web app references OpenAM by the domain name defined in docker-compose, and vice versa. I forward the port of the Flask container to my local machine. The Flask application is running and I can interact with it in my browser.
The system fails as soon as I am redirected from Flask to OpenAM on my local machine, because the reference to the OpenAM container used by Flask is specific to the Docker network. Also, the port of the OpenAM container is different.
In other words, the routing between the two networks is nonexistent.
Solutions Ideas:
Execute the requests on the VM using command-line tools.
Use a container with a headless browser that automatically executes the requests.
Use Network Setting 'Host' and execute the headless browser on the VM instead.
Route all requests through a single container (similar to a VPN) and use ssh port forwarding.
Simplified docker-compose:
version: "3.4"
services:
  openam:
    image: openidentityplatform/openam
    ports:
      - 5001:8080
    command: /usr/local/tomcat/bin/catalina.sh run
  flask:
    build: ./SimpleHTTPServer
    ports:
      - 5002:8000
    command: python -m http.server 8000
Route all requests through a single container: this is the correct approach.
See the API gateway pattern.
This is the best solution I could find so far. It is not suitable for production, but for prototyping, or when simply emulating a server structure with containers, it is an easy setup.
General Idea:
Deploy a third VNC container running a web browser and forward the port of this third container to your local machine. As the third container is part of the Docker network, it can naturally resolve the internal domain names, and the VNC installation on your local machine enables you to interact with the GUIs.
Approach
Add the VNC to the docker-compose of the original question.
Enable X11 forwarding on the server and client-side.
Forward the port of the VNC container using ssh.
Install VNC on the client, start a new session, and enter the predefined password.
Try it out.
Step by Step
Add the VNC container (inspired by creack's post on stackoverflow) to the docker-compose file from the original question:
version: "3.4"
services:
  openam:
    image: openidentityplatform/openam
    ports:
      - 5001:8080
    command: /usr/local/tomcat/bin/catalina.sh run
  flask:
    build: ./SimpleHTTPServer
    ports:
      - 5002:8000
    command: python -m http.server 8000
  firefoxVnc:
    container_name: firefoxVnc
    image: creack/firefox-vnc
    ports:
      - 5900:5900
    environment:
      - HOME=/
    command: x11vnc -forever -usepw -create
Run the docker-compose: docker-compose up
Enable X11 forwarding on the server and client-side.
On the client side, run $ vim ~/.ssh/config and add the following lines:
Host *
    ForwardAgent yes
    ForwardX11 yes
On the server side, run $ vim /etc/ssh/sshd_config and edit the following lines:
X11Forwarding yes
X11DisplayOffset 10
Forward the port of the VNC container using ssh
ssh -v -X -L 5900:localhost:5900 gw.example.com
Make sure to include the -X flag for X11. The -v flag is just for debugging.
Install VNC on the client, start a new session and enter the predefined password.
Install a VNC viewer on your local machine.
Open the installed viewer and start a new session using the forwarded address localhost:5900.
When prompted, type in the password 1234, which was set in the original Dockerfile of the VNC Docker image (see creack's post linked above).
You can now go to either openam:8080/openam/ or apache:80 within the browser of the VNC session at localhost:5900.
An even better solution that is clean and straightforward, and that also works perfectly when running parts of the application on different virtual machines:
Set up and use an SSH SOCKS tunnel
For Google Chrome and macOS:
Set your network settings to host within the Dockerfile or docker-compose.
Start an SSH tunnel:
$ ssh -N -D 9090 [USER]@[SERVER_IP]
Add the SwitchyOmega proxy addon to your Chrome browser.
Configure SwitchyOmega by going to New Profile > Proxy Profile, clicking create, entering the same server IP as for the ssh command and the port 9090.
Open a new terminal tab and run:
"/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" \
--user-data-dir="$HOME/proxy-profile" \
--proxy-server="socks5://localhost:9090"
A new Chrome session will open in which you can simply browse your Docker applications.
Reference | When running Linux or Windows | Using Firefox (no addon needed)
The guide How to Set up SSH SOCKS Tunnel for Private Browsing explains how to set up an SSH SOCKS tunnel on macOS, Windows, or Linux, using Google Chrome or Firefox. I simply reproduced the macOS/Chrome setup here in case the link should die.

Docker for windows: how to access container from dev machine (by ip/dns name)

Questions like this seem to have been asked before, but I really don't get it at all.
I have a Windows 10 dev machine host with Docker for Windows installed. Besides other networks it has a DockerNAT network with IP 10.0.75.1.
I run some containers with docker-compose:
version: '2'
services:
  service_a:
    build: .
    container_name: docker_a
It created some network bla_default, and the container has IP 172.18.0.4. Of course I cannot connect to 172.18.0.4 from the host: it doesn't have any network interface for this.
What should I do to be able to access this container from the HOST machine (by IP), and if possible by some DNS name? What should I add to my docker-compose.yml, and how do I configure the networks?
It should be something basic, but I really don't understand how all this stuff works or how to access the container from the host directly.
Allow access to internal docker networks from dev machine:
route /P add 172.0.0.0 MASK 255.0.0.0 10.0.75.2
Then use this https://github.com/aacebedo/dnsdock to enable DNS discovery.
Tips:
> docker run -d -v /var/run/docker.sock:/var/run/docker.sock --name dnsdock --net bridge -p 53:53/udp aacebedo/dnsdock:latest-amd64
> Add 127.0.0.1 as a DNS server on the dev machine
> Use the labels described in the docs to get pretty DNS names for the containers
So the answer on the original question:
YES WE CAN!
Oh, this is no longer the case.
MAKE DOCKER GREAT AGAIN!
The easiest option is port mapping: https://docs.docker.com/compose/compose-file/#/ports
Just add
ports:
  - "8080:80"
to the service definition in Compose. If your service listens on port 80, requests to localhost:8080 on your host will be forwarded to the container. (I'm using Docker Machine, so my Docker host is at another IP, but I think localhost is how Docker for Windows appears.)
Treating the service as a single process listening on one (or a few) ports has worked best for me, but if you want to start reading about networking options, here are some places to dig in:
https://docs.docker.com/engine/userguide/networking/
Docker's official page on networking: a very high-level introduction, with most of the detail on the default bridge behavior.
http://www.linuxjournal.com/content/concerning-containers-connections-docker-networking
More information on the network layout within a Docker host.
http://www.dasblinkenlichten.com/docker-networking-101-host-mode/
Host mode is kind of mysterious, and I'm curious whether it works similarly on Windows.

Is it possible to tie a domain to a docker container when building it?

Currently, the company I am working at has a central development server containing a LAMP environment. Each developer accesses the application as developer_username.domain.com. The application we're working on uses licenses; they are generated under each domain and tied to that domain only, meaning I can't use another developer's license.
The following example will give you an idea:
developer_1.domain.com ==> license1
developer_2.domain.com ==> license2
developer_n.domain.com ==> licenseN
I am trying to dockerize this environment, at least having PHP and Apache in a container, and I was able to create everything I need and it works. Take a look at this docker-compose.yml:
version: '2'
services:
  php-apache:
    env_file:
      - dev_variables.env
    image: reypm/php55-dev
    build:
      context: .
      args:
        - PUID=1000
        - PGID=1000
    ports:
      - "80:80"
      - "9001:9001"
    extra_hosts:
      - "dockerhost:xxx.xxx.xxx.xxx"
    volumes:
      - ~/var/www:/var/www
That builds what I need, but the problem comes when I try to access the server: since I am using http://localhost, the license won't work and I won't be able to use the application.
The idea is to access it as developer_username.domain.com, so my question is: is this work that should be done in the Dockerfile or in Docker Compose (I mean at the image/container level, say by setting up an ENV var), or is this a job for /etc/hosts on the host running Docker?
tl;dr
No! Docker doesn't do that for you.
Long answer:
What you want is to have a custom hostname on the machine hosting Docker mapped to a container in the Docker Compose network, right?
Let's take a step back and see how networking in docker works:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
This network is not the same as your host network, and without explicitly publishing ports (for a specific container) you have no access to it. All publishing a port does is this:
The exposed port is accessible on the host and the ports are available to any client that can reach the host.
From there on you can put a reverse proxy (like nginx) in front, or edit /etc/hosts to define how clients can reach the host (i.e. the Docker host, the machine running Docker Compose).
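As a sketch of the reverse-proxy route (the server name, config path, and published port below are assumptions, not taken from the question):

```nginx
# /etc/nginx/conf.d/developer_1.conf (hypothetical) on the Docker host
server {
    listen 80;
    server_name developer_1.domain.com;

    location / {
        # forward to the port published by developer_1's container
        proxy_pass http://127.0.0.1:8001;
        proxy_set_header Host $host;
    }
}
# ...plus, if no real DNS zone exists, a hosts-file entry on each client:
#   <docker-host-ip> developer_1.domain.com
```

One such server block per developer, each pointing at a different published port, gives every developer their licensed hostname while all containers run the same image.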
The hostname is defined when you start the container, overriding anything you attempt to put inside the image. At a high level, I'd recommend doing this with a mix of a custom docker-compose.yml and a volume per developer, with everyone running an identical image. The docker-compose.yml can include the hostname and domain settings. Everything else that needs to be hostname-specific on the filesystem, including the license itself, should then point to files on the volume. Lastly, include an entrypoint that does the right thing if a new hostname is started with a default or empty volume: populate the volume with the new hostname's data and prompt for the license.

Host network access from linked container

I have the following docker-compose configuration:
version: '2'
services:
  nginx:
    build: ./nginx
    links:
      - tomcat1:tomcat1
      - tomcat2:tomcat2
      - tomcat3:tomcat3
    ports:
      - "80:80"
  tomcat1:
    build: ./tomcat
    ports:
      - "8080"
  tomcat2:
    build: ./tomcat
    ports:
      - "8080"
  tomcat3:
    build: ./tomcat
    ports:
      - "8080"
So, the question is: how do I get access to the host network from the linked containers tomcat1, tomcat2, and tomcat3? Here is the diagram:
Update
It seems my diagram doesn't help much. Nginx is a load balancer, and Tomcat 1-3 are application nodes. The deployed web app needs to access an internet resource.
Internet access is active by default in all containers (in bridge mode). All you need to check is whether the http(s)_proxy variables are set, in case you are behind a proxy.
If your question is how to access the Docker host from a container (and not the reverse, accessing a container from the local Docker host), then you would need to inspect the routing table of the container: see "From inside of a Docker container, how do I connect to the localhost of the machine?"
export DOCKER_HOST_IP=$(route -n | awk '/UG[ \t]/{print $2}')
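To illustrate what that awk extraction pulls out, here is a self-contained sketch run against sample route -n output (the addresses are typical bridge-network defaults, not taken from any real container):

```shell
# Sample `route -n` output as it might appear inside a bridge-network
# container; the default route line carries the UG flags.
route_output='Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.17.0.1      0.0.0.0         UG    0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0'

# Same extraction as above: print the Gateway field of the UG (default) route.
DOCKER_HOST_IP=$(printf '%s\n' "$route_output" | awk '/UG[ \t]/{print $2}')
echo "$DOCKER_HOST_IP"   # prints 172.17.0.1
```

The container's default gateway on the bridge network is the Docker host's bridge IP, which is why this one-liner finds the host.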
There is a recent (June 2016) effort to add a dockerhost entry to the /etc/hosts of all running containers: issue 23177.
Update March 2020: that issue has been closed and redirects to PR 40007: "Support host.docker.internal in dockerd on Linux"
This PR allows containers to connect to Linux hosts by appending a special string "host-gateway" to --add-host, e.g. "--add-host=host.docker.internal:host-gateway", which adds a host.docker.internal DNS entry to /etc/hosts and maps it to the host gateway IP.
This PR also adds a daemon flag called host-gateway-ip, which defaults to the default bridge IP.
Docker Desktop will need to set this field to the Host Proxy IP so DNS requests for host.docker.internal can be routed to VPNkit
This will be in Docker for Linux (and Docker Desktop, which runs the Linux daemon, although inside a lightweight VM).
The differences between this and the current implementation on Docker Desktop are that:
the current Docker Desktop implementation is in a part of the code base that's proprietary (i.e., part of how Docker Desktop is configured internally)
this code could be used by the Docker Desktop team in the future as well (to be discussed)
this PR does not set up the "magic" host.docker.internal automatically on every container, but it can be used to run a container that needs this host by adding docker run --add-host host.docker.internal:host-gateway
(to be discussed) setting that "magic" domain automatically on started containers could be implemented by adding an option for this in the ~/.docker/config.json CLI configuration file
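In Compose terms, the same mechanism is the extra_hosts entry mentioned earlier in this thread; a minimal sketch (service name and image are placeholders, and this requires Docker 20.10+ on Linux):

```yaml
services:
  app:
    image: alpine
    extra_hosts:
      - "host.docker.internal:host-gateway"  # resolved to the host-gateway IP
```

With this in place, processes inside the container can reach services on the Docker host via the hostname host.docker.internal.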
