I created a container using the microsoft/windowsservercore image. When I tried to ping google.com from inside the container, I got this error:
Ping request could not find host www.google.com. Please check the name
and try again.
Then I switched to Linux container mode in Docker for Windows and tried the same thing in an Ubuntu container; this time it worked fine. When I switched back to Windows container mode and tried again, it worked there too. Although my issue is resolved, I still don't understand what caused it in the first place.
Docker for Windows and Docker for Linux have different default network settings.
Typically, the default on Linux is bridged mode, while on Windows you get NAT.
You can alter your configuration in the network settings of Docker for Windows.
See: https://docs.docker.com/docker-for-windows/#network
My first step is always to look at the network section in the output of docker inspect <containername>. This command gives you information about the container's network settings. Another option is to check your firewall settings.
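For example, a quick way to dump just that section (a sketch; substitute your own container's name):
docker inspect --format '{{json .NetworkSettings.Networks}}' <containername>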
In general I use ping 8.8.8.8, since www.google.com cannot be pinged even from my standard Windows machine.
I am trying to use Colima to run an apache-php docker container. My uni provides docker images derived from upstream ones configured for our course using docker-compose.
The container works as it should but I can't get its Xdebug to connect to my PhpStorm.
This is what it says in the Xdebug log:
Creating socket for 'host.docker.internal:9003', poll success, but error: Operation now in progress (29).
This tells me absolutely nothing.
The setup is admittedly quite complex (x86 Apache run via QEMU in Docker in a Linux VM in macOS on an ARM CPU), but I can do nc host.docker.internal 9003 from any Docker container, so I have no idea why Xdebug isn't able to reach my host. (It only works while the IDE is running and on no other ports, so it's definitely connecting to PhpStorm.)
Any idea what could be going on here?
On Colima, the host IP address is hard-coded to 192.168.5.2, so setting xdebug.client_host=192.168.5.2 should do the trick. There is now also an alias for it, called host.lima.internal.
As per this documentation page.
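For example, a minimal xdebug ini sketch under that assumption (9003 is Xdebug 3's default port and matches the log line above):
xdebug.mode=debug
xdebug.client_host=192.168.5.2
xdebug.client_port=9003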
The problem was the uni's docker-compose.yml, which configured the container with:
extra_hosts:
- "host.docker.internal:host-gateway"
and apparently that can break host.docker.internal in some situations: https://github.com/docker/for-linux/issues/264#issuecomment-759737542
The solution is to remove those two lines.
I have been running a media cluster for some time without any issues. I have everything networked into two different Docker networks... the first network just bridges the Docker instance to the local machine, and the second network is a Docker VPN container that I use for the other media services (an earlier version of what I am working on can be found here: https://github.com/Xander-Rudolph/MediaDocker).
The strangest thing happened today, though. I ran the Docker update for Windows, and now Docker spools up without any errors or issues; however, none of the services work outside of the machine running Docker. Usually I have a couple of the services poked through my router (namely WordPress/Joomla, which are on the bridge) and they work outside of my local network, but none of them are working anymore. I was able to confirm it's not the DNS A record, because I'm able to use the RDP ports I have mapped in my router, and when I test on another machine on the same network, it can't access the services via the internal IP (but it can RDP).
Does anyone have any idea what could have changed to break this? I've already updated all my Docker images and even rebuilt my VPN container (before I realized it's a networking issue). What are some steps I can do to try to troubleshoot what is going wrong in docker to prevent access outside of localhost?
Update
I've been able to rule out the Docker update as the root cause... I upgraded Docker on my laptop (which was previously running the same version as my desktop) and it's not having the same issue... this problem must be localized to this desktop... No idea what the issue is... I will try a Linux VM on the desktop instead of Docker for Windows...
Update 2
After a lot of screwing around in both a VM and in WSL, I'm still only able to access the Docker services from localhost, not from a different machine on my network or via the host machine's IP (perhaps something similar to this: Can't access localhost via IP address). RDP does work, so the computer is accessible but the services are not.
I'm not sure if this is a result of a Docker networking config or a Windows network config (I'm using WSL with Docker installed on Ubuntu 20.08), but I'm not seeing anything that sticks out. I'm going to remove the docker-for-windows tag, but this is definitely a networking issue, and I suspect it has something to do with the fact that the containers are running behind a VPN... although I don't know why I would be able to access them on localhost but not via the IP from another VM...
When I run
netstat -a -o
in WSL, I can see the established connections on localhost, e.g.:
tcp 0 0 localhost:7878 localhost:37520 ESTABLISHED
but when I look on the Windows host machine (for WSL), I don't see the connection. I tried to use netsh to create a firewall rule to see if that would help:
netsh advfirewall firewall add rule name="TCP Port 7878" dir=in localport=7878 protocol=TCP action=allow
but it didn't have any effect.
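(A further check along those lines, assuming 7878 is the port Docker should be publishing, would be to see whether anything on the Windows side is listening on it at all:
netstat -ano | findstr :7878
If nothing shows up as LISTENING there, the traffic never reaches the Windows network stack, which would point at the WSL/Docker port publishing rather than the firewall.)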
Any suggestions for ways to trace the network to see where/how it's failing or getting blocked would be extremely helpful.
Your question: "...What are some steps I can do to try to troubleshoot what is going wrong in docker to prevent access outside of localhost?..."
Some troubleshooting help for you. First: do you have multiple network adapters (Ethernet, Wi-Fi, etc.) present on the host? If so, their priority needs to be configured in the correct order so the Windows networking stack can choose the right gateway routes.
To check, list the adapters and their metrics with the following PowerShell command from an elevated console:
Get-NetIPInterface -AddressFamily IPv4 | Sort-Object -Property InterfaceMetric -Descending
Ensure that the host's primary internet-connected network adapter has the lowest InterfaceMetric value. If it doesn't, change it:
# Use this command to make the change; for example, if your
# primary adapter's InterfaceAlias is 'Wi-Fi':
Set-NetIPInterface -InterfaceAlias 'Wi-Fi' -InterfaceMetric 3
Step two: if your host's primary network adapter is bridged because you have an External virtual switch set up in Hyper-V, then the corresponding external virtual switch adapter should have the lowest InterfaceMetric value instead.
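A sketch of the same change for that case (the alias 'vEthernet (External Switch)' is an assumption; substitute the name your external switch's adapter actually has, as listed by Get-NetIPInterface):
Set-NetIPInterface -InterfaceAlias 'vEthernet (External Switch)' -InterfaceMetric 3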
Lastly, confirm your routing tables. When you run the following, the last line should show the primary adapter's gateway address along with its ifMetric value:
Get-NetRoute -AddressFamily IPv4
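If the output is long, filtering for the default route only (a sketch) makes the relevant entry easier to spot:
Get-NetRoute -AddressFamily IPv4 -DestinationPrefix '0.0.0.0/0' | Format-Table ifIndex, InterfaceAlias, NextHop, RouteMetric, ifMetric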
If you’re using Docker Toolbox then any port you publish with docker run -p will be published on the Toolbox VM’s private IP address.
docker-machine ip will tell you.
It is frequently 192.168.99.100.
Taken from: https://forums.docker.com/t/cant-connect-to-container-on-localhost-with-port-mapping/52716/25
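A minimal sketch of checking that address and using it in place of localhost (assuming the default machine name and a container port published on 8080):
docker-machine ip default
curl http://$(docker-machine ip default):8080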
After several attempts using the references below, I was still not getting anywhere. The recommendation by #derple didn't get me anywhere (since I was in WSL), but in the article he linked, someone said they switched to Linux and uninstalled and reinstalled Docker Desktop... and for some stupid reason that works.
These are the exact steps I took to fix it:
Uninstall Docker Desktop
Install WSL and Docker inside an Ubuntu 18.04 instance in WSL
Test Docker in WSL with localhost (still worked only on localhost)
Uninstall WSL using Windows Add/Remove Features
Reinstall Docker Desktop
Oddly, the Get-NetIPInterface and Get-NetRoute output looks exactly the same as it did before the uninstall and reinstall, but things seem to be working now... I have no idea why the above worked...
I am using Windows 10 1909 and have installed WSL2 with Ubuntu 20.04 and Docker version 19.03.13-beta2, having installed the Docker for Windows Edge version with the WSL2 option. The integration is working pretty great, but I have one issue which I cannot solve.
On the WSL2 instance, there are services running, exposing some ports (3000, 3001, 3002,...). From one of the docker containers, I need to access the services for a specific development scenario (API Gateway), and this I cannot get to work.
I have tried using the WSL2 IP address directly, but then the connect just times out. I have also tried using host.docker.internal, which resolves to something else than the WSL2 IP address, but it still doesn't work.
Is there a special trick I need to pull, is this kind of routing currently not supported (but will be), or is this not possible for some other reason?
This illustrates what I am trying to achieve:
The other routings work - i.e. I can access all the service ports coming from the node.js processes inside WSL2 from the Windows browser, and also I can access the exposed service ports from the containers both from inside WSL2 and from Windows. It's just this missing link I cannot make work.
So what you need to do on the Windows machine is port-forward the port your service is listening on inside the WSL machine; this script forwards port 4000:
netsh interface portproxy delete v4tov4 listenport="4000" # Delete any existing port 4000 forwarding
$wslIp=(wsl -d Ubuntu -e sh -c "ip addr show eth0 | grep 'inet\b' | awk '{print `$2}' | cut -d/ -f1") # Get the private IP of the WSL2 instance
netsh interface portproxy add v4tov4 listenport="4000" connectaddress="$wslIp" connectport="4000"
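If you want to confirm the forwarding rule took effect, you can list the active port proxies:
netsh interface portproxy show v4tov4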
And to the container's docker run command you have to add
--add-host=host.docker.internal:host-gateway
or if you are using docker-compose:
extra_hosts:
- "host.docker.internal:host-gateway"
Then inside the container you should be able to curl to
curl host.docker.internal:4000
and get a response!
For what it's worth: this scenario works if you use the WSL2 instance's IP address.
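A quick way to find that address (a sketch, run inside the WSL2 distro; eth0 is the usual interface name but may differ):
hostname -I
or, more explicitly:
ip addr show eth0 | grep 'inet '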
It does not work if you use host.docker.internal - this DNS alias is defined in the containers, but it maps to the IP address of the Windows host, not of the WSL2 host, and that routing back inside the WSL2 host does not work.
The reason why this (probably temporarily) did not work is somewhat unclear; I will revisit this answer if the problem reappears and I manage to track down what the actual cause was.
I ran into this problem with the latest Docker Desktop. I rolled it back to 4.2 and it worked.
Docker Desktop 4.2
Windows 19044.1466
Ubuntu 20.04
I have a Java service running on the Linux host (I got its IP address using the ifconfig command); my other containers, running on Docker Desktop with the WSL2-based engine, can communicate with my Java service using that IP address.
This sounds like the issue discussed here. For me, the only thing that worked was running the Docker container with --net=host and then using [::1] instead of localhost inside the container to access other containers running in WSL.
So for example, container1 is started with docker run --net=host and then calls container2 like this: http://[::1]:8000/container2 (adjust port and path to your specific application)
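A minimal sketch of that setup (the image names, port, and path are placeholders for your own services):
# container2 publishes its port as usual
docker run -d -p 8000:8000 --name container2 my-api-image
# container1 shares the host's network stack, so it can reach that port via the IPv6 loopback
docker run -d --net=host --name container1 my-client-image
# from inside container1:
curl http://[::1]:8000/container2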
I have a VM running Ubuntu 16.04, on which I want to deploy an application packaged as a Docker container. The application needs to be able to perform an HTTP request towards a server under VPN (e.g. server1.vpn-remote.com)
I successfully configured the host VM in order to connect to the VPN through openconnect, I can turn this connection on/off using a systemd service.
Unfortunately, when I run docker run mycontainer, neither the host nor the container is able to reach server1.vpn-remote.com. Weirdly enough, no error is displayed in the VPN connection service logs, which still show only the openconnect messages confirming a successful connection.
If I restart the VPN connection after starting mycontainer, the host machine is able to access server1.vpn-remote.com, but not the container. Moreover, if I issue any command like docker run/start/stop/restart on mycontainer or any other container, the connection gets broken again even for the host machine.
NOTE: I already checked on the ip routes and there seems to be no conflict between Docker and VPN subnets.
NOTE: running the container with --net="host" results in both host and container being able to access the VPN but I would like to avoid this option as I will eventually make a docker compose deployment which requires all containers to run in bridge mode.
Thanks in advance for your help
EDIT: I figured out it is a DNS issue, as I'm able to ping the IP corresponding to server1.vpn-remote.com even after the VPN connection seemed to have failed. I'm going through documentation regarding DNS management with Docker and Docker Compose and their usage of the host's /etc/resolv.conf file.
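For reference, Docker also lets you point a container at a specific DNS server explicitly instead of relying on the host's /etc/resolv.conf; a minimal sketch (10.0.0.2 is just a placeholder for whatever DNS server your VPN provides):
docker run --dns 10.0.0.2 mycontainer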
I hope you don't still need help six months later! Some of the details are different, but this sounds a bit like a problem I had. In my case the solution was a bit disappointing: after you've connected to your VPN, restart the docker daemon:
sudo systemctl restart docker
I'm making some inferences here, but it seems that, when the daemon starts, it makes some decisions/configs based on the state of the network at that time. In my case, the daemon starts when I boot up. Unsurprisingly, when I boot up, I haven't had a chance to connect to the VPN yet. As a result, my container traffic, including DNS lookups, goes through my network directly.
Hat tip to this answer for guiding me down the correct path.
I cannot connect to the published port on the swarm that uses overlay networking. I am using Docker for Windows with Windows containers. Both Windows and Docker are fully upgraded. After Windows' 1709 update, I was hoping this issue would be resolved. I looked for information on the Internet to see if I was doing something wrong to no avail. I would like to know if anyone was successfully able to get it working.
On a side note, when I publish the port on my machine with docker run -p 80:80 without using swarm, "localhost" does not work either. I think this is a known limitation, though. Both issues go away when I switch to Linux containers.
Expected behavior
I am running a dotnet kestrel web server service. I should be able to connect to my service using the published port.
Actual behavior
Firefox gives me a timeout, and Opera straight away returns connection refused. I cannot telnet into it either. Container IPs assigned by the overlay network do not work either.
Information
docker service ls gives me this:
Ports cannot be seen there; is that because the publish mode is host? The port information is available in the output of docker service ps.
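A related check (a sketch, assuming the service is named web): the requested port publishing can also be read straight from the service spec:
docker service ps web
docker service inspect --format '{{json .Spec.EndpointSpec.Ports}}' web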
And when I change the publish mode, I can scale it as well, and the port information is shown in docker service ls, although I still cannot connect. The output below is without the publish mode=host parameter:
For more info, this is the output of docker network ls. I wonder if I need some sort of bridge network like in Linux.
Steps to reproduce the behavior
Initialise swarm
Start the service, in my case a simple web service built using the aspnetcore:latest image. I tried different parameters and even used a docker-stack.yml:
docker service create --name=web --publish mode=host,published=80,target=80 web:aspnetcorelatest
In the case above, I was unable to scale it on the same machine, which is normal I guess.
docker service create --name=web --publish published=85,target=80 web:aspnetcorelatest
Try to connect using http://localhost or another IP. I tried connecting over VPN, from another machine, as well as via the Internet IP.