I'm experimenting with Docker containers and I'm having a problem resolving IPs from hostnames inside a container on my server.
It works fine on my machine (Windows 10).
Basically I'm just pinging hostnames on our internal network from my server (Windows Server 2016 running in a VM on VMware) and it cannot find the host.
I run the container like this:
docker run -it microsoft/nanoserver
and from the command prompt inside the container I ping one of our internal servers using its hostname.
This works fine on my Windows 10 machine.
However, if I ping the IP directly, it works on the server.
If I ping the same hostname directly from the host it works fine.
I'm quite new at this and I've been trying to figure it out using various guides, but I haven't found anyone who has asked this before.
Any ideas?
The Docker container does not know anything about "your" network. Docker uses virtual interfaces to spin up container networks.
You can point the container at your internal DNS server explicitly, e.g.:
docker run --dns=192.168.66.1 ...
Alternatively, you can add your DNS server to the Docker engine configuration, or add some static "host" entries like:
docker run --add-host=myserver.local:192.168.66.66 ...
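If you go the engine-config route, a minimal sketch of the daemon config (assuming the dns option is supported by your engine version, your internal DNS server is 192.168.66.1, and the file lives at C:\ProgramData\docker\config\daemon.json on Windows Server):
{
  "dns": ["192.168.66.1"]
}
Restart the Docker service afterwards so the engine picks up the change.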
Related
I have been using Docker for some time now on my Synology NAS and it works just fine, but I need something a little more powerful to run Plex Media Server.
I have installed Photon OS (on my ESXi Host) running Docker and enabled SSH. I have setup a static IP using the following network file configs:
5-Main.link
[Match]
MACAddress=00:00:00:00:00:00 (example)
[Link]
Description=Main
Name=Main
5-Main.network
[Match]
Name=Main
[Network]
Gateway=192.168.1.1
Address=192.168.1.3/24
DNS=192.168.1.1
DHCP=no
[DHCP]
UseDNS=false
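For completeness, after editing these files I reload the network stack, roughly like this:
systemctl restart systemd-networkd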
I have installed Portainer fine and that works on the 192.168.1.3:9443 address.
What I am trying to achieve, like I had done successfully on the NAS, is to grant a container its own IP address. I created a docker network using the following command:
docker network create --driver=macvlan --gateway=192.168.1.1 --subnet=192.168.1.0/24 --ip-range=192.168.1.0/24 -o parent=Main LAN
This creates the network fine, but when I attach it to a container and give the container a MAC address and an IP address within the same range (192.168.1.50), it does not ping out of the container, nor from the host (or another device) in. curl ipinfo.io also does not work from inside the container. These exact steps work perfectly on the Synology NAS (with selecting the NAS' correct NIC etc.).
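For reference, the attach step looks roughly like this (the image name, MAC and IP here are placeholders rather than my exact values):
docker run -d --network=LAN --ip=192.168.1.50 --mac-address=02:42:c0:a8:01:32 plexinc/pms-docker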
I have tried using Debian instead, but that has the same problem. I have tried accepting all traffic in iptables, but that had no effect either. On Debian, I have also let the NIC use DHCP and tried a static IP within the container, but still no effect. I am running out of ideas. I feel this should be something simple, but I have searched high and low and I am coming to a dead end. Any advice would be greatly appreciated.
I'm using an nginx container on my VMs as a proxy to some services.
In most of the VMs it works just fine, but there is a single VM on which nginx does not work. I tried to make requests with curl inside the container and they do not work, whereas the same curls do work on the VM outside the container.
Because it works on all the other VMs, I assumed it is a problem with the Docker configuration on that specific VM.
The error I get from the curl inside the container is:
Failed to connect to x.x.x.x port 443: no route to host
(I also tried adding the --add-host parameter to the docker run, but it didn't help either.)
Appreciate any help :)
Solved my problem by appending --net=host to the docker run command.
Apparently, for this specific VM, I had to explicitly define the network of the nginx container as host
(which enables the container to send requests to the outside world).
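For reference, a minimal sketch of the resulting command (the image tag and any other flags are whatever you already pass; note that -p port mappings are ignored with host networking):
docker run -d --net=host nginx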
I am using Windows 10 1909 and have installed WSL2 with Ubuntu 20.04 and Docker 19.03.13-beta2, having installed the Docker Desktop for Windows Edge version with the WSL2 option. The integration is working pretty well, but I have one issue which I cannot solve.
On the WSL2 instance, there are services running, exposing some ports (3000, 3001, 3002,...). From one of the docker containers, I need to access the services for a specific development scenario (API Gateway), and this I cannot get to work.
I have tried using the WSL2 IP address directly, but then the connect just times out. I have also tried using host.docker.internal, which resolves to something else than the WSL2 IP address, but it still doesn't work.
Is there a special trick I need to pull, or is this kind of routing currently not supported, but will be, or is this for some other reason not possible?
This illustrates what I am trying to achieve:
The other routings work - i.e. I can access all the service ports coming from the node.js processes inside WSL2 from the Windows browser, and also I can access the exposed service ports from the containers both from inside WSL2 and from Windows. It's just this missing link I cannot make work.
So what you need to do on the Windows machine is port-forward the port you are exposing on the WSL machine; this script forwards port 4000:
netsh interface portproxy delete v4tov4 listenport="4000" # Delete any existing port 4000 forwarding
$wslIp=(wsl -d Ubuntu -e sh -c "ip addr show eth0 | grep 'inet\b' | awk '{print `$2}' | cut -d/ -f1") # Get the private IP of the WSL2 instance
netsh interface portproxy add v4tov4 listenport="4000" connectaddress="$wslIp" connectport="4000"
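You can verify that the rule was added by listing the active port proxies:
netsh interface portproxy show v4tov4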
And to the container's docker run command you have to add:
--add-host=host.docker.internal:host-gateway
or if you are using docker-compose:
extra_hosts:
- "host.docker.internal:host-gateway"
Then inside the container you should be able to curl to
curl host.docker.internal:4000
and get a response!
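Putting it together, a quick end-to-end test could look like this, using the public curlimages/curl image as a stand-in for your own container and assuming your service listens on port 4000 as above:
docker run --rm --add-host=host.docker.internal:host-gateway curlimages/curl http://host.docker.internal:4000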
For what it's worth: this scenario works if you use the WSL2 subsystem IP address.
It does not work if you use host.docker.internal - this DNS alias is defined in the containers, but it maps to the IP address of the Windows host, not of the WSL2 host, and that routing back inside the WSL2 host does not work.
The reason why this (probably temporarily) did not work is somewhat unclear - I will revisit this answer if the problem should reappear and I manage to track down what the actual problem may have been.
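In case it helps, the WSL2 address I mean is the one reported inside the WSL2 distribution, e.g. by running:
ip addr show eth0
(or simply hostname -I).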
I ran into this problem with the latest Docker Desktop. I rolled it back to 4.2 and it worked.
Docker Desktop 4.2
Windows 19044.1466
Ubuntu 20.04
I have a Java service running directly on the Linux host (I get its IP address with the ifconfig command), and my other containers, running on Docker Desktop with the WSL2-based engine, can communicate with my Java service using that IP address.
This sounds like the issue which is discussed here. For me the only thing that worked was running the docker container with --net=host and then using [::1] instead of localhost in the container to access other containers running in WSL.
So for example, container1 is started with docker run --net=host and then calls container2 like this: http://[::1]:8000/container2 (adjust port and path to your specific application)
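A minimal sketch of that setup, with the image name, port and path as placeholders taken from the example above (and assuming curl is available inside container1):
docker run -d --net=host --name container1 my-container1-image
docker exec container1 curl "http://[::1]:8000/container2"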
I'm just starting up with Docker and the first example that I was trying to run already fails:
docker container run -p 80:80 nginx
The command successfully fetches the nginx:latest image from the Docker Hub registry and runs the new container; there is no indication in the command prompt of anything going wrong. When I browse to localhost:80 I get 503 (Service Unavailable). I'm doing this test on Windows 7.
I tried the same command on another computer (this time on macOS) and it worked as expected, no issues.
What might be the problem? I found some issues on SO similar to mine, but they were connected with the usage of nginx-proxy, which I don't use and don't even know what it is. I'm trying to run a plain HTTP server.
//EDIT
When I try to bind my container to a different port, for example:
docker container run -p 4201:80 nginx
I get ERR_CONNECTION_REFUSED in Chrome, so basically the connection can't be established because the destination does not exist. Why is that?
The reason why it didn't work is that on Windows 7, Docker publishes container ports on a different IP than localhost. This IP is shown at the top of the Docker client console.
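If you are using Docker Toolbox / docker-machine (the usual setup on Windows 7), you can also query that IP directly, assuming the default machine name:
docker-machine ip default
and then browse to http://<that-ip>:4201.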
I cannot connect to the published port on the swarm that uses overlay networking. I am using Docker for Windows with Windows containers. Both Windows and Docker are fully upgraded. After Windows' 1709 update, I was hoping this issue would be resolved. I looked for information on the Internet to see if I was doing something wrong to no avail. I would like to know if anyone was successfully able to get it working.
On a side note, when I publish the port on my machine with docker run -p 80:80 without using swarm, "localhost" does not work either. I think this is a known limitation though. Both scenarios work when I switch to Linux containers.
Expected behavior
I am running a dotnet kestrel web server service. I should be able to connect to my service using the published port.
Actual behavior
Firefox gives me a timeout, Opera straight away returns connection refused. I cannot telnet into it either. Container IPs assigned by the overlay network do not work either.
Information
docker service ls gives me this:
Ports cannot be seen there; is it because the publish mode is host? Port information is available in the output of docker service ps.
And when I change the publish mode, I can scale it as well and the port information is shown in docker service ls, although I still cannot connect. The one below is without the mode=host publish parameter:
For more info, this is the output of docker network ls. I wonder if I need some sort of bridge network like on Linux.
Steps to reproduce the behavior
Initialise swarm
Start the service; in my case a simple web service built using the aspnetcore:latest image. I tried different parameters and even used a docker-stack.yml:
docker service create --name=web --publish mode=host,published=80,target=80 web:aspnetcorelatest
(in the case above, I was unable to scale it on the same machine, which is normal I guess)
docker service create --name=web --publish published=85,target=80 web:aspnetcorelatest
Try to connect using http://localhost or another IP. I tried connecting over VPN, from another machine, and via the Internet IP.