I ran into an issue with a Docker and WSL2 configuration that had been running fine for weeks...
I'm running a Docker Apache2 web server on Ubuntu WSL2 with port forwarding.
Using nmap, I can see that the Ubuntu port 8080 is open when the Docker image is running.
I can curl the web server from within Ubuntu WSL2 using both 127.0.0.1:8080 and the eth0 inet address (172.17.118.136:8080) and get the Apache default page.
The issue is when I try to access the web server from my Windows host using the Ubuntu eth0 inet address: the connection times out.
From my Windows host, pinging the Ubuntu eth0 address works, and when I run an Apache2 web server directly on Ubuntu WSL2 (no Docker), my Windows host is able to connect to it using the eth0 inet address.
You can use the following lsof command in WSL2 to find out which port Docker is listening on, and then use that port to connect:
sudo lsof -i -P | grep LISTEN
You will get a listing of the listening ports. The port for me is 49157,
so I can access the Docker container inside WSL from Windows via http://localhost:49157 or via the WSL inet IP: http://172.31.255.167:49157
Note: if lsof is not installed then use the following command to install it:
sudo apt install lsof
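Alternatively, Docker itself can report the published ports; a hedged example (the name shown by docker ps stands in for <container>):
# List every container with its published ports
docker ps --format '{{.Names}}: {{.Ports}}'
# Or ask for a single container's port mappings
docker port <container>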
I solved the issue thanks to jishi's comment on
this thread: https://github.com/microsoft/WSL/issues/4983#issuecomment-602487077
Using the IPv6 localhost URL [::1] worked for me.
I have an application that I have been deploying that looks up the local IP address to open ports for the communication it does. It is running on a Vagrant/VirtualBox setup. Everything was good. Then I recently installed Docker to run a DB container.
What I am finding is that when the existing application tries to get its IP address, it finds the docker0 address (172.x.x.x) and things start breaking. If I destroy the docker0 adapter (sudo ip link del docker0), things go back to working.
Is there a way to make sure that when the Java application asks for the IP address, it does not get the Docker address?
That's how Docker works. You could run the container on the host network so it will use the "VM" IP:
docker run --net=host my_container
My machine is on a private network with private DNS servers, and a private zone for DNS resolution. I can resolve hosts on this zone from my host machine, but I cannot resolve them from containers running on my host machine.
Host:
root@host:~# cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 127.0.1.1
root@host:~# ping privatedomain.io
PING privatedomain.io (192.168.0.101) 56(84) bytes of data.
Container:
root@container:~# cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 8.8.8.8
nameserver 8.8.4.4
root@container:~# ping privatedomain.io
ping: unknown host privatedomain.io
It's fairly obvious that Google's public DNS servers won't resolve my private DNS requests. I know I can force it with docker run --dns 192.168.0.1, or by setting DOCKER_OPTS="--dns 192.168.0.1" in /etc/default/docker, but my laptop frequently switches networks. It seems like there should be a systematic way of solving this problem.
Docker populates /etc/resolv.conf by copying the host's /etc/resolv.conf, and filtering out any local nameservers such as 127.0.1.1. If there are no nameservers left after that, Docker will add Google's public DNS servers (8.8.8.8 and 8.8.4.4).
According to the Docker documentation:
Note: If you need access to a host’s localhost resolver, you must modify your DNS service on the host to listen on a non-localhost address that is reachable from within the container.
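A quick, hedged way to see this filtering in action is to print the resolv.conf a fresh container actually receives (using the busybox image, as later answers in this thread do):
# Shows the nameservers Docker injected into the container
docker run --rm busybox cat /etc/resolv.conf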
The DNS service on the host is dnsmasq, so if you make dnsmasq listen on your docker0 IP and add that address to resolv.conf, Docker will configure the containers to use it as their nameserver.
1 . Create/edit /etc/dnsmasq.conf† and add these lines:
interface=lo
interface=docker0
2 . Find your docker IP (in this case, 172.17.0.1):
root@host:~# ifconfig | grep -A2 docker0
docker0 Link encap:Ethernet HWaddr 02:42:bb:b4:4a:50
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
3 . Create/edit /etc/resolvconf/resolv.conf.d/tail and add this line:
nameserver 172.17.0.1
4 . Restart networking, update resolv.conf, restart docker:
sudo service network-manager restart
sudo resolvconf -u
sudo service docker restart
Your containers will now be able to resolve DNS from whatever DNS servers the host machine is using.
† The path may be /etc/dnsmasq.conf, /etc/dnsmasq.conf.d/docker.conf, /etc/NetworkManager/dnsmasq.conf, or /etc/NetworkManager/dnsmasq.d/docker.conf depending on your system and personal preferences.
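To confirm the whole setup end to end, a sketch (privatedomain.io is the example domain from the question above):
# A new container should now list 172.17.0.1 as its nameserver and resolve private names
docker run --rm busybox cat /etc/resolv.conf
docker run --rm busybox nslookup privatedomain.io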
For Ubuntu 18.04, and other systems that use systemd-resolved, it may be necessary to install dnsmasq and resolvconf. systemd-resolved is hard-coded to listen on 127.0.0.53, and Docker filters out any loopback address when reading resolv.conf.
1 . Install dnsmasq and resolvconf.
sudo apt update
sudo apt install dnsmasq resolvconf
2 . Find your docker IP (in this case, 172.17.0.1):
root@host:~# ifconfig | grep -A2 docker0
docker0 Link encap:Ethernet HWaddr 02:42:bb:b4:4a:50
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
3 . Edit /etc/dnsmasq.conf and add these lines:
interface=docker0
bind-interfaces
listen-address=172.17.0.1
4 . Create/edit /etc/resolvconf/resolv.conf.d/tail and add this line:
nameserver 172.17.0.1
5 . Restart networking, update resolv.conf, restart docker:
sudo service network-manager restart
sudo resolvconf -u
sudo service dnsmasq restart
sudo service docker restart
Your containers will now be able to resolve DNS from whatever DNS servers the host machine is using.
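As a hedged sanity check that dnsmasq is actually listening where Docker expects it (ss is part of iproute2; privatedomain.io is again the question's example domain):
# dnsmasq should be bound to the docker0 address on port 53
sudo ss -lunp | grep 172.17.0.1:53
# Query it directly from the host
nslookup privatedomain.io 172.17.0.1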
As you know, Docker copies the host's /etc/resolv.conf file to containers, but removes any local nameservers.
My solution to this problem is to keep using systemd-resolved and NetworkManager, but add dnsmasq and use it to "forward" Docker containers' DNS queries to systemd-resolved.
Step by step guide:
Make /etc/resolv.conf a "real" file
sudo rm /etc/resolv.conf
sudo touch /etc/resolv.conf
Create the file /etc/NetworkManager/conf.d/systemd-resolved-for-docker.conf to tell NetworkManager to inform systemd-resolved but not to touch /etc/resolv.conf:
[main]
# NetworkManager will push the DNS configuration to systemd-resolved
dns=systemd-resolved
# NetworkManager won’t ever write anything to /etc/resolv.conf
rc-manager=unmanaged
Install dnsmasq
sudo apt-get -y install dnsmasq
Configure dnsmasq in /etc/dnsmasq.conf to listen for DNS queries coming from Docker and forward them to the systemd-resolved name server:
# Use interface docker0
interface=docker0
# Explicitly specify the address to listen on
listen-address=172.17.0.1
# It looks like the docker0 interface is not available when the dnsmasq service starts, so it fails. This option makes dynamically created interfaces work in the same way as the default.
bind-dynamic
# Set systemd-resolved DNS server
server=127.0.0.53
Edit /etc/resolv.conf to use the systemd-resolved nameserver (127.0.0.53) and the host's IP on the Docker network (172.17.0.1):
# systemd-resolved name server
nameserver 127.0.0.53
# docker host ip
nameserver 172.17.0.1
Restart services
sudo service network-manager restart
sudo service dnsmasq restart
sudo service docker restart
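As a quick sanity check of the forwarding chain (a sketch; example.com stands in for whatever name you need to resolve):
# Query dnsmasq on the docker0 address directly; it forwards to systemd-resolved
nslookup example.com 172.17.0.1
# A container should now resolve the same names
docker run --rm busybox nslookup example.com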
For more info see my post (in Spanish): https://rubensa.wordpress.com/2020/02/07/docker-no-usa-los-mismos-dns-que-el-host/
I had problems with the DNS resolver in our Docker containers. I tried a lot of different things, and in the end I figured out that my CentOS VPS at HostGator simply didn't have NetworkManager-tui (nmtui) installed by default; I installed it and rebooted.
sudo yum install NetworkManager-tui
Then I reconfigured my resolv.conf with 8.8.8.8 as the default DNS:
nano /etc/resolv.conf
My case with many images from Docker Hub (nodered, syncthing and others):
the container is running under a non-root user
/etc/resolv.conf inside the container has permissions 600 and is owned by root
So my solution is very simple:
root@container:~# chmod 644 /etc/resolv.conf
Profit! :))
If you are using a VPN, the VPN protocol's overhead might push outbound packets beyond the configured MTU of your private network.
A typical MTU is 1500.
Try adding this content to /etc/docker/daemon.json
{
"mtu": 1300,
"dns": ["<whatever DNS server you need in your private network>"]
}
Then systemctl restart docker.
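To pick a working MTU value, one hedged approach is to check the VPN interface and probe the largest payload that passes without fragmentation (tun0 is an assumption; adjust to your VPN device):
# MTU of the VPN interface
ip link show tun0 | grep mtu
# 1272 bytes of ICMP payload + 28 bytes of headers = 1300, matching the daemon.json above
ping -c 3 -M do -s 1272 8.8.8.8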
I have the same error message in my systemctl status docker.
I run a Nextcloud and a Nextcloud nginx proxy container and used Docker Compose to install them. It worked for multiple months without big hiccups, but on Friday it wasn't accessible. The server had shut down.
I restarted it; my Icecast2 instance is working fine and was used this Sunday for the service in our church. But the Docker containers are gone. docker ps -a doesn't show any, and I can't access Nextcloud via docker exec like I normally would. And I have the error message:
No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]
Feb 06 19:04:58 ncxxxxxxxxx dockerd[21551]: time="2022-02-06T19:04:58.894366765Z" level=info msg="IPv6 enabled; Adding default IPv6 external servers: [nameserver 2001:4860:4860::8888 nameserver 2001:4860:4860::8844]"
My resolv.conf looks like this:
nameserver 127.0.0.53
options edns0 trust-ad
search fritz.box
Based on the answer from @rubensa, but simpler and more integrated IMHO:
Install dnsmasq
sudo apt-get -y install dnsmasq
Configure dnsmasq in /etc/dnsmasq.d/docker-dns-fix.conf to listen for DNS queries coming from Docker and forward them to the systemd-resolved name server:
# Use interface docker0
interface=docker0
# Explicitly specify the address to listen on
listen-address=172.17.0.1
# It looks like the docker0 interface is not available when the dnsmasq service starts, so it fails. This option makes dynamically created interfaces work in the same way as the default.
bind-dynamic
# Set systemd-resolved DNS server
server=127.0.0.53
Tell Docker to use dnsmasq by editing/creating /etc/docker/daemon.json
{
"dns": ["172.17.0.1"]
}
Restart services
sudo service dnsmasq restart
sudo service docker restart
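To confirm Docker is now handing queries to dnsmasq, the Server line of an nslookup from a container should point at 172.17.0.1 (a hedged check with the busybox image used elsewhere in this thread):
docker run --rm busybox nslookup google.com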
For me, on Ubuntu 18.04 LTS, this was enough:
sudo service network-manager restart
sudo resolvconf -u
sudo service dnsmasq restart
sudo service docker restart
I've been trying to run docker build on various Dockerfiles which previously worked, but they are now no longer working.
As soon as the Dockerfile included any line that installed software, it would fail with a message saying that the package was not found.
RUN apt-get -y install supervisor nodejs npm
The common message which showed up in the logs was
Could not resolve 'archive.ubuntu.com'
Any idea why any software will not install?
Uncommenting DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4" in /etc/default/docker as Matt Carrier suggested did NOT work for me. Nor did putting my corporation's DNS servers in that file. But, there's another way (read on).
First, let's verify the problem:
$ docker run --rm busybox nslookup google.com    # takes a long time
Server:    8.8.8.8
Address 1: 8.8.8.8
nslookup: can't resolve 'google.com'    # <--- appears after a long time
If the command appears to hang, but eventually spits out the error "can't resolve 'google.com'", then you have the same problem as me.
The nslookup command queries the DNS server 8.8.8.8 in order to turn the text address of 'google.com' into an IP address. Ironically, 8.8.8.8 is Google's public DNS server. If nslookup fails, public DNS servers like 8.8.8.8 might be blocked by your company (which I assume is for security reasons).
You'd think that adding your company's DNS servers to DOCKER_OPTS in /etc/default/docker should do the trick, but for whatever reason, it didn't work for me. I describe what worked for me below.
SOLUTION:
On the host (I'm using Ubuntu 16.04), find out the primary and secondary DNS server addresses:
$ nmcli dev show | grep 'IP4.DNS'
IP4.DNS[1]: 10.0.0.2
IP4.DNS[2]: 10.0.0.3
Using these addresses, create a file /etc/docker/daemon.json:
$ sudo su root
# cd /etc/docker
# touch daemon.json
Put this in /etc/docker/daemon.json:
{
"dns": ["10.0.0.2", "10.0.0.3"]
}
Exit from root:
# exit
Now restart docker:
$ sudo service docker restart
VERIFICATION:
Now check that adding the /etc/docker/daemon.json file allows you to resolve 'google.com' into an IP address:
$ docker run --rm busybox nslookup google.com
Server: 10.0.0.2
Address 1: 10.0.0.2
Name: google.com
Address 1: 2a00:1450:4009:811::200e lhr26s02-in-x200e.1e100.net
Address 2: 216.58.198.174 lhr25s10-in-f14.1e100.net
REFERENCES:
I based my solution on an article by Robin Winslow, who deserves all of the credit for the solution. Thanks, Robin!
"Fix Docker's networking DNS config." Robin Winslow. Retrieved 2016-11-09. https://robinwinslow.uk/2016/06/23/fix-docker-networking-dns/
After much headache I found the answer. Could not resolve 'archive.ubuntu.com' can be fixed by making the following changes:
Uncomment the following line in /etc/default/docker
DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4"
Restart the Docker service
sudo service docker restart
Delete any images which have cached the invalid DNS settings.
Build again and the problem should be solved.
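If you'd rather not delete images, a hedged alternative is to rebuild while bypassing the cached layers (myimage is a placeholder tag):
docker build --no-cache -t myimage .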
Credit goes to Andrew SB
I ran into the same problem, but neither uncommenting the DNS entries in /etc/default/docker, nor editing /etc/resolv.conf in the build container, nor editing /etc/docker/daemon.json helped for me.
But after I built with the option --network=host, resolving worked fine again.
docker build --network=host -t my-own-ubuntu-like-image .
Maybe this will help someone else too.
I believe that Matt Carrier's answer is the correct solution for this problem. However, after implementing it, I still observed the same behavior: could not resolve 'archive.ubuntu.com'.
This led me to eventually find that the network I was connected to was blocking public DNS. The solution to this problem was to configure my Docker container to use the same name server that my host (the machine from which I was running Docker) was using.
How I triaged:
Since I was working through the Docker documentation, I already had an example image installed on my machine. I was able to start a new container to run that image and create a new bash session in that container: docker run -it docker/whalesay bash
Does the container have an Internet connection?: ping 172.217.4.238 (google.com)
Can the container resolve hostnames? ping google.com
In my case, the first ping resulted in responses, the second did not.
How I fixed:
Once I discovered that DNS was not working inside the container, I verified that I could duplicate the same behavior on the host. nslookup google.com resolved just fine on the host. But nslookup google.com 8.8.8.8 or nslookup google.com 8.8.4.4 timed out.
Next, I found the name server(s) that my host was using by running nm-tool (on Ubuntu 14.04). In the vein of fast feedback, I started up the example image again, and added the IP address of the name server to the container's resolv.conf file: sudo vi /etc/resolv.conf. Once saved, I attempted the ping again (ping google.com) and this time it worked!
Please note that the changes made to the container's resolv.conf are not persistent and will be lost across container restarts. In my case, the more appropriate solution was to add the IP address of my network's name server to the host's /etc/default/docker file.
After adding the local DNS IP to /etc/default/docker it started working for me. The steps are below:
$ nm-tool # (will give you the dns IP)
DNS : 172.168.7.2
$ vim /etc/default/docker # (uncomment the DOCKER_OPTS and add DNS IP)
DOCKER_OPTS="--dns 172.168.7.2 --dns 8.8.8.8 --dns 8.8.4.4"
$ docker rm `docker ps --no-trunc -aq` # (remove all the containers to avoid DNS cache)
$ docker rmi $(docker images -q) # (remove all the images)
$ service docker restart #(restart the docker to pick up dns setting)
Now go ahead and build the docker... :)
For anyone who is also having this problem: I solved mine by editing the /etc/default/docker file, as suggested by other answers and questions. However, I had no idea what IP to use as the DNS.
It was only after a while that I figured out I had to run ifconfig docker0 on the host to show the IP of the Docker network interface.
docker0   Link encap:Ethernet  HWaddr 02:42:69:ba:b4:07
          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:69ff:feba:b407/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8433 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9876 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:484195 (484.1 KB)  TX bytes:24564528 (24.5 MB)
It was 172.17.0.1 in my case. Hope this helps anyone who is also having this issue.
I found this answer after some Googling. I'm using Windows, so some of the above answers did not apply to my file system.
Basically run:
docker-machine ssh default
echo "nameserver 8.8.8.8" > /etc/resolv.conf
This just overwrites the existing nameserver with 8.8.8.8, I believe. It worked for me!
Based on some comments, you may have to be root. To do that, issue sudo -i.
I just wanted to add a late response for anyone coming across this issue from search engines.
Do NOT do this: I used to have an option in /etc/default/docker to set iptables=false. This was because ufw didn't work (everything was open even though only 3 ports were allowed), so I blindly followed the answer to this question: Uncomplicated Firewall (UFW) is not blocking anything when using Docker, and this, which was linked in the comments.
I have a very low understanding of iptables rules / nat / routing in general, hence why I might have done something irrational.
Turns out that I've probably misconfigured it and killed DNS resolution inside my containers. When I ran an interactive container terminal: docker run -i -t ubuntu:14.04 /bin/bash
I had these results:
root@6b0d832700db:/# ping google.com
ping: unknown host google.com
root@6b0d832700db:/# cat /etc/resolv.conf
search online.net
nameserver 8.8.8.8
nameserver 8.8.4.4
root@6b0d832700db:/# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=56 time=1.76 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=56 time=1.72 ms
Reverting all of my ufw configuration (before.rules), disabling ufw and removing iptables=false from /etc/default/docker restored the DNS resolution functionality of the containers.
I'm now looking at re-enabling ufw functionality by following these instructions instead.
I struggled with this for some time as well, but here is what solved it for me on Ubuntu 16.04 x64. I hope it saves someone's time, too.
In /etc/NetworkManager/NetworkManager.conf, comment out the dns=dnsmasq line so it reads:
#dns=dnsmasq
Create (or modify) /etc/docker/daemon.json:
{
"dns": ["8.8.8.8"]
}
Restart docker with:
sudo service docker restart
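If you prefer to script the NetworkManager change instead of editing the file by hand, a sketch of the same steps:
# Comment out dns=dnsmasq, then restart NetworkManager and Docker
sudo sed -i 's/^dns=dnsmasq/#dns=dnsmasq/' /etc/NetworkManager/NetworkManager.conf
sudo service network-manager restart
sudo service docker restart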
I had the same issue and tried the steps mentioned, but none seemed to work until I refreshed the network settings.
The steps:
As mentioned, add DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4 --ip-masq=true" to /etc/default/docker.
Manually flush the nat POSTROUTING chain using iptables -t nat -F POSTROUTING. After running this, restart Docker and it will re-initialize the nat table with the new IP range.
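Spelled out as commands, a sketch of that second step (using the Debian/Ubuntu service name seen elsewhere in this thread):
# Flush the nat POSTROUTING chain, then restart Docker so it re-creates its rules
sudo iptables -t nat -F POSTROUTING
sudo service docker restart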
Same issue for me (on Ubuntu Xenial).
docker run --dns ... for containers worked.
Updating docker daemon options for docker build (docker-compose etc.) did not work.
After analyzing the Docker logs (journalctl -u docker.service) I found some warnings about a bad resolvconf being applied.
Following that, I found that our corporate nameservers were added to the network interfaces but not in resolvconf.
I applied this solution: How do I configure my static DNS in interfaces? (askubuntu), i.e. adding the nameservers to /etc/resolvconf/resolv.conf.d/tail.
After updating resolvconf (or a reboot),
docker run --rm busybox nslookup google.com
worked instantly.
All my docker-compose builds are working now.
I got the same issue today. I just added the line below to /etc/default/docker:
DOCKER_OPTS="--dns 172.18.20.13 --dns 172.20.100.29 --dns 8.8.8.8"
and then I restarted my laptop.
In my case restarting the Docker daemon was not enough; I had to restart my laptop to make it work.
Before spending too much time on any of the other solutions, simply restart Docker and try again.
Solved the problem for me, using Docker Desktop for Windows on Windows 10.
In my case, since my containers were in a cloud environment, the MTU of the interfaces was not the usual 1500 but something like 1450, so I had to configure my Docker daemon to use a matching, smaller MTU for containers:
{
"mtu": 1454
}
look at this : https://mlohr.com/docker-mtu/
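After editing daemon.json the daemon needs a restart, and the MTU a new container actually receives can be checked from inside it (a hedged check using the busybox image):
sudo systemctl restart docker
# The container's eth0 should now report the configured MTU
docker run --rm busybox ip link show eth0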
In my case, the firewall was the issue; disabling it for the moment solved the problem. I use nftables, and stopping the service did the trick.
sudo systemctl stop nftables.service
With recent updates, the following line in /etc/docker/daemon.json was the cause of the issue:
{
"bridge": "none"
}
Remove it and restart the Docker service with: sudo systemctl restart docker
This was on Ubuntu 20.04.3 LTS with Docker version 20.10.11, build dea9396.
On my system (macOS High Sierra 10.13.6 with Docker 2.1.0.1) this was due to a corporate proxy.
I solved this in two steps:
Manually configure proxy settings in Preferences > Proxies
Add the same settings to your config.json inside ~/.docker/config.json, like:
"proxies":
{
"default":
{
"httpProxy": "MYPROXY",
"httpsProxy": "MYPROXY",
"noProxy": "MYPROXYWHITELIST"
}
}
I have dnsmasq on my system for DNS resolution, and it had the nameservers needed to resolve the URL. Docker copies the host's /etc/resolv.conf as-is into the container's /etc/resolv.conf, which therefore didn't have the right nameservers. From the docs:
By default, a container inherits the DNS settings of the host, as
defined in the /etc/resolv.conf configuration file.
Adding the nameservers to the host's /etc/resolv.conf fixed the issue.
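For example, something like the following on the host (the addresses are placeholders for your real internal resolvers):
nameserver 192.168.0.1
nameserver 192.168.0.2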
I had it working all right, but now it has stopped. I tried the following commands to no avail:
docker run -dns 8.8.8.8 base ping google.com
docker run base ping google.com
sysctl -w net.ipv4.ip_forward=1 - both on the host and on the container
All I get is unknown host google.com. Docker version 0.7.0
Any ideas?
P.S. ufw disabled as well
The first thing to check is to run cat /etc/resolv.conf in the Docker container. If it has an invalid DNS server, such as nameserver 127.0.x.x, then the container will not be able to resolve domain names into IP addresses, so ping google.com will fail.
The second thing to check is to run cat /etc/resolv.conf on the host machine. Docker basically copies the host's /etc/resolv.conf to the container every time a container is started. So if the host's /etc/resolv.conf is wrong, the container's will be too.
If you have found that the host's /etc/resolv.conf is wrong, then you have 2 options:
Hardcode the DNS server in daemon.json. This is easy, but not ideal if you expect the DNS server to change.
Fix the host's /etc/resolv.conf. This is a little trickier, but it is generated dynamically, and you are not hardcoding the DNS server.
1. Hardcode DNS server in docker daemon.json
Edit /etc/docker/daemon.json
{
"dns": ["10.1.2.3", "8.8.8.8"]
}
Restart the docker daemon for those changes to take effect:
sudo systemctl restart docker
Now when you run/start a container, docker will populate /etc/resolv.conf with the values from daemon.json.
2. Fix the host's /etc/resolv.conf
A. Ubuntu 16.04 and earlier
For Ubuntu 16.04 and earlier, /etc/resolv.conf was dynamically generated by NetworkManager.
Comment out the line dns=dnsmasq (with a #) in /etc/NetworkManager/NetworkManager.conf
Restart the NetworkManager to regenerate /etc/resolv.conf :
sudo systemctl restart network-manager
Verify on the host: cat /etc/resolv.conf
B. Ubuntu 18.04 and later
Ubuntu 18.04 changed to use systemd-resolved to generate /etc/resolv.conf. Now by default it uses a local DNS cache 127.0.0.53. That will not work inside a container, so Docker will default to Google's 8.8.8.8 DNS server, which may break for people behind a firewall.
/etc/resolv.conf is actually a symlink (ls -l /etc/resolv.conf) which points to /run/systemd/resolve/stub-resolv.conf (127.0.0.53) by default in Ubuntu 18.04.
Just change the symlink to point to /run/systemd/resolve/resolv.conf, which lists the real DNS servers:
sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf
Verify on the host: cat /etc/resolv.conf
Now you should have a valid /etc/resolv.conf on the host for docker to copy into the containers.
Fixed by following this advice:
[...] can you try to reset everything?
pkill docker
iptables -t nat -F
ifconfig docker0 down
brctl delbr docker0
docker -d
It will force docker to recreate the bridge and reinit all the network rules
https://github.com/dotcloud/docker/issues/866#issuecomment-19218300
Seems the interface was 'hung' somehow.
Update for more recent versions of docker:
The above answer might still get the job done for you, but it has been quite a long time since it was posted, and Docker is more polished now, so make sure you try these first before getting into mangling with iptables and the like.
sudo service docker restart or (if you are in a linux distro that does not use upstart) sudo systemctl restart docker
The intended way to restart docker is not to do it manually but to use the service or systemctl command:
service docker restart
or
systemctl restart docker
Updating this question with an answer for OSX (using Docker Machine)
If you are running Docker on OSX using Docker Machine, then the following worked for me:
docker-machine restart
<...wait for it to restart, which takes up to a minute...>
docker-machine env
eval $(docker-machine env)
Then (at least in my experience), if you ping google.com from a container all will be well.
Tried all answers, none worked for me.
After a few hours of trying everything else I could find, this did the trick:
reboot
-_-
I do not know exactly what I am doing, but this is what worked for me:
OTHER_BRIDGE=br-xxxxx # this is the other random docker bridge (`ip addr` to find)
service docker stop
ip link set dev $OTHER_BRIDGE down
ip link set dev docker0 down
ip link delete $OTHER_BRIDGE type bridge
ip link delete docker0 type bridge
service docker start && service docker stop
iptables -t nat -A POSTROUTING ! -o docker0 -s 172.17.0.0/16 -j MASQUERADE
iptables -t nat -A POSTROUTING ! -o docker0 -s 172.18.0.0/16 -j MASQUERADE
service docker start
I was using DOCKER_OPTS="--dns 8.8.8.8" and later discovered that my container didn't have direct access to the internet but could access my corporate intranet. I changed DOCKER_OPTS to the following:
DOCKER_OPTS="--dns <internal_corporate_dns_address>"
replacing internal_corporate_dns_address with the IP address or FQDN of our DNS, and restarted Docker using
sudo service docker restart
and then spawned my container and checked that it had access to internet.
No internet access can also be caused by missing proxy settings. In that case, --network host may not work either. The proxy can be configured by setting the environment variables http_proxy and https_proxy:
docker run -e "http_proxy=YOUR-PROXY" \
-e "https_proxy=YOUR-PROXY"\
-e "no_proxy=localhost,127.0.0.1" ...
Do not forget to set no_proxy as well, or all requests (including those to localhost) will go through the proxy.
More information: Proxy Settings in the Archlinux Wiki.
I was stumped when this happened randomly for me for one of my containers, while the other containers were fine. The container was attached to at least one non-internal network, so there was nothing wrong with the Compose definition. Restarting the VM / docker daemon did not help. It was also not a DNS issue because the container could not even ping an external IP. What solved it for me was to recreate the docker network(s). In my case, docker-compose down && docker-compose up worked.
Compose
This forces the recreation of all networks of all the containers:
docker-compose down && docker-compose up
Swarm mode
I suppose you just remove and recreate the service, which recreates the service's network(s):
docker service rm some-service
docker service create ...
If the container's network(s) are external
Simply remove and recreate the external networks of that service:
docker network rm some-external-network
docker network create some-external-network
For me it was the host's firewall: I had to allow DNS on the host's firewall, and I also had to restart Docker after changing the host firewall setting.
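The answer doesn't name a specific firewall; if the host happens to use ufw, a sketch of the idea might look like this (adjust to your own firewall and rules):
# Allow DNS traffic arriving from the Docker bridge, then restart Docker
sudo ufw allow in on docker0 to any port 53
sudo systemctl restart docker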
For me, my problem was that iptables-services was not installed. This worked for me (CentOS):
sudo yum install iptables-services
sudo service docker restart
Other answers have stated that the docker0 interface (bridge) can be the source of the problem. On Ubuntu 20.04 I observed that the interface was missing its IP address (to be checked with ip addr show dev docker0). Restarting Docker alone did not help. I had to delete the bridge interface manually.
sudo ip link delete docker0
sudo systemctl restart docker
You may have started your Docker daemon with DNS options (--dns 172.x.x.x).
I had the same error and removed the options from /etc/default/docker.
The lines:
# Use DOCKER_OPTS to modify the daemon startup options.
DOCKER_OPTS="--dns 172.x.x.x"
On CentOS 8,
my problem was that I did not install and start iptables before starting the Docker service. Make sure the iptables service is up and running before you start the Docker service.
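Spelled out as commands, a sketch along the lines of the CentOS answer above:
sudo yum install -y iptables-services
sudo systemctl enable --now iptables
sudo systemctl restart docker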
Sharing a simple and working solution for posterity. When we run a Docker container without explicitly mentioning the --network flag, it connects to the default bridge network, which was prohibiting connections to the outside world. To resolve this, we have to create our own bridge network (a user-defined bridge) and explicitly pass it to the docker run command.
docker network create --driver bridge mynetwork
docker run -it --network mynetwork image:version
This helped me:
sudo ip link delete docker0
sudo systemctl stop docker.socket
sudo systemctl stop docker.service
sudo systemctl start docker.socket
sudo systemctl start docker.service
NOTE: after this, the docker0 interface should have an IP address, something like this:
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
If you're on OSX, you might need to restart your machine after installing Docker. This has been an issue at times.
For me it was an iptables forwarding rule. For some reason the following rules, when coupled with Docker's iptables rules, caused all outbound traffic from containers to hit localhost:8080:
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
iptables -t nat -I OUTPUT -p tcp -d 127.0.0.1 --dport 80 -j REDIRECT --to-ports 8080
I had the problem on Ubuntu 18.04; however, the problem was with DNS. I was on a corporate network that has its own DNS server and blocks other DNS servers (in order to block some websites: porn, torrents, and so on).
To resolve the problem:
find your DNS on the host machine
use --dns your_dns as suggested by @jobin:
docker run --dns your_dns -it --name cowsay --hostname cowsay debian bash
For Ubuntu 19.04 using openconnect 8.3 for VPN, I had to symlink /etc/resolv.conf to the one in systemd (the opposite of the answer by wisbucky).
sudo ln -sf /etc/resolv.conf /run/systemd/resolve/resolv.conf
Steps to debug
Connect to Company VPN
Look for correct VPN settings in either /etc/resolv.conf or /run/systemd/resolve/resolv.conf
Whichever has the correct DNS settings, we'll symlink the other file to it
(Hint: place the one with the correct settings on the left of the ln command, as the link target)
Docker version: Docker version 19.03.0-rc2, build f97efcc
For me, using CentOS 7.4, it was not an issue with /etc/resolv.conf, iptables, iptables nat rules, nor Docker itself. The issue was that the host was missing the bridge-utils package, which Docker requires to build the bridge using the brctl command. yum install -y bridge-utils and a Docker restart solved the problem.
On Windows (8.1) I killed the VirtualBox interface (via Task Manager) and it solved the issue.
Originally my docker container was able to reach the external internet (This is a docker service/container running on an Amazon EC2).
Since my app is an API, I followed up the creation of my container (it succeeded in pulling all the packages it needed) by updating my iptables to route all traffic from port 80 to the port that my API (running in Docker) was listening on.
Then, later, when I tried rebuilding the container, it failed. After much struggle, I discovered that my previous step (setting the iptables port-forwarding rule) had messed up Docker's external networking capability.
Solution: Stop your IPTable service:
sudo service iptables stop
Restart The Docker Daemon:
sudo service docker restart
Then, try rebuilding your container. Hope this helps.
Follow Up
I completely overlooked that I did not need to mess with iptables to forward incoming traffic on port 80 to the API's port. Instead, I just mapped port 80 to the port the API in Docker was running on:
docker run -d -p 80:<api_port> <image>:<tag> <command to start api>
Just adding this here in case someone runs into this issue with Docker running inside a VirtualBox VM. I reconfigured the VirtualBox network to bridged instead of NAT, and the problem went away.
There are a lot of good answers already. I faced a similar problem on my Orange Pi PC running Armbian recently: the Docker container was blocked from the internet. This command solved the problem in my case, so I would like to share it:
docker run --security-opt seccomp=unconfined imageName
I've tried most answers in here, but the only thing that worked was re-creating the network:
$ docker network rm the-network
$ docker network create --driver=bridge the-network
I also needed to re-create the docker container that used it:
$ sudo docker create --name the-name --network the-network <image>
Then it started with internet access.
I am on Arch Linux, and after trying all the above answers I realized that I had a firewall enabled on my machine, nftables, and disabling it did the trick. I did:
sudo systemctl disable nftables
sudo systemctl stop nftables
sudo reboot
My network cards:
➜ ~ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp1s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN mode DEFAULT group default qlen 1000
link/ether 68:f7:28:84:e7:fe brd ff:ff:ff:ff:ff:ff
3: wlp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DORMANT group default qlen 1000
link/ether d0:7e:35:d2:42:6d brd ff:ff:ff:ff:ff:ff
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
link/ether 02:42:43:3f:ff:94 brd ff:ff:ff:ff:ff:ff
5: br-c51881f83e32: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
link/ether 02:42:ae:34:49:c3 brd ff:ff:ff:ff:ff:ff
6: br-c5b2a1d25a86: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
link/ether 02:42:72:d3:6f:4d brd ff:ff:ff:ff:ff:ff
8: veth56f42a2@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
link/ether 8e:70:36:10:4e:83 brd ff:ff:ff:ff:ff:ff link-netnsid 0
and my firewall configuration, /etc/nftables.conf, which I have now disabled and will try to improve in the future so that the docker0 network card rules are set up correctly:
#!/usr/bin/nft -f
# vim:set ts=2 sw=2 et:
# IPv4/IPv6 Simple & Safe firewall ruleset.
# More examples in /usr/share/nftables/ and /usr/share/doc/nftables/examples/.
table inet filter
delete table inet filter
table inet filter {
chain input {
type filter hook input priority filter
policy drop
ct state invalid drop comment "early drop of invalid connections"
ct state {established, related} accept comment "allow tracked connections"
iifname lo accept comment "allow from loopback"
ip protocol icmp accept comment "allow icmp"
meta l4proto ipv6-icmp accept comment "allow icmp v6"
#tcp dport ssh accept comment "allow sshd"
pkttype host limit rate 5/second counter reject with icmpx type admin-prohibited
counter
}
chain forward {
type filter hook forward priority filter
policy drop
}
If you're running Docker rootless and facing this issue, iptables may not have been configured properly during installation, most likely because the --skip-iptables option was used when Docker complained about iptables:
[ERROR] Missing system requirements. Run the following commands to
[ERROR] install the requirements and run this tool again.
[ERROR] Alternatively iptables checks can be disabled with --skip-iptables .
########## BEGIN ##########
sudo sh -eux <<EOF
# Load ip_tables module
modprobe ip_tables
EOF
########## END ##########
Let's check if this is the issue: can the ip_tables kernel module be loaded?
sudo modprobe ip_tables
If there is no output, this answer may not help you (you could try anyway). Otherwise, the output is something like this:
modprobe: FATAL: Module ip_tables not found in directory /lib/modules/5.18.9-200.fc36.x86_64
Let's fix it!
First, uninstall Docker rootless (no need to stop the service via systemctl, the script handles that):
dockerd-rootless-setuptool.sh uninstall --skip-iptables
Ensure iptables package is installed, although it's shipped by default by major distributions.
Now, make the ip_tables module visible to modprobe and load it (thanks to this):
sudo depmod
sudo modprobe ip_tables
Now, re-install Docker rootless:
dockerd-rootless-setuptool.sh install
If it doesn't complain about iptables, you're done and the problem should be fixed. Don't forget to enable the service (i.e. systemctl enable --user --now docker)!
For me, I was facing the same issue as a user of RedHat/CentOS/Fedora using Podman:
firewall-cmd --zone=public --add-masquerade
firewall-cmd --permanent --zone=public --add-masquerade
For more, see: firewalld and podman (or docker) – no internet in the container and could not resolve host
I also encountered such an issue while trying to set up a project using Docker Compose on Ubuntu.
Docker had no access to the internet at all; when I tried to ping any IP address or nslookup some URL, it failed every time.
I tried all the possible solutions with DNS resolution described above, to no avail.
I spent the whole day trying to find out what on earth was going on, and finally found out that the cause of all the trouble was the antivirus, in particular its firewall, which for some reason blocked Docker from getting the IP address and port.
When I disabled it, everything worked fine.
So, if you have an antivirus installed and nothing helps fix the issue, the problem could be the antivirus's firewall.