My machine is on a private network with private DNS servers, and a private zone for DNS resolution. I can resolve hosts on this zone from my host machine, but I cannot resolve them from containers running on my host machine.
Host:
root@host:~# cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 127.0.1.1
root@host:~# ping privatedomain.io
PING privatedomain.io (192.168.0.101) 56(84) bytes of data.
Container:
root@container:~# cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 8.8.8.8
nameserver 8.8.4.4
root@container:~# ping privatedomain.io
ping: unknown host privatedomain.io
It's fairly obvious that Google's public DNS servers won't resolve my private DNS requests. I know I can force it with docker --dns 192.168.0.1, or set DOCKER_OPTS="--dns 192.168.0.1" in /etc/default/docker, but my laptop frequently switches networks. It seems like there should be a systematic way of solving this problem.
Docker populates /etc/resolv.conf by copying the host's /etc/resolv.conf, and filtering out any local nameservers such as 127.0.1.1. If there are no nameservers left after that, Docker will add Google's public DNS servers (8.8.8.8 and 8.8.4.4).
According to the Docker documentation:
Note: If you need access to a host’s localhost resolver, you must modify your DNS service on the host to listen on a non-localhost address that is reachable from within the container.
The DNS service on the host is dnsmasq, so if you make dnsmasq listen on your docker IP and add that to resolv.conf, docker will configure the containers to use that as the nameserver.
1. Create/edit /etc/dnsmasq.conf† and add these lines:
interface=lo
interface=docker0
2. Find your docker IP (in this case, 172.17.0.1):
root@host:~# ifconfig | grep -A2 docker0
docker0 Link encap:Ethernet HWaddr 02:42:bb:b4:4a:50
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
3. Create/edit /etc/resolvconf/resolv.conf.d/tail and add this line:
nameserver 172.17.0.1
4. Restart networking, update resolv.conf, restart docker:
sudo service network-manager restart
sudo resolvconf -u
sudo service docker restart
Your containers will now be able to resolve DNS from whatever DNS servers the host machine is using.
† The path may be /etc/dnsmasq.conf, /etc/dnsmasq.conf.d/docker.conf, /etc/NetworkManager/dnsmasq.conf, or /etc/NetworkManager/dnsmasq.d/docker.conf depending on your system and personal preferences.
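As a quick sanity check (a sketch; busybox is just a convenient throwaway image, and privatedomain.io stands in for one of your private names), resolution from a container should now go through the host's dnsmasq at 172.17.0.1:
docker run --rm busybox nslookup privatedomain.io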
For Ubuntu 18.04, and other systems that use systemd-resolved, it may be necessary to install dnsmasq and resolvconf. systemd-resolved is hard-coded to listen on 127.0.0.53, and Docker filters out any loopback address when reading resolv.conf.
1. Install dnsmasq and resolvconf:
sudo apt update
sudo apt install dnsmasq resolvconf
2. Find your docker IP (in this case, 172.17.0.1):
root@host:~# ifconfig | grep -A2 docker0
docker0 Link encap:Ethernet HWaddr 02:42:bb:b4:4a:50
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
3. Edit /etc/dnsmasq.conf and add these lines:
interface=docker0
bind-interfaces
listen-address=172.17.0.1
4. Create/edit /etc/resolvconf/resolv.conf.d/tail and add this line:
nameserver 172.17.0.1
5. Restart networking, update resolv.conf, restart docker:
sudo service network-manager restart
sudo resolvconf -u
sudo service dnsmasq restart
sudo service docker restart
Your containers will now be able to resolve DNS from whatever DNS servers the host machine is using.
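If it still doesn't work, one thing worth checking on the host (a sketch, assuming the 172.17.0.1 address found above) is whether dnsmasq is actually listening on the docker0 address:
sudo ss -lnp | grep ':53'
# expect a dnsmasq entry bound to 172.17.0.1:53, alongside systemd-resolved on 127.0.0.53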
As you know, Docker copies the host's /etc/resolv.conf into containers, but removes any local nameservers.
My solution to this problem is to keep using systemd-resolved and NetworkManager, but to add dnsmasq and use it to forward DNS queries from Docker containers to systemd-resolved.
Step by step guide:
Make /etc/resolv.conf a "real" file
sudo rm /etc/resolv.conf
sudo touch /etc/resolv.conf
Create the file /etc/NetworkManager/conf.d/systemd-resolved-for-docker.conf to tell NetworkManager to keep informing systemd-resolved, but not to touch /etc/resolv.conf
[main]
# NetworkManager will push the DNS configuration to systemd-resolved
dns=systemd-resolved
# NetworkManager won’t ever write anything to /etc/resolv.conf
rc-manager=unmanaged
Install dnsmasq
sudo apt-get -y install dnsmasq
Configure dnsmasq in /etc/dnsmasq.conf to listen for DNS queries coming from Docker and to use the systemd-resolved name server
# Use interface docker0
interface=docker0
# Explicitly specify the address to listen on
listen-address=172.17.0.1
# The docker0 interface may not exist yet when the dnsmasq service starts, so dnsmasq fails. This option makes dynamically created interfaces work in the same way as the default.
bind-dynamic
# Set systemd-resolved DNS server
server=127.0.0.53
Edit /etc/resolv.conf to use the systemd-resolved nameserver (127.0.0.53) and the host's IP on the Docker network (172.17.0.1)
# systemd-resolved name server
nameserver 127.0.0.53
# docker host ip
nameserver 172.17.0.1
Restart services
sudo service network-manager restart
sudo service dnsmasq restart
sudo service docker restart
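As an optional smoke test (a sketch; dig needs the dnsutils package, and example.com is only a placeholder name), you can query the host-side dnsmasq directly and then from a container:
dig @172.17.0.1 example.com +short
docker run --rm busybox nslookup example.com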
For more info see my post (in Spanish): https://rubensa.wordpress.com/2020/02/07/docker-no-usa-los-mismos-dns-que-el-host/
I had problems with the DNS resolver in our docker containers. I tried a lot of different things, and in the end I figured out that my CentOS VPS at Hostgator simply didn't have NetworkManager-tui (nmtui) installed by default; I installed it and rebooted.
sudo yum install NetworkManager-tui
Then I reconfigured my resolv.conf with 8.8.8.8 as the default DNS.
nano /etc/resolv.conf
My case, with many images from Docker Hub (Node-RED, Syncthing and others):
the container runs as a non-root user
/etc/resolv.conf inside the container has permissions 600 and is owned by root
So my solution is very simple:
root@container:~# chmod 644 /etc/resolv.conf
Profit! :))
If you are using a VPN, the VPN protocol might add overhead that pushes outbound packets beyond the configured MTU of your private network.
A typical MTU is 1500.
Try adding this content to /etc/docker/daemon.json
{
"mtu": 1300,
"dns": ["<whatever DNS server you need in your private network>"]
}
Then systemctl restart docker.
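To confirm the new MTU actually reaches the containers (a quick check, assuming the default bridge network and a busybox image):
docker run --rm busybox cat /sys/class/net/eth0/mtu
# should print 1300, or whatever value you set in daemon.json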
I have the same error message in my systemctl status docker.
I run a Nextcloud and a Nextcloud nginx proxy container and used docker compose to install them. It worked for several months without big hiccups, but on Friday it wasn't accessible; the server had shut down.
I restarted it, and my icecast2 instance is working fine and was used this Sunday for the service in our church. But the docker containers are gone: docker ps -a doesn't show any, and I can't access the Nextcloud via docker exec like I normally would. And I have the error message:
No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]"
Feb 06 19:04:58 ncxxxxxxxxx dockerd[21551]: time="2022-02-06T19:04:58.894366765Z" level=info msg="IPv6 enabled; Adding default IPv6 external servers: [nameserver 2001:4860:4860::8888 nameserver 2001:4860:4860::8844]
My resolv.conf looks like this:
nameserver 127.0.0.53
options edns0 trust-ad
search fritz.box
Based on the answer from @rubensa, but simpler and more integrated IMHO:
Install dnsmasq
sudo apt-get -y install dnsmasq
Configure dnsmasq in /etc/dnsmasq.d/docker-dns-fix.conf to listen for DNS queries coming from Docker and to use the systemd-resolved name server
# Use interface docker0
interface=docker0
# Explicitly specify the address to listen on
listen-address=172.17.0.1
# The docker0 interface may not exist yet when the dnsmasq service starts, so dnsmasq fails. This option makes dynamically created interfaces work in the same way as the default.
bind-dynamic
# Set systemd-resolved DNS server
server=127.0.0.53
Tell Docker to use dnsmasq by editing/creating /etc/docker/daemon.json
{
"dns": ["172.17.0.1"]
}
Restart services
sudo service dnsmasq restart
sudo service docker restart
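A quick way to confirm the daemon picked up the setting (assuming the default bridge network) is to check what lands in a fresh container's resolv.conf:
docker run --rm busybox cat /etc/resolv.conf
# should now show: nameserver 172.17.0.1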
For me, this was enough on Ubuntu 18.04 LTS:
sudo service network-manager restart
sudo resolvconf -u
sudo service dnsmasq restart
sudo service docker restart
Related
I have a Docker container that uses the host network, and when I run the container it picks up the same resolv.conf as the host machine.
docker run -ti --network host ubuntu:18.04 /bin/bash
I have configured a VPN interface on my host that uses an additional DNS server (10.10.0.5) and a search domain (myvpndomain.com).
The problem is that when I start the container without being connected to the VPN, it takes my usual /etc/resolv.conf. Once the container is running, if I turn on my VPN interface I can see all machines on the VPN network, including the VPN's DNS server 10.10.0.5 (I can ping it), but the DNS resolution configuration doesn't get updated automatically from the host; I need to restart the container to get the new DNS configuration.
DNS configuration on the host machine after connecting to the VPN:
cat /etc/resolv.conf
search myvpndomain.com
nameserver 10.10.0.5
nameserver 80.58.61.250
nameserver 80.58.61.254
DNS configuration inside the container after connecting to the VPN:
cat /etc/resolv.conf
nameserver 80.58.61.250
nameserver 80.58.61.254
To update the DNS configuration after connecting my VPN I tried two solutions so far:
1) Bind-mounting the host's /etc/resolv.conf into the container (-v /etc/resolv.conf:/etc/resolv.conf on the run command). This does not work; I don't know why, but even after updating the host's resolv.conf, the mounted resolv.conf inside the container does not update.
2) Adding --dns 127.0.0.53 --dns-search myvpndomain.com to the run command. This works well, as it uses systemd-resolved and also adds the required search domain, but I would prefer not to rely on systemd-resolved to accomplish this.
Do you know a cleaner solution that does not involve using systemd-resolved?
Maybe use Docker internal DNS 127.0.0.11 + dnsmasq?
P.S.: It's mandatory for the container to use the host network (--network host).
The symptom is: the host machine has proper network access, but programs running within containers can't resolve DNS names (which may appear to be "can't access the network" before investigating more).
$ sudo docker run -ti mmoy/ubuntu-netutils /bin/bash
root@082bd4ead733:/# ping www.example.com
... nothing happens (timeout) ... ^C
root@082bd4ead733:/# host www.example.com
... nothing happens (timeout) ... ^C
(The docker image mmoy/ubuntu-netutils is a simple image based on Ubuntu with ping and host included, convenient here since the network is broken and we can't apt install these tools)
The issue comes from the fact that docker automatically configured Google's public DNS as DNS server within the container:
root@082bd4ead733:/# cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 8.8.8.8
nameserver 8.8.4.4
This just works in many configurations, but obviously doesn't when the host runs on a network where Google's public DNS are filtered by some firewall rules.
The reason this happened is:
Docker first tries configuring the same DNS server(s) on the host and within the container.
The host runs dnsmasq, a DNS caching service. dnsmasq acts as a proxy for DNS requests, hence the apparent DNS server in the host's /etc/resolv.conf is nameserver 127.0.1.1, i.e. localhost.
The host's dnsmasq listens only for requests coming from localhost and blocks requests coming from the docker container.
Since using 127.0.1.1 within docker doesn't work, docker falls back to Google's public DNS, which do not work either.
There may be several reasons why DNS is broken within docker containers. This question (and answers) covers the case where:
dnsmasq is used. To check whether this is the case:
Run ps -e | grep dnsmasq on the host. If the output is empty, you're not running dnsmasq.
Check the host's resolv.conf; it probably contains an entry like nameserver 127.0.1.1. If it contains nameserver 127.0.0.53, you're probably running systemd-resolved instead of dnsmasq. If so, you won't be able to use the solution forwarding DNS requests to dnsmasq (the one using listen-address=172.17.0.1): systemd-resolved versions earlier than 247 hardcoded the fact that it listens only on the 'lo' interface, hence there's no easy way to adapt this solution with those versions. Other answers below will work with systemd-resolved.
Google's public DNS is filtered. Run host www.example.com 8.8.8.8. If it fails or times out, then you are in this situation.
What are the solutions to get a proper DNS configuration in this situation?
A clean solution is to configure docker and dnsmasq so that DNS requests from the docker container are forwarded to the dnsmasq daemon running on the host.
For that, you need to configure dnsmasq to listen to the network interface used by docker, by adding a file /etc/NetworkManager/dnsmasq.d/docker-bridge.conf:
$ cat /etc/NetworkManager/dnsmasq.d/docker-bridge.conf
listen-address=172.17.0.1
Then restart network manager to have the configuration file taken into account:
sudo service network-manager restart
Once this is done, you can add 172.17.0.1, i.e. the host's IP address from within docker, to the list of DNS servers. This can be done either using the command-line:
$ sudo docker run -ti --dns 172.17.0.1 mmoy/ubuntu-netutils bash
root@7805c7d153cc:/# ping www.example.com
PING www.example.com (93.184.216.34) 56(84) bytes of data.
64 bytes from 93.184.216.34: icmp_seq=1 ttl=54 time=86.6 ms
... or through docker's configuration file /etc/docker/daemon.json (create it if it doesn't exist):
$ cat /etc/docker/daemon.json
{
"dns": [
"172.17.0.1",
"8.8.8.8",
"8.8.4.4"
]
}
(this will fall back to Google's public DNS if dnsmasq fails)
You need to restart docker to have the configuration file taken into account:
sudo service docker restart
Then you can use docker as usual:
$ sudo docker run -ti mmoy/ubuntu-netutils bash
root@344a983908cb:/# ping www.example.com
PING www.example.com (93.184.216.34) 56(84) bytes of data.
64 bytes from 93.184.216.34: icmp_seq=1 ttl=54 time=86.3 ms
A brutal and unsafe solution is to avoid containerizing the network and use the same network on the host and in the container. This is unsafe because it gives the container access to all of the host's network resources, but if you do not need this isolation it may be acceptable.
To do so, just add --network host to the command-line, e.g.
$ sudo docker run -ti --network host mmoy/ubuntu-netutils /bin/bash
root@ubuntu1604:/# ping www.example.com
PING www.example.com (93.184.216.34) 56(84) bytes of data.
64 bytes from 93.184.216.34: icmp_seq=1 ttl=55 time=86.5 ms
64 bytes from 93.184.216.34: icmp_seq=2 ttl=55 time=86.5 ms
One way is to use a user defined network for your container. In that case the container's /etc/resolv.conf will have the nameserver 127.0.0.11 (a.k.a. the Docker's embedded DNS server), which can forward DNS requests to the host's loopback address properly.
$ cat /etc/resolv.conf
nameserver 127.0.0.1
$ docker run --rm alpine cat /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4
$ docker network create demo
557079c79ddf6be7d6def935fa0c1c3c8290a0db4649c4679b84f6363e3dd9a0
$ docker run --rm --net demo alpine cat /etc/resolv.conf
nameserver 127.0.0.11
options ndots:0
If you use docker-compose, it will set up a custom network for your services automatically (with a file format v2+). Note, however, that while docker-compose runs containers in a user-defined network, it still builds them in the default network. To use a custom network for builds you can specify the network parameter in the build configuration (requires file format v3.4+).
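As a rough illustration of that build-time network parameter (a hypothetical docker-compose.yml fragment; the service name and build context are placeholders):
version: "3.4"
services:
  app:
    build:
      context: .
      network: host   # use the host's network (and therefore its DNS) during image build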
I just had to deal with this last night and eventually remembered that docker run has a set of options for handling it. I used --dns to specify the DNS server I want the container to use. Works like a champ and no need to hack my docker host. There are other options for the domain name and search suffixes.
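For example (a sketch; the DNS address and domain names here are placeholders, not values from this question):
docker run --rm --dns 192.168.0.1 --dns-search corp.example busybox nslookup intranet.corp.example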
Since the automatic DNS discovery is at fault here, you may override the default setting in docker's configuration.
First, get the IP of the DNS server dnsmasq is using with e.g.:
$ sudo kill -USR1 `pidof dnsmasq`
$ sudo tail /var/log/syslog
[...]
Apr 24 13:20:19 host dnsmasq[2537]: server xx.yy.zz.tt1#53: queries sent 0, retried or failed 0
Apr 24 13:20:19 host dnsmasq[2537]: server xx.yy.zz.tt2#53: queries sent 0, retried or failed 0
The IP addresses correspond to the xx.yy.zz.tt placeholders above.
Alternatively, if your system is using systemd-resolved instead of dnsmasq, run:
$ resolvectl status | grep 'Current DNS'
Current DNS Server: xx.yy.zz.tt
You can set the DNS at docker run time with the --dns option:
$ sudo docker run --dns xx.yy.zz.tt1 --dns xx.yy.zz.tt2 -ti mmoy/ubuntu-netutils bash
root@6c5d08df5dfd:/# ping www.example.com
PING www.example.com (93.184.216.34) 56(84) bytes of data.
64 bytes from 93.184.216.34: icmp_seq=1 ttl=54 time=86.6 ms
64 bytes from 93.184.216.34: icmp_seq=2 ttl=54 time=86.6 ms
One advantage of this solution is that there is no configuration file involved, hence no risk of forgetting about the configuration and running into trouble later because of a specific config: you get this DNS configuration if and only if you type the --dns option.
A drawback is that you won't get any DNS caching in the containers, hence DNS resolution will be slower.
Alternatively you may set it permanently in Docker's configuration file, /etc/docker/daemon.json (create it, on the host, if it doesn't exist):
$ cat /etc/docker/daemon.json
{
"dns": ["xx.yy.zz.tt1", "xx.yy.zz.tt2"]
}
You need to restart the docker daemon to take the daemon.json file into account:
sudo service docker restart
Then you can check the configuration:
$ sudo docker run -ti mmoy/ubuntu-netutils bash
root@56c74d3bd94b:/# cat /etc/resolv.conf
nameserver xx.yy.zz.tt1
nameserver xx.yy.zz.tt2
root@56c74d3bd94b:/# ping www.example.com
PING www.example.com (93.184.216.34) 56(84) bytes of data.
64 bytes from 93.184.216.34: icmp_seq=1 ttl=54 time=86.5 ms
Note that this hardcodes the DNS IP in your configuration files. This is strongly discouraged if your machine is a laptop that connects to different networks, and may be problematic if your internet service provider changes the IP of the DNS servers.
Since dnsmasq is the issue, one option is to disable it on the host. This works, but will disable DNS caching for all applications running on the host, hence is a really bad idea if the host is used for applications other than docker.
If you're sure you want to go this way, uninstall dnsmasq, e.g. on Debian-based systems like Ubuntu, run apt remove dnsmasq.
You may then check that /etc/resolv.conf within the container points to the DNS server used by the host.
There are lots of ways in which Docker containers can get confused about DNS settings (just search SO or the wider internet for "Docker DNS" to see what I mean), and one of the common workarounds suggested is to:
Set up dnsmasq as a local DNS resolver on the host system
Bind it to the docker0 network interface
Configure Docker to use the docker0 IP address for DNS resolution
However, attempting to apply this workaround naively on many modern Linux systems will send you down a rabbit hole of Linux networking and process management complexity, as systemd assures you that dnsmasq isn't running, but netstat tells you that it is, and actually attempting to start dnsmasq fails with the complaint that port 53 is already in use.
So, how do you reliably give your containers access to a local resolver running on the host, even if the system already has one running by default?
The problem here is that many modern Linux systems run dnsmasq implicitly, so what you're now aiming to do is to set up a second instance specifically for Docker to use. There are actually 3 settings needed to do that correctly:
--interface=docker0 to listen on the default Docker network interface
--except-interface=lo to skip the implicit addition of the loopback interface
--bind-interfaces to turn off a dnsmasq feature where it still listens on all interfaces by default, even when it's only processing traffic for one of them
Setting up a dedicated dnsmasq instance
Rather than changing the settings of the default system-wide dnsmasq instance, these instructions show setting up a dedicated dnsmasq instance with systemd, on a system which already defines a default dnsmasq service:
$ sudo cp /usr/lib/systemd/system/dnsmasq.service /etc/systemd/system/dnsmasq-docker.service
$ sudoedit /etc/systemd/system/dnsmasq-docker.service
First, we copy the default service settings to a dedicated service file. We then edit that service file, and look for the service definition section, which should be something like this:
[Service]
ExecStart=/usr/sbin/dnsmasq -k
We edit that section to define our additional options:
[Service]
ExecStart=/usr/sbin/dnsmasq -k --interface=docker0 --except-interface=lo --bind-interfaces
The entire file is actually pretty short:
[Unit]
Description=DNS caching server.
After=network.target
After=docker.service
Wants=docker.service
[Service]
ExecStart=/usr/sbin/dnsmasq -k --interface=docker0 --except-interface=lo --bind-interfaces
[Install]
WantedBy=multi-user.target
The [Unit] section tells systemd to wait until after both the network stack and the main docker daemon are available to start this service, while [Install] indicates which system state target to add the service to when enabling it.
We then configure our new service to start on system boot, and also start it explicitly for immediate use:
$ sudo systemctl enable dnsmasq-docker
$ sudo systemctl start dnsmasq-docker
As the final step in getting the service running, we check it has actually started as expected:
$ sudo systemctl status dnsmasq-docker
The two key lines we're looking for in that output are:
Loaded: loaded (/etc/systemd/system/dnsmasq-docker.service; enabled; vendor preset: disabled)
Active: active (running) since <date & time>
On the first line, note the "enabled" status, while on the second, the "active (running)" status. If the service hasn't started correctly, then the additional diagnostic information will hopefully explain why (although it can be unfortunately cryptic at times, hence this post).
Note: This configuration may fail to start dnsmasq-docker on system restart with an error about the docker0 interface not being defined. While waiting for docker.service seems to be pretty reliable in avoiding that problem, if name resolution from docker containers isn't working after a system restart, then try running:
$ sudo systemctl start dnsmasq-docker
Configuring the host firewall
To be able to use the resolver from local Docker containers, we also need to drop the network firewall between the host and systems running in containers:
sudo firewall-cmd --permanent --zone=trusted --change-interface=docker0
sudo firewall-cmd --reload
(This would be an absolutely terrible idea on a production container host, but can be a helpful risk-vs-convenience trade-off on a developer workstation)
Configuring Docker using a systemd environment file
Now that we have our local resolver running, we need to configure Docker to use it by default. Docker needs the IP address of the docker0 interface rather than the interface name, so we use ifconfig to retrieve that:
$ ifconfig docker0 | grep inet
inet 172.17.0.1 netmask 255.255.0.0 broadcast 0.0.0.0
So, for my system, the host's interface on the default docker0 bridge is accessible as 172.17.0.1 (Appending | cut -f 10 -d ' ' to that command should filter the output to just the IP address)
Since I'm assuming a systemd-based Linux with a system provided Docker package, we'll query the system package's service file to find out how the service is being started:
$ cat /usr/lib/systemd/system/docker.service
The first thing we're looking for is the exact command used to start the daemon, which should look something like this:
ExecStart=/usr/bin/docker daemon \
$OPTIONS \
$DOCKER_STORAGE_OPTIONS \
$DOCKER_NETWORK_OPTIONS \
$INSECURE_REGISTRY
The second part we're looking for is whether or not the service is configured to use an environment file, as indicated by one or more lines like this:
EnvironmentFile=-/etc/sysconfig/docker
When an environment file is in use (as it is on Fedora 23), then the way to change the Docker daemon settings is to edit that file and update the relevant environment variable:
$ sudoedit /etc/sysconfig/docker
The existing OPTIONS entry on Fedora 23 looks like this:
OPTIONS='--selinux-enabled --log-driver=journald'
To change the default DNS resolution settings, we amend it to look like this:
OPTIONS='--selinux-enabled --log-driver=journald --dns=172.17.0.1'
And then restart the Docker daemon:
$ sudo systemctl restart docker
With this change implemented, Docker containers should now be reliably able to access any systems your host system can access (including via VPN tunnels, which was my own reason for needing to figure this out)
You can run curl inside a container to check name resolution is working correctly:
docker run -it centos curl google.com
Replace google.com with whichever hostname was giving you problems (as you should have only ended up finding this answer if you had a name resolution problem when running a process inside a Docker container)
Configuring Docker using a systemd drop-in file
(Caveat: since my system uses an environment file, I haven't been able to test the drop-in file based approach below, but it should work - I've included it since the Docker documentation seems to indicate they now prefer the use of systemd drop-in files to the use of environment files)
If the system service file doesn't use EnvironmentFile, then the entire ExecStart entry can be replaced by using a drop-in configuration file:
$ sudo mkdir -p /etc/systemd/system/docker.service.d
$ sudoedit /etc/systemd/system/docker.service.d/daemon.conf
We then tell Docker to clear the existing ExecStart entry and replace it with our new one with the additional settings:
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon \
$OPTIONS \
--dns 172.17.0.1 \
$DOCKER_STORAGE_OPTIONS \
$DOCKER_NETWORK_OPTIONS \
$INSECURE_REGISTRY
We then tell systemd to load that configuration change and restart Docker:
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
References:
Docker systemd config reference: https://docs.docker.com/engine/admin/systemd/
systemd service file reference: https://www.freedesktop.org/software/systemd/man/systemd.exec.html
dnsmasq reference: http://www.thekelleys.org.uk/dnsmasq/docs/dnsmasq-man.html
firewalld reference: https://fedoraproject.org/wiki/FirewallD
Setting up dnsmasq without an existing local resolver on the host: http://docs.blowb.org/setup-host/dnsmasq.html
I've been trying to run docker build on various Dockerfiles which previously worked but are now no longer working.
As soon as the Dockerfile included any line that installed software, it would fail with a message saying that the package was not found.
RUN apt-get -y install supervisor nodejs npm
The common message which showed up in the logs was
Could not resolve 'archive.ubuntu.com'
Any idea why any software will not install?
Uncommenting DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4" in /etc/default/docker as Matt Carrier suggested did NOT work for me. Nor did putting my corporation's DNS servers in that file. But, there's another way (read on).
First, let's verify the problem:
$ docker run --rm busybox nslookup google.com # takes a long time
nslookup: can't resolve 'google.com' # <--- appears after a long time
Server: 8.8.8.8
Address 1: 8.8.8.8
If the command appears to hang, but eventually spits out the error "can't resolve 'google.com'", then you have the same problem as me.
The nslookup command queries the DNS server 8.8.8.8 in order to turn the text address of 'google.com' into an IP address. Ironically, 8.8.8.8 is Google's public DNS server. If nslookup fails, public DNS servers like 8.8.8.8 might be blocked by your company (which I assume is for security reasons).
You'd think that adding your company's DNS servers to DOCKER_OPTS in /etc/default/docker should do the trick, but for whatever reason, it didn't work for me. I describe what worked for me below.
SOLUTION:
On the host (I'm using Ubuntu 16.04), find out the primary and secondary DNS server addresses:
$ nmcli dev show | grep 'IP4.DNS'
IP4.DNS[1]: 10.0.0.2
IP4.DNS[2]: 10.0.0.3
Using these addresses, create a file /etc/docker/daemon.json:
$ sudo su root
# cd /etc/docker
# touch daemon.json
Put this in /etc/docker/daemon.json:
{
"dns": ["10.0.0.2", "10.0.0.3"]
}
Exit from root:
# exit
Now restart docker:
$ sudo service docker restart
VERIFICATION:
Now check that adding the /etc/docker/daemon.json file allows you to resolve 'google.com' into an IP address:
$ docker run --rm busybox nslookup google.com
Server: 10.0.0.2
Address 1: 10.0.0.2
Name: google.com
Address 1: 2a00:1450:4009:811::200e lhr26s02-in-x200e.1e100.net
Address 2: 216.58.198.174 lhr25s10-in-f14.1e100.net
REFERENCES:
I based my solution on an article by Robin Winslow, who deserves all of the credit for the solution. Thanks, Robin!
"Fix Docker's networking DNS config." Robin Winslow. Retrieved 2016-11-09. https://robinwinslow.uk/2016/06/23/fix-docker-networking-dns/
After much headache I found the answer. Could not resolve 'archive.ubuntu.com' can be fixed by making the following changes:
Uncomment the following line in /etc/default/docker
DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4"
Restart the Docker service
sudo service docker restart
Delete any images which have cached the invalid DNS settings.
Build again and the problem should be solved.
Credit goes to Andrew SB
I ran into the same problem, but neither uncommenting the DNS entries in /etc/default/docker, nor editing /etc/resolv.conf in the build container, nor editing /etc/docker/daemon.json helped for me.
But after I built with the option --network=host, resolution was fine again.
docker build --network=host -t my-own-ubuntu-like-image .
Maybe this will help someone again.
I believe that Matt Carrier's answer is the correct solution for this problem. However, after implementing it, I still observed the same behavior: could not resolve 'archive.ubuntu.com'.
This led me to eventually find that the network I was connected to was blocking public DNS. The solution to this problem was to configure my Docker container to use the same name server that my host (the machine from which I was running Docker) was using.
How I triaged:
Since I was working through the Docker documentation, I already had an example image installed on my machine. I was able to start a new container to run that image and create a new bash session in that container: docker run -it docker/whalesay bash
Does the container have an Internet connection?: ping 172.217.4.238 (google.com)
Can the container resolve hostnames? ping google.com
In my case, the first ping resulted in responses, the second did not.
How I fixed:
Once I discovered that DNS was not working inside the container, I verified that I could duplicate the same behavior on the host. nslookup google.com resolved just fine on the host, but nslookup google.com 8.8.8.8 or nslookup google.com 8.8.4.4 timed out.
Next, I found the name server(s) that my host was using by running nm-tool (on Ubuntu 14.04). In the vein of fast feedback, I started up the example image again, and added the IP address of the name server to the container's resolv.conf file: sudo vi /etc/resolv.conf. Once saved, I attempted the ping again (ping google.com) and this time it worked!
Please note that the changes made to the container's resolv.conf are not persistent and will be lost across container restarts. In my case, the more appropriate solution was to add the IP address of my network's name server to the host's /etc/default/docker file.
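For illustration, that host-side change looks roughly like this (10.0.0.2 stands in for your network's name server), followed by sudo service docker restart so the daemon picks it up:
# /etc/default/docker
DOCKER_OPTS="--dns 10.0.0.2"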
After adding the local DNS IP to the default docker file it started working for me. Please find the steps below:
$ nm-tool # (will give you the dns IP)
DNS : 172.168.7.2
$ vim /etc/default/docker # (uncomment the DOCKER_OPTS and add DNS IP)
DOCKER_OPTS="--dns 172.168.7.2 --dns 8.8.8.8 --dns 8.8.4.4"
$ docker rm `docker ps --no-trunc -aq` # (remove all the containers to avoid DNS cache)
$ docker rmi $(docker images -q) # (remove all the images)
$ service docker restart #(restart the docker to pick up dns setting)
Now go ahead and build the docker... :)
For anyone who is also having this problem: I solved it by editing the /etc/default/docker file, as suggested by other answers and questions. However, I had no idea what IP to use as the DNS.
It was only after a while that I figured out I had to run ifconfig docker0 on the host to show the IP of the docker network interface.
docker0 Link encap:Ethernet HWaddr 02:42:69:ba:b4:07
inet addr: 172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:69ff:feba:b407/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:8433 errors:0 dropped:0 overruns:0 frame:0
TX packets:9876 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:484195 (484.1 KB) TX bytes:24564528 (24.5 MB)
It was 172.17.0.1 in my case. Hope this helps anyone who is also having this issue.
I found this answer after some Googling. I'm using Windows, so some of the above answers did not apply to my file system.
Basically run:
docker-machine ssh default
echo "nameserver 8.8.8.8" > /etc/resolv.conf
This just overwrites the existing nameserver with 8.8.8.8, I believe. It worked for me!
Based on some comments, you may have to be root. To do that, issue sudo -i.
I just wanted to add a late response for anyone coming across this issue from search engines.
Do NOT do this: I used to have an option in /etc/default/docker to set iptables=false. This was because ufw didn't work (everything was opened even though only 3 ports were allowed) so I blindly followed the answer to this question: Uncomplicated Firewall (UFW) is not blocking anything when using Docker and this, which was linked in the comments
I have a very low understanding of iptables rules / nat / routing in general, hence why I might have done something irrational.
Turns out that I've probably misconfigured it and killed DNS resolution inside my containers. When I ran an interactive container terminal: docker run -i -t ubuntu:14.04 /bin/bash
I had these results:
root@6b0d832700db:/# ping google.com
ping: unknown host google.com
root@6b0d832700db:/# cat /etc/resolv.conf
search online.net
nameserver 8.8.8.8
nameserver 8.8.4.4
root@6b0d832700db:/# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=56 time=1.76 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=56 time=1.72 ms
Reverting all of my ufw configuration (before.rules), disabling ufw and removing iptables=false from /etc/default/docker restored the DNS resolution functionality of the containers.
I'm now looking forward to re-enabling ufw functionality by following these instructions instead.
I have struggled with this for some time as well, but here is what solved it for me on Ubuntu 16.04 x64. I hope it saves someone's time, too.
In /etc/NetworkManager/NetworkManager.conf:
comment out
#dns=dnsmasq
Create (or modify) /etc/docker/daemon.json:
{
"dns": ["8.8.8.8"]
}
Restart docker with:
sudo service docker restart
I had the same issue and tried the steps mentioned, but none of them seemed to work until I refreshed the network settings.
The steps:
As mentioned, add DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4 --ip-masq=true" to /etc/default/docker.
Manually flush the NAT POSTROUTING chain with iptables -t nat -F POSTROUTING. After running this, restart docker and it will reinitialize the nat table with the new IP range.
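Put together, the flush-and-restart step looks like this (just the commands described above):
sudo iptables -t nat -F POSTROUTING
sudo service docker restart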
Same issue for me (on Ubuntu Xenial).
docker run --dns ... for containers worked.
Updating docker daemon options for docker build (docker-compose etc.) did not work.
After analyzing the docker logs (journalctl -u docker.service) I found some warnings about a bad resolvconf being applied.
Following that, I found that our corporate nameservers were added to the network interfaces but not to resolvconf.
I applied this solution: How do I configure my static DNS in interfaces? (Ask Ubuntu), i.e. adding the nameservers to /etc/resolvconf/resolv.conf.d/tail.
After updating resolvconf (or rebooting),
docker run --rm busybox nslookup google.com
worked instantly.
All my docker-compose builds are working now.
I got the same issue today; I just added the line below to /etc/default/docker
DOCKER_OPTS="--dns 172.18.20.13 --dns 172.20.100.29 --dns 8.8.8.8"
and then restarted my laptop.
In my case restarting the docker daemon was not enough; I had to restart my laptop to make it work.
Before spending too much time on any of the other solutions, simply restart Docker and try again.
Solved the problem for me, using Docker Desktop for Windows on Windows 10.
In my case, since my containers were in a cloud environment, the MTU of the interfaces was not the usual 1500 but something lower (around 1450), so I had to configure my docker daemon to set a correspondingly lower MTU for containers.
{
"mtu": 1454
}
Look at this: https://mlohr.com/docker-mtu/
In my case, the firewall was the issue; disabling it for the moment solved the problem. I use nftables, and stopping the service did the trick.
sudo systemctl stop nftables.service
With recent updates, the following setting in /etc/docker/daemon.json was the cause of the issue:
{
"bridge": "none"
}
Remove it, and restart the docker service with: sudo systemctl restart docker
OS (Ubuntu 20.04.3 LTS) and Docker (version 20.10.11, build dea9396)
On my system (macOS High Sierra 10.13.6 with Docker 2.1.0.1) this was due to a corporate proxy.
I solved this by two steps:
Manually configure proxy settings in Preferences>Proxies
Add the same settings to your ~/.docker/config.json, like:
"proxies":
{
"default":
{
"httpProxy": "MYPROXY",
"httpsProxy": "MYPROXY",
"noProxy": "MYPROXYWHITELIST"
}
}
I had dnsmasq on my system for DNS resolution, and it had the nameservers needed to resolve the URL. Docker copies the host's /etc/resolv.conf as-is into the container's /etc/resolv.conf, which therefore didn't have the right nameservers. From the docs:
By default, a container inherits the DNS settings of the host, as
defined in the /etc/resolv.conf configuration file.
Adding the nameservers in /etc/resolv.conf of the host fixed the issue.
I had it working all right, but now it has stopped. I tried the following commands to no avail:
docker run -dns 8.8.8.8 base ping google.com
docker run base ping google.com
sysctl -w net.ipv4.ip_forward=1 - both on the host and on the container
All I get is unknown host google.com. Docker version 0.7.0
Any ideas?
P.S. ufw disabled as well
The first thing to check is to run cat /etc/resolv.conf in the docker container. If it has an invalid DNS server, such as nameserver 127.0.x.x, then the container will not be able to resolve domain names into IP addresses, so ping google.com will fail.
The second thing to check is to run cat /etc/resolv.conf on the host machine. Docker basically copies the host's /etc/resolv.conf to the container every time a container is started, so if the host's /etc/resolv.conf is wrong, the docker container's will be too.
If you have found that the host's /etc/resolv.conf is wrong, then you have 2 options:
Hardcode the DNS server in daemon.json. This is easy, but not ideal if you expect the DNS server to change.
Fix the host's /etc/resolv.conf. This is a little trickier, but it is generated dynamically, and you are not hardcoding the DNS server.
1. Hardcode DNS server in docker daemon.json
Edit /etc/docker/daemon.json
{
"dns": ["10.1.2.3", "8.8.8.8"]
}
Restart the docker daemon for those changes to take effect:
sudo systemctl restart docker
Now when you run/start a container, docker will populate /etc/resolv.conf with the values from daemon.json.
2. Fix the host's /etc/resolv.conf
A. Ubuntu 16.04 and earlier
For Ubuntu 16.04 and earlier, /etc/resolv.conf was dynamically generated by NetworkManager.
Comment out the line dns=dnsmasq (with a #) in /etc/NetworkManager/NetworkManager.conf
Restart the NetworkManager to regenerate /etc/resolv.conf :
sudo systemctl restart network-manager
Verify on the host: cat /etc/resolv.conf
B. Ubuntu 18.04 and later
Ubuntu 18.04 changed to use systemd-resolved to generate /etc/resolv.conf. Now by default it uses a local DNS cache 127.0.0.53. That will not work inside a container, so Docker will default to Google's 8.8.8.8 DNS server, which may break for people behind a firewall.
/etc/resolv.conf is actually a symlink (ls -l /etc/resolv.conf) which points to /run/systemd/resolve/stub-resolv.conf (127.0.0.53) by default in Ubuntu 18.04.
Just change the symlink to point to /run/systemd/resolve/resolv.conf, which lists the real DNS servers:
sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf
Verify on the host: cat /etc/resolv.conf
Now you should have a valid /etc/resolv.conf on the host for docker to copy into the containers.
Fixed by following this advice:
[...] can you try to reset everything?
pkill docker
iptables -t nat -F
ifconfig docker0 down
brctl delbr docker0
docker -d
It will force docker to recreate the bridge and reinit all the network rules
https://github.com/dotcloud/docker/issues/866#issuecomment-19218300
Seems the interface was 'hung' somehow.
Update for more recent versions of docker:
The above answer might still get the job done for you, but it has been quite a long time since it was posted and docker is more polished now, so make sure you try these first before mangling iptables and the like.
sudo service docker restart or (if you are in a linux distro that does not use upstart) sudo systemctl restart docker
The intended way to restart docker is not to do it manually but use the service or systemctl command:
service docker restart
or
systemctl restart docker
Updating this question with an answer for OSX (using Docker Machine)
If you are running Docker on OSX using Docker Machine, then the following worked for me:
docker-machine restart
<...wait for it to restart, which takes up to a minute...>
docker-machine env
eval $(docker-machine env)
Then (at least in my experience), if you ping google.com from a container all will be well.
I tried all the answers; none worked for me.
After a few hours of trying everything else I could find, this did the trick:
reboot
-_-
I do not know exactly what I am doing, but this worked for me:
OTHER_BRIDGE=br-xxxxx # this is the other random docker bridge (`ip addr` to find)
service docker stop
ip link set dev $OTHER_BRIDGE down
ip link set dev docker0 down
ip link delete $OTHER_BRIDGE type bridge
ip link delete docker0 type bridge
service docker start && service docker stop
iptables -t nat -A POSTROUTING ! -o docker0 -s 172.17.0.0/16 -j MASQUERADE
iptables -t nat -A POSTROUTING ! -o docker0 -s 172.18.0.0/16 -j MASQUERADE
service docker start
I was using DOCKER_OPTS="--dns 8.8.8.8" and later discovered that my container didn't have direct access to the internet, but could access my corporate intranet. I changed DOCKER_OPTS to the following:
DOCKER_OPTS="--dns <internal_corporate_dns_address>"
replacing internal_corporate_dns_address with the IP address or FQDN of our DNS, and restarted docker using
sudo service docker restart
and then spawned my container and checked that it had access to the internet.
No internet access can also be caused by missing proxy settings. In that case, --network host may not work either. The proxy can be configured by setting the environment variables http_proxy and https_proxy:
docker run -e "http_proxy=YOUR-PROXY" \
-e "https_proxy=YOUR-PROXY"\
-e "no_proxy=localhost,127.0.0.1" ...
Do not forget to set no_proxy as well, or all requests (including those to localhost) will go through the proxy.
More information: Proxy Settings in the Archlinux Wiki.
I was stumped when this happened randomly for me for one of my containers, while the other containers were fine. The container was attached to at least one non-internal network, so there was nothing wrong with the Compose definition. Restarting the VM / docker daemon did not help. It was also not a DNS issue because the container could not even ping an external IP. What solved it for me was to recreate the docker network(s). In my case, docker-compose down && docker-compose up worked.
Compose
This forces the recreation of all networks of all the containers:
docker-compose down && docker-compose up
Swarm mode
I suppose you just remove and recreate the service, which recreates the service's network(s):
docker service rm some-service
docker service create ...
If the container's network(s) are external
Simply remove and recreate the external networks of that service:
docker network rm some-external-network
docker network create some-external-network
For me it was the host's firewall: I had to allow DNS through it, and I also had to restart docker after changing the host firewall settings.
For me, the problem was that iptables-services was not installed. This worked for me (CentOS):
sudo yum install iptables-services
sudo service docker restart
Other answers have stated that the docker0 interface (bridge) can be the source of the problem. On Ubuntu 20.04 I observed that the interface was missing its IP address (to be checked with ip addr show dev docker0). Restarting Docker alone did not help. I had to delete the bridge interface manually.
sudo ip link delete docker0
sudo systemctl restart docker
You may have started your docker daemon with DNS options such as --dns 172.x.x.x.
I had the same error and removed the options from /etc/default/docker.
The lines:
# Use DOCKER_OPTS to modify the daemon startup options.
DOCKER_OPTS="--dns 172.x.x.x"
On CentOS 8, my problem was that I did not install and start iptables before starting the docker service. Make sure the iptables service is up and running before you start the docker service.
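For reference, roughly these commands (a sketch; iptables-services provides the iptables unit on CentOS):
sudo yum install -y iptables-services
sudo systemctl enable --now iptables
sudo systemctl restart docker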
Sharing a simple and working solution for posterity. When we run a docker container without explicitly specifying the --network flag, it connects to the default bridge network, which in some setups prevents it from connecting to the outside world. To resolve this issue, we have to create our own bridge network (a user-defined bridge) and mention it explicitly in the docker run command.
docker network create --driver bridge mynetwork
docker run -it --network mynetwork image:version
This helped me:
sudo ip link delete docker0
sudo systemctl stop docker.socket
sudo systemctl stop docker.service
sudo systemctl start docker.socket
sudo systemctl start docker.service
NOTE: after this, the docker0 interface should have an IP address something like this:
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
If you're on OSX, you might need to restart your machine after installing Docker. This has been an issue at times.
For me it was an iptables forwarding rule. For some reason the following rule, when coupled with docker's iptables rules, caused all outbound traffic from containers to hit localhost:8080:
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
iptables -t nat -I OUTPUT -p tcp -d 127.0.0.1 --dport 80 -j REDIRECT --to-ports 8080
I had the problem on Ubuntu 18.04, and it was with DNS. I was on a corporate network that has its own DNS server and blocks other DNS servers (to block some websites: porn, torrents, and so on).
To resolve the problem:
find your DNS server on the host machine
use --dns your_dns as suggested by @jobin
docker run --dns your_dns -it --name cowsay --hostname cowsay debian bash
For Ubuntu 19.04 using openconnect 8.3 for VPN, I had to symlink the systemd resolv.conf to /etc/resolv.conf (the opposite of the answer by wisbucky):
sudo ln -sf /etc/resolv.conf /run/systemd/resolve/resolv.conf
Steps to debug
Connect to Company VPN
Look for the correct VPN settings in either /etc/resolv.conf or /run/systemd/resolve/resolv.conf
Whichever file has the correct DNS settings, symlink the other file to it
(Hint: place the one with the correct settings on the left of the ln command, as the target)
Docker version: Docker version 19.03.0-rc2, build f97efcc
For me, using CentOS 7.4, it was not an issue with /etc/resolv.conf, iptables, iptables NAT rules, or docker itself. The issue was that the host was missing the bridge-utils package, which docker requires to build the bridge using the brctl command. yum install -y bridge-utils and a docker restart solved the problem.
On Windows (8.1) I killed the VirtualBox interface (via Task Manager) and it solved the issue.
Originally my docker container was able to reach the external internet (This is a docker service/container running on an Amazon EC2).
Since my app is an API, I followed up the creation of my container (it succeeded in pulling all the packages it needed) with updating my IP Tables to route all traffic from port 80 to the port that my API (running on docker) was listening on.
Then, later when I tried rebuilding the container it failed. After much struggle, I discovered that my previous step (setting the IPTable port forwarding rule) messed up the docker's external networking capability.
Solution: Stop your IPTable service:
sudo service iptables stop
Restart The Docker Daemon:
sudo service docker restart
Then, try rebuilding your container. Hope this helps.
Follow Up
I completely overlooked that I did not need to mess with the IP tables to forward incoming traffic on port 80 to the port that the API running in docker was listening on. Instead, I just mapped port 80 to the container's API port:
docker run -d -p 80:<api_port> <image>:<tag> <command to start api>
Just adding this here in case someone runs into this issue within a VirtualBox VM running docker. I reconfigured the VirtualBox network to bridged instead of NAT, and the problem went away.
There are a lot of good answers already. I recently faced a similar problem on my Orange Pi PC running Armbian: the Docker container was blocked from the internet. This command solved the problem in my case, so I'd like to share it:
docker run --security-opt seccomp=unconfined imageName
I've tried most answers in here, but the only thing that worked was re-creating the network:
$ docker network rm the-network
$ docker network create --driver=bridge the-network
I also needed to re-create the docker container that used it:
$ sudo docker create --name the-name --network the-network <image>
Then it started with internet access.
I am on Arch Linux, and after trying all the above answers I realized that I had a firewall enabled on my machine, nftables, and disabling it did the trick. I did:
sudo systemctl disable nftables
sudo systemctl stop nftables
sudo reboot
My network cards:
➜ ~ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp1s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN mode DEFAULT group default qlen 1000
link/ether 68:f7:28:84:e7:fe brd ff:ff:ff:ff:ff:ff
3: wlp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DORMANT group default qlen 1000
link/ether d0:7e:35:d2:42:6d brd ff:ff:ff:ff:ff:ff
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
link/ether 02:42:43:3f:ff:94 brd ff:ff:ff:ff:ff:ff
5: br-c51881f83e32: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
link/ether 02:42:ae:34:49:c3 brd ff:ff:ff:ff:ff:ff
6: br-c5b2a1d25a86: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
link/ether 02:42:72:d3:6f:4d brd ff:ff:ff:ff:ff:ff
8: veth56f42a2@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
link/ether 8e:70:36:10:4e:83 brd ff:ff:ff:ff:ff:ff link-netnsid 0
and my firewall configuration, /etc/nftables.conf, which I have now disabled and will try to improve later so that the docker0 interface rules are set up correctly:
#!/usr/bin/nft -f
# vim:set ts=2 sw=2 et:
# IPv4/IPv6 Simple & Safe firewall ruleset.
# More examples in /usr/share/nftables/ and /usr/share/doc/nftables/examples/.
table inet filter
delete table inet filter
table inet filter {
chain input {
type filter hook input priority filter
policy drop
ct state invalid drop comment "early drop of invalid connections"
ct state {established, related} accept comment "allow tracked connections"
iifname lo accept comment "allow from loopback"
ip protocol icmp accept comment "allow icmp"
meta l4proto ipv6-icmp accept comment "allow icmp v6"
#tcp dport ssh accept comment "allow sshd"
pkttype host limit rate 5/second counter reject with icmpx type admin-prohibited
counter
}
chain forward {
type filter hook forward priority filter
policy drop
}
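As a hedged sketch only (not tested on this setup, and Docker still manages its own NAT rules separately via iptables): instead of disabling nftables entirely, the forward chain could be given docker0 exceptions, roughly like this:
chain forward {
type filter hook forward priority filter
policy drop
# let traffic from containers on the docker0 bridge leave the host
iifname "docker0" accept
# let reply traffic back in to the containers
oifname "docker0" ct state {established, related} accept
}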
If you're running Docker rootless and facing this issue, iptables may not have been configured properly during its installation, mainly because the --skip-iptables option was used when Docker complained about iptables:
[ERROR] Missing system requirements. Run the following commands to
[ERROR] install the requirements and run this tool again.
[ERROR] Alternatively iptables checks can be disabled with --skip-iptables .
########## BEGIN ##########
sudo sh -eux <<EOF
# Load ip_tables module
modprobe ip_tables
EOF
########## END ##########
Let's check if this is the issue: is ip_tables kernel module loaded?
sudo modprobe ip_tables
If there is no output, this answer may not help you (you could try anyway). Otherwise, the output's something like this:
modprobe: FATAL: Module ip_tables not found in directory /lib/modules/5.18.9-200.fc36.x86_64
Let's fix it!
First, uninstall Docker rootless (no need to stop the service via systemctl, the script handles that):
dockerd-rootless-setuptool.sh uninstall --skip-iptables
Ensure iptables package is installed, although it's shipped by default by major distributions.
Now, make the ip_tables module visible to modprobe and load it (thanks to this):
sudo depmod
sudo modprobe ip_tables
Now, re-install Docker rootless:
dockerd-rootless-setuptool.sh install
If it doesn't complain about iptables, you're done and the problem should be fixed. Don't forget to enable the service (i.e. systemctl enable --user --now docker)!
For me, facing the same issue as a Red Hat/CentOS/Fedora user of Podman, this worked:
firewall-cmd --zone=public --add-masquerade
firewall-cmd --permanent --zone=public --add-masquerade
For more, see: firewalld and podman (or docker) – no internet in the container and could not resolve host
I also encountered such an issue while trying to set up a project using Docker-Compose on Ubuntu.
Docker had no internet access at all; when I tried to ping any IP address or nslookup some URL, it failed every time.
I tried all the possible solutions with DNS resolution described above, to no avail.
I spent the whole day trying to find out what on earth was going on, and finally found that the cause of all the trouble was the antivirus, in particular its firewall, which for some reason blocked Docker from getting the IP address and port.
When I disabled it, everything worked fine.
So, if you have an antivirus installed and nothing helps fix the issue - the problem could be the firewall of the antivirus.