This confuses me as it's just started happening...
My workstation (Ubuntu 18.04 running docker 20.10.17) can resolve a specific internet host without issue. The container that my workstation hosts, however, is unable to resolve the same host.
Any other internet host resolves without issue within the container. In fact, from what I can see, the container fails to resolve every host under that specific domain (e.g. flora.com), while the same hosts resolve just fine from the workstation itself.
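A quick way to see the difference, assuming the compose stack (and therefore the dev-vlt-console network) is up; busybox is just a throwaway image, and flora.com stands in for the affected domain:
# resolves fine on the host
nslookup flora.com
# fails from a fresh container on the same network
docker run --rm --network dev-vlt-console busybox nslookup flora.com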
I've read that docker takes a copy of /etc/resolv.conf from the host and places it into the container, but I don't think that's correct as my container's /etc/resolv.conf reads as follows:
search lan
nameserver 127.0.0.11
options edns0 ndots:0
whereas my workstation's /etc/resolv.conf reads:
# This file is managed by man:systemd-resolved(8). Do not edit.
...
# operation for /etc/resolv.conf.
nameserver 127.0.0.53
options edns0
search lan
Within my docker-compose.yml I have my containers using a defined bridged network (which has been working just fine for the last three years):
console-vpacs:
  ...
  depends_on: [console-mysql]
  ...
  networks:
    - dev-vlt-console
...
networks:
  dev-vlt-console:
    name: dev-vlt-console
    driver: bridge
I'm at a bit of a loss as to where to go from here, as I've:
restarted the docker service a number of times
deleted the docker0 interface and restarted the service
searched SO and Google, and nothing appears to be working (the only thing I've not done is remove docker and reinstall)
Can anyone give any insight?
A complete reinstall of docker resolved this issue. Very odd that it happened, as I'm still running 20.10.17 so there was no update involved, but I'm back up and running again.
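For anyone following the same route, a clean reinstall on Ubuntu looks roughly like this (a sketch that assumes the docker-ce apt packages, so adjust if you installed docker.io instead; removing /var/lib/docker wipes images and volumes, so back up first):
sudo apt-get purge docker-ce docker-ce-cli containerd.io
sudo rm -rf /var/lib/docker /var/lib/containerd   # destroys images and volumes
sudo apt-get install docker-ce docker-ce-cli containerd.io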
I have set up coredns to run in a container and everything is working. I would like to force all containers going forward to use the DNS server running in this container.
I know I can use "dns" in a docker-compose file, but this requires an IP address and my container doesn't have a fixed IP.
Is there some way to force all containers to use this specific container as their "port 53" DNS server?
Thanks in advance
No. If you are not able to set up a static IP for coredns, you must either manually update /etc/resolv.conf in every container after each build/reboot, or run docker with the --dns parameter.
With a static IP you can set a global DNS server for all containers in /etc/docker/daemon.json and restart the docker service.
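As a rough sketch of the static-IP route (the network name, subnet and address below are made up; adapt them to your compose file), you can pin the coredns service to a fixed address on a user-defined network and then point the daemon at it:
services:
  coredns:
    image: coredns/coredns
    networks:
      dns-net:
        ipv4_address: 172.28.0.53
networks:
  dns-net:
    driver: bridge
    ipam:
      config:
        - subnet: 172.28.0.0/24
and then in /etc/docker/daemon.json:
{
  "dns": ["172.28.0.53"]
}
Bear in mind that other containers can only reach that address if they can route to the dns-net network, so attach them to the same user-defined network.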
I'm trying to run a Thingsboard PE docker-compose cluster (basic configuration) on an Azure Linux VM (Ubuntu 20.04). The main "monolith" container shuts down after about a minute and the logs report it can't access the license server. My assumption is that it's shutting down because it can't reach the license server, and that the underlying problem is that the container has no internet access (but any advice on further troubleshooting would be appreciated).
Within the container cat /etc/resolv.conf returns:
search 1lt4eb1hmraebffqmvlsi2dp5g.px.internal.cloudapp.net
nameserver 127.0.0.11
options ndots:0
On the host it's:
nameserver 168.63.129.16
search 1lt4eb1hmraebffqmvlsi2dp5g.px.internal.cloudapp.net
There's no problem with internet access from the host and I can ping Google's dns servers.
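A useful check here is to separate raw connectivity from name resolution inside the container (the container name is a placeholder, google.com stands in for the license server host, and this assumes ping and nslookup are available in the image):
docker exec -it <container> ping -c 1 8.8.8.8
docker exec -it <container> nslookup google.com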
I've read a lot of posts/advice on configuring DNS for docker containers and (separately) tried the following, but the service still fails:
Added Google dns entries to docker-compose.yml
Added Google dns to /etc/docker/daemon.json
Added Google dns to /etc/default/docker
Updated the /etc/resolv.conf symlink with: sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf
It looks like I can't edit iptables on the Azure VM (but if it's possible please let me know).
If anyone has experienced the same or similar issue I'd be interested to know how you resolved it.
For anyone else having this problem, after researching further it seems the issue was actually the "options ndots:0" configuration, which isn't compatible with the default host DNS config on Ubuntu 20.04. I added the setting below to /etc/docker/daemon.json and after that the containers were able to access the internet. It might be possible to limit this to specific containers by adding the setting to the service in docker-compose.yml, but I haven't tried that yet.
{
  "dns-opts": ["ndots:1"]
}
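If you do want to scope it to a single service instead, the compose-level equivalent would look something like the snippet below; this is an untested sketch, the service name is illustrative, and dns_opt support depends on your compose file format/version:
services:
  mytb:
    dns_opt:
      - ndots:1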
When running containers on startup I noticed some were using resolv.conf before systemd-resolved had updated it from the default via DHCP. This meant that containers that started too early after boot could not resolve anything and needed to be restarted to pick up the proper DNS settings. This happens for different reasons in rkt and Docker: Docker's method of updating resolv.conf inside containers is not compatible with the overlay filesystem driver, and since systemd-resolved does not update the file in place (it creates a temporary file and renames it), rkt's bind mounting does not update what the container sees.
Currently I am using a hacky systemd unit to delay the network-online.target that docker.service and my rkt pods depend on.
[Unit]
Description=Wait for DNS
# order before the target so the target actually waits for this unit
Before=network-online.target

[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/sh -c 'while ! getent ahosts google.com >/dev/null; do sleep 1; done'

[Install]
WantedBy=network-online.target
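Installing it is the usual systemd dance (path assumed):
sudo cp wait-for-dns.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable wait-for-dns.service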
But this significantly delays my start-up time:
# systemd-analyze blame
18.068s wait-for-dns.service
...
and if resolv.conf changes again it won't help. So I was wondering if there's a more elegant solution to my problem. Ideally I'd like to be able to trigger a resolv.conf update in both rkt and Docker containers every time it changes.
Run containers on a user-defined network so they will use the embedded DNS server, which forwards lookups to the system's DNS.
The default docker0 bridge has some special rules that were left in place for legacy support. Using a mounted /etc/resolv.conf is one of those legacy things.
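A minimal sketch of that (the network and image names are arbitrary):
docker network create mynet
docker run --rm --network mynet busybox nslookup google.com
# containers on mynet get an /etc/resolv.conf pointing at 127.0.0.11,
# Docker's embedded resolver, which forwards to the host's configured DNS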
If rkt doesn't support the same type of DNS, then a general solution could be to set up a DNS server like Unbound as a local forwarding resolver. Containers then have a static DNS server to reference.
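For example, a minimal Unbound forwarding setup on the host could look like the following; the listen address, allowed range and upstream are assumptions that need to match your bridge and network:
# /etc/unbound/unbound.conf.d/docker-forward.conf
server:
    interface: 172.17.0.1            # docker0 address, reachable from containers
    access-control: 172.16.0.0/12 allow
forward-zone:
    name: "."
    forward-addr: 8.8.8.8
Containers would then be pointed at it with --dns 172.17.0.1 (or the equivalent daemon.json setting).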
I'm trying to figure out a problem with identical docker containers being run on different hosts, where one container can find/ping/nslookup a domain on a private network and the other can't. One host is OSX 10.11, the other is Ubuntu 16.04; both are running docker 1.12. I'm using docker-compose to bring up my application, and I'm hoping to figure out what is going on and how to fix it, or what configuration changes I could make so the container behaves the same on both hosts, without resorting to hardcoding domains or IP addresses.
On my OSX box, I have the following dns nameservers set automatically by my domain:
osx:$ cat /etc/resolv.conf
domain redacted.lan
nameserver 172.16.20.19
nameserver 10.43.0.11
I'm aware that resolv.conf isn't used by most OSX tools, but System Preferences > Network shows the same settings.
I have similar settings on my Ubuntu 16 box as well (command from https://askubuntu.com/questions/152593/command-line-to-list-dns-servers-used-by-my-system):
ubu:$ cat /etc/resolv.conf
nameserver 127.0.1.1
search redacted.lan
ubu:$ nmcli device show eno1 | grep IP4.DNS
IP4.DNS[1]: 172.16.20.19
IP4.DNS[2]: 10.43.0.11
Then, on both OSX and Ubuntu, I start my container with this:
$ docker run -it redacted_web bash
And then I run these commands to diagnose my problem:
$ apt-get update
$ apt-get install -y dnsutils
$ cat /etc/resolv.conf
$ nslookup redacted.lan
On OSX, the output from the last 2 commands is:
root@d19f49322fda:/app# cat /etc/resolv.conf
search local
nameserver 192.168.65.1
root@d19f49322fda:/app# nslookup redacted.lan
Server: 192.168.65.1
Address: 192.168.65.1#53
Name: redacted.lan
Address: 172.18.0.23
On Ubuntu, the output is:
root@91e82d652e07:/app# cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
search redacted.lan
nameserver 8.8.8.8
nameserver 8.8.4.4
root@91e82d652e07:/app# nslookup redacted.lan
Server: 8.8.8.8
Address: 8.8.8.8#53
** server can't find redacted.lan: NXDOMAIN
Possible differences I can think of:
On OSX there is a VM running docker, whereas on Ubuntu it's native
On Ubuntu, docker is run with sudo, possibly picking up different configuration settings
Updated Answer (2017-06-20)
Newer versions of Ubuntu (17.04+) don't use dnsmasq; they use systemd-resolved instead. You'll run into a similar problem with host resolution, but the original solution here no longer works. In fact, Docker containers can't even talk to systemd-resolved because it listens on a loopback address (127.0.0.53) that is unreachable from within a container.
A good solution on newer versions of Ubuntu is to put the following configuration in /etc/docker/daemon.json (create this file if it doesn't exist):
{
  "dns": ["172.16.20.19", "10.43.0.11"],
  "dns-search": ["redacted.lan"]
}
This allows you to configure the DNS servers and search domains. The DNS IPs above are from the original question, but you can use your own custom ones too. You probably want to match the DNS config on your host machine. Search domain is optional, and you could entirely omit that line (careful with your commas!). Again, you probably want to match your host machine.
Essentially, what these daemon.json options do is automatically inject the DNS servers and search domains into the resolver configuration of every container started by that daemon. This is necessary because you cannot use systemd-resolved from the host to resolve DNS within the container, due to limitations in the way systemd-resolved works. See Docker's daemon and container DNS documentation for details.
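Note that the daemon has to be restarted before newly created containers pick up the change (already-running containers keep their old config):
sudo systemctl restart docker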
Original Answer
The problem is that the host is using dnsmasq to resolve the private IP and Docker is not using dnsmasq on the host.
The simple fix is to turn off dnsmasq on the host machine.
Run sudo vi /etc/NetworkManager/NetworkManager.conf
Comment out the dns=dnsmasq line so it reads: #dns=dnsmasq
Run sudo service network-manager restart
Now, you should be able to use the docker container and it will resolve your private DNS correctly.
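A quick sanity check afterwards (the image and domain are just placeholders):
docker run --rm busybox nslookup redacted.lan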
Check the startup scripts for the docker daemon; it accepts the following options to adjust the DNS used when creating containers:
$ dockerd --help
# ...
--dns=[] DNS server to use
--dns-opt=[] DNS options to use
--dns-search=[] DNS search domains to use
On Ubuntu, I believe these settings are in /etc/default/docker which is read by /etc/init.d/docker. Leaving these unset should default to the /etc/resolv.conf values.
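On an init.d-based install that would look roughly like the line below; this sketch reuses the addresses from the question, and note that DOCKER_OPTS in /etc/default/docker is ignored when the daemon is managed by systemd:
# /etc/default/docker
DOCKER_OPTS="--dns 172.16.20.19 --dns 10.43.0.11 --dns-search redacted.lan"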
Update: Docker's DNS networking documentation has a lot of detail that should point you in the right direction.
if there are no more nameserver entries left in the container’s /etc/resolv.conf file, the daemon adds public Google DNS nameservers (8.8.8.8 and 8.8.4.4) to the container’s DNS configuration.
This looks like it's probably what's happening in your situation. The filters are looking for local addresses, which I wouldn't expect to catch your 172.16.0.0/12 private address, but it's possible.
You might wonder what happens when the host machine’s /etc/resolv.conf
file changes. The docker daemon has a file change notifier active
which will watch for changes to the host DNS configuration.
Note: The file change notifier relies on the Linux kernel’s inotify
feature. Because this feature is currently incompatible with the
overlay filesystem driver, a Docker daemon using “overlay” will not be
able to take advantage of the /etc/resolv.conf auto-update feature.
If this is happening, then restarting the daemon would likely resolve it, and it would indicate that you want Docker to start after NetworkManager.
Update #2: looking over this github issue, it's possible that they also included 172.16.0.0/12 to avoid conflicts with docker's bridged networks. If you can be sure to avoid using the same network inside docker, then passing the DNS server in /etc/default/docker would likely force the correct behavior. There's also a mention at the end of the post about lxc causing conflicts, so if you have that installed and can remove it, give that a try first.