I'm trying to debug a container DNS resolution issue with Docker on Ubuntu Linux. As described here https://docs.docker.com/config/containers/container-networking/#dns-services, Docker uses an embedded DNS server inside containers. Are there any commands that list Docker's embedded DNS server entries (like the entries in /etc/resolv.conf)?
I have tried docker inspect and docker network inspect.
Also tried starting dockerd in debug mode, but I have not found anything useful.
It does show a config file being read, like below:
INFO[2020-07-13T14:39:58.517777580+05:45] detected 127.0.0.53 nameserver, assuming systemd-resolved, so using resolv.conf: /run/systemd/resolve/resolv.conf
But I wanted to list the runtime DNS entries of the dockerd network with DNS address 127.0.0.11. Is that possible?
It is possible, but you have to parse the JSON printed by docker network inspect.
Run docker network ls to get the names of the running networks, and then docker network inspect NETWORK_NAME to see the containers in it.
Look for the "Containers" key in the JSON; it is a list of the connected endpoints. For each entry, the "Name" key is the DNS name and the "IPv4Address" key is the address it resolves to.
E.g. countly_countly-endpoint is the DNS name that resolves to IP 10.0.8.4/24:
"lb-countly_countly": {
"Name": "countly_countly-endpoint",
"EndpointID": "9f7abf354b5fbeed0be6483b53516641f6c6bbd37ab991544423f5aeb8bdb771",
"MacAddress": "02:42:0a:00:08:04",
"IPv4Address": "10.0.8.4/24",
"IPv6Address": ""
}
Note that countly_ is the network prefix that matches the network name in docker network ls. That way you can be sure the names are unique, and you can configure your services to talk to each other using DNS names following the NETWORK-NAME_SERVICE-NAME pattern.
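To avoid eyeballing the JSON by hand, the name-to-IP mapping can be extracted with a short script. A minimal sketch in Python, assuming you have captured the output of docker network inspect NETWORK_NAME (the embedded sample reuses the endpoint from the answer above; real output contains one object per network):

```python
import json

# Sample output of `docker network inspect NETWORK_NAME` (the command
# prints a JSON array with one object per network). The endpoint below
# is the one shown in the answer above.
inspect_output = """
[
  {
    "Name": "countly",
    "Containers": {
      "lb-countly_countly": {
        "Name": "countly_countly-endpoint",
        "EndpointID": "9f7abf354b5fbeed0be6483b53516641f6c6bbd37ab991544423f5aeb8bdb771",
        "MacAddress": "02:42:0a:00:08:04",
        "IPv4Address": "10.0.8.4/24",
        "IPv6Address": ""
      }
    }
  }
]
"""

def dns_entries(inspect_json: str) -> dict:
    """Map each endpoint's DNS name to its IPv4 address (mask stripped)."""
    entries = {}
    for net in json.loads(inspect_json):
        for endpoint in net.get("Containers", {}).values():
            ip = endpoint["IPv4Address"].split("/")[0]
            entries[endpoint["Name"]] = ip
    return entries

print(dns_entries(inspect_output))  # {'countly_countly-endpoint': '10.0.8.4'}
```

The same idea works with jq if you prefer staying in the shell; the point is just that the embedded DNS entries are reconstructable from the "Containers" section of the inspect output.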
Related
I came across an interesting issue which I tried resolving and investigating on my own. I feel like I can almost touch the solution with my fingertips, but I just can't grasp it.
Any help would be really nice and I would be grateful for it.
Common docker network:
a local bridge with the default driver: "Subnet": "172.19.0.0/16", "Gateway": "172.19.0.1"
a proxy container (nginx) which handles SSL for the two domains and internal routing to the two containers on the network
"120253b9613d95bb4d540abe3676c7d309cdc9ac531cc81de9acd548737b829e": {
"Name": "youtrack",
"EndpointID": "0ff42cc51535663df36a47f79f41f4df5bdb229c411b2aa0200fffc0c3e7b824",
"MacAddress": "02:42:ac:13:00:03",
"IPv4Address": "172.19.0.3/16",
"IPv6Address": ""
},
"639a75318859f2b60c93585a77d259f919f307a3f0653fd75cbcc8cad932e3ac": {
"Name": "proxy",
"EndpointID": "7923e5fbe27e0b2a4ff3b8f765a5a2fb34b3b97c10f2545fb875facd04d71fdb",
"MacAddress": "02:42:ac:13:00:02",
"IPv4Address": "172.19.0.2/16",
"IPv6Address": ""
},
"8b0d7cf3f4d09d9fe281e742e747c9947528519e730c0e58a07bec9a6d097083": {
"Name": "gitlab",
"EndpointID": "84b454e38b9178f5f55cefb310f839de1abcd2ed0e7b58018e9522e08dfbff01",
"MacAddress": "02:42:ac:13:00:04",
"IPv4Address": "172.19.0.4/16",
"IPv6Address": ""
}
When communicating from the outside world with container-one.company.com and container-two.company.com through the proxy, there are no issues. Everything works as expected.
Now, the thing is that these two containers have an integration over HTTPS (a VCS integration) between YouTrack, which is issue management, and GitLab, which is the git code repository. The main integration property is the URL of the server. So container-one.company.com's settings include the container-two.company.com domain as the HTTPS endpoint for the integration, and container-two.company.com's settings include the container-one.company.com URL for the VCS integration.
I have already checked that BOTH domains resolve to the SERVER IP and not Docker's internal IP.
For testing, I set up a reverse proxy with an IDENTICAL config, but pointing BACK to the server. So the flow was: SERVER1, which has all 3 containers, and SERVER2, which only had an nginx container with both domains but redirected back to SERVER1's proxy. The only difference was that BOTH containers then resolved an IP which was not their own server's, but a different server's.
That's why I do think there is an issue with how Docker handles network calls: it somehow tries to "optimize" both calls to go through Docker's internal network instead of through the proxy, like all external calls do.
And I can mention that nginx's config is really only SSL termination and mapping based on the domain: if the domain is container-one.company.com, proxy-pass to the YouTrack container and its port; if it is container-two.company.com, proxy-pass to the GitLab container and its internal port. All plain HTTP, no complications.
NOTE:
I have already tried the following:
ports:
- "IP:443:443"
And
--add-host property
Same behaviour.
UPDATE1:
For clarification, I adjusted and provided two diagrams with additional explanations of the behaviour.
This is the current setup and the desired setup. As you see, when trying to make the VCS integration ON the same host, from the YouTrack container to the GitLab one using the FQDN, it all fails; it says the URL does not map/point to a valid repository.
However, if you move both domains to a different server with a top-level proxy (diagram 2, the incoming one), so that the container is actually pointing to that remote proxy (I even used the --add-host property, so it was not even a valid domain, just the dummy proxy on a different host), then it went through.
In other words, this kind of configuration (moving the FQDN from the same host to a different one and PROXYing it back to the original with NO changes to ANY of the original proxy configurations) works like a charm.
TL;DR:
The issue was that UFW did not let the full 443:443 binding through, so I had to add:
sudo ufw allow 443
sudo ufw reload
The symptoms were:
Every other PC on our network COULD connect to port 443
EVERY local application COULD connect to port 443
ONLY containers ON the same host COULD NOT connect to port 443, even with a FULL IP:PORT:PORT binding on the proxy container.
and with the ports configuration:
- "IP:443:443"
It then worked.
Full debugging was done with:
docker run --rm -it --network container:gitlab nicolaka/netshoot:latest
which allowed all the telnet/ping/curl tests needed to figure out that even though ping was going through, connections on port 443 were not.
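The check netshoot performs with telnet/curl can also be sketched in a few lines of Python, which is handy when a container image has Python but no network tools. A sketch; the host and port in the comment are placeholders for your own proxy:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt a plain TCP connect; False on refusal, filtering, or timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. can_connect("proxy.company.com", 443) -- with the UFW rule missing,
# this returns False from a container on the same host while ping still works,
# because ICMP and TCP are filtered independently.
```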
I work over a VPN much of the time, and I've noticed that sometimes external network connections fail; for example, installing from a remote location into a container gets connection refused. docker network prune allows Docker to remap connections, and I can then proceed with whatever I was doing, but what's actually happening under the hood here? Using Docker for Mac, if it's relevant.
A docker network prune deletes any unused networks, and redeploying the project with something like docker-compose or docker stack deploy will recreate them. When docker creates a network, it picks from a pool of private IPs and excludes any networks it can currently route to. That last part is what changes when you connect to and disconnect from a VPN, or work from a different location with different networks visible to docker.
I suspect what you're seeing is a network collision. When docker picks the same subnet that you later find yourself connected to (e.g. after switching on a VPN or joining wifi at a new location), attempts to connect to that external network from a docker container get routed to the docker network instead of the outside network. This results in your connections failing.
You can tell docker to only pick networks from a select pool of subnets. You will need to identify the subnets used by your VPN, home, office, coffee shop, etc., and then select a private IP range outside of those subnets for docker. The configuration for bridge networks goes in the daemon.json file (on Mac, click the docker icon, open the settings/preferences, go to Daemon, then Advanced) and looks like:
{
"bip": "10.15.0.1/24",
"default-address-pools": [
{"base": "10.20.0.0/16", "size": 24},
{"base": "10.40.0.0/16", "size": 24}
]
}
The "bip" setting is the bridge IP, aka docker0 or the bridge network named bridge. The bip address must be valid, so don't end it with a 0 or 255; it will be used for the gateway, and the mask (/24) specifies the subnet size.
The "default-address-pools" option arrived in 18.06 and specifies the subnets to use for other bridge networks created by docker, including docker network create bridges and any bridge created by docker-compose.
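The base/size split can be sanity-checked with Python's stdlib ipaddress module: each "base" range is carved into subnets of the given "size", and docker hands these out one per created network. A sketch using the two pools from the daemon.json example above:

```python
import ipaddress

# Each pool base is split into /24 subnets ("size": 24),
# one per docker network that gets created.
pools = [("10.20.0.0/16", 24), ("10.40.0.0/16", 24)]

subnets = []
for base, size in pools:
    subnets.extend(ipaddress.ip_network(base).subnets(new_prefix=size))

print(len(subnets))   # 512 -- each /16 yields 256 /24 networks
print(subnets[0])     # 10.20.0.0/24, the first network docker would hand out
```

So with these two pools you get 512 possible /24 bridge networks, all safely inside ranges you chose to avoid your VPN and office subnets.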
For swarm mode, starting in 18.09, you can define the pools to use for overlay networks when the swarm is first created with:
$ docker swarm init \
--default-addr-pool 10.20.0.0/16 \
--default-addr-pool 10.40.0.0/16 \
--default-addr-pool-mask-length 24
If you need to change these, you'll need to delete and recreate the swarm.
To see the networks currently in use, you can run ip r to list all the routes. The first column shows each subnet and mask in CIDR notation, the same notation used by the docker commands above.
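The collision described above can also be checked mechanically: take the subnets from the first column of ip r and test each against a candidate docker pool with ipaddress.overlaps. A sketch with a hard-coded route list; the 10.20.5.0/24 entry is a hypothetical VPN route, 172.17.0.0/16 is the usual docker0 subnet:

```python
import ipaddress

# First column of `ip r` output (only the lines that start with a CIDR
# subnet); 10.20.5.0/24 stands in for a hypothetical VPN route.
routes = ["172.17.0.0/16", "10.20.5.0/24", "192.168.0.0/24"]

def collisions(docker_subnet: str, route_list):
    """Return the routes that overlap the given docker subnet."""
    net = ipaddress.ip_network(docker_subnet)
    return [r for r in route_list
            if ipaddress.ip_network(r).overlaps(net)]

print(collisions("10.20.0.0/16", routes))  # ['10.20.5.0/24'] -- VPN collides
print(collisions("10.99.0.0/16", routes))  # [] -- a pool outside all routes
```

Any pool that comes back empty against the union of your VPN, home, and office routes is a safe choice for daemon.json.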
Recently I have a container which has joined a swarm overlay network. Sometimes I change its static IP for a number of reasons, but I am not sure why the IPv4Address from docker network inspect still shows the old IP address and not the new one.
For example:
Step 1. Run a container:
docker run -itd -h kafka_1 --name kafka_1 kafka:latest
Step 2. Assign a network interface to join the overlay network:
docker network connect --ip 172.20.0.110 test-overlay-net kafka_1
Step 3. Attach to the kafka container and change the IP:
ifconfig eth1 172.20.0.111 netmask 255.255.0.0 broadcast 172.20.255.255
Step 4. Log out of the container, and check the inspect info:
docker network inspect test-overlay-net
Step 5. Realize the IP address is still the old one, even though the IP was already changed successfully inside the container:
"Containers": {
"df1de7d9809f3e84857ef19db10f7c50d3d65153dcd47f3b22af6ed3a5ab1b41": {
"Name": "kafka_1",
"EndpointID": "37fe6b03b87435f897780826992a6e1f9b491444738c10de6c7c56aea3edb71d",
"MacAddress": "02:42:ac:14:00:6f",
"IPv4Address": "172.20.0.110/16",
"IPv6Address": ""
},
Does anyone know how to solve this problem?
Currently I have only found a workaround: docker network disconnect -f test-overlay-net kafka_1, and then re-connect again with docker network connect --ip.
Much appreciated!
It seems docker reads the IP from its internal management data, not directly from the container's network namespace, so docker is not aware of your IP change.
If you don't explicitly specify an IP, docker may even assign the address you set inside your container to a newly attached container, resulting in an address conflict. Problems with overlay packet routing may also occur.
In sum, I would not recommend changing IP settings inside a container.
What is your use case?
There is a web source "http://vpnaccessible.com" from which I need to download an RPM package via wget, and it is accessible only from a VPN. So I'm using the Cisco AnyConnect VPN client to enter the VPN, and then I want to build an image using a Dockerfile where this wget command is listed.
The problem is: Docker can't access that domain from within the container. So I tried to pass dns options in /etc/docker/daemon.json, but I'm not sure which DNS IP I should pass, because locally the default DNS servers are 192.168.0.1 and 8.8.8.8. I tried to pass the IP address of the docker0 interface in that array, e.g. 172.17.0.1; it didn't work.
$ cat /etc/docker/daemon.json
{
"insecure-registry": "http://my-insecure-registry.com",
"dns": ["192.168.0.1", "172.17.0.1", "8.8.8.8"]
}
I also tried to add this web source to /etc/resolv.conf, but when I run docker to build the image, the file is reverted to its previous state (the changes are not persisted there), which I guess is my Cisco VPN client's behaviour. Didn't work.
I also tried adding the IP address of the interface created by the Cisco VPN client to that dns array. Didn't work.
I also commented out dns=dnsmasq in /etc/NetworkManager/NetworkManager.conf. Didn't work.
To be sure, I'm restarting the docker and NetworkManager services after these changes.
Question: should I create some bridge between the Docker container and my VPN? How can I solve this issue?
You can try using your host network instead of the default bridge network. Just add the following argument:
--network host
or
--net host
depending on your docker version.
I need to know the hostnames (or IP addresses) of some containers running on the same machine.
As I already commented here (but with no answer yet), I use docker-compose. The documentation says compose will automatically create a hostname entry for all containers defined in the same docker-compose.yml file:
Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
But I can't see any host entries via docker exec -it my_container tail -20 /etc/hosts.
I also tried adding links to my container, but nothing changed.
Docker 1.10 introduced some new networking features which include an internal DNS server where host lookups are done.
On the default bridge network (docker0), lookups continue to function via /etc/hosts as they used to. /etc/resolv.conf will point to your host's resolvers.
On a user-defined network, Docker uses the internal DNS server, and /etc/resolv.conf will contain an internal IP address for the Docker DNS server. This setup allows bridge, custom, and overlay networks to work in a similar fashion, so an overlay network on swarm will populate host data from across the swarm just like a local bridge network would.
The "legacy" setup was maintained so the new networking features could be introduced without impacting existing setups.
Discovery
The DNS resolver is able to provide IPs for a docker-compose service via the name of that service.
For example, with a web and a db service defined, and the db service scaled to 3, all db instances resolve:
$ docker-compose run --rm web nslookup db
Name: db
Address 1: 172.22.0.4 composenetworks_db_2.composenetworks_mynet
Address 2: 172.22.0.5 composenetworks_db_3.composenetworks_mynet
Address 3: 172.22.0.3 composenetworks_db_1.composenetworks_mynet
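For reference, a minimal compose file matching that example might look like the sketch below. The service and network names mirror the nslookup output; the images are placeholders, and the scaling to 3 db instances would be done separately (e.g. with docker-compose's scale support), not in the file itself:

```yaml
version: '2'
services:
  web:
    image: nginx        # placeholder image
    networks:
      - mynet
  db:
    image: postgres     # placeholder image
    networks:
      - mynet
networks:
  mynet: {}
```

With this file in a project named composenetworks, each db container gets a name like composenetworks_db_1 on the composenetworks_mynet network, which is exactly the form shown in the nslookup output above.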