Docker container does not use host DNS - docker

I'm using a custom Docker network named "backend-network":
[root@localhost docker]# docker inspect backend-network
{
"Name": "backend-network",
"Id": "18180c0c1ef14460a25b66b7fb971e090f7bb85f549921704d11937af70766c7",
"Created": "2018-08-07T12:36:02.4175991+09:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Containers": {
"531c1ecbe993ee13e632fbd9697b392ee989d756ff60c07eae96a700901aaa01": {
"Name": "splash",
"EndpointID": "c9e4e7ec319ecf9cdcbb9ca50170efb63c4fca33bcbbabb584c4a4e41576b15d",
"MacAddress": "02:42:ac:12:00:05",
"IPv4Address": "172.18.0.5/16",
"IPv6Address": ""
},
"c6a5aa827e901b6b6d7b35d4a8be5a5b2fc73f1a7a385416ce200e847d400b21": {
"Name": "flask",
"EndpointID": "5d5abb3bc964d251379a7f6a84cb5b5d9bddac9b778f2222d52aba657b28dd34",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
},
"da839143fb58d738e38922c669efa332c545fee4dd0a5b733583ed7b8df60875": {
"Name": "django",
"EndpointID": "f046e9cc93f895b12ce1c4de983fbe0e54a3904460c04db3ba238ba84ba82327",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
},
"fc9e6ef183c81a3fe7dd29ecb5c17c0dc27fb803ef8e214d4f344a2b3407ec54": {
"Name": "mongo",
"EndpointID": "ab94182f4b175f105ab01ccbbc43b7dad37cf5506eee831168fd5bd9094ccde8",
"MacAddress": "02:42:ac:12:00:04",
"IPv4Address": "172.18.0.4/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
But the containers do not use the host DNS.
The host DNS is:
[root@localhost docker]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 192.168.88.1
and the container DNS is:
(django) root@da839143fb58:/opt/django_backend/scrapy_app# cat /etc/resolv.conf
nameserver 127.0.0.11
options ndots:0
I added
nameserver 192.168.88.1
to the container's resolv.conf, and it works, but requests take too long.
I think it first searches
nameserver 127.0.0.11
and only then
nameserver 192.168.88.1
How can I point my Docker network at the host DNS?
If I remove
nameserver 127.0.0.11
then the containers cannot communicate with each other by name, like:
(django) root@da839143fb58:/opt/django_backend/scrapy_app# ping splash
ping: splash: Name or service not known

Docker containers resolve DNS requests through an embedded DNS server (this is the IP you are seeing in the container's /etc/resolv.conf – see the note at the bottom of the documentation). Depending on your configuration, the embedded DNS server forwards the query to your host's resolver (the default) or to another DNS server. You can pass a custom DNS server with the --dns flag.
Please find more information about that in the documentation.
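A minimal sketch (the image name my-image is a placeholder; the daemon.json path assumes a standard Linux install):
# per container: the embedded DNS server stays at 127.0.0.11 but forwards to 192.168.88.1
$ docker run --dns 192.168.88.1 --network backend-network my-image
# or daemon-wide in /etc/docker/daemon.json (restart the Docker daemon afterwards):
{
  "dns": ["192.168.88.1"]
}
Either way, containers on a user-defined network keep 127.0.0.11 in resolv.conf, so resolving other containers by name keeps working; only the upstream forwarder changes.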

Related

Not able to access container by user-defined network name from another container

I created a new network, exposed port 8086 for one of my containers, and also published this port. I ran both containers with --network influx-net.
Checking docker network inspect influx-net gives me:
docker network inspect influx-net
[
{
"Name": "influx-net",
"Id": "76ad2f3ec76e15d88330739c2b3db866a15d5affb0912c64c7ec7615b14e8570",
"Created": "2021-11-09T17:32:42.479637982+09:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"92c0f2c5dc5c75db15e5a5ea0bed6ec5047f0c57dba06283ef6e72c8adcc6a3a": {
"Name": "ultimate_hydroponics",
"EndpointID": "529c2c31baaec7685251a584e8f5643b4669966372b629d67f3a9910da5e809c",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
},
"a678409dbc43685fd089ed29f42601b67c46c20b66c277e461a8fe2adad31b5a": {
"Name": "influxdb",
"EndpointID": "73263b9969dc9c67760ec3aab4ebab75e16763c75cd498828a737d9e56ed34ef",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
So both my containers are connected to the network influx-net.
But when I try to ping or wget the InfluxDB container (influx-net:8086) from another container by its network name, I get nothing.
When I do the same with 192.168.111.11:8086 (my PC's IP), I do get a response.
What is the problem?
The local PC firewall is off.
Connect by container name, not by network name (influx-net in influx-net:8086 is the network name, which will not resolve):
# create a network
$ docker network create influx-net
# start influx db attached to the network influx-net
$ docker run -d --name idb --net influx-net influxdb
Now, create a new container attached to the same network and try connecting:
$ docker run -it --net influx-net ubuntu
root@fc26733c33da:/# telnet idb 8086
Trying 172.18.0.2...
Connected to idb.
Escape character is '^]'.
And it's working.
If you need to connect using an IP, inspect the network to get the container's IP and then use that to connect.
$ docker network inspect influx-net
[
{
"Name": "influx-net",
...
"Containers": {
"844311255fb9dd74fee1df2dc65023533ad961d1dd6345128cc2c92237ba35e0": {
"Name": "idb",
"EndpointID": "b294cb0661f9f1166833f381a02bcbfa531cdeba6657c51b9595814f5ee323be",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16", 👈 # this one here
"IPv6Address": ""
}, ...
},
"Options": {},
"Labels": {}
}
]
$ docker run -it --net influx-net ubuntu
root@360bbca6534d:/# telnet 172.18.0.2 8086
Trying 172.18.0.2...
Connected to 172.18.0.2.
Escape character is '^]'.
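If you need this in a script, you can pull the IP straight out with a Go template instead of reading the whole inspect output (a sketch, reusing the container and network names from above):
$ docker inspect -f '{{(index .NetworkSettings.Networks "influx-net").IPAddress}}' idb
172.18.0.2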

How can I connect my docker containers on the default bridge network?

I have 2 Docker containers running on my Windows 10 machine. I have been able to interact with them by binding container ports to host ports, but now I want to dockerize another application that I have been using to interact with these containers. Up until now I have been configuring the URLs using localhost, but after moving the third application to a container that will no longer be an option, so I did some research and decided to use the default bridge network. I checked that all 3 containers were in the network:
[
{
"Name": "bridge",
"Id": "c570148be95b87b5bc768de573e85c25fa4584df2c5df5c63b2d317decabe651",
"Created": "2021-03-22T07:49:32.2206325Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"38beb0863d86dab0f014ef9f1ad85f02efa7fb96520455df6f6ea6b5519f60cc": {
"Name": "my_redis",
"EndpointID": "58a6cfab6f233ac39c9b043c660124fd9cb98970f99f154ad8b3774a3356e71b",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
},
"70fe60faa0dc3b853edcf2005e16d6219015eafa1c65d48aebd57256ff329f2b": {
"Name": "rabbitmq",
"EndpointID": "ed4ac901659785eebfd58de4056efd51addd19eda8c184a38632f1486c178e53",
"MacAddress": "02:42:ac:11:00:04",
"IPv4Address": "172.17.0.4/16",
"IPv6Address": ""
},
"b34359519bbf0253af3eba8e800a1bcabeb3cfe6e5cc5007679c6f632f1d4820": {
"Name": "app",
"EndpointID": "3363141459cc7eebeca1651b047ed3af81c4af37c3706dfa74e5eadb6f95f302",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
From what I can see, icc is enabled and all 3 containers are on the network. I used the IPv4Address in the configuration in app: STA_REDIS_HOST = 172.17.0.3 (with and without the /16 at the end, because I'm not sure what it means), and it seems as if the IP is being resolved to something else, because I get the following error:
Error 111 connecting to 127.0.0.1:6379. Connection refused.
I don't know where 127.0.0.1 comes from, but it looks like the loopback address of the host machine.
Where am I going wrong here?
I didn't get how you are trying to connect one container to the other...
docker_bridge_network demonstrates pinging one container from another using the bridge network.
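A common fix, sketched below under the assumption that the containers can be re-attached at runtime, is to move them onto a user-defined bridge: Docker's embedded DNS resolves container names only on user-defined networks, not on the default bridge. The network name app-net is a placeholder.
# create a user-defined bridge and attach the existing containers to it
$ docker network create app-net
$ docker network connect app-net my_redis
$ docker network connect app-net rabbitmq
$ docker network connect app-net app
# inside "app" the name now resolves, so the setting becomes:
# STA_REDIS_HOST = my_redis
If you stay on the default bridge, use the bare IP 172.17.0.3; the /16 suffix is CIDR notation for the subnet mask and must not be part of the host setting.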

Getting connection refused error from one docker container to another only for REST request

I've two Docker containers, apiserver and loginserver. Both of them provide a REST API and are built using Spring Boot. I've created a bridge network called my-network and both containers are attached to that bridge.
I pinged loginserver from apiserver via an interactive shell and it is accessible. I make the same REST request from the host machine successfully, so I know which socket is exposed. But when I make that REST request from apiserver to loginserver, I get this error:
: HttpQueryService::uri=http://172.28.0.7:8090/users/login
2018-06-19 19:08:24.196 ERROR 7 --- [nio-9000-exec-3] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception
org.apache.http.conn.HttpHostConnectException: Connect to 172.28.0.7:8090 [/172.28.0.7] failed: Connection refused (Connection refused)
Here are the details from my-network:
docker network inspect my-network
[
{
"Name": "my-network",
"Id": "ef610688b58b6757cba57caf6261f7a1eaeb083798098214c4848cbb152cae26",
"Created": "2018-04-21T00:19:46.918124848Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.28.0.0/16",
"Gateway": "172.28.0.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"71863d2f61789d4350fcabb1330b757500d5734a85c68b60eb1ac8f6f1e8344e": {
"Name": "mymongo",
"EndpointID": "717c8dbdc8993a70f9d3e97e549cb9784020b8e68e7a557c30b0818b4c9acb90",
"MacAddress": "02:42:ac:1c:00:02",
"IPv4Address": "172.28.0.2/16",
"IPv6Address": ""
},
"936447ce8325f3a7273a7fb462d75e55841a9ff37ccf27647831b3db1b8a1371": {
"Name": "mypg",
"EndpointID": "6a1a1b2f7852b89a9d2cb9b9abecdabd134849cd789c31613c7ddb91a4bc43d1",
"MacAddress": "02:42:ac:1c:00:06",
"IPv4Address": "172.28.0.6/16",
"IPv6Address": ""
},
"ad03348dffaef3edd916d349c88e8adf6cf7d88dbc40f82dc2384dee826cfa83": {
"Name": "myloginserver",
"EndpointID": "fe22c2b5f57b7fe4776087972ffa5f7f089ca6a59fde2fa677848b3f238ea026",
"MacAddress": "02:42:ac:1c:00:07",
"IPv4Address": "172.28.0.7/16",
"IPv6Address": ""
},
"c69bfbf9ccdc9e29e87d2847b5c2a51e9c232fb8d06635bcef0cdd1f7c66e051": {
"Name": "apiserver",
"EndpointID": "46e94a52d34670eb00448b1d39a0cc365b882ece790c9d868dcee04ad141d1ca",
"MacAddress": "02:42:ac:1c:00:0b",
"IPv4Address": "172.28.0.11/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Is port 8090 exposed by your loginserver image? To check, run
docker images
and find the image ID of your loginserver image. Then run
docker image inspect {loginserver image ID}
and check ExposedPorts in the output to see whether 8090 is exposed.
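A quick one-liner for that check (the image name loginserver:latest is an assumption):
# print only the exposed ports of the image
$ docker image inspect loginserver:latest --format '{{json .Config.ExposedPorts}}'
# expect something like: {"8090/tcp":{}}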
Late to the party, but I just fixed this on my system by pointing the REST request at the public IP address,
e.g.: http://217.114.203.196/myrequest

Internet unreachable on the container when a VPN is ON on the host

When I do
docker run -i -t --privileged busybox ping google.com
It works when the VPN is OFF on my host machine, but when I turn the VPN on, it fails.
I found my DNS Server address by using
sudo systemd-resolve --status | grep "DNS Server"
And I tried to use it in the Docker container; it gives:
docker run -i -t --privileged --dns=192.168.1.254 busybox nslookup google.com
Server: 192.168.1.254
Address 1: 192.168.1.254
Name: google.com
Address 1: 2607:f8b0:4020:804::200e yul02s04-in-x0e.1e100.net
Address 2: 172.217.13.206 yul03s05-in-f14.1e100.net
So it can find the IP address of the domain, but I'm still not able to ping it.
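To confirm the problem is routing rather than DNS, one could ping a resolved address directly (a sketch reusing the IP from the nslookup output above):
# bypass DNS entirely and ping the IP itself
$ docker run -i -t --privileged busybox ping -c 3 172.217.13.206
If this also fails while nslookup succeeds, the VPN is dropping the container's forwarded traffic, not its name resolution.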
When I use the --network host option, it works but I have trouble with other containers.
Here is my /etc/resolv.conf file when the VPN is OFF:
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
# 127.0.0.53 is the systemd-resolved stub resolver.
# run "systemd-resolve --status" to see details about the actual nameservers.
nameserver 8.8.8.8
nameserver 127.0.0.53
search telus
And when it's ON:
nameserver 8.8.8.8
nameserver 8.8.4.4
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
# 127.0.0.53 is the systemd-resolved stub resolver.
# run "systemd-resolve --status" to see details about the actual nameservers.
search telus
And here is information about the network used by the container:
[
{
"Name": "project_default",
"Id": "e5b5cdaf12ea277f28b5e5a050041a55fe33d279bcd6b2c737a3a6cdfb039ea2",
"Created": "2017-10-05T21:24:16.200249606+02:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.19.0.0/16",
"Gateway": "172.19.0.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"03af7eb1bcfb394c436784974603ad72667610a8a2bba6f8ec3ca87a3eecc733": {
"Name": "project_mongo_1",
"EndpointID": "6567c3ad72fba3d2d4519d6aa47ac9fc7d65d2b6884f91aa70f4041ba2fc98cc",
"MacAddress": "02:42:ac:13:00:02",
"IPv4Address": "172.19.0.2/16",
"IPv6Address": ""
},
"13dce3db104e2dafb2ebbdceeefe1a5ca8559808c050a9e95d4abae5b2203b54": {
"Name": "project_redis_1",
"EndpointID": "fb5abe7b896fb9d5f635cd30cbc57ad721e184c36a754bffce696b7f7fc1fbfc",
"MacAddress": "02:42:ac:13:00:05",
"IPv4Address": "172.19.0.5/16",
"IPv6Address": ""
},
"3f301bd0973637ac4440f9f9bfc8f6d2fdf4d3bf048ab95c9db91ef03dc4cde1": {
"Name": "dimelo_faye_server",
"EndpointID": "90eda817636ce87123882187cea629c63419712816fbf952364959fd6d41f25b",
"MacAddress": "02:42:ac:13:00:06",
"IPv4Address": "172.19.0.6/16",
"IPv6Address": ""
},
"530c61bfde2a21812ae3b677f0a019805a1dcea21722d181648defd0037de7f3": {
"Name": "project_elasticsearch_1",
"EndpointID": "a8cd9df941907c4abfb5762fe6af5069cfa6e97259d2d5c1c9992f269f6b444a",
"MacAddress": "02:42:ac:13:00:03",
"IPv4Address": "172.19.0.3/16",
"IPv6Address": ""
},
"5e615627d664bf5cad7b31f5594bc64730dca46ae8c80bb12b0ee881dac8bfb1": {
"Name": "project_memcache_1",
"EndpointID": "b7388f26c1aaedb98ebc6d3eac018d713e0c914bc4d6a84501358cc126be1105",
"MacAddress": "02:42:ac:13:00:04",
"IPv4Address": "172.19.0.4/16",
"IPv6Address": ""
},
"6b143aa84d6fd55245b462220a4dab4f54c9d365ab4a1e7c7823265b53432bbb": {
"Name": "project_web_run_53",
"EndpointID": "964311691aefff57c9ad5898b1153c8a166575193c25babad575e9990092e911",
"MacAddress": "02:42:ac:13:00:08",
"IPv4Address": "172.19.0.8/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "default",
"com.docker.compose.project": "project"
}
}
]
What can I do to have Internet access in the container when the VPN is ON?

Docker Swarm Overlay Network Not Working Between Nodes

I am trying to connect my Docker services together in Docker Swarm.
The network is made up of 2 Raspberry Pis.
I can create an overlay network called test-overlay, and I can see that services on either Raspberry Pi node can connect to the network.
My problem:
I cannot reach services between nodes over the overlay network.
Given the following configuration of nodes and services, service1 can use the address http://service2 to connect to service2, but it does NOT work for http://service3. However, http://service3 is accessible from service4.
node1:
- service1
- service2
node2:
- service3
- service4
I am new to Docker Swarm and any help is appreciated.
Inspecting the overlay
I have run the command sudo docker network inspect test-overlay on both nodes.
On the master node this returns the following:
[
{
"Name": "test-overlay",
"Id": "skxhz8sb3f82dhh9jt9t3j5yl",
"Created": "2018-04-15T20:31:20.629719732Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"3acb436a0cc9a4d584d537edb1546988d334afa4793cc4fae4dd6ac9b48828ea": {
"Name": "docker-registry.1.la1myuodpkq0x5h39pqo6lt7f",
"EndpointID": "66887fb1f5f253c6cbec149aa51ab85168903fdd2290719f26d2bcd8d6c68dc8",
"MacAddress": "02:42:0a:00:00:04",
"IPv4Address": "10.0.0.4/24",
"IPv6Address": ""
},
"786e1fee538f81fe41ccd082800c646a0e191b0fd912e5c15530e61c248e81ac": {
"Name": "portainer.1.qyvvlcdqo5sewuku3eiykaplz",
"EndpointID": "0d29e5452c208ed637ae2e7dcec026f39d2431e8e0e20765a9e0e6d6dfdc60ca",
"MacAddress": "02:42:0a:00:00:15",
"IPv4Address": "10.0.0.21/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4101"
},
"Labels": {},
"Peers": [
{
"Name": "d049fc8f8ae1",
"IP": "192.168.1.2"
},
{
"Name": "6c0da128f308",
"IP": "192.168.1.3"
}
]
}
]
On the worker node this returns the following:
[
{
"Name": "test-overlay",
"Id": "skxhz8sb3f82dhh9jt9t3j5yl",
"Created": "2018-04-20T14:04:57.870696195Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"4cb50161119e4b58a472e1b5c380c301bbb00a23fc99fc2e0712a8c4bde6d9d4": {
"Name": "minio.1.fo2su2quv8herbmnxqfi3g8w2",
"EndpointID": "3e85786304ed08f02c09b8e1ed6a153a3b4c2ef7afe503a1b0ca6cf341521645",
"MacAddress": "02:42:0a:00:00:d6",
"IPv4Address": "10.0.0.214/24",
"IPv6Address": ""
},
"ce99b3788a4f9438e276e0f52a8f4d29fa09179e3e93b31b14f45339ce3c5315": {
"Name": "load-balancer.1.j64h1eecsc05b7d397ejvedv3",
"EndpointID": "3b7e73d27fe30151f2dc2a0ba8a5afc7f74fd283159a03a592be10e297f58d51",
"MacAddress": "02:42:0a:00:00:d0",
"IPv4Address": "10.0.0.208/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4101"
},
"Labels": {},
"Peers": [
{
"Name": "d049fc8f8ae1",
"IP": "192.168.1.2"
},
{
"Name": "6c0da128f308",
"IP": "192.168.1.3"
}
]
}
]
It seems this problem was because the nodes were not able to connect to each other on the required ports:
TCP port 2377 for cluster management communications
TCP and UDP port 7946 for communication among nodes
UDP port 4789 for overlay network traffic
Make sure these ports are open between the nodes.
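On a host firewalled with ufw (which is what the comment below uses), opening them might look like this sketch:
$ sudo ufw allow 2377/tcp
$ sudo ufw allow 7946/tcp
$ sudo ufw allow 7946/udp
$ sudo ufw allow 4789/udp
$ sudo ufw reload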
A better and simpler solution is to use the Docker image portainer/agent. As the documentation says,
The Portainer Agent is a workaround for a Docker API limitation when using the Docker API to manage a Docker environment.
https://portainer.readthedocs.io/en/stable/agent.html
I hope this helps anyone else experiencing this problem.
I am not able to leave a comment yet, but I managed to solve this issue with the solution provided by X0r0N, and I am leaving this answer to help people in my position find a solution in the future.
I was deploying 10 Droplets on DigitalOcean with the default Docker image provided there. The description says that it closes all ports except those related to Docker, but this clearly does not cover Swarm use cases.
After allowing ports 2377, 4789 and 7946 in ufw, Docker Swarm is now working as expected.
To make this answer stand on its own, the ports map to the following functionality:
TCP port 2377: cluster management communication
TCP and UDP port 7946: communication between nodes
UDP port 4789: overlay network traffic
Check that your nodes have the ports the swarm needs to operate opened properly, as described under "Prerequisites" here: https://docs.docker.com/network/overlay/
TCP port 2377 for cluster management communications
TCP and UDP port 7946 for communication among nodes
UDP port 4789 for overlay network traffic
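A quick reachability sketch from one node toward the other (192.168.1.3 is the peer IP from the inspect output above; having nc installed is an assumption):
$ nc -zv 192.168.1.3 2377
$ nc -zv 192.168.1.3 7946
$ nc -zuv 192.168.1.3 4789
The UDP checks are only indicative, since nc cannot reliably distinguish an open UDP port from a filtered one.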
