I have set up a Docker stack with Telegraf, InfluxDB and Grafana to monitor URLs using Telegraf's http_response input.
When it monitors external URLs like Google there is no problem, but when it sends the request for the hostname mydomain1.com, which resolves to the host's own IP, the Telegraf container gets a timeout.
From inside the container, a curl to that URL fails too, but from the host (outside the container) the same curl works.
Any idea what could be going on, or where I should look next?
root@08ad708c4a09:/# curl -m 5 https://mydomain1.com:9443
curl: (28) Connection timed out after 5001 milliseconds
root@08ad708c4a09:/# ping mydomain1.com
PING mydomain1.com (itself.ip.host.machine) 56(84) bytes of data.
64 bytes from 1.vps.net (itself.ip.host.machine): icmp_seq=1 ttl=64 time=0.148 ms
64 bytes from 1.vps.net (itself.ip.host.machine): icmp_seq=2 ttl=64 time=0.138 ms
64 bytes from 1.vps.net (itself.ip.host.machine): icmp_seq=3 ttl=64 time=0.126 ms
^C
--- mydomain1.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 25ms
rtt min/avg/max/mdev = 0.126/0.137/0.148/0.013 ms
root@08ad708c4a09:/# curl -m 5 mydomain2.com
Hello world
Thank you very much, community.
I hope Telegraf's http_response input can resolve a domain that points to the host's own IP without responding with a timeout.
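A quick way to narrow this down is to compare what the name resolves to with what is reachable by IP from inside the container. This is only a sketch: the container name `telegraf` and the address `10.0.2.20` are placeholders for whatever your setup actually uses.

```shell
# Inside the telegraf container: see what the name resolves to.
docker exec telegraf getent hosts mydomain1.com

# Try the same port by the target container's internal address instead
# of the public one, to separate DNS from routing (address is a placeholder):
docker exec telegraf curl -m 5 -sk https://10.0.2.20:9443

# If the internal address works, one possible workaround is to pin the
# name to that address when starting the container (hypothetical example):
docker run -d --name telegraf --add-host mydomain1.com:10.0.2.20 telegraf
```

If the name resolves to the host's own external IP, the timeout is consistent with the macvlan limitation described below: traffic between a macvlan container and its own host is blocked by design, so reaching the host's address from inside the container fails even though external clients succeed.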
I have a macvlan network where I put all my containers. It was created like this:
docker network create -d macvlan -o parent=eno2 \
--subnet 10.0.2.0/24 \
--gateway 10.0.2.1 \
--ip-range 10.0.2.0/24 \
mynet
This allows container <-> container communication and external computer <-> container communication. The problem is that host <-> container communication is not possible.
Searching for how to fix this, I found this blog. Its solution appears to work: after running these commands,
ip link add zrz-ch link eno2 type macvlan mode bridge
ip addr add 10.0.2.15/24 dev zrz-ch
ifconfig zrz-ch up
I can successfully ping any container, and any container can ping the host.
The problem is that after 5-10 seconds the link breaks and the communication does not work anymore.
Ping from host -> container:
root@pluto:/home/zrz# ping 10.0.2.10
PING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.
64 bytes from 10.0.2.10: icmp_seq=1 ttl=64 time=0.106 ms
64 bytes from 10.0.2.10: icmp_seq=2 ttl=64 time=0.039 ms
64 bytes from 10.0.2.10: icmp_seq=3 ttl=64 time=0.050 ms
64 bytes from 10.0.2.10: icmp_seq=4 ttl=64 time=0.068 ms
From 10.0.2.15 icmp_seq=5 Destination Host Unreachable
From 10.0.2.15 icmp_seq=6 Destination Host Unreachable
From 10.0.2.15 icmp_seq=7 Destination Host Unreachable
From 10.0.2.15 icmp_seq=8 Destination Host Unreachable
From 10.0.2.15 icmp_seq=9 Destination Host Unreachable
From 10.0.2.15 icmp_seq=10 Destination Host Unreachable
As you can see, after about 5 seconds it stops working; the same happens when pinging from container -> host (after running the above commands on the host).
I can ping again if I run:
ip link delete zrz-ch
and then run the commands above again. But as I said, it breaks after a couple of seconds.
Any ideas how I can fix this?
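For anyone comparing notes: a variant of the same shim approach that is sometimes reported as more stable gives the host-side macvlan interface a /32 address and an explicit route for the container range, so the shim does not compete with eno2 for the whole subnet. This is a sketch under that assumption, reusing the names and addresses from the commands above; adjust the routed range to cover only the addresses Docker actually assigns.

```shell
# Recreate the shim with a host-only /32 and an explicit route
# (sketch; narrow 10.0.2.0/24 to your actual --ip-range if possible).
ip link add zrz-ch link eno2 type macvlan mode bridge
ip addr add 10.0.2.15/32 dev zrz-ch
ip link set zrz-ch up
ip route add 10.0.2.0/24 dev zrz-ch
```

If the link still dies after a few seconds, it is worth checking whether a network manager on the host (NetworkManager, systemd-networkd, netplan) is periodically reconfiguring eno2 and tearing the macvlan link down with it.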
We've been experiencing a long-standing networking issue. In short, one container cannot ping (or ssh) another. Does anybody have an extra moment to think along with me?
Our setup:
Docker CE 18.06.03 (while trying to fix the issue, we've upgraded from 17.03, but it has not helped)
Swarm Classic (Standalone) 1.2.9
Consul as a Swarm backend, running with members on five nodes
Seven nodes in total, six of which host containers
Each container is connected to an overlay network when it is started
What we've tried so far:
This issue has largely stumped us. We've spent a lot of time on it and done much of the basic troubleshooting, and some more advanced troubleshooting (happy to elaborate). (But I don't expect that I've exhausted our options, so please don't hesitate to suggest anything you may think will work.)
It's inconsistent (happening with different images and different nodes), intermittent, and long-standing (several months). We've made two changes. One was a workaround for MAC address assignment (explained here: https://github.com/docker/libnetwork/pull/2380; the actual workaround: https://github.com/systemd/systemd/issues/3374#issuecomment-452718898), which did improve the situation, including removing MAC address assignment errors from the logs. We also upgraded to get this fix (https://github.com/docker/libnetwork/pull/1935), which deals with IP reuse. This also reduced the problem (at the time, no containers could communicate). I've also run through some basic tests using the netshoot container (let me know if you want more info on that).
We have a workaround for a given container that is broken: we delete the Consul data for that container and then stop and restart it. From what I can tell, it does not seem to be an issue with the Consul data per se; rather, it comes from Docker/Swarm resetting several network configurations when the container is started (I can say more if this triggers a thought for anybody reading). After that, the container can often ping other containers, but not always.
Specific question:
It seems like there's a window of time during which this can be worse. It's not necessarily tied to starting several containers at once, but there's a somewhat clear pattern: during some windows of time, containers do not get configured properly to communicate with each other. What troubleshooting steps come to mind for you?
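For reference, the kind of basic netshoot checks mentioned above look roughly like the following. The overlay network name `my-overlay` is a placeholder; `nicolaka/netshoot` is the troubleshooting image referred to earlier.

```shell
# Attach a throwaway netshoot container to the same overlay network
# and try to reach the problem container ("my-overlay" is a placeholder):
docker run --rm --net my-overlay nicolaka/netshoot ping -c 3 82afb0dccbcc

# Share the problem container's network namespace to inspect its
# interfaces and neighbor (ARP) table from the inside:
docker run --rm --net container:82afb0dccbcc nicolaka/netshoot ip addr
docker run --rm --net container:82afb0dccbcc nicolaka/netshoot ip neigh
```

Comparing the neighbor table of a container that can be reached with one that cannot (stale or missing entries, duplicate MACs) can help tell an ARP/MAC problem apart from a routing one.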
The content below is the output from trying to ping one container (82afb0dccbcc) from two other containers. It fails at first, but then is successful.
The first time I try to ping the container, at 2019-12-10T23:57:52+00:00:
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
82afb0dccbcc: user___92397089 crccheck/hello-world
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
PING 82afb0dccbcc (172.24.0.165) 56(84) bytes of data.

--- 82afb0dccbcc ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3033ms

PING 82afb0dccbcc (172.24.0.165) 56(84) bytes of data.
64 bytes from user___92397089.wharf (172.24.0.165): icmp_seq=2 ttl=64 time=0.083 ms
64 bytes from user___92397089.wharf (172.24.0.165): icmp_seq=3 ttl=64 time=0.072 ms
64 bytes from user___92397089.wharf (172.24.0.165): icmp_seq=4 ttl=64 time=0.073 ms

--- 82afb0dccbcc ping statistics ---
4 packets transmitted, 3 received, 25% packet loss, time 3023ms
rtt min/avg/max/mdev = 0.072/0.076/0.083/0.005 ms
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
In this first ping test, above, we note that the packet loss from the first container is 100% and from the second container, it is 25%.
A few minutes later, however, 82afb0dccbcc can be successfully pinged from both containers:
82afb0dccbcc: user___92397089 crccheck/hello-world
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ping from ansible-provisioner:
PING 82afb0dccbcc (172.24.0.165) 56(84) bytes of data.
64 bytes from user___92397089.wharf (172.24.0.165): icmp_seq=1 ttl=64 time=0.056 ms
64 bytes from user___92397089.wharf (172.24.0.165): icmp_seq=2 ttl=64 time=0.073 ms
64 bytes from user___92397089.wharf (172.24.0.165): icmp_seq=3 ttl=64 time=0.077 ms
64 bytes from user___92397089.wharf (172.24.0.165): icmp_seq=4 ttl=64 time=0.087 ms

--- 82afb0dccbcc ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3063ms
rtt min/avg/max/mdev = 0.056/0.073/0.087/0.012 ms
ping from ansible_container:
PING 82afb0dccbcc (172.24.0.165) 56(84) bytes of data.
64 bytes from user___92397089.wharf (172.24.0.165): icmp_seq=1 ttl=64 time=0.055 ms
64 bytes from user___92397089.wharf (172.24.0.165): icmp_seq=2 ttl=64 time=0.055 ms
64 bytes from user___92397089.wharf (172.24.0.165): icmp_seq=3 ttl=64 time=0.060 ms
64 bytes from user___92397089.wharf (172.24.0.165): icmp_seq=4 ttl=64 time=0.085 ms

--- 82afb0dccbcc ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3062ms
rtt min/avg/max/mdev = 0.055/0.063/0.085/0.015 ms
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
You need to create a network and connect both containers to that network.
The Docker embedded DNS server provides name resolution for containers connected to a user-defined network. This means that any connected container can ping another container on the same network by its container name.
From within container1, you can ping container2 by name. So it's important to explicitly specify names for the containers; otherwise this would not work.
Create two containers:
docker run -d --name container1 -p 8001:80 test/apache-php
docker run -d --name container2 -p 8002:80 test/apache-php
Now create a network:
docker network create myNetwork
After that connect your containers to the network:
docker network connect myNetwork container1
docker network connect myNetwork container2
Check if your containers are part of the new network:
docker network inspect myNetwork
Now test the connection; you will be able to ping container2 from container1:
docker exec -ti container1 ping container2
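The same result can be had in one step by attaching the containers to the network at `docker run` time instead of connecting them afterwards. This is just a condensed version of the steps above, using the same placeholder image and names:

```shell
# Create the network first, then start both containers directly on it.
docker network create myNetwork
docker run -d --name container1 --network myNetwork -p 8001:80 test/apache-php
docker run -d --name container2 --network myNetwork -p 8002:80 test/apache-php

# Name resolution works immediately via the embedded DNS server.
docker exec -ti container1 ping -c 3 container2
```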
I actually ran into this issue randomly, but in my case both containers were already on the same network, so it puzzled me why one container couldn't ping the other, until I ran docker network inspect myNetwork and noticed that for some reason both containers had been assigned the SAME MAC address. I have no idea why or how that happened. Obviously that precludes pinging, since on a LAN, MAC addresses are used by switching logic to deliver traffic.
I had to stop and remove the container, then recreate it, to get a different MAC address.
If a web app is running in one of your containers and you want to ping or call an endpoint on it from another container and use the response, you can follow the steps below.
First, establish inter-container communication using a Docker network:
1. docker network create dockerContainerCommunication
Now connect the containers to the dockerContainerCommunication network:
2. docker network connect dockerContainerCommunication container1
3. docker network connect dockerContainerCommunication container2
Now start your containers (if not already started):
4. docker start container1
5. docker start container2
Inspect your network; here you can also find the IP addresses of the containers:
docker network inspect dockerContainerCommunication
Now attach to the container from which you want to call the web application:
docker attach container1
or
docker attach container2
and then run a curl command against the other container, using the IP address you found in the inspect output:
curl http://IP_ADDRESS:PORT_ON_WHICH_APP_IS_RUNNING/api/endpointPath
I hope it helps.
I have two containers connected to the default bridge network:
» docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3cc528ddbe7e gitlab/gitlab-runner:latest "/usr/bin/dumb-ini..." 25 minutes ago Up 25 minutes gitlab-runner
3c01073065c7 gitlab/gitlab-ee:latest "/assets/wrapper" About an hour ago Up About an hour (healthy) 0.0.0.0:45022->22/tcp, 0.0.0.0:45080->80/tcp, 0.0.0.0:45443->443/tcp gitlab
I have found the corresponding IP addresses with docker inspect (is there a better method of obtaining them?), and I can ping from one container to the other by IP address:
» docker exec -it gitlab-runner bash
root@3cc528ddbe7e:/# ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3) 56(84) bytes of data.
64 bytes from 172.17.0.3: icmp_seq=1 ttl=64 time=0.079 ms
64 bytes from 172.17.0.3: icmp_seq=2 ttl=64 time=0.063 ms
64 bytes from 172.17.0.3: icmp_seq=3 ttl=64 time=0.060 ms
^C
--- 172.17.0.3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.060/0.067/0.079/0.010 ms
But I cannot ping by name:
root@3cc528ddbe7e:/# ping gitlab
ping: unknown host gitlab
Why is this? I thought docker provides DNS by container name.
I have two containers connected to the default bridge network...
I can ping from one container to the other, by IP address...
But I cannot ping by name...
This is the default behavior for the default bridge network.
From: Docker docs
Differences between user-defined bridges and the default bridge
User-defined bridges provide automatic DNS resolution between containers.
Containers on the default bridge network can only access each other by IP addresses, unless you use the --link option, which is considered legacy. On a user-defined bridge network, containers can resolve each other by name or alias.
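Applied to the gitlab example above, a user-defined bridge can be added without recreating the containers. A sketch, where `gitlab-net` is an arbitrary network name and the container names are the ones from the `docker ps` output in the question:

```shell
# Create a user-defined bridge and attach both existing containers to it.
docker network create gitlab-net
docker network connect gitlab-net gitlab
docker network connect gitlab-net gitlab-runner

# Resolution by container name now works via Docker's embedded DNS.
docker exec -it gitlab-runner ping -c 1 gitlab
```

The containers keep their connection to the default bridge; the new network simply adds an interface on which name-based DNS resolution is available.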
When I set up a network with docker network create test1 and then start a few containers, for example
docker run -d --net=test1 --name=t1 elasticsearch
docker run -d --net=test1 elasticsearch
docker run -d --net=test1 elasticsearch
I can't broadcast ping any of these containers with docker exec -ti t1 ping 255.255.255.255.
Any idea how I can change this?
This is currently tracked in issue 17814:
UDP broadcasts don't work in a multi-host network between hosts.
UDP broadcasts only work if both containers run on the same host.
Playing with icmp broadcast by pinging on 255.255.255.255, I receive replies only from the local host:
# ping -b 255.255.255.255
WARNING: pinging broadcast address
PING 255.255.255.255 (255.255.255.255) 56(84) bytes of data.
64 bytes from 172.18.0.1: icmp_req=1 ttl=64 time=0.601 ms
64 bytes from 172.18.0.1: icmp_req=2 ttl=64 time=0.424 ms
64 bytes from 172.18.0.1: icmp_req=3 ttl=64 time=0.420 ms
64 bytes from 172.18.0.1: icmp_req=4 ttl=64 time=0.427 ms
(I made sure /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts is set to 0 on both hosts.)
It also seems impossible to set a broadcast address on the interface connected to the shared network:
# ifconfig eth0 broadcast 10.0.0.255
SIOCSIFBRDADDR: Operation not permitted
SIOCSIFFLAGS: Operation not permitted
The ability to multicast in the overlay driver is discussed in docker/libnetwork issue 552 (help wanted).