I am able to ping the rest of the world from my container, but not the host that the container is running on. I am sure someone has encountered this issue before.
Details below:
Ubuntu 14.04.2 LTS (GNU/Linux 3.16.0-30-generic x86_64)
The container is using the "bridge" network.
Docker version 18.06.1-ce, build e68fc7a
IP address for eth0: 135.25.87.162
IP address for eth1: 192.168.122.55
IP address for eth2: 135.21.171.209
IP address for docker0: 172.17.42.1
route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 135.21.248.1 0.0.0.0 UG 0 0 0 eth1
135.21.171.192 * 255.255.255.192 U 0 0 0 eth2
135.21.248.0 * 255.255.255.0 U 0 0 0 eth1
135.25.87.128 * 255.255.255.192 U 0 0 0 eth0
172.17.0.0 * 255.255.0.0 U 0 0 0 docker0
192.168.122.0 * 255.255.255.192 U 0 0 0 eth1
# ping commands from the container
# ping google.com
PING google.com (64.233.177.113): 56 data bytes
64 bytes from 64.233.177.113: icmp_seq=0 ttl=29 time=51.827 ms
64 bytes from 64.233.177.113: icmp_seq=1 ttl=29 time=50.184 ms
64 bytes from 64.233.177.113: icmp_seq=2 ttl=29 time=50.991 ms
^C--- google.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 50.184/51.001/51.827/0.671 ms
# ping 135.25.87.162
PING 135.25.87.162 (135.25.87.162): 56 data bytes
^C--- 135.25.87.162 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
root@9ed17e4c2ee3:/opt/app/tomcat#
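One way to narrow this down (a diagnostic sketch, not part of the original post): watch the Docker bridge on the host while pinging from the container. If the echo requests arrive on docker0 but never get a reply, a host firewall rule is the usual suspect.
# on the host: watch ICMP crossing the Docker bridge while pinging from the container
tcpdump -ni docker0 icmp
# if requests appear with no replies, look for INPUT rules dropping traffic from 172.17.0.0/16
iptables -L INPUT -n -v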
I created two Docker containers and connected them to the same network, but neither container can connect to the other.
I have tried the steps on this page, but none of the methods worked.
Anything else I can try?
docker run -d --name db1 -e POSTGRES_PASSWORD=password postgres:10-alpine
docker run -d --name db2 -e POSTGRES_PASSWORD=password postgres:10-alpine
docker network create myNetwork
docker network connect myNetwork db1
docker network connect myNetwork db2
# make sure that the network has 2 containers
docker inspect myNetwork
docker exec -it db1 ping db2
PING db2 (172.18.0.4): 56 data bytes
^C
--- db2 ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss
docker exec -it db1 route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 172.18.0.1 0.0.0.0 UG 0 0 0 eth0
172.18.0.0 * 255.255.0.0 U 0 0 0 eth0
# ifconfig
br-3f4022544f42: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.18.0.1 netmask 255.255.0.0 broadcast 172.18.255.255
inet6 fe80::42:e5ff:fe9f:33bc prefixlen 64 scopeid 0x20<link>
ether 02:42:e5:9f:33:bc txqueuelen 0 (Ethernet)
RX packets 21 bytes 1164 (1.1 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 44 bytes 5656 (5.6 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:b9ff:fe47:f00c prefixlen 64 scopeid 0x20<link>
ether 02:42:b9:47:f0:0c txqueuelen 0 (Ethernet)
RX packets 1 bytes 28 (28.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 54 bytes 6637 (6.6 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
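For anyone checking the same thing, the attached containers can be listed directly with a Go-template filter, and the host's FORWARD chain can be checked for rules blocking traffic on the network's bridge (a sketch using the bridge name from the ifconfig output above):
# list names of containers attached to myNetwork
docker network inspect -f '{{range .Containers}}{{.Name}} {{end}}' myNetwork
# look for FORWARD rules dropping traffic on the user-defined bridge
iptables -L FORWARD -n -v | grep br-3f4022544f42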
I have a network-related issue on a Kubernetes host that uses the Calico network layer. For continuous integration I need to run Docker-in-Docker, but a simple docker build with this Dockerfile:
FROM praqma/network-multitool AS build
RUN route
RUN ping -c 4 google.com
RUN traceroute google.com
produces output:
Step 1/4 : FROM praqma/network-multitool AS build
---> 3619cb81e582
Step 2/4 : RUN route
---> Running in 80bda13a9860
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 172.17.0.1 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 * 255.255.0.0 U 0 0 0 eth0
Removing intermediate container 80bda13a9860
---> d79e864eafaf
Step 3/4 : RUN ping -c 4 google.com
---> Running in 76354a92a413
PING google.com (216.58.201.110) 56(84) bytes of data.
--- google.com ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 53ms
---> 3619cb81e582
Step 4/4 : RUN traceroute google.com
---> Running in 3aa7908347ba
traceroute to google.com (216.58.201.110), 30 hops max, 46 byte packets
1 172.17.0.1 (172.17.0.1) 0.009 ms 0.005 ms 0.003 ms
It seems the Docker container gets invalid routing when it is created outside of Kubernetes. Pods orchestrated by Kubernetes can access the internet normally:
bash-5.0# ping -c 3 google.com
PING google.com (216.58.201.110) 56(84) bytes of data.
64 bytes from prg03s02-in-f14.1e100.net (216.58.201.110): icmp_seq=1 ttl=55 time=0.726 ms
64 bytes from prg03s02-in-f14.1e100.net (216.58.201.110): icmp_seq=2 ttl=55 time=0.586 ms
64 bytes from prg03s02-in-f14.1e100.net (216.58.201.110): icmp_seq=3 ttl=55 time=0.451 ms
--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 10ms
rtt min/avg/max/mdev = 0.451/0.587/0.726/0.115 ms
bash-5.0# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 169.254.1.1 0.0.0.0 UG 0 0 0 eth0
169.254.1.1 * 255.255.255.255 UH 0 0 0 eth0
bash-5.0# traceroute google.com
traceroute to google.com (216.58.201.110), 30 hops max, 46 byte packets
1 10-68-149-194.kubelet.kube-system.svc.kube.example.com (10.68.149.194) 0.006 ms 0.005 ms 0.004 ms
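A common culprit in this kind of setup (an assumption, not verified in the original post) is the host's FORWARD chain policy: Docker 1.13+ and some CNI configurations set it to DROP, so traffic from a plain docker0 bridge is never forwarded even though Kubernetes-managed pods work fine. A quick, non-persistent check and test:
# check whether forwarded traffic is being dropped by default
iptables -L FORWARD -n -v | head
# temporarily allow forwarding for the docker0 bridge
iptables -I FORWARD -i docker0 -j ACCEPT
iptables -I FORWARD -o docker0 -j ACCEPT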
On the host, there is a service:
[server]# netstat -ln | grep 3308
tcp6 0 0 :::3308 :::* LISTEN
It can be reached from remote hosts.
The container is in a user-defined bridge network.
The server IP address is 192.168.1.30
[localhost ~]# ifconfig
br-a54fd3b63acd: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.18.0.1 netmask 255.255.0.0 broadcast 172.18.255.255
inet6 fe80::42:1eff:fecc:92e8 prefixlen 64 scopeid 0x20<link>
ether 02:42:1e:cc:92:e8 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:37ff:fe9f:e4f1 prefixlen 64 scopeid 0x20<link>
ether 02:42:37:9f:e4:f1 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 34 bytes 4018 (3.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.30 netmask 255.255.255.0 broadcast 192.168.1.255
And ping from the container also works:
root@33208c18aa61:~# ping -c 2 192.168.1.30
PING 192.168.1.30 (192.168.1.30) 56(84) bytes of data.
64 bytes from 192.168.1.30: icmp_seq=1 ttl=64 time=0.120 ms
64 bytes from 192.168.1.30: icmp_seq=2 ttl=64 time=0.105 ms
And the service is available from the server itself:
[server]# telnet 192.168.1.30 3308
Trying 192.168.1.30...
Connected to 192.168.1.30.
Escape character is '^]'.
N
But the service can't be reached from the container.
root@33208c18aa61:~# telnet 192.168.1.30 3308
Trying 192.168.1.30...
telnet: Unable to connect to remote host: No route to host
I checked
Make docker use IPv4 for port binding
and made sure I didn't have IPv6 set to bind only on IPv6:
# sysctl net.ipv6.bindv6only
net.ipv6.bindv6only = 0
I also checked
From inside of a Docker container, how do I connect to the localhost of the machine?
and found that my route is a little different:
# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default router.asus.com 0.0.0.0 UG 100 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-a54fd3b63acd
192.168.1.0 0.0.0.0 255.255.255.0 U 100 0 0 eth0
Does it matter? Or could there be another reason?
Your Docker container is in a different network namespace and connected to a different interface than your host machine; that's why you can't reach it using the IP 192.168.x.x.
What you need to do is use the Docker network gateway instead, in your case 172.17.0.1. Be aware that this IP might not be the same from host to host, so to reproduce this everywhere and be completely sure which IP to use, you can create a user-defined network specifying the subnet and gateway and run your container there. For example:
docker network create -d bridge --subnet 172.16.0.0/24 --gateway 172.16.0.1 dockernet
docker run --net=dockernet ubuntu
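With a fixed gateway, any container on that network can then reach the host at 172.16.0.1; a quick usage sketch (using the alpine image, whose busybox build includes ping):
docker run --rm --net=dockernet alpine ping -c 1 172.16.0.1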
Also, whatever service you are trying to connect to must be listening on Docker's bridge interface as well.
Another option is to run the container in the same network namespace as the host with the --net=host flag. In that case you can access services outside the container using localhost.
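For example (a sketch assuming a web service listening on host port 8080, which is an assumption here, not something from the question):
# with --net=host the container shares the host's network stack,
# so a service bound on the host is reachable via localhost
docker run --rm --net=host alpine wget -qO- http://localhost:8080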
Inspired by the official documentation:
The Docker bridge driver automatically installs rules in the host
machine so that containers on different bridge networks cannot
communicate directly with each other.
I checked the iptables on the server. As an experiment I stopped iptables temporarily, and the container could then reach that service successfully. Later I was told the server had been rebooted recently, so I am guessing some configuration was lost after that reboot. I am not very familiar with iptables, and when I try
systemctl status iptables.service
it says the service is not installed. After I installed and ran the service,
iptables -L -n
was almost empty. I have no clue which iptables rules could cause that mess.
But if anyone faces the situation where ping succeeds and telnet fails, iptables could be where the root cause lies.
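For reference, the isolation rules the quoted documentation mentions live in Docker-managed chains and can be inspected directly (a sketch; chain names vary by Docker version, with newer releases using DOCKER-ISOLATION-STAGE-1/2):
# user-overridable rules Docker consults before its own
iptables -L DOCKER-USER -n -v
# the inter-bridge isolation rules installed by the bridge driver
iptables -L DOCKER-ISOLATION-STAGE-1 -n -v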
We created a Docker container like this:
docker container create \
--name orderer \
--network dscsa_net \
--workdir $WORK_DIR \
--expose=7050 \
hyperledger/fabric-orderer:1.3.0 ./start-orderer.sh
but we are unable to connect to port 7050 on the container.
root@dcee7e74266f:/home# nc -vz 10.0.0.194 7050
nc: connect to 10.0.0.194 port 7050 (tcp) failed: Connection refused
We are able to ping the container:
root@dcee7e74266f:/home# ping 10.0.0.194
PING 10.0.0.194 (10.0.0.194) 56(84) bytes of data.
64 bytes from 10.0.0.194: icmp_seq=1 ttl=64 time=0.810 ms
64 bytes from 10.0.0.194: icmp_seq=2 ttl=64 time=1.30 ms
64 bytes from 10.0.0.194: icmp_seq=3 ttl=64 time=0.668 ms
64 bytes from 10.0.0.194: icmp_seq=4 ttl=64 time=1.10 ms
64 bytes from 10.0.0.194: icmp_seq=5 ttl=64 time=0.631 ms
^C
--- 10.0.0.194 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 0.631/0.902/1.301/0.261 ms
We also see a process listening on port 7050 inside the container:
root@9756199efefa:/home# netstat -tuplen
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 127.0.0.1:7050 0.0.0.0:* LISTEN 0 10097930 7/orderer
tcp 0 0 127.0.0.11:34865 0.0.0.0:* LISTEN 0 10097705 -
udp 0 0 127.0.0.11:51385 0.0.0.0:* 0 10097704 -
What is going on here? How can we fix this?
EDIT: we are on an overlay network. The publish flag suggested in the answer is not applicable, since we are doing container-to-container communication. We tried it anyway, and it doesn't work.
There is one thing we have noticed: if we run
docker network inspect <our-network-name>
it prints out, among other things, a containers section, but only the containers on the host where docker network inspect is executed are listed there. The containers hosted on other nodes are not listed (also mentioned here).
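A hedged sketch that may help here: on reasonably recent Docker versions, swarm-scoped overlay networks support verbose inspection, which includes swarm-wide service and peer information rather than just the local containers:
docker network inspect -v <our-network-name>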
We verified that if we run
docker node ls
all the nodes are part of the swarm.
It seems other people have also run into this issue (e.g., here), but what is the solution?
Note: we are able to connect to another container running a different service exposed on port 7054. That container was created without even using the expose flag.
root@dcee7e74266f:/home# nc -zv 10.0.0.164 7054
Connection to 10.0.0.164 7054 port [tcp/*] succeeded!
We did further debugging with tcpdump, and its output is identical to what you see when connecting to a port on which no process is listening. But as shown earlier, netstat shows a process that is listening, and we can connect to that process from localhost.
Output of tcpdump:
root@dcee7e74266f:/test# tcpdump -s0 host 10.0.0.195
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
23:44:45.978583 IP dcee7e74266f.52148 > orderer.dscsa_net.7050: Flags [S], seq 3845506108, win 28200, options [mss 1410,sackOK,TS val 4203049443 ecr 0,nop,wscale 7], length 0
23:44:45.979324 IP orderer.dscsa_net.7050 > dcee7e74266f.52148: Flags [R.], seq 0, ack 3845506109, win 0, length 0
The R (RST) flag tells the client to reset the connection.
Output of traceroute:
root@dcee7e74266f:/test# traceroute 10.0.0.195
traceroute to 10.0.0.195 (10.0.0.195), 30 hops max, 60 byte packets
1 orderer.dscsa_net (10.0.0.195) 1.008 ms 0.900 ms 0.872 ms
The expose flag only sets metadata on the image or container; it does not make the port externally accessible. The option you are looking for is publish:
docker container create \
--name orderer \
--network dscsa_net \
--workdir $WORK_DIR \
--publish=7050:7050 \
hyperledger/fabric-orderer:1.3.0 ./start-orderer.sh
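Once the container is created and started with --publish, the mapping can be verified from the host; a small usage sketch:
# shows the published port mapping, roughly: 7050/tcp -> 0.0.0.0:7050
docker port orderer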
Solved this issue thanks to 1: the server listening on 127.0.0.1 was the problem. Once we changed the listening address to 0.0.0.0 (shown as ::: in the netstat output below), we were able to connect to the server:
root@e9766a94d102:/home# netstat -tuplen
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 127.0.0.11:37641 0.0.0.0:* LISTEN 0 12821468 -
tcp6 0 0 :::7050 :::* LISTEN 0 12821696 7/orderer
udp 0 0 127.0.0.11:51855 0.0.0.0:* 0 12821467 -
There is no need for either the expose or publish flags. Note to self: wasted 1.5 days on this.
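For the Fabric orderer specifically, the listen address can be set through its environment-based configuration rather than by editing the start script; a sketch (ORDERER_GENERAL_LISTENADDRESS follows Fabric's config-to-env mapping, so verify the exact variable against your orderer.yaml):
docker container create \
--name orderer \
--network dscsa_net \
--workdir $WORK_DIR \
-e ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 \
hyperledger/fabric-orderer:1.3.0 ./start-orderer.sh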
I am running two VMs on OpenStack Mirantis. For simplicity, let's call them host-1 and host-2. I am unable to communicate either from container to container across hosts, or from container to the public internet. On each host I have installed Docker 1.12.3 and run the following:
tee Dockerfile <<-'EOF'
FROM centos
RUN yum -y install net-tools bind-utils iputils*
EOF
Then:
docker build -t crazy:3 .
On host-1:
dockerd --ipv6 --fixed-cidr-v6="2001:1b76:2400:e2::2/64" &
docker run -i -t --entrypoint /bin/bash crazy:3
ping6 -c3 google.com
ifconfig
On host-2:
dockerd --ipv6 --fixed-cidr-v6="2001:1b76:2400:e2::2/64" &
docker run -i -t --entrypoint /bin/bash crazy:3
ping6 -c3 google.com
ifconfig
Host-1 output:
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.2 netmask 255.255.0.0 broadcast 0.0.0.0
inet6 2001:1b76:2400:e2:0:242:ac11:2 prefixlen 64 scopeid 0x0<global>
inet6 fe80::42:acff:fe11:2 prefixlen 64 scopeid 0x20<link>
ether 02:42:ac:11:00:02 txqueuelen 0 (Ethernet)
RX packets 18 bytes 1663 (1.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 53 bytes 4604 (4.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Host-2 output:
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.2 netmask 255.255.0.0 broadcast 0.0.0.0
inet6 2001:1b76:2400:e2:0:242:ac11:3 prefixlen 64 scopeid 0x0<global>
inet6 fe80::42:acff:fe11:2 prefixlen 64 scopeid 0x20<link>
ether 02:42:ac:11:00:02 txqueuelen 0 (Ethernet)
RX packets 8 bytes 808 (808.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 6 bytes 508 (508.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Then again, on host-1:
ping6 2001:1b76:2400:e2:0:242:ac11:3
And on host-2:
ping6 2001:1b76:2400:e2:0:242:ac11:2
Both give the same output, i.e.:
PING 2001:1b76:2400:e2:0:242:ac11:3(2001:1b76:2400:e2:0:242:ac11:3) 56 data bytes
From 2001:1b76:2400:e2:0:242:ac11:2 icmp_seq=1 Destination unreachable: Address unreachable
From 2001:1b76:2400:e2:0:242:ac11:2 icmp_seq=2 Destination unreachable: Address unreachable
From 2001:1b76:2400:e2:0:242:ac11:2 icmp_seq=3 Destination unreachable: Address unreachable
From 2001:1b76:2400:e2:0:242:ac11:2 icmp_seq=4 Destination unreachable: Address unreachable
Both hosts' IPv6 routes are the same, i.e.:
2001:1b76:2400:e2:f816:3eff:fe69:c2f2 dev eth0 metric 0
cache
2001:1b76:2400:e2::/64 dev eth0 proto kernel metric 256 expires 28133sec
2001:1b76:2400:e2::/64 dev docker0 proto kernel metric 256
2001:1b76:2400:e2::/64 dev docker0 metric 1024
fe80::/64 dev eth0 proto kernel metric 256
fe80::/64 dev docker0 proto kernel metric 256
Both containers' IPv6 routes are the same, i.e.:
2001:1b76:2400:e2::1 dev eth0 metric 0
cache
2001:1b76:2400:e2::/64 dev eth0 proto kernel metric 256
fe80::/64 dev eth0 proto kernel metric 256
default via 2001:1b76:2400:e2::1 dev eth0 metric 1024
Both hosts' IP forwarding settings are the same, i.e.:
net.ipv4.conf.all.forwarding = 1
net.ipv6.conf.all.forwarding = 1
Both containers' IP forwarding settings are the same, i.e.:
net.ipv4.conf.all.forwarding = 1
net.ipv6.conf.all.forwarding = 0
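Since both hosts carve their container addresses out of the same on-link /64 (note the --fixed-cidr-v6 value is identical on both hosts), neighbor solicitations for the other host's containers go unanswered: nothing on the LAN answers NDP for addresses that live behind a docker0 bridge. The Docker IPv6 guide of that era suggested NDP proxying for exactly this layout; a sketch using the container addresses from the ifconfig output above:
# on host-1, publish its container's address to the LAN
sysctl net.ipv6.conf.eth0.proxy_ndp=1
ip -6 neigh add proxy 2001:1b76:2400:e2:0:242:ac11:2 dev eth0
# mirror on host-2 with its container address 2001:1b76:2400:e2:0:242:ac11:3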