ICMP replies seen by tcpdump but ping 100% fails [closed] - virtual

I have a virtual interface (oip1) configured which has a valid IP config. When I try to ping an address on the internet from oip1, I can see the ICMP echo requests/replies on tcpdump, but ping still reports 100% failing.
root@openair-PC:~/openair4G# ifconfig oip1
oip1 Link encap:AMPR NET/ROM HWaddr
inet addr:192.188.0.2 Bcast:192.188.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING NOARP MULTICAST MTU:1500 Metric:1
RX packets:10 errors:0 dropped:0 overruns:0 frame:0
TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:100
RX bytes:840 (840.0 B) TX bytes:840 (840.0 B)
I've created a new table "lte":
root@openair-PC:~/openair4G# ip rule show
0: from all lookup local
32743: from all fwmark 0x1 lookup lte
32766: from all lookup main
32767: from all lookup default
So I'm marking packets to and from the oip1 address (the commands that create these rules are sketched after the listing below):
root@openair-PC:~/openair4G# iptables -t mangle -L
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
MARK all -- anywhere 192.188.0.0/24 MARK set 0x1
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
MARK all -- 192.188.0.0/24 anywhere MARK set 0x1
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
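These rules were created with roughly the following commands (a sketch reconstructed from the listing above; the mark value 0x1 matches the fwmark used in the ip rule):
iptables -t mangle -A PREROUTING -d 192.188.0.0/24 -j MARK --set-mark 1
iptables -t mangle -A OUTPUT -s 192.188.0.0/24 -j MARK --set-mark 1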
Then I added a default gateway for table "lte" (the setup commands are sketched below the output):
root@openair-PC:~/openair4G# ip route show table lte
default dev oip1 scope link
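For reference, the table and route were set up with commands along these lines (a sketch; the table number 200 is arbitrary and assumes the usual /etc/iproute2/rt_tables mapping):
echo "200 lte" >> /etc/iproute2/rt_tables
ip rule add fwmark 0x1 table lte
ip route add default dev oip1 table lte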
ping result:
openair@openair-PC:~/openair4G/cmake_targets$ ping -c 5 -I oip1 8.8.8.8
PING 8.8.8.8 (8.8.8.8) from 192.188.0.2 oip1: 56(84) bytes of data.
^C
--- 8.8.8.8 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 3999ms
tcpdump output:
openair@openair-PC:~/openair4G$ sudo tcpdump -p -i oip1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on oip1, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
17:19:10.576680 IP 192.188.0.2 > google-public-dns-a.google.com: ICMP echo request, id 20987, seq 1, length 64
17:19:11.021851 IP google-public-dns-a.google.com > 192.188.0.2: ICMP echo reply, id 20987, seq 1, length 64
17:19:11.576617 IP 192.188.0.2 > google-public-dns-a.google.com: ICMP echo request, id 20987, seq 2, length 64
17:19:11.752243 IP google-public-dns-a.google.com > 192.188.0.2: ICMP echo reply, id 20987, seq 2, length 64
17:19:12.576583 IP 192.188.0.2 > google-public-dns-a.google.com: ICMP echo request, id 20987, seq 3, length 64
17:19:12.676980 IP google-public-dns-a.google.com > 192.188.0.2: ICMP echo reply, id 20987, seq 3, length 64
17:19:13.576559 IP 192.188.0.2 > google-public-dns-a.google.com: ICMP echo request, id 20987, seq 4, length 64
17:19:13.767922 IP google-public-dns-a.google.com > 192.188.0.2: ICMP echo reply, id 20987, seq 4, length 64
17:19:14.576561 IP 192.188.0.2 > google-public-dns-a.google.com: ICMP echo request, id 20987, seq 5, length 64
17:19:15.097164 IP google-public-dns-a.google.com > 192.188.0.2: ICMP echo reply, id 20987, seq 5, length 64
^C
10 packets captured
10 packets received by filter
0 packets dropped by kernel
As you can see, all packets can be seen by tcpdump.
Does anyone have an idea about this?
Best regards,

I finally solved this. It was caused by rp_filter being enabled on the interface in question as well as system-wide. I disabled it with sysctl -w net.ipv4.conf.all.rp_filter=0 (and the same command again with all replaced by the interface name).
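For completeness, a minimal sketch of the two commands (assuming the interface name oip1 from the question); to persist across reboots, add the same keys to /etc/sysctl.conf or a file under /etc/sysctl.d/:
sysctl -w net.ipv4.conf.all.rp_filter=0
sysctl -w net.ipv4.conf.oip1.rp_filter=0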

This could have been caused by the reverse-path filtering done by the kernel. It is likely to occur if you are doing things like proxy ARP on the next hop. With reverse-path filtering enabled, the system considers the received packet spoofed and drops it.
The obvious (but not recommended) way out is to disable reverse-path filtering. Another way is to add a route on the host for the pinged IP through the intended interface.
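A minimal sketch of the second option, assuming the pinged address 8.8.8.8 and the interface oip1 from the question:
ip route add 8.8.8.8/32 dev oip1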

Related

Container can't connect to the internet

I'm trying to update packages from an Ubuntu container; however, updating fails and I've noticed I can't connect anywhere, although I am able to look up DNS names.
I'm using the nicolaka/netshoot container to test the network.
I've used tcpdump to trace any errors related to the traffic being sent, and pinging from the container results in the error "ICMP time exceeded in-transit".
tcpdump on the host interface:
16:18:25.257270 IP 172.217.192.100 > nicolas: ICMP echo reply, id 33, seq 3, length 64
16:18:25.257314 IP nicolas > 172.217.192.100: ICMP time exceeded in-transit, length 92
16:18:26.237575 IP nicolas > 172.217.192.100: ICMP echo request, id 33, seq 4, length 64
16:18:26.286692 IP 172.217.192.100 > nicolas: ICMP echo reply, id 33, seq 4, length 64
16:18:26.286757 IP nicolas > 172.217.192.100: ICMP time exceeded in-transit, length 92
16:18:27.261770 IP nicolas > 172.217.192.100: ICMP echo request, id 33, seq 5, length 64
16:18:27.302193 IP 172.217.192.100 > nicolas: ICMP echo reply, id 33, seq 5, length 64
16:18:27.302241 IP nicolas > 172.217.192.100: ICMP time exceeded in-transit, length 92
16:18:28.285631 IP nicolas > 172.217.192.100: ICMP echo request, id 33, seq 6, length 64
16:18:28.329531 IP 172.217.192.100 > nicolas: ICMP echo reply, id 33, seq 6, length 64
16:18:28.329596 IP nicolas > 172.217.192.100: ICMP time exceeded in-transit, length 92
16:18:29.309767 IP nicolas > 172.217.192.100: ICMP echo request, id 33, seq 7, length 64
16:18:29.353202 IP 172.217.192.100 > nicolas: ICMP echo reply, id 33, seq 7, length 64
16:18:29.353272 IP nicolas > 172.217.192.100: ICMP time exceeded in-transit, length 92
I'm also not sure whether my iptables rules are as expected for Docker containers to get an internet connection.
iptables -L:
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (1 references)
target prot opt source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target prot opt source destination
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
What is the result of ping 8.8.8.8 in the container?
And what command do you use to run the container?
I'll close this question, as I have found that the problem is with my ISP: by using double NAT (ISP NAT + Docker NAT) it is blocking connections from my containers.

IPv6 binding error: Cannot assign requested address

As I understand it, IPv6 addresses are allocated in blocks. Each machine gets a range of IPv6 addresses, and any IPv6 address in that range would point to it.
Basis for this assumption:
https://stackoverflow.com/a/15266701/681671
The /64 is the prefix length. It is the number of bits in the address that is fixed. So a /64 indicates that the first 64 bits of the 128-bit IPv6 address are fixed. The remaining bits (64 in this case) are flexible, and you can use all of them. This means that when your ISP gives you a /64 they are giving you 2^64 addresses (that is 18,446,744,073,709,551,616 addresses).
Edit: I confirmed using Wireshark that the packets sent to any IP in that /64 range do get routed to my server.
Looking at this line from ifconfig output
inet6 2a01:2e8:d2c:e24c::1 prefixlen 64 scopeid 0x0<global>
I conclude that all IPv6 addresses with 2a01:2e8:d2c:e24c prefix will point to my machine.
However I am unable to bind any service to any IPv6 address other than
2a01:2e8:d2c:e24c:0000:0000:0000:0001
nc -l 2a01:2e8:d2c:e24c:0000:0000:0000:0002 80 Does not work
nc -l 2a01:2e8:d2c:e24c:0000:0000:0001:0001 80 Does not work
nc -l 2a01:2e8:d2c:e24c:1000:0000:0000:0001 80 Does not work
nc -l 2a01:2e8:d2c:e24c:0000:0000:0000:0001 80 Only this works
nc -l <IP> <PORT> opens up a simple TCP server on the specified IP and port.
The error I get is nc: Cannot assign requested address
I want to run multiple instances of a service on the same port but different IPv6 addresses. Since public IPv6 addresses are abundantly available to each machine, I thought of utilizing them.
ifconfig:
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 88.77.66.55 netmask 255.255.255.255 broadcast 88.77.66.55
inet6 fe80::9300:ff:fe33:64c1 prefixlen 64 scopeid 0x20<link>
inet6 2a01:2e8:d2c:e24c::1 prefixlen 64 scopeid 0x0<global>
ether 96:00:00:4e:31:e4 txqueuelen 1000 (Ethernet)
RX packets 26788391 bytes 21199864639 (21.1 GB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 21940989 bytes 20045216536 (20.0 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
OS: Ubuntu 18.04
VPS Host: Hetzner
I am actually trying to run multiple nginx docker containers mapped to port 80 on different IPv6 addresses of the host. That is when I encountered the problem. The nc -l test is just to simplify the problem description.
I conclude that all IPv6 addresses with 2a01:2e8:d2c:e24c prefix will point to my machine
That assumption is wrong. The prefix length has the same meaning as the IPv4 netmask. It determines which addresses are on your local network, not which addresses belong to your local host.
This is all you need:
ip route add local 2a01:2e8:d2c:e24c::/64 dev lo
Credit: Can I bind a (large) block of addresses to an interface?
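A quick check (a sketch, assuming the local route above has been added): binding any address in the /64 should now succeed, for example one of the addresses from the question:
nc -l 2a01:2e8:d2c:e24c::2 80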
To reiterate and expand upon Sander's answer:
You must bind each individual IP address to the NIC (network interface card) before the kernel will consider the traffic for delivery up the stack.
Wireshark puts the NIC in promiscuous mode, i.e. it sees all traffic received on the wire.
There is a practical limit to how many IP addresses can be assigned on a system, MUCH less than the 2^64 implied by the original post. Storing the addresses alone would take more memory than any system has.
Unlike IPv4 with its 127.0.0.0/8 range of 2^24 loopback addresses, IPv6 defines only a single loopback address, ::1/128.
The only practical solution would be to treat the entire IPv6 subnet as a "reverse" NAT using IP masquerading. This would require a second instance acting as the NAT "router", with rules that rewrite the destination addresses to a single address/port.
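An illustrative sketch of such a rewrite rule on the NAT "router" instance (assumptions: a kernel with IPv6 NAT support, the /64 from the question, and a single backend address; port rewriting omitted):
ip6tables -t nat -A PREROUTING -d 2a01:2e8:d2c:e24c::/64 -j DNAT --to-destination 2a01:2e8:d2c:e24c::1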

access remote network from within docker container

I have a Docker host that is connected to a network that is not on its default route. My problem is that I can't reach this network from within the Docker containers on the Docker host.
Primary IP: 189.69.77.21 (default route)
Secondary IP: 192.168.77.21
The routing table looks like this:
[root@mgmt]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 189.69.77.1 0.0.0.0 UG 0 0 0 enp0s31f6
189.69.77.1 0.0.0.0 255.255.255.255 UH 0 0 0 enp0s31f6
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.77.0 0.0.0.0 255.255.255.0 U 0 0 0 enp0s31f6.4000
and iptables is untouched:
[root@mgmt]# iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-ISOLATION-STAGE-1 all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
DOCKER all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (1 references)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 172.17.0.2 tcp dpt:3000
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target prot opt source destination
DROP all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
I start the Docker Container with the following:
docker run -d --restart=unless-stopped -p 127.0.0.1:3000:3000/tcp --name mongoclient -e MONGOCLIENT_DEFAULT_CONNECTION_URL=mongodb://192.168.77.21:27017,192.168.77.40:27017,192.168.77.41:27017/graylog?replicaSet=ars0 -e ROOT_URL=http://192.168.77.21/nosqlclient mongoclient/mongoclient
I can reach the container (via an NGINX proxy) over the network, but the container itself can only ping/reach the Docker host's IP and not the other hosts.
node@1c5cf0e8d14c:/opt/meteor/dist/bundle$ ping 192.168.77.21
PING 192.168.77.21 (192.168.77.21) 56(84) bytes of data.
64 bytes from 192.168.77.21: icmp_seq=1 ttl=64 time=0.078 ms
64 bytes from 192.168.77.21: icmp_seq=2 ttl=64 time=0.080 ms
64 bytes from 192.168.77.21: icmp_seq=3 ttl=64 time=0.079 ms
^C
--- 192.168.77.21 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.078/0.079/0.080/0.000 ms
node@1c5cf0e8d14c:/opt/meteor/dist/bundle$ ping 192.168.77.40
PING 192.168.77.40 (192.168.77.40) 56(84) bytes of data.
^C
--- 192.168.77.40 ping statistics ---
240 packets transmitted, 0 received, 100% packet loss, time 239000ms
So my question is: how can I make the Docker container reach the hosts on that network? My goal is to have mongoclient running via Docker so it can be used to manage the MongoDB replica set in that additional private network.
You can use host networking for the container. The container then uses the host's network stack, so it can reach both the container network and the host network.
Here is the documentation:
https://docs.docker.com/network/host/
BR
Carlos
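A sketch of the run command from the question adapted to host networking (note: with --network host the -p port mapping is ignored, so it is dropped here):
docker run -d --restart=unless-stopped --network host --name mongoclient -e MONGOCLIENT_DEFAULT_CONNECTION_URL=mongodb://192.168.77.21:27017,192.168.77.40:27017,192.168.77.41:27017/graylog?replicaSet=ars0 -e ROOT_URL=http://192.168.77.21/nosqlclient mongoclient/mongoclient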

Port not accessible even after being exposed. Connection refused

we created a docker container like this:
docker container create \
--name orderer \
--network dscsa_net \
--workdir $WORK_DIR \
--expose=7050 \
hyperledger/fabric-orderer:1.3.0 ./start-orderer.sh
but are unable to connect to port 7050 on the container.
root@dcee7e74266f:/home# nc -vz 10.0.0.194 7050
nc: connect to 10.0.0.194 port 7050 (tcp) failed: Connection refused
we are able to ping the container:
root@dcee7e74266f:/home# ping 10.0.0.194
PING 10.0.0.194 (10.0.0.194) 56(84) bytes of data.
64 bytes from 10.0.0.194: icmp_seq=1 ttl=64 time=0.810 ms
64 bytes from 10.0.0.194: icmp_seq=2 ttl=64 time=1.30 ms
64 bytes from 10.0.0.194: icmp_seq=3 ttl=64 time=0.668 ms
64 bytes from 10.0.0.194: icmp_seq=4 ttl=64 time=1.10 ms
64 bytes from 10.0.0.194: icmp_seq=5 ttl=64 time=0.631 ms
^C
--- 10.0.0.194 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 0.631/0.902/1.301/0.261 ms
and also see a process listening on port 7050 on the container:
root@9756199efefa:/home# netstat -tuplen
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 127.0.0.1:7050 0.0.0.0:* LISTEN 0 10097930 7/orderer
tcp 0 0 127.0.0.11:34865 0.0.0.0:* LISTEN 0 10097705 -
udp 0 0 127.0.0.11:51385 0.0.0.0:* 0 10097704 -
What is going on here? How can we fix this?
EDIT: we are on an overlay network. The publish flag suggested in the answer is not applicable, since we are doing container-to-container communication. We tried it anyway and it doesn't work.
One thing we have noticed is that if we run:
docker network inspect <our-network-name>
Among other things, it prints out a containers section but in that section only the containers on the host from which docker network inspect is executed are listed. The containers hosted on other nodes are not listed (also mentioned here).
we verified that if we run:
docker node ls
all the nodes are part of the swarm.
It seems other people have also run into this issue (e.g., here), but what is the solution?
Note: we are able to connect to another container running a different service exposed on port 7054. This container was created without even using the expose flag.
root@dcee7e74266f:/home# nc -zv 10.0.0.164 7054
Connection to 10.0.0.164 7054 port [tcp/*] succeeded!
We did further debugging with tcpdump; its output is identical to what you see when connecting to a port on which no process is listening. But, as shown earlier, netstat shows a listening process and we can connect to it from localhost.
Output of tcpdump:
root@dcee7e74266f:/test# tcpdump -s0 host 10.0.0.195
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
23:44:45.978583 IP dcee7e74266f.52148 > orderer.dscsa_net.7050: Flags [S], seq 3845506108, win 28200, options [mss 1410,sackOK,TS val 4203049443 ecr 0,nop,wscale 7], length 0
23:44:45.979324 IP orderer.dscsa_net.7050 > dcee7e74266f.52148: Flags [R.], seq 0, ack 3845506109, win 0, length 0
The R flag tells the client to reset the connection.
Output of traceroute:
root@dcee7e74266f:/test# traceroute 10.0.0.195
traceroute to 10.0.0.195 (10.0.0.195), 30 hops max, 60 byte packets
1 orderer.dscsa_net (10.0.0.195) 1.008 ms 0.900 ms 0.872 ms
Expose only sets metadata on the image or container; it does not make the port externally accessible. The option you are looking for is publish:
docker container create \
--name orderer \
--network dscsa_net \
--workdir $WORK_DIR \
--publish=7050:7050 \
hyperledger/fabric-orderer:1.3.0 ./start-orderer.sh
Solved this issue thanks to 1. The server listening on 127.0.0.1 was the problem. Once we changed the listening address to 0.0.0.0 (shown as ::: in the netstat output below), we are able to connect to the server:
root@e9766a94d102:/home# netstat -tuplen
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 127.0.0.11:37641 0.0.0.0:* LISTEN 0 12821468 -
tcp6 0 0 :::7050 :::* LISTEN 0 12821696 7/orderer
udp 0 0 127.0.0.11:51855 0.0.0.0:* 0 12821467 -
There is no need for either the expose or publish flags. Note to self: wasted 1.5 days on this.
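For reference, a sketch of how the listen address can be changed for the Fabric orderer (this assumes the environment-variable override supported by the fabric-orderer image; the rest of the create command is unchanged from the question):
docker container create \
--name orderer \
--network dscsa_net \
--workdir $WORK_DIR \
-e ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 \
hyperledger/fabric-orderer:1.3.0 ./start-orderer.sh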

Unable to ping docker host but the world is reachable

I am able to ping the rest of the world from the container, but not the host on which the Docker container is running. I am sure someone has encountered this issue before.
Details below:
Ubuntu 14.04.2 LTS (GNU/Linux 3.16.0-30-generic x86_64
Container is using "bridge" network
Docker version 18.06.1-ce, build e68fc7a
IP address for eth0: 135.25.87.162
IP address for eth1: 192.168.122.55
IP address for eth2: 135.21.171.209
IP address for docker0: 172.17.42.1
route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 135.21.248.1 0.0.0.0 UG 0 0 0 eth1
135.21.171.192 * 255.255.255.192 U 0 0 0 eth2
135.21.248.0 * 255.255.255.0 U 0 0 0 eth1
135.25.87.128 * 255.255.255.192 U 0 0 0 eth0
172.17.0.0 * 255.255.0.0 U 0 0 0 docker0
192.168.122.0 * 255.255.255.192 U 0 0 0 eth1
# ping commands from the container
# ping google.com
PING google.com (64.233.177.113): 56 data bytes
64 bytes from 64.233.177.113: icmp_seq=0 ttl=29 time=51.827 ms
64 bytes from 64.233.177.113: icmp_seq=1 ttl=29 time=50.184 ms
64 bytes from 64.233.177.113: icmp_seq=2 ttl=29 time=50.991 ms
^C--- google.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 50.184/51.001/51.827/0.671 ms
# ping 135.25.87.162
PING 135.25.87.162 (135.25.87.162): 56 data bytes
^C--- 135.25.87.162 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
root@9ed17e4c2ee3:/opt/app/tomcat#
