Which IPv4 address is used when running the ping *hostname* command - network-programming

If I have a hostname that has several IPv4 addresses assigned, which IPv4 address will the ping request use [for example, when running "ping Some-Pc"]?

Run the command 'route' in Linux and you will see the routing table. Based on the destination address and the routing table, you should be able to determine the interface used to send the ICMP messages, and thus the source IP address.
For example, given this routing table in Linux:
[mynode]$ route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default gateway 0.0.0.0 UG 100 0 0 enp0s3
10.0.2.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s3
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.56.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s8
192.168.124.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0
If you send a ping to address 10.0.2.45, it will use enp0s3 and that interface's IP address as the source address.
If you send a ping to an address in 172.17.0.0/16, it will go out through the docker0 NIC and use the corresponding source IP address.
With ifconfig in Linux (ipconfig in Windows) you can see the IP address assigned to each interface.
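If you want the kernel to tell you directly which interface and source address it would pick for a given destination, ip route get is a handy check. A quick sketch based on the table above (the source address 10.0.2.15 shown here is just an assumed example):
[mynode]$ ip route get 10.0.2.45
10.0.2.45 dev enp0s3 src 10.0.2.15 uid 1000
    cache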

Related

Docker is overriding my default route configuration

A noob here, starting with Docker on an Orange Pi 3 (a Raspberry Pi clone).
I'm trying to configure and start a Docker container (bitwarden_rs), but when I do, I lose connection to the external network. Docker messes with my route table.
Network configuration: I have a bridge br0 that bridges eth0 and wlan0.
(Eth0 connects to the router, wlan0 is configured in AP mode)
Table when container is stopped:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default _gateway 0.0.0.0 UG 425 0 0 br0 <---OK
link-local 0.0.0.0 255.255.0.0 U 1000 0 0 br0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.1.0 0.0.0.0 255.255.255.0 U 425 0 0 br0
192.168.2.0 0.0.0.0 255.255.255.0 U 425 0 0 br0
Table when the container is running (no internet access to the outside):
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 0.0.0.0 0.0.0.0 U 205 0 0 docker0 <---NOT OK
default _gateway 0.0.0.0 UG 425 0 0 br0
link-local 0.0.0.0 255.255.0.0 U 205 0 0 docker0
link-local 0.0.0.0 255.255.0.0 U 230 0 0 vethed140ce
link-local 0.0.0.0 255.255.0.0 U 1000 0 0 br0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.1.0 0.0.0.0 255.255.255.0 U 425 0 0 br0
192.168.2.0 0.0.0.0 255.255.255.0 U 425 0 0 br0
What can I do to fix it? Is it a Docker config problem, or maybe a problem with my system (Armbian)?
Thanks
On Ubuntu 20.04, I tried many methods,
like preventing dhcpd from updating the route,
or changing the NetworkManager configuration so that network-manager ignores veth* devices.
Neither of the above worked.
I spent a lot of time and found that the connman service changes the default route. Change its config file /etc/connman/main.conf by uncommenting the following line:
#NetworkInterfaceBlacklist = vmnet,vboxnet,virbr,ifb,veth-,vb-
and run
systemctl restart connman
to restart the connman service. The issue was eventually resolved.
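For reference, the relevant part of /etc/connman/main.conf would then look like this (assuming the stock default list shown above; the veth- entry is what makes connman leave Docker's veth* devices alone):
[General]
NetworkInterfaceBlacklist = vmnet,vboxnet,virbr,ifb,veth-,vb-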
This is because, as you can see, Docker creates a Linux bridge named 'docker0'.
You can change the default settings for the Docker bridge to resolve the issue.
Configure the default bridge network by providing the bip option, along with the desired subnet, in daemon.json:
# vi /etc/docker/daemon.json
{
"bip": "172.200.0.1/16"
}
and restart the service.
systemctl restart docker
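To confirm the new subnet took effect, you can check the bridge address afterwards; a quick sanity check (output abbreviated):
# ip addr show docker0 | grep 'inet '
    inet 172.200.0.1/16 brd 172.200.255.255 scope global docker0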

IPv6 binding error: Cannot assign requested address

As I understand it, IPv6 addresses are allocated in blocks. Each machine gets a range of IPv6 addresses, and any IPv6 address in that range would point to it.
Basis for this assumption:
https://stackoverflow.com/a/15266701/681671
The /64 is the prefix length. It is the number of bits in the address
that is fixed. So a /64 indicates that the first 64 bits of the
128-bit IPv6 address are fixed. The remaining bits (64 in this case)
are flexible, and you can use all of them. This means that when your
ISP gives you a /64 they are giving you 2^64 addresses (that is
18,446,744,073,709,551,616 addresses).
Edit: I confirmed using Wireshark that the packets sent to any IP in that /64 range do get routed to my server.
Looking at this line from ifconfig output
inet6 2a01:2e8:d2c:e24c::1 prefixlen 64 scopeid 0x0<global>
I conclude that all IPv6 addresses with 2a01:2e8:d2c:e24c prefix will point to my machine.
However I am unable to bind any service to any IPv6 address other than
2a01:2e8:d2c:e24c:0000:0000:0000:0001
nc -l 2a01:2e8:d2c:e24c:0000:0000:0000:0002 80 Does not work
nc -l 2a01:2e8:d2c:e24c:0000:0000:0001:0001 80 Does not work
nc -l 2a01:2e8:d2c:e24c:1000:0000:0000:0001 80 Does not work
nc -l 2a01:2e8:d2c:e24c:0000:0000:0000:0001 80 Only this works
nc -l <IP> <PORT> opens up a simple TCP server on the specified IP and port.
The error I get is nc: Cannot assign requested address
I want to run multiple instances of a service on the same port but on different IPv6 addresses. Since public IPv6 addresses are abundantly available to each machine, I thought of utilizing that.
ifconfig:
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 88.77.66.55 netmask 255.255.255.255 broadcast 88.77.66.55
inet6 fe80::9300:ff:fe33:64c1 prefixlen 64 scopeid 0x20<link>
inet6 2a01:2e8:d2c:e24c::1 prefixlen 64 scopeid 0x0<global>
ether 96:00:00:4e:31:e4 txqueuelen 1000 (Ethernet)
RX packets 26788391 bytes 21199864639 (21.1 GB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 21940989 bytes 20045216536 (20.0 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
OS: Ubuntu 18.04
VPS Host: Hetzner
I am actually trying to run multiple nginx docker containers mapped to port 80 on different IPv6 addresses of the host. That is when I encountered the problem. The nc -l test is just to simplify the problem description.
I conclude that all IPv6 addresses with 2a01:2e8:d2c:e24c prefix will point to my machine
That assumption is wrong. The prefix length has the same meaning as the IPv4 netmask. It determines which addresses are on your local network, not which addresses belong to your local host.
This is all you need:
ip route add local 2a01:2e8:d2c:e24c::/64 dev lo
Credit: Can I bind a (large) block of addresses to an interface?
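With that local route in place, binding to arbitrary addresses in the /64 should start working; for example, reusing one of the addresses from the question:
nc -l 2a01:2e8:d2c:e24c:0000:0000:0000:0002 80   # should now bind instead of failing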
To reiterate and expand upon Sander's answer:
You must bind each individual IP address to the NIC (network interface card) before it will accept the traffic and send it up the stack.
Wireshark puts the NIC in promiscuous mode, i.e. it captures all traffic seen on the wire.
There is a practical limit to how many IP addresses can be assigned on a system, MUCH less than the 2^64 implied by the OP's post. Storing the addresses alone would take more than any system's memory.
Unlike IPv4 with its 127.0.0.0/8 block of 2^24 loopback addresses, IPv6 only defines a single loopback address, ::1/128.
The only practical solution would be to treat the entire IPv6 subnet as a "reverse" NAT using IP masquerading (NAT). This solution would require a second instance acting as the NAT "router". The rules would rewrite the destination addresses to a single address/port.
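As a rough sketch of that idea (not from the answer above, and untested): on a box with ip6tables NAT support (kernel 3.7+), a single DNAT rule could fold the whole /64 onto one address and port, assuming the service listens on 2a01:2e8:d2c:e24c::1 port 80:
ip6tables -t nat -A PREROUTING -d 2a01:2e8:d2c:e24c::/64 -p tcp --dport 80 \
    -j DNAT --to-destination [2a01:2e8:d2c:e24c::1]:80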

Can't reach service of host from container

On the host, there is a service
#server# netstat -ln | grep 3308
tcp6 0 0 :::3308 :::* LISTEN
It can be reached remotely.
The container is in a user-defined bridge network.
The server IP address is 192.168.1.30
#localhost ~]# ifconfig
br-a54fd3b63acd: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.18.0.1 netmask 255.255.0.0 broadcast 172.18.255.255
inet6 fe80::42:1eff:fecc:92e8 prefixlen 64 scopeid 0x20<link>
ether 02:42:1e:cc:92:e8 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:37ff:fe9f:e4f1 prefixlen 64 scopeid 0x20<link>
ether 02:42:37:9f:e4:f1 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 34 bytes 4018 (3.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.30 netmask 255.255.255.0 broadcast 192.168.1.255
And ping from container also works.
#33208c18aa61:~# ping -c 2 192.168.1.30
PING 192.168.1.30 (192.168.1.30) 56(84) bytes of data.
64 bytes from 192.168.1.30: icmp_seq=1 ttl=64 time=0.120 ms
64 bytes from 192.168.1.30: icmp_seq=2 ttl=64 time=0.105 ms
And the service is available.
#server# telnet 192.168.1.30 3308
Trying 192.168.1.30...
Connected to 192.168.1.30.
Escape character is '^]'.
N
But the service can't be reached from the container.
#33208c18aa61:~# telnet 192.168.1.30 3308
Trying 192.168.1.30...
telnet: Unable to connect to remote host: No route to host
I checked
Make docker use IPv4 for port binding
and made sure I didn't have IPv6 set to bind only on IPv6:
# sysctl net.ipv6.bindv6only
net.ipv6.bindv6only = 0
I also checked
From inside of a Docker container, how do I connect to the localhost of the machine?
and found my route is a little different.
# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default router.asus.com 0.0.0.0 UG 100 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-a54fd3b63acd
192.168.1.0 0.0.0.0 255.255.255.0 U 100 0 0 eth0
Does that matter? Or could there be another reason?
Your Docker container is in a different network namespace and connected to a different interface than your host machine; that's why you can't reach it using the IP 192.168.x.x.
What you need to do is use the Docker network gateway instead, in your case 172.17.0.1. Be aware that this IP might not be the same from host to host, so to reproduce this everywhere and be completely sure which IP it is, you can create a user-defined network specifying the subnet and gateway and run your container there, for example:
docker network create -d bridge --subnet 172.16.0.0/24 --gateway 172.16.0.1 dockernet
docker run --net=dockernet ubuntu
Also, whatever service you are trying to connect to here must be listening on the Docker bridge interface as well.
Another option is to run the container in the same network namespace as the host with the --net=host flag; in this case you can access services outside the container using localhost.
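A minimal sketch of that second option (the image name is just a placeholder):
docker run --net=host -it ubuntu
# inside this container the host's service is reachable directly:
telnet 127.0.0.1 3308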
Inspired by the official documentation:
The Docker bridge driver automatically installs rules in the host
machine so that containers on different bridge networks cannot
communicate directly with each other.
I checked the iptables rules on the server; as an experiment, I stopped iptables temporarily. Then the container could reach that service successfully. Later I was told the server had been rebooted recently, so I'm guessing some config was lost after that reboot. I'm not very familiar with iptables, and when I try
systemctl status iptables.service
it says the service is not installed. After installing and starting the service, the output of
iptables -L -n
is almost empty. I still have no clue what kind of iptables rules could cause that mess.
But if anyone faces the "ping succeeds, telnet fails" situation, iptables could be where the root cause lies.
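If the culprit does turn out to be a missing rule, one common fix (an assumption about what got lost, not a confirmed diagnosis) is to explicitly accept traffic coming in from the user-defined bridge shown in the question:
iptables -I INPUT -i br-a54fd3b63acd -p tcp --dport 3308 -j ACCEPT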

Docker container IP 172.17.XXX unable to reach from windows host machine 192.168.X.X

I have successfully run a Docker container with the default bridge network on Ubuntu (VirtualBox). I am able to communicate (ping) from my container, but not from my host OS (Windows). The VirtualBox network adapter is in bridged mode.
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 enp0s3
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 enp0s3
You need to try pinging from the host (Windows) too; I think the result will be a request timeout (RTO). The solution is to change your container's IP range to the same network class as the Windows host.
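One way to do that is a macvlan network attached to the VM's bridged adapter, so the container gets an address in the host's 192.168.0.0/24 range; a sketch assuming enp0s3 from the routing table above (the container address and image are just examples):
docker network create -d macvlan --subnet=192.168.0.0/24 --gateway=192.168.0.1 \
    -o parent=enp0s3 pubnet
docker run -d --net=pubnet --ip=192.168.0.100 nginx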

How can I shut down an ethernet interface but not the attached virtual interface?

I have an embedded Linux machine with an Ethernet interface and a working network configuration. A second virtual network also runs on this interface.
auto lo eth0 eth0:1
# loopback interface
iface lo inet loopback
# ethernet
iface eth0 inet static
address 192.168.1.1
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.2
# ethernet
iface eth0:1 inet static
address 123.123.123.1
netmask 255.255.255.0
network 123.123.123.0
broadcast 123.123.123.255
gateway 123.123.123.2
Now I need to bring down the eth0 device but still be able to reach the eth0:1 device.
How can this be done?
I tried to simply flush the IP address of the eth0 device, which works with the following command: ip addr flush eth0. This works, but it seems the services (webserver etc.) are still listening on this interface...
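Since eth0:1 is only an address alias that lives on eth0, bringing eth0 down or flushing it affects the alias as well. One possibility (an untested suggestion, not from the thread) is to delete just the primary address and keep the alias address; a sketch using the addresses from the config above:
ip addr del 192.168.1.1/24 dev eth0
ip addr show eth0    # the 123.123.123.1/24 address (label eth0:1) should remain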
