Docker container-to-container communication with IPv6 only

I am running two VMs on OpenStack Mirantis; for simplicity let's call them host-1 and host-2. I can communicate neither from container to container across hosts nor from container to the public Internet. On each host I have installed Docker 1.12.3 and run the following:
tee Dockerfile <<-'EOF'
FROM centos
RUN yum -y install net-tools bind-utils iputils*
EOF
Then:
docker build -t crazy:3 .
On host-1:
dockerd --ipv6 --fixed-cidr-v6="2001:1b76:2400:e2::2/64" &
docker run -i -t --entrypoint /bin/bash crazy:3
ping6 -c3 google.com
ifconfig
On host-2:
dockerd --ipv6 --fixed-cidr-v6="2001:1b76:2400:e2::2/64" &
docker run -i -t --entrypoint /bin/bash crazy:3
ping6 -c3 google.com
ifconfig
Host-1 output:
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.2 netmask 255.255.0.0 broadcast 0.0.0.0
inet6 2001:1b76:2400:e2:0:242:ac11:2 prefixlen 64 scopeid 0x0<global>
inet6 fe80::42:acff:fe11:2 prefixlen 64 scopeid 0x20<link>
ether 02:42:ac:11:00:02 txqueuelen 0 (Ethernet)
RX packets 18 bytes 1663 (1.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 53 bytes 4604 (4.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Host-2 output:
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.2 netmask 255.255.0.0 broadcast 0.0.0.0
inet6 2001:1b76:2400:e2:0:242:ac11:3 prefixlen 64 scopeid 0x0<global>
inet6 fe80::42:acff:fe11:2 prefixlen 64 scopeid 0x20<link>
ether 02:42:ac:11:00:02 txqueuelen 0 (Ethernet)
RX packets 8 bytes 808 (808.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 6 bytes 508 (508.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Then, from inside each container:
On host-1:
ping6 2001:1b76:2400:e2:0:242:ac11:3
On host-2:
ping6 2001:1b76:2400:e2:0:242:ac11:2
Both give the same kind of output, i.e.:
PING 2001:1b76:2400:e2:0:242:ac11:3(2001:1b76:2400:e2:0:242:ac11:3) 56 data bytes
From 2001:1b76:2400:e2:0:242:ac11:2 icmp_seq=1 Destination unreachable: Address unreachable
From 2001:1b76:2400:e2:0:242:ac11:2 icmp_seq=2 Destination unreachable: Address unreachable
From 2001:1b76:2400:e2:0:242:ac11:2 icmp_seq=3 Destination unreachable: Address unreachable
From 2001:1b76:2400:e2:0:242:ac11:2 icmp_seq=4 Destination unreachable: Address unreachable
The IPv6 routes (ip -6 route) are the same on both hosts, i.e.:
2001:1b76:2400:e2:f816:3eff:fe69:c2f2 dev eth0 metric 0
cache
2001:1b76:2400:e2::/64 dev eth0 proto kernel metric 256 expires 28133sec
2001:1b76:2400:e2::/64 dev docker0 proto kernel metric 256
2001:1b76:2400:e2::/64 dev docker0 metric 1024
fe80::/64 dev eth0 proto kernel metric 256
fe80::/64 dev docker0 proto kernel metric 256
The IPv6 routes inside both containers are the same, i.e.:
2001:1b76:2400:e2::1 dev eth0 metric 0
cache
2001:1b76:2400:e2::/64 dev eth0 proto kernel metric 256
fe80::/64 dev eth0 proto kernel metric 256
default via 2001:1b76:2400:e2::1 dev eth0 metric 1024
IP forwarding is the same on both hosts, i.e.:
net.ipv4.conf.all.forwarding = 1
net.ipv6.conf.all.forwarding = 1
and the same in both containers, i.e.:
net.ipv4.conf.all.forwarding = 1
net.ipv6.conf.all.forwarding = 0
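One thing worth noting about this setup: both daemons hand out addresses from the same 2001:1b76:2400:e2::/64 that the provider routes to each VM's eth0, but the container addresses live behind docker0, so neighbor solicitations for them on eth0 go unanswered. The Docker 1.12-era IPv6 guide suggests either splitting the /64 into a distinct subnet per host (plus static routes between hosts) or proxying NDP. A minimal NDP-proxy sketch for host-1, using the container address from the ifconfig output above (host-2 would proxy its own container address the same way):
sysctl net.ipv6.conf.eth0.proxy_ndp=1
ip -6 neigh add proxy 2001:1b76:2400:e2:0:242:ac11:2 dev eth0
Proxy entries are per-address, so this has to be repeated, or automated with something like ndppd, for every container.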

Related

Docker container cannot connect to host machine

I use docker-compose to run 3 containers and a network with the bridge driver.
The network is created with the following command:
docker network create -d bridge --subnet 192.168.60.0/24 --gateway 192.168.60.1 mynet
The problem is that the containers are not reachable by their addresses from the host machine:
curl: (7) Failed to connect to 192.168.60.3 port 80: Connection refused. I know for certain that the container is running and listening on the port.
From inside the container the host machine is unreachable as well: curl: (7) Failed to connect to 192.168.60.1.
There must be some trouble with the driver, because the network is not listed among the interfaces. I did the same thing on another machine and found all Docker networks with names like vethXXXXXXX. But on this machine ifconfig -a shows:
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether 02:42:98:c3:b9:63 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 41250 bytes 11892280 (11.8 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 41250 bytes 11892280 (11.8 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
wlp4s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.202.43 netmask 255.255.255.0 broadcast 192.168.202.255
inet6 fe80::65e5:6492:9305:2d71 prefixlen 64 scopeid 0x20<link>
ether d4:3b:04:74:5c:48 txqueuelen 1000 (Ethernet)
RX packets 693406 bytes 537178014 (537.1 MB)
RX errors 0 dropped 884 overruns 0 frame 0
TX packets 2803399 bytes 572926991 (572.9 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
What kind of problem could this be? Why is the network not shown in the interface list?
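For what it's worth, a user-defined bridge network normally appears on the host as an interface named br-<first 12 hex digits of the network ID>, not vethXXXXXXX; the veth devices are the per-container ends. A quick check, assuming the network name from the question:
# the network ID tells you which br- interface to expect
docker network inspect mynet --format '{{.Id}}'
ip link show type bridge
If the br- interface for that ID is genuinely missing, recreating the network usually restores it:
docker network rm mynet
docker network create -d bridge --subnet 192.168.60.0/24 --gateway 192.168.60.1 mynet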

LXC on the host's LAN running under a normal user

I have set up an LXC container using this manual. It works great under root, but I can't run it under my normal user.
The container fails to start with the following error:
lxc-start Test 20221009142640.181 ERROR network - network.c:lxc_create_network_unpriv_exec:2629 - lxc-user-nic failed to configure requested network: cmd/lxc_user_nic.c: 1209: main: Quota reached
lxc-start Test 20221009142640.182 ERROR start - start.c:lxc_spawn:1786 - Failed to create the network
lxc-start Test 20221009142640.182 ERROR lxccontainer - lxccontainer.c:wait_on_daemonized_start:859 - Received container state "ABORTING" instead of "RUNNING"
lxc-start Test 20221009142640.182 ERROR lxc_start - tools/lxc_start.c:main:308 - The container failed to start
lxc-start Test 20221009142640.182 ERROR lxc_start - tools/lxc_start.c:main:311 - To get more details, run the container in foreground mode
lxc-start Test 20221009142640.182 ERROR lxc_start - tools/lxc_start.c:main:313 - Additional information can be obtained by setting the --logfile and --logpriority options
lxc-start Test 20221009142640.184 ERROR start - start.c:__lxc_start:1999 - Failed to spawn container "Test"
I suspect the issue is that a normal user can't set up the network configured by lxc.net.0.script.up.
I'm not very familiar with Linux networking, so I'd appreciate it if somebody could help.
cat default.conf
#lxc.apparmor.profile = generated
#lxc.apparmor.allow_nesting = 1
lxc.idmap = u 0 100000 65536
lxc.idmap = g 0 100000 65536
lxc.include = /etc/lxc/default.conf
######################################
lxc.net.0.type = veth
lxc.net.0.veth.pair = veth0
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:16:3e:xx:xx:xx
lxc.net.0.ipv4.address = 192.168.1.13/32
lxc.net.0.ipv4.gateway = 192.168.1.10
lxc.net.0.script.up = /var/lib/lxc/netup.sh 192.168.1.13
lxc.net.0.script.down = /var/lib/lxc/netdown.sh 192.168.1.13
cat lxc-usernet
pi veth veth0 2
ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.10 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fd03:d926:5f2b:0:1b5a:7e3f:e65f:cf49 prefixlen 64 scopeid 0x0<global>
inet6 fd03:d926:5f2b::10 prefixlen 128 scopeid 0x0<global>
inet6 fe80::1b9:aa6:c2f3:b99c prefixlen 64 scopeid 0x20<link>
ether dc:a6:32:d3:22:99 txqueuelen 1000 (Ethernet)
RX packets 121141930 bytes 157518188138 (146.7 GiB)
RX errors 2 dropped 2 overruns 0 frame 0
TX packets 65951525 bytes 48575917258 (45.2 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lxcbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 10.0.3.1 netmask 255.255.255.0 broadcast 10.0.3.255
ether 00:16:3e:00:00:00 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
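For context, the "Quota reached" error comes from lxc-user-nic, which consults /etc/lxc/lxc-usernet; the format there is <user> <type> <bridge> <count>, so the third field is expected to be the bridge the veth gets attached to, not a veth device name. A hedged sketch of what the entry and config might look like instead, assuming the default lxcbr0 bridge visible in the ifconfig output above:
# /etc/lxc/lxc-usernet: <user> <type> <bridge> <count>
pi veth lxcbr0 2
# and in the container config, attach the veth to that bridge
lxc.net.0.link = lxcbr0
Note also that lxc.net.0.script.up generally cannot do privileged host-side network setup when the container is started by an unprivileged user.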

Docker: Two containers on the same network can't communicate

I created two Docker containers and connected them to the same network, but neither container can connect to the other.
I have tried the steps on this page, but none of the methods worked.
Anything else I can try?
docker run -d --name db1 -e POSTGRES_PASSWORD=password postgres:10-alpine
docker run -d --name db2 -e POSTGRES_PASSWORD=password postgres:10-alpine
docker network create myNetwork
docker network connect myNetwork db1
docker network connect myNetwork db2
# make sure that the network has 2 containers
docker inspect myNetwork
docker exec -it db1 ping db2
PING db2 (172.18.0.4): 56 data bytes
^C
--- db2 ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss
docker exec -it db1 route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 172.18.0.1 0.0.0.0 UG 0 0 0 eth0
172.18.0.0 * 255.255.0.0 U 0 0 0 eth0
# ifconfig
br-3f4022544f42: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.18.0.1 netmask 255.255.0.0 broadcast 172.18.255.255
inet6 fe80::42:e5ff:fe9f:33bc prefixlen 64 scopeid 0x20<link>
ether 02:42:e5:9f:33:bc txqueuelen 0 (Ethernet)
RX packets 21 bytes 1164 (1.1 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 44 bytes 5656 (5.6 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:b9ff:fe47:f00c prefixlen 64 scopeid 0x20<link>
ether 02:42:b9:47:f0:0c txqueuelen 0 (Ethernet)
RX packets 1 bytes 28 (28.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 54 bytes 6637 (6.6 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
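Two diagnostics that are often suggested for this symptom, where name resolution works but packets are lost: check whether the host firewall is dropping forwarded bridge traffic, and try attaching the containers to the network at run time rather than with docker network connect. A sketch reusing the names from the question:
# is something rejecting or dropping FORWARD traffic on the host?
sudo iptables -L FORWARD -n --line-numbers
# recreate the containers attached to myNetwork from the start
docker rm -f db1 db2
docker run -d --name db1 --network myNetwork -e POSTGRES_PASSWORD=password postgres:10-alpine
docker run -d --name db2 --network myNetwork -e POSTGRES_PASSWORD=password postgres:10-alpine
docker exec -it db1 ping -c 3 db2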

Connect docker containers to TAP interface

What I'm doing is connecting two Docker containers using OVS-DPDK to test throughput between them (using sockperf or iperf3). For this, I've been advised to use TAP interfaces.
What is expected is that container A sends/receives traffic through TAP0 and container B sends/receives traffic through the TAP1 interface. TAP0 must send traffic to TAP1 over userspace OVS-DPDK and vice versa.
But unfortunately, I can't get the traffic to go to the TAP interfaces.
Here is what I'm doing (based on this answer):
On the host OS:
sudo ./utilities/ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
sudo ./utilities/ovs-vsctl add-port br0 myeth0 -- set Interface myeth0 type=dpdk options:dpdk-devargs=net_tap0,iface=tap0
sudo ./utilities/ovs-vsctl add-port br0 myeth1 -- set Interface myeth1 type=dpdk options:dpdk-devargs=net_tap1,iface=tap1
sudo ./utilities/ovs-ofctl add-flow br0 in_port=1,action=output:2
sudo ./utilities/ovs-ofctl add-flow br0 in_port=2,action=output:1
This creates two TAP interfaces (shown in ifconfig) and two OVS-DPDK ports (myeth0 and myeth1).
Then I assign IPs to the TAP interfaces:
sudo ip addr add 173.17.0.1/24 dev tap0
sudo ip addr add 173.17.1.1/24 dev tap1
sudo ip link set tap0 up
sudo ip link set tap1 up
And then run the docker containers:
docker run -it --rm --name=server -p 5201:5201 --entrypoint /bin/bash "networkstatic/iperf3"
docker run -it --rm --name=client --entrypoint /bin/bash "networkstatic/iperf3"
The traffic goes through the Docker-created veth interfaces and nothing goes through the TAP interfaces (as far as I can tell from ifconfig).
What is the correct way to connect two containers using OVS-DPDK and TAP interfaces on Linux?
EDIT:
Output of ifconfig:
tap0: flags=4931<UP,BROADCAST,RUNNING,PROMISC,ALLMULTI,MULTICAST> mtu 1500
inet6 fe80::3847:cbff:fe27:3c2e prefixlen 64 scopeid 0x20<link>
ether 3a:47:cb:27:3c:2e txqueuelen 1000 (Ethernet)
RX packets 16 bytes 2447 (2.4 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 29 bytes 3545 (3.5 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
tap1: flags=4931<UP,BROADCAST,RUNNING,PROMISC,ALLMULTI,MULTICAST> mtu 1500
inet6 fe80::2835:bcff:fe4c:4f0e prefixlen 64 scopeid 0x20<link>
ether 2a:35:bc:4c:4f:0e txqueuelen 1000 (Ethernet)
RX packets 12 bytes 1203 (1.2 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 16 bytes 2447 (2.4 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth8f1f04e: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::50bf:f2ff:fed9:e03b prefixlen 64 scopeid 0x20<link>
ether 52:bf:f2:d9:e0:3b txqueuelen 0 (Ethernet)
RX packets 2047606 bytes 135148094 (135.1 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2717619 bytes 119774365333 (119.7 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vethb6e1780: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::382b:e0ff:fe8f:afa0 prefixlen 64 scopeid 0x20<link>
ether 3a:2b:e0:8f:af:a0 txqueuelen 0 (Ethernet)
RX packets 2717563 bytes 119774357789 (119.7 GB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2047637 bytes 135151896 (135.1 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
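One approach sometimes suggested for this kind of setup is to bypass Docker's bridge entirely: start the containers with --network none and move the kernel TAP devices created by the net_tap PMD into the containers' network namespaces. This is only a sketch under that assumption (whether net_tap keeps passing traffic after the move can depend on the DPDK version), reusing the names from the question:
# start the containers without Docker-managed networking
docker run -itd --rm --network none --name server --entrypoint /bin/bash networkstatic/iperf3
docker run -itd --rm --network none --name client --entrypoint /bin/bash networkstatic/iperf3
# move each TAP device into the matching container's namespace
pid_server=$(docker inspect -f '{{.State.Pid}}' server)
pid_client=$(docker inspect -f '{{.State.Pid}}' client)
sudo ip link set tap0 netns "$pid_server"
sudo ip link set tap1 netns "$pid_client"
# addresses are lost on the move; re-apply them inside the namespaces,
# on a single subnet so the containers can reach each other directly
sudo nsenter -t "$pid_server" -n ip addr add 173.17.0.1/24 dev tap0
sudo nsenter -t "$pid_server" -n ip link set tap0 up
sudo nsenter -t "$pid_client" -n ip addr add 173.17.0.2/24 dev tap1
sudo nsenter -t "$pid_client" -n ip link set tap1 up
After that, iperf3 between 173.17.0.1 and 173.17.0.2 should traverse tap0 -> OVS-DPDK -> tap1 instead of the veth pair.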

Can't reach service of host from container

On the host, there is a service:
#server# netstat -ln | grep 3308
tcp6 0 0 :::3308 :::* LISTEN
It can be reached from remote machines.
The container is in a user-defined bridge network.
The server IP address is 192.168.1.30
#localhost ~]# ifconfig
br-a54fd3b63acd: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.18.0.1 netmask 255.255.0.0 broadcast 172.18.255.255
inet6 fe80::42:1eff:fecc:92e8 prefixlen 64 scopeid 0x20<link>
ether 02:42:1e:cc:92:e8 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:37ff:fe9f:e4f1 prefixlen 64 scopeid 0x20<link>
ether 02:42:37:9f:e4:f1 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 34 bytes 4018 (3.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.30 netmask 255.255.255.0 broadcast 192.168.1.255
And ping from the container also works:
#33208c18aa61:~# ping -c 2 192.168.1.30
PING 192.168.1.30 (192.168.1.30) 56(84) bytes of data.
64 bytes from 192.168.1.30: icmp_seq=1 ttl=64 time=0.120 ms
64 bytes from 192.168.1.30: icmp_seq=2 ttl=64 time=0.105 ms
And the service is available:
#server# telnet 192.168.1.30 3308
Trying 192.168.1.30...
Connected to 192.168.1.30.
Escape character is '^]'.
N
But the service can't be reached from the container.
#33208c18aa61:~# telnet 192.168.1.30 3308
Trying 192.168.1.30...
telnet: Unable to connect to remote host: No route to host
I checked "Make docker use IPv4 for port binding" to make sure I didn't have IPv6 set to bind only on IPv6:
# sysctl net.ipv6.bindv6only
net.ipv6.bindv6only = 0
I also checked "From inside of a Docker container, how do I connect to the localhost of the machine?" and found my route is a little different:
# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default router.asus.com 0.0.0.0 UG 100 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-a54fd3b63acd
192.168.1.0 0.0.0.0 255.255.255.0 U 100 0 0 eth0
Does that matter? Or could there be another reason?
Your Docker container is in a different network namespace and connected to a different interface than your host machine; that's why you can't reach it using the IP 192.168.x.x.
What you need to do is use the Docker network gateway instead, in your case 172.17.0.1. Be aware that this IP might not be the same from host to host, so to reproduce this everywhere and be completely sure which IP to use, you can create a user-defined network specifying the subnet and gateway and run your container there, for example:
docker network create -d bridge --subnet 172.16.0.0/24 --gateway 172.16.0.1 dockernet
docker run --net=dockernet ubuntu
Also, whatever service you are trying to connect to must be listening on Docker's bridge interface as well.
Another option is to run the container in the same network namespace as the host with the --net=host flag; in that case you can access services outside the container using localhost.
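As a quick illustration of that last point, a sketch using the port from this question (bash's built-in /dev/tcp stands in for telnet, which the stock ubuntu image lacks):
# with --net=host, 127.0.0.1 inside the container is the host's loopback
docker run --rm --net=host ubuntu bash -c 'head -c 32 < /dev/tcp/127.0.0.1/3308'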
Inspired by the official documentation:
The Docker bridge driver automatically installs rules in the host
machine so that containers on different bridge networks cannot
communicate directly with each other.
I checked the iptables on the server; as an experiment I stopped iptables temporarily, and then the container could reach the service successfully. Later I was told the server had been rebooted recently, so I am guessing some configuration was lost in that reboot. I'm not very familiar with iptables, and when I try
systemctl status iptables.service
It says the service is not installed. After I install and run the service,
iptables -L -n
is almost empty. Now I have no clue what kind of iptables rules could cause that mess.
But if anyone faces the situation where ping succeeds but telnet fails, iptables could be where the root cause lies.
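For anyone hitting the same pattern, the usual culprit on RHEL/CentOS-style hosts is a REJECT rule with icmp-host-prohibited in the INPUT chain, which lets ICMP echo through while rejecting new TCP connections (and produces exactly the "No route to host" error above). A hypothetical example of checking for it and punching a hole for the container subnet (172.18.0.0/16 is the user-defined bridge from this question):
# look for REJECT rules that break TCP but not ping
iptables -L INPUT -n --line-numbers | grep -i reject
# allow the container subnet to reach host services
iptables -I INPUT -s 172.18.0.0/16 -j ACCEPT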
