I know I can't ping a container's macvlan interface from the host it runs on, but I also can't ping my container's macvlan interface from hosts on a different subnet (even though they're connected via a router).
Host IP: 10.8.2.132/22
Macvlan container IP: 10.8.2.250/22
Other host IP: 10.4.16.141/22
Ping FROM 10.8.2.132 TO 10.4.16.141 is successful
Ping FROM 10.8.2.250 TO 10.4.16.141 is successful
Ping FROM 10.4.16.141 TO 10.8.2.132 is successful
Ping FROM 10.4.16.141 TO 10.8.2.250 fails with 100% packet loss
ip route get 10.8.2.250 shows that there is a known route:
10.8.2.250 via 10.4.16.1 dev eth0 src 10.4.16.141
cache mtu 1500 hoplimit 64
How can I go about debugging this?
The docker macvlan network was created with:
docker network create -d macvlan --subnet=10.8.0.0/22 --gateway=10.8.0.1 -o parent=em1 macnet
and when I run the container I specifically add "--ip=10.8.2.250"
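One way to start narrowing this down (a sketch: the container name my_container is a placeholder, and the image must provide tcpdump and iproute2, e.g. installed via apk on Alpine):

# Do the echo requests from 10.4.16.141 ever reach the container's macvlan interface?
docker exec -it my_container tcpdump -ni eth0 icmp or arp
# Which route and source address would the container use for its reply?
docker exec -it my_container ip route get 10.4.16.141

If the requests arrive but no replies go out, the problem is inside the container (routing or gateway); if they never arrive, look at the router and at ARP on the 10.8.0.0/22 segment.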
I have a macvlan network created with the following command:
docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.2 -o parent=wlp2s0 pub_ne
where wlp2s0 is the name of my laptop's wireless interface.
The LAN gateway is 192.168.1.1 and the subnet is 192.168.1.0/24.
Then I have created and attached a container to this network:
docker run --rm -itd --network pub_ne --name myAlpine alpine:latest sh
In addition, I have created a virtual machine (VirtualBox provider) with a bridged network interface.
If I use the ping command:
- docker container -> ubuntu vm (vm IP: 192.168.1.200): ping works
But if I use the ping command:
- docker container -> gateway 192.168.1.1, or
- docker container -> external world (google.com): ping does not work
Any suggestions?
Edit 1:
On the docker host, if I run tcpdump (tcpdump -i wlp2s0 icmp) I see:
14:53:30.015822 IP 192.168.1.56 > 216.58.205.142: ICMP echo request, id 5376, seq 29, length 64
14:53:31.016143 IP 192.168.1.56 > 216.58.205.142: ICMP echo request, id 5376, seq 30, length 64
14:53:32.016426 IP 192.168.1.56 > 216.58.205.142: ICMP echo request, id 5376, seq 31, length 64
14:53:33.016722 IP 192.168.1.56 > 216.58.205.142: ICMP echo request, id 5376, seq 32, length 64
where 192.168.1.56 is my docker container and 216.58.205.142 should be Google's IP address. No echo reply is received.
Macvlan is unlikely to work with IEEE 802.11.
Your wifi access point, and/or your host network stack, are not going to be thrilled.
You might want to try ipvlan instead: use -d ipvlan with -o ipvlan_mode=l2 in your network creation call and see if that helps.
That might very well still not work... (for example, if you rely on DHCP and your DHCP server uses MAC addresses and not client IDs).
And your only (reasonable) solution might be to drop the wifi entirely and wire the device up instead... (or move away from macvlan and use host or bridge networking, whichever is most convenient).
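A rough sketch of that suggestion (the network name pub_ne_ipvlan is made up here, and 192.168.1.1 is the LAN gateway mentioned in the question):

docker network create -d ipvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o ipvlan_mode=l2 \
  -o parent=wlp2s0 pub_ne_ipvlan
docker run --rm -itd --network pub_ne_ipvlan --name myAlpine alpine:latest sh

In L2 mode, ipvlan containers share the parent interface's MAC address, which is why it tends to behave better on wifi: many access points drop frames whose source MAC is not the one that associated with them.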
I have a bare metal server on Hetzner with IP 5.6.7.8 and 8 additional IPs reserved for me.
IPs: 1.2.3.144 to 1.2.3.151
subnet: 1.2.3.144/29
netmask: 255.255.255.248
broadcast: 1.2.3.151
gateway: 5.6.7.8
Now I want to create a Docker network of type macvlan:
docker network create -d macvlan --subnet=1.2.3.144/29 --gateway=5.6.7.8 -o parent=enp0s31f6 macvlan1
But this command causes an error
no matching subnet for gateway 5.6.7.8
Note that when I set, for example, IP 1.2.3.150 and gateway 5.6.7.8 on a virtual machine on the host, it works correctly! But I can't set this non-matching gateway in the docker network create command.
I have set up a macvlan network between 2 docker hosts as follows:
Host Setup: HOST_1 ens192: 172.18.0.21
Create the macvlan network (bridge mode):
docker network create -d macvlan \
--subnet=172.18.0.0/22 \
--gateway=172.18.0.1 \
--ip-range=172.18.1.0/28 \
-o macvlan_mode=bridge \
-o parent=ens192 macvlan
Create a macvlan interface on HOST_1:
ip link add ens192.br link ens192 type macvlan mode bridge
ip addr add 172.18.1.0/28 dev ens192.br
ip link set dev ens192.br up
Host Setup: HOST_2 ens192: 172.18.0.23
Create the macvlan network (bridge mode):
docker network create -d macvlan \
--subnet=172.18.0.0/22 \
--gateway=172.18.0.1 \
--ip-range=172.18.1.16/28 \
-o macvlan_mode=bridge \
-o parent=ens192 macvlan
Create a macvlan interface on HOST_2:
ip link add ens192.br link ens192 type macvlan mode bridge
ip addr add 172.18.1.16/28 dev ens192.br
ip link set dev ens192.br up
Container Setup
Create containers on both hosts:
HOST_1# docker run --net=macvlan -it --name macvlan_1 --rm alpine /bin/sh
HOST_2# docker run --net=macvlan -it --name macvlan_1 --rm alpine /bin/sh
CONTAINER_1 in HOST_1
24: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 02:42:ac:12:01:00 brd ff:ff:ff:ff:ff:ff
inet 172.18.1.0/22 brd 172.18.3.255 scope global eth0
valid_lft forever preferred_lft forever
CONTAINER_2 in HOST_2
21: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 02:42:ac:12:01:10 brd ff:ff:ff:ff:ff:ff
inet 172.18.1.16/22 brd 172.18.3.255 scope global eth0
valid_lft forever preferred_lft forever
Route table in CONTAINER_1 and CONTAINER_2
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.18.0.1 0.0.0.0 UG 0 0 0 eth0
172.18.0.0 0.0.0.0 255.255.252.0 U 0 0 0 eth0
Scenario
HOST_1 (172.18.0.21) <-> HOST_2 (172.18.0.23) = OK (Vice-versa)
HOST_1 (172.18.0.21) -> CONTAINER_1 (172.18.1.0) and CONTAINER_2 (172.18.1.16) = OK
HOST_2 (172.18.0.23) -> CONTAINER_1 (172.18.1.0) and CONTAINER_2 (172.18.1.16) = OK
CONTAINER_1 (172.18.1.0) -> HOST_2 (172.18.0.23) = OK
CONTAINER_2 (172.18.1.16) -> HOST_1 (172.18.0.21) = OK
CONTAINER_1 (172.18.1.0) <-> CONTAINER_2 (172.18.1.16) = OK (Vice-versa)
CONTAINER_1 (172.18.1.0) -> HOST_1 (172.18.0.21) = FAIL
CONTAINER_2 (172.18.1.16) -> HOST_2 (172.18.0.23) = FAIL
Question
I am very close to the setup I want to achieve, except for this one problem. How can I make a container connect to its own host? If there is a solution, I would also like to know how to configure it from an ESXi virtualization perspective, and on bare metal if there is any difference.
The question is "a bit old"; however, others might find it useful. There is a workaround described in the Host access section of "Using Docker macvlan networks" by Lars Kellogg-Stedman. I can confirm that it works.
Host access

With a container attached to a macvlan network, you will
find that while it can contact other systems on your local network
without a problem, the container will not be able to connect to your
host (and your host will not be able to connect to your container).
This is a limitation of macvlan interfaces: without special support
from a network switch, your host is unable to send packets to its own
macvlan interfaces.
Fortunately, there is a workaround for this problem: you can create
another macvlan interface on your host, and use that to communicate
with containers on the macvlan network.
First, I’m going to reserve an address from our network range for use
by the host interface by using the --aux-address option to docker
network create. That makes our final command line look like:
docker network create -d macvlan -o parent=eno1 \
--subnet 192.168.1.0/24 \
--gateway 192.168.1.1 \
--ip-range 192.168.1.192/27 \
--aux-address 'host=192.168.1.223' \
mynet
This will prevent Docker from assigning that address to a container.
Next, we create a new macvlan interface on the host. You can call it
whatever you want, but I’m calling this one mynet-shim:
ip link add mynet-shim link eno1 type macvlan mode bridge
Now we need to configure the interface with the address we reserved
and bring it up:
ip addr add 192.168.1.223/32 dev mynet-shim
ip link set mynet-shim up
The last thing we need to do is to tell our host to use that interface
when communicating with the containers. This is relatively easy
because we have restricted our containers to a particular CIDR subset
of the local network; we just add a route to that range like this:
ip route add 192.168.1.192/27 dev mynet-shim
With that route in place, your host will automatically use the
mynet-shim interface when communicating with containers on the mynet
network.
Note that the interface and routing configuration presented here is
not persistent – you will lose it if you reboot your host. How
to make it persistent is distribution dependent.
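For example, on a systemd-based distribution, one way to persist it (an illustrative sketch, not from the quoted article; the unit name mynet-shim.service and the /sbin/ip path are assumptions) is a oneshot unit that recreates the shim interface and route at boot:

cat >/etc/systemd/system/mynet-shim.service <<'EOF'
[Unit]
Description=macvlan shim interface for host <-> container traffic
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
# Same commands as above; adjust interface names and addresses to your network
ExecStart=/sbin/ip link add mynet-shim link eno1 type macvlan mode bridge
ExecStart=/sbin/ip addr add 192.168.1.223/32 dev mynet-shim
ExecStart=/sbin/ip link set mynet-shim up
ExecStart=/sbin/ip route add 192.168.1.192/27 dev mynet-shim
ExecStop=/sbin/ip link del mynet-shim

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload && systemctl enable --now mynet-shim.service

On distributions using systemd-networkd or NetworkManager, a .netdev/.network pair or a dispatcher script can achieve the same thing.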
This is the defined behavior for macvlan and is by design. See the Docker macvlan documentation:
When using macvlan, you cannot ping or communicate with the default namespace IP address. For example, if you create a container and try to ping the Docker host’s eth0, it will not work. That traffic is explicitly filtered by the kernel modules themselves to offer additional provider isolation and security.
A macvlan subinterface can be added to the Docker host to allow traffic between the Docker host and its containers. The IP address needs to be set on this subinterface and removed from the parent interface.
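A minimal sketch of that, using HOST_1's address from the question above (the subinterface name ens192.host is made up, and the host's default route may need to be re-added via the new interface):

ip link add ens192.host link ens192 type macvlan mode bridge
ip addr del 172.18.0.21/22 dev ens192          # remove the address from the parent...
ip addr add 172.18.0.21/22 dev ens192.host     # ...and put it on the macvlan subinterface
ip link set ens192.host up
ip route add default via 172.18.0.1 dev ens192.host   # re-add the default route if it was lost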
In my situation, I added one more network to the container, so the CONTAINER_1 <-> HOST_1 path works over a different IP (10.123.0.2), while CONTAINER_2 or HOST_2 can still reach the container at 172.18.1.0.
The following is a docker-compose sample; I hope this can serve as a workaround.
version: "3"
services:
  macvlan_1:
    image: alpine
    container_name: macvlan_1
    command: ....
    restart: always
    networks:
      macvlan:
        ipv4_address: 172.18.1.0
      internalbr:
        ipv4_address: 10.123.0.2
networks:
  macvlan:
    driver: macvlan
    driver_opts:
      parent: ens192
      macvlan_mode: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.18.0.0/22
          gateway: 172.18.0.1
          ip_range: 172.18.1.0/28
  internalbr:
    driver: bridge
    ipam:
      config:
        - subnet: 10.123.0.0/24
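With this in place, host and container talk over the bridge network instead of the macvlan one; assuming Docker assigned the default gateway 10.123.0.1 to the internalbr bridge (an assumption, since the compose file does not pin it):

# On HOST_1: reach the container via its bridge address instead of the macvlan one
ping 10.123.0.2
# From inside the container (here via docker exec): reach HOST_1 at the bridge gateway
docker exec -it macvlan_1 ping 10.123.0.1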
I have set up a docker swarm with 3 nodes (docker 18.03). These nodes use an overlay network to communicate.
node1 (laptop):
  host: tun0 172.16.0.6 --> openvpn --> nat gateway
  container n1: ip = 192.169.1.10
node2 (aws ec2):
  host: eth2 10.0.30.62
  container n2: ip = 192.169.1.9
node3 (aws ec2):
  host: eth2 10.0.140.122
  container n3: ip = 192.169.1.12
nat-gateway (aws ec2):
  tun0 172.16.0.1 --> openvpn --> laptop
  eth0 10.0.30.198
The scheme is partly working:
1. Containers can ping each other by name (n1, n2, n3)
2. Docker swarm commands are working, and services can be deployed
The overlay, however, is only partly working: some nodes cannot communicate with each other over either tcp or udp. I tried all combinations of the 3 nodes with udp and tcp.
I did a tcpdump on the nat gateway to monitor overlay vxlan network activity (port 4789):
tcpdump -l -n -i eth0 "port 4789"
tcpdump -l -n -i tun0 "port 4789"
Then I tried tcp/ip communication from node1 to node3. On node3:
nc -l -s 0.0.0.0 -p 8999
On node1:
telnet 192.169.1.12 8999
Node1 will then try to connect to node3. I see packets coming in on the nat-gateway over the tun0 interface:
and on the nat-gateway eth0 interface:
It seems that the nat-gateway is not sending the replies back over the tun0 interface.
The iptables configuration of the nat-gateway:
The routing table of the nat-gateway:
Can you help me solve this issue?
I have been able to fix the issue by adjusting the iptables rules and the routing on the NAT gateway.
No masquerading of 172.16.0.0/22 is needed. All the workers and managers route their traffic for 172.16.0.0/22 via the NAT gateway, and it knows how to send the packets over tun0.
Masquerading on eth0 was just wrong...
All the containers can now ping and establish tcp/ip connections to each other.
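The exact rules are not reproduced here, but a hypothetical sketch of the routing being described (addresses from the diagram above; nodes in other subnets would need an equivalent VPC route-table entry pointing at the NAT instance) could look like:

# On the NAT gateway: IP forwarding must be on; no MASQUERADE rule for 172.16.0.0/22
sysctl -w net.ipv4.ip_forward=1
# On a worker/manager in the same subnet as the gateway (e.g. node2):
ip route add 172.16.0.0/22 via 10.0.30.198 dev eth2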
I ran the docker daemon to use global IPv6 addresses for containers:
docker daemon --ipv6 --fixed-cidr-v6="xxxx:xxxx:xxxx:xxxx::/64"
After that I ran a docker container:
docker run -d --name my-container some-image
It successfully got a global IPv6 address (I checked with docker inspect my-container), but I can't ping my container at this IP:
Destination unreachable: Address unreachable
But I can successfully ping the docker0 bridge by its IPv6 address.
The output of route -n -6 contains these lines:
Destination Next Hop Flag Met Ref Use If
xxxx:xxxx:xxxx:xxxx::/64 :: U 256 0 0 docker0
xxxx:xxxx:xxxx:xxxx::/64 :: U 1024 0 0 docker0
fe80::/64 :: U 256 0 0 docker0
docker0 interface has global IPv6 address:
inet6 addr: xxxx:xxxx:xxxx:xxxx::1/64 Scope:Global
xxxx:xxxx:xxxx:xxxx:: is the same everywhere, and it is the global IPv6 prefix of my eth0 interface.
Does docker require any additional configuration for accessing my containers via IPv6?
Assuming IPv6 in your guest OS is properly configured, you are probably pinging the container not from the host OS but from outside, and the Neighbor Discovery Protocol is not configured. Other hosts do not know that your container is behind your host. I do the following after starting a container with IPv6 (on the host OS, in ExecStartPost clauses of a systemd .service file):
/usr/sbin/sysctl net.ipv6.conf.interface_name.proxy_ndp=1
/usr/bin/ip -6 neigh add proxy $(docker inspect --format '{{.NetworkSettings.GlobalIPv6Address}}' container_name) dev interface_name
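To verify that the proxy entry was added (a quick check, not part of the original recipe):

ip -6 neigh show proxy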
Beware of IPv6: docker developers have said in replies to bug reports that they do not have enough time to make IPv6 production-ready in version 1.10, and they say nothing about 1.11.
Maybe you are using the wrong ping command. For IPv6 it is ping6:
$ ping6 2607:f0d0:1002:51::4