https://docs.docker.com/network/network-tutorial-macvlan/#prerequisites
docker network create -d macvlan \
--subnet=172.16.86.0/24 \
--gateway=172.16.86.1 \
-o parent=eth0 \
my-macvlan-net
"Create a macvlan network called my-macvlan-net. Modify the subnet, gateway, and parent values to values that make sense in your environment."
I'm a noob when it comes to networking, and I have no idea what "values that make sense in your environment" means.
This is what I see for my host's network interfaces (ip addr):
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp6s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP qlen 1000
link/ether 00:25:b5:66:11:31 brd ff:ff:ff:ff:ff:ff
3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP qlen 1000
link/ether 00:25:b5:66:11:32 brd ff:ff:ff:ff:ff:ff
4: enp12s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP qlen 1000
link/ether 00:25:b5:66:11:33 brd ff:ff:ff:ff:ff:ff
inet 10.60.114.101/23 brd 10.60.115.255 scope global dynamic enp12s0
valid_lft 442187sec preferred_lft 442187sec
inet6 fd20:8b1e:b255:8136:225:b5ff:fe66:1133/64 scope global noprefixroute dynamic
valid_lft 2591830sec preferred_lft 604630sec
inet6 fe80::225:b5ff:fe66:1133/64 scope link
valid_lft forever preferred_lft forever
5: enp13s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP qlen 1000
link/ether 00:25:b5:66:11:34 brd ff:ff:ff:ff:ff:ff
inet 10.60.115.252/23 brd 10.60.115.255 scope global dynamic enp13s0
valid_lft 414540sec preferred_lft 414540sec
inet6 fd20:8b1e:b255:8136:607f:edd6:613a:41da/64 scope global noprefixroute dynamic
valid_lft 2591830sec preferred_lft 604630sec
inet6 fd20:8b1e:b255:8136:225:b5ff:fe66:1134/64 scope global deprecated mngtmpaddr dynamic
valid_lft 1720109sec preferred_lft 0sec
inet6 fe80::225:b5ff:fe66:1134/64 scope link
valid_lft forever preferred_lft forever
6: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:02:16:fb:be brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:2ff:fe16:fbbe/64 scope link
valid_lft forever preferred_lft forever
11: docker_gwbridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 02:42:bb:c4:b4:18 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/16 brd 172.18.255.255 scope global docker_gwbridge
valid_lft forever preferred_lft forever
inet6 fe80::42:bbff:fec4:b418/64 scope link
valid_lft forever preferred_lft forever
106: veth65ae6f8@if105: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP
link/ether 52:be:7f:de:e2:11 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::50be:7fff:fede:e211/64 scope link
valid_lft forever preferred_lft forever
How do I know which values make sense in my environment?
ip route
default via 10.60.114.1 dev enp12s0 proto static metric 100
default via 10.60.114.1 dev enp13s0 proto static metric 101
10.60.114.0/23 dev enp12s0 proto kernel scope link src 10.60.114.101
10.60.114.0/23 dev enp13s0 proto kernel scope link src 10.60.115.252
10.60.114.0/23 dev enp12s0 proto kernel scope link src 10.60.114.101 metric 100
10.60.114.0/23 dev enp13s0 proto kernel scope link src 10.60.115.252 metric 101
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
172.18.0.0/16 dev docker_gwbridge proto kernel scope link src 172.18.0.1
When you create a macvlan network, you are effectively making a "clone" of an existing network interface. For your containers to communicate on the associated network, they generally need to use the same IP address range and gateway as the other devices on that network.
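To read those values off your own host, look at the candidate parent interface and the routing table. For example (a quick sketch using enp12s0 from the output above):
ip -4 addr show dev enp12s0   # address and prefix length on the parent interface
ip route show default         # the default gateway the host itself uses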
For example, if you were to create a macvlan network associated with enp12s0 on your system, then you would need to use the 10.60.114.0/23 network range and whatever default gateway your system is using (your ip route output shows this is 10.60.114.1).
That is:
docker network create -d macvlan \
--subnet=10.60.114.0/23 \
--gateway=10.60.114.1 \
-o parent=enp12s0 \
my-macvlan-net
This by itself might not work, because Docker would likely assign containers IP addresses that are already in use elsewhere on the network. You can avoid this by giving Docker a dedicated subset of addresses using the --ip-range option:
docker network create -d macvlan \
--subnet=10.60.114.0/23 \
--gateway=10.60.114.1 \
--ip-range=10.60.115.0/28 \
-o parent=enp12s0 \
my-macvlan-net
This would restrict Docker to addresses between 10.60.115.0 and 10.60.115.15. Whether or not this actually makes sense in your environment is something only you would know (possibly by asking your network administrator, if you are not responsible for the network configuration).
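Once the network exists, a quick sanity check is to inspect what Docker recorded and start a throwaway container on it (a minimal sketch; alpine is just a convenient small image):
docker network inspect my-macvlan-net
docker run --rm --network my-macvlan-net alpine ip addr show eth0
Note that with macvlan the host itself cannot reach the containers over the parent interface by default, so test connectivity from another machine on the LAN.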
Related
I'm running a k3s cluster and a Traefik container (in Docker) on the same host. The Traefik container is doing the reverse-proxy TLS work, which is already working on ports 80 and 443 for my different subdomains. I'm trying to get SSH working too (for only one subdomain), but without success so far.
Port 22 is open through ufw allow (on Ubuntu 22.04).
The Traefik rules are set as follows:
tcp:
routers:
giti-ssh:
entrypoints:
- "https"
rule: "HostSNI(`*`)"
tls: {}
service: giti-ssh
services:
giti-ssh:
loadBalancer:
servers:
- address: "10.42.0.232:22"
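For comparison: tls: {} tells Traefik to terminate TLS on this router, but the SSH protocol is not TLS, so a plain ssh client will never match that route. A raw TCP router on a dedicated entrypoint is the more usual shape. A sketch, assuming a file-based static configuration (the entrypoint name and port 2222 are illustrative, chosen to avoid colliding with the host's own sshd on 22):
# static configuration: a dedicated TCP entrypoint
entrypoints:
  ssh:
    address: ":2222"
# dynamic configuration: route all TCP on that entrypoint, without TLS
tcp:
  routers:
    giti-ssh:
      entrypoints:
        - "ssh"
      rule: "HostSNI(`*`)"
      service: giti-ssh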
k3s is running flannel and MetalLB, where the external-IP range is at 10.42.0.0.
ip a shows (the interesting parts):
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:19:ea:c3 brd ff:ff:ff:ff:ff:ff
altname enp11s0
inet "private"/32 metric 100 scope global dynamic ens192
valid_lft 36147sec preferred_lft 36147sec
inet 10.42.0.200/32 scope global ens192
valid_lft forever preferred_lft forever
inet6 "private"/64 scope link
valid_lft forever preferred_lft forever
3: br-5014eb2ffdf2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:7e:ab:72:98 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/16 brd 172.18.255.255 scope global br-5014eb2ffdf2
valid_lft forever preferred_lft forever
inet6 fe80::42:7eff:feab:7298/64 scope link
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:a5:03:77:2c brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
7: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether 42:1b:d3:49:d3:6b brd ff:ff:ff:ff:ff:ff
inet 10.42.0.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::401b:d3ff:fe49:d36b/64 scope link
valid_lft forever preferred_lft forever
8: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
link/ether e2:27:27:96:96:7e brd ff:ff:ff:ff:ff:ff
inet 10.42.0.1/24 brd 10.42.0.255 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::e027:27ff:fe96:967e/64 scope link
valid_lft forever preferred_lft forever
The containers are set up, and the service for the SSH one is listening on port 22 as type: LoadBalancer.
I can connect to that container through another service and IP on port 443 via the Traefik reverse proxy, but I am missing something for port 22. I think it has something to do with Traefik's HostSNI or maybe the iptables rules.
Can someone give me a hint on how to achieve this?
Thanks in advance!
jim
Update
I tried the same setup using Ubuntu as the host. It works! I also noticed that the interface info (ip a) in the host and in the container is the same on Ubuntu, but different when using Docker Desktop for Windows.
So the question becomes: why does Docker Desktop for Windows give the container different network interfaces than the host when using --net=host?
Original question
I start a container with --net=host. I want to connect, from inside the container, to a device that is on the same subnet as my host. The container also has a server running on port 3000.
Host (192.168.64.101/18)
Device (192.168.64.102/18)
Container (--net=host, server on port 3000)
The container can connect to the device at 192.168.64.102.
The container can ping the host at 192.168.64.101.
But I cannot access the container's server on port 3000 from the host. When I try curl localhost:3000, the connection is refused.
I thought --net=host would put the container on the same network as the host. Why can't I connect to the container's server using localhost?
ip a from the container:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
3: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1000
link/sit 0.0.0.0 brd 0.0.0.0
4: services1@if5: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether a2:41:c9:a1:cd:4e brd ff:ff:ff:ff:ff:ff
inet 192.168.65.4 peer 192.168.65.5/32 scope global services1
valid_lft forever preferred_lft forever
inet6 fe80::a041:c9ff:fea1:cd4e/64 scope link
valid_lft forever preferred_lft forever
6: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN qlen 1000
link/ether 02:50:00:00:00:01 brd ff:ff:ff:ff:ff:ff
inet 192.168.65.3/24 brd 192.168.65.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::50:ff:fe00:1/64 scope link
valid_lft forever preferred_lft forever
7: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 02:42:fb:e9:2d:76 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:fbff:fee9:2d76/64 scope link
valid_lft forever preferred_lft forever
11: vethfd2c43f@if10: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue master docker0 state UP
link/ether d6:ee:fe:80:24:04 brd ff:ff:ff:ff:ff:ff
inet6 fe80::d4ee:feff:fe80:2404/64 scope link
valid_lft forever preferred_lft forever
I found a snapshot of the four Docker network model types:
the host model (open container) is attached to the host machine's logical host interface and loopback interface.
In my case, I created a host-model container:
$ docker run --name container-bridge --network=host -it --rm busybox:latest
and inside the container there are 10 virtual interfaces.
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 02:50:00:00:00:01 brd ff:ff:ff:ff:ff:ff
inet 192.168.65.3/24 brd 192.168.65.255 scope global deprecated dynamic noprefixroute eth0
valid_lft 1415sec preferred_lft 0sec
inet6 fe80::50:ff:fe00:1/64 scope link
valid_lft forever preferred_lft forever
3: tunl0@NONE: <NOARP> mtu 1480 qdisc noop qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
4: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop qlen 1000
link/tunnel6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
5: services1@if6: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether c2:db:47:39:c7:fc brd ff:ff:ff:ff:ff:ff
inet 192.168.65.4 peer 192.168.65.5/32 scope global services1
valid_lft forever preferred_lft forever
inet6 fe80::c0db:47ff:fe39:c7fc/64 scope link
valid_lft forever preferred_lft forever
7: br-b7cc12043647: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
link/ether 02:42:30:d7:06:a7 brd ff:ff:ff:ff:ff:ff
inet 192.168.49.1/24 brd 192.168.49.255 scope global br-b7cc12043647
valid_lft forever preferred_lft forever
inet6 fe80::42:30ff:fed7:6a7/64 scope link
valid_lft forever preferred_lft forever
8: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue
link/ether 02:42:9e:26:2d:f9 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:9eff:fe26:2df9/64 scope link
valid_lft forever preferred_lft forever
10: veth2fba778@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue master br-b7cc12043647
link/ether 52:64:9d:7f:d1:01 brd ff:ff:ff:ff:ff:ff
inet6 fe80::5064:9dff:fe7f:d101/64 scope link
valid_lft forever preferred_lft forever
The main interface is eth0, which connects to the host (my macOS) and has the IP address 192.168.65.3/24.
But on my macOS I do not find any IP address in the 192.168.65.0/24 segment:
$ ifconfig -a | grep 192.168.65
See Use host networking; it clearly states that --net=host does not work on macOS:
The host networking driver only works on Linux hosts, and is not supported on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.
So the network in the container is definitely not the same as the one on your macOS.
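On Docker Desktop, the usual workaround is to publish the ports you need instead of relying on --net=host. A sketch (the image name and port are placeholders):
docker run -d -p 3000:3000 my-image
curl http://localhost:3000   # answered via Docker Desktop's port forwarding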
So here is my problem:
I have a server with Debian 10 that runs Docker.
In a Docker container I run Pi-hole.
When I run the Pi-hole container, Docker sets its IP to 172.17.0.2.
Docker itself creates a network interface called docker0, whose IP is 172.17.0.1.
The problem: from outside the server, when I ping the Docker interface 172.17.0.1 it's fine, but when I ping the Docker container 172.17.0.2 it's not reachable.
Here is the ip a command output:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether ac:16:2d:12:30:71 brd ff:ff:ff:ff:ff:ff
inet 10.42.0.247/24 brd 10.42.0.255 scope global dynamic eno1
valid_lft 3152sec preferred_lft 3152sec
inet6 fe80::ae16:2dff:fe12:3071/64 scope link
valid_lft forever preferred_lft forever
3: wlp2s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether d0:37:45:80:81:0f brd ff:ff:ff:ff:ff:ff
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:55:80:15:34 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:55ff:fe80:1534/64 scope link
valid_lft forever preferred_lft forever
25: vethedcefcc@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether e2:02:56:8f:9b:22 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::e002:56ff:fe8f:9b22/64 scope link
valid_lft forever preferred_lft forever
What do I need to do? What do I have to configure?
Thanks,
~James Phoenix
You can't access a container's IP directly from outside the host.
If you want to access a service from outside, you need to forward (publish) its ports.
Example:
docker host IP → 192.168.0.111
container IP → 172.17.0.111
Run an nginx container and publish port 8080 to connect from outside:
docker run --name some-nginx -d -p 8080:80 some-content-nginx
Here 8080 is the external port (accessible from outside),
and 80 is the internal port (accessible from containers on the same network).
Access nginx:
curl http://localhost:8080
# or
curl http://192.168.0.111:8080
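You can confirm what was actually published with docker port, which lists the host-side bindings for a container:
docker port some-nginx
# prints something like: 80/tcp -> 0.0.0.0:8080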
I just don't have enough networking knowledge to understand this.
On my laptop, I'm running both Docker and multiple vagrant VMs.
I want to connect to one of the Vagrant VMs from within a Docker container, but ping keeps hanging or spitting out "Destination Host Unreachable". I can ping the Vagrant VMs just fine from the host (i.e., outside the container).
Could you point me in the right direction to fix this? I basically want to install nginx on the Vagrant VMs but have some load balancers in Docker.
This means the Docker containers need to be able to "see" the Vagrant VMs.
Do I need a route table entry? Do I need a special network adapter? Do I need to create a bridge? I just don't know enough and would appreciate a nudge in the right direction.
Here are details from the container:
root@d755dbb8bbc9:/# ip route
default via 172.18.0.1 dev eth1
10.0.1.0/24 dev eth2 proto kernel scope link src 10.0.1.6
10.255.0.0/16 dev eth0 proto kernel scope link src 10.255.0.4
172.18.0.0/16 dev eth1 proto kernel scope link src 172.18.0.5
root@d755dbb8bbc9:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet 10.255.0.30/32 brd 10.255.0.30 scope global lo
valid_lft forever preferred_lft forever
inet 10.0.1.41/32 brd 10.0.1.41 scope global lo
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
link/ipip 0.0.0.0 brd 0.0.0.0
3: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN group default qlen 1
link/tunnel6 :: brd ::
29: eth0@if30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:0a:ff:00:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.255.0.4/16 brd 10.255.255.255 scope global eth0
valid_lft forever preferred_lft forever
35: eth1@if36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:12:00:05 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet 172.18.0.5/16 brd 172.18.255.255 scope global eth1
valid_lft forever preferred_lft forever
39: eth2@if40: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:0a:00:01:06 brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet 10.0.1.6/24 brd 10.0.1.255 scope global eth2
valid_lft forever preferred_lft forever
And here is some output from one of the Vagrant VMs:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:cf:1a:c3 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
valid_lft 67730sec preferred_lft 67730sec
inet6 fe80::a00:27ff:fecf:1ac3/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:ca:c7:a1 brd ff:ff:ff:ff:ff:ff
inet 172.17.8.101/16 brd 172.17.255.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:feca:c7a1/64 scope link
valid_lft forever preferred_lft forever
core@core-01 ~ $ ip route
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 1024
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 1024
172.17.0.0/16 dev eth1 proto kernel scope link src 172.17.8.101
When I ping 172.17.8.101 (the IP of the Vagrant VM I want to reach) from the Docker container, it just hangs. How can I get access to one of the VMs from one of the Docker containers?
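One way to narrow this down is to ask the container which route it would actually use for the VM's address; if 172.17.8.101 falls through to the default route rather than a directly connected network, the traffic leaves via the Docker gateway and needs a working return path. A sketch using iproute2 inside the container:
ip route get 172.17.8.101   # shows the interface and gateway chosen for this destination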