how to start a docker container with multiple interfaces? - docker

I want to start a Docker container with three interfaces, all of which will be attached to a bridge on the host.
The only solution I have found is providing my own network plugin. The interface below is invoked by the Docker daemon once the container is created, to configure its networking:
func (d *Driver) Join(r *dknet.JoinRequest) (*dknet.JoinResponse, error)
But there is only one Endpoint object in the JoinRequest struct, so this call can only configure one container interface.
How do I create and configure three container interfaces?

You need to do it multiple times:
$ docker network create net1
bdc53c143e89d562761eedfd232620daf585968bc9ae022ba142d17601af6146
$ docker network create net2
d9a72a7a6ee6b61da3c6bb17e312e48888807a5a8c159fd42b6c99d219977559
$ docker network create net3
d2be9628f4fd60587d44967a5813e9ba7c730d24e953368b18d7872731a9478c
$ docker run -it --network net3 ubuntu:16.04 bash
root@cd70c7cbe367:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
90: eth0@if91: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:18:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.24.0.2/16 scope global eth0
valid_lft forever preferred_lft forever
Now your container is running with net3 network only. You can attach net1 and net2 as well.
$ docker network connect net1 cd70c7cbe367
$ docker network connect net2 cd70c7cbe367
After that, check inside the container:
root@cd70c7cbe367:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
90: eth0@if91: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:18:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.24.0.2/16 scope global eth0
valid_lft forever preferred_lft forever
92: eth1@if93: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:16:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.22.0.2/16 scope global eth1
valid_lft forever preferred_lft forever
94: eth2@if95: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:17:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.23.0.2/16 scope global eth2
valid_lft forever preferred_lft forever
PS: the ip command is missing by default in the container, so I installed the iproute2 package to check.
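A variation, as a sketch (the container and network names here are only illustrative): if all three interfaces need to exist before your process starts, create the container without starting it, connect the remaining networks, then start it. docker network connect also works on a container that has been created but not yet started.
$ docker create -it --name multi --network net1 ubuntu:16.04 bash
$ docker network connect net2 multi
$ docker network connect net3 multi
$ docker start -ai multi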

Windows docker desktop gives different network interface between host and container when using --net=host

Update
I tried the same setup using Ubuntu as the host, and it works. I also noticed that the interface info (ip a) is the same on the host and in the container with Docker on Ubuntu, but different when using Docker Desktop for Windows.
So the question becomes: why does Windows Docker Desktop give different network interfaces between host and container when using --net=host?
Original question
I start a container with --net=host. I want to connect, from inside the container, to a device that is on the same subnet as my host. Also, the container has a server running on port 3000.
Host (192.168.64.101/18)
Device (192.168.64.102/18)
Container (--net=host, server on port 3000)
Container can connect to device with 192.168.64.102.
Container can ping the host with 192.168.64.101
But I cannot access the container's server on port 3000 from the host. When I try curl localhost:3000, the connection is refused.
I thought --net=host would put the container on the same network as the host. Why can't I connect to the container's server using localhost?
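For concreteness, the setup is roughly the following (the image name is a placeholder for the actual server image, which listens on port 3000):
$ docker run --rm --net=host my-server-image
$ curl localhost:3000    # works with Docker on a Linux host, connection refused with Docker Desktop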
ip a from container
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
3: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1000
link/sit 0.0.0.0 brd 0.0.0.0
4: services1@if5: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether a2:41:c9:a1:cd:4e brd ff:ff:ff:ff:ff:ff
inet 192.168.65.4 peer 192.168.65.5/32 scope global services1
valid_lft forever preferred_lft forever
inet6 fe80::a041:c9ff:fea1:cd4e/64 scope link
valid_lft forever preferred_lft forever
6: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN qlen 1000
link/ether 02:50:00:00:00:01 brd ff:ff:ff:ff:ff:ff
inet 192.168.65.3/24 brd 192.168.65.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::50:ff:fe00:1/64 scope link
valid_lft forever preferred_lft forever
7: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 02:42:fb:e9:2d:76 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:fbff:fee9:2d76/64 scope link
valid_lft forever preferred_lft forever
11: vethfd2c43f@if10: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue master docker0 state UP
link/ether d6:ee:fe:80:24:04 brd ff:ff:ff:ff:ff:ff
inet6 fe80::d4ee:feff:fe80:2404/64 scope link
valid_lft forever preferred_lft forever

I can't access a docker container directly from its IP

So here is my problem:
I have a server with Debian 10 that runs Docker.
In a Docker container I run Pi-hole.
When I run the Pi-hole container, Docker sets its IP to 172.17.0.2.
Docker itself creates a network interface called docker0, whose IP is 172.17.0.1.
The problem: from outside the server, pinging the Docker interface 172.17.0.1 works fine, but pinging the Docker container 172.17.0.2 gets no reply (not reachable).
Here is the ip a command output
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether ac:16:2d:12:30:71 brd ff:ff:ff:ff:ff:ff
inet 10.42.0.247/24 brd 10.42.0.255 scope global dynamic eno1
valid_lft 3152sec preferred_lft 3152sec
inet6 fe80::ae16:2dff:fe12:3071/64 scope link
valid_lft forever preferred_lft forever
3: wlp2s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether d0:37:45:80:81:0f brd ff:ff:ff:ff:ff:ff
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:55:80:15:34 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:55ff:fe80:1534/64 scope link
valid_lft forever preferred_lft forever
25: vethedcefcc@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether e2:02:56:8f:9b:22 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::e002:56ff:fe8f:9b22/64 scope link
valid_lft forever preferred_lft forever
What do I need to do? What do I have to configure?
Thanks:
~James Phoenix
You can't access the container's IP directly from outside the Docker host.
If you want to access a service from outside, you need to forward (publish) the service's ports.
Example:
docker host IP → 192.168.0.111
container IP → 172.17.0.111
Run an nginx container and publish port 8080 to connect from outside:
docker run --name some-nginx -d -p 8080:80 some-content-nginx
Here 8080 is the external port (accessible from outside),
and 80 is the internal port (accessible to other containers on the same network).
Access nginx:
curl http://localhost:8080
# or
curl http://192.168.0.111:8080
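Applied to the Pi-hole case in the question, a sketch could look like this (using the official pihole/pihole image; check its documentation for the full list of ports and environment variables it needs):
docker run -d --name pihole \
  -p 53:53/tcp -p 53:53/udp \
  -p 8080:80 \
  pihole/pihole
# DNS is then reachable at <host-ip>:53 and the web UI at http://<host-ip>:8080,
# instead of trying to reach 172.17.0.2 directly.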

how to pass a dockerized program the IP addresses of the overlay networks attached to it

I have three overlay networks connected to a container in a swarm, and I need to give the addresses of the different networks to the program running inside when the container comes up. Each network is for a different purpose, and I can't seem to identify them from within the container.
I've tried the usual ip a and hostname -i, but they only display the adapter information and nothing that identifies which overlay is on which adapter.
lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
eth3@if118: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:0a:00:0e:03 brd ff:ff:ff:ff:ff:ff link-netnsid 3
inet 10.0.14.3/24 brd 10.0.14.255 scope global eth3
valid_lft forever preferred_lft forever
eth4@if120: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 4
inet 172.18.0.3/16 brd 172.18.255.255 scope global eth4
valid_lft forever preferred_lft forever
eth0@if122: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:0a:ff:00:0d brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.255.0.13/16 brd 10.255.255.255 scope global eth0
valid_lft forever preferred_lft forever
eth1@if124: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:0a:00:0d:03 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet 10.0.13.3/24 brd 10.0.13.255 scope global eth1
valid_lft forever preferred_lft forever
eth2@if126: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:0a:00:0f:03 brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet 10.0.15.3/24 brd 10.0.15.255 scope global eth2
valid_lft forever preferred_lft forever
hostname -i
10.0.14.3 10.0.13.3 10.0.15.3
This shows the addresses, but nothing that identifies which overlay is which. Any pointers would be appreciated.
You could write a script that runs on the host when the swarm service is started. The script would run docker inspect on your container and write the network information to a file at a path that is bind-mounted into the container. Then you can read that information from the file from within the container.
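A minimal sketch of such a host-side script (the container name and output path are assumptions; the format string uses docker inspect's Go templating to print one "network-name IP" pair per line):
#!/bin/sh
# Dump the network-name -> IP mapping of a container to a file that is
# bind-mounted into it.
CONTAINER="$1"
OUTFILE=/srv/shared/networks.txt
docker inspect \
  --format '{{range $name, $net := .NetworkSettings.Networks}}{{$name}} {{$net.IPAddress}}{{println}}{{end}}' \
  "$CONTAINER" > "$OUTFILE"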
A more elaborate solution would be to build a custom API that would run the docker inspect for you and return the network information in a response.
Overlay networks are likely assigned to interfaces in order. A little bit of testing shows this to be true, but I don't like relying on assumptions.
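If you know each overlay's subnet up front (for example because you created the networks with explicit --subnet values), you can also map them from inside the container without relying on interface order; a rough sketch:
# inside the container: one "interface address/prefix" pair per line
ip -o -4 addr show | awk '{print $2, $4}'
# eth1 10.0.13.3/24   -> the overlay created with --subnet 10.0.13.0/24
# eth2 10.0.15.3/24   -> the overlay created with --subnet 10.0.15.0/24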

How to debug "no route to host" in a docker container

I just don't have enough networking knowledge to understand this.
On my laptop, I'm running both Docker and multiple vagrant VMs.
I want to connect to one of the vagrant VMs from within a Docker container, but ping keeps hanging or spitting out "Destination Host Unreachable". I can ping the vagrant VMs just fine from the host (i.e. outside the container).
Could you point me in the right direction for fixing this? I basically want to install nginx on the vagrant VMs but have some load balancers in Docker.
This means that docker containers need to be able to "see" the vagrant VMs.
Do I need a route table entry? Do I need a special network adapter? Do I need to create a bridge? I just don't know enough and would appreciate a nudge in the right direction.
Here are details from the container:
root@d755dbb8bbc9:/# ip route
default via 172.18.0.1 dev eth1
10.0.1.0/24 dev eth2 proto kernel scope link src 10.0.1.6
10.255.0.0/16 dev eth0 proto kernel scope link src 10.255.0.4
172.18.0.0/16 dev eth1 proto kernel scope link src 172.18.0.5
root@d755dbb8bbc9:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet 10.255.0.30/32 brd 10.255.0.30 scope global lo
valid_lft forever preferred_lft forever
inet 10.0.1.41/32 brd 10.0.1.41 scope global lo
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
link/ipip 0.0.0.0 brd 0.0.0.0
3: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN group default qlen 1
link/tunnel6 :: brd ::
29: eth0@if30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:0a:ff:00:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.255.0.4/16 brd 10.255.255.255 scope global eth0
valid_lft forever preferred_lft forever
35: eth1@if36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:12:00:05 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet 172.18.0.5/16 brd 172.18.255.255 scope global eth1
valid_lft forever preferred_lft forever
39: eth2@if40: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:0a:00:01:06 brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet 10.0.1.6/24 brd 10.0.1.255 scope global eth2
valid_lft forever preferred_lft forever
And here is some output from one of the vagrant VMs:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:cf:1a:c3 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
valid_lft 67730sec preferred_lft 67730sec
inet6 fe80::a00:27ff:fecf:1ac3/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:ca:c7:a1 brd ff:ff:ff:ff:ff:ff
inet 172.17.8.101/16 brd 172.17.255.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:feca:c7a1/64 scope link
valid_lft forever preferred_lft forever
core@core-01 ~ $ ip route
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 1024
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 1024
172.17.0.0/16 dev eth1 proto kernel scope link src 172.17.8.101
When I ping 172.17.8.101 (the IP of the vagrant VM I want to reach) from the Docker container, it just hangs. How can I get access to one of the VMs from one of the Docker containers?

Docker Network moving from the default bridge

I have a bunch of docker containers running on the default bridge network, that need to communicate with each other.
I want to move some of the containers to a separate user defined network so I can specify their IP addresses.
Is there any way to do this without having to take down/replicate all the containers and move them to the other network, or is this the only way?
It's possible to create networks and connect containers while they are live. You may still need to stop/start a process if it is listening on a specific IP address rather than on all interfaces (* or ::).
Create a network
docker network create \
--driver=bridge \
--subnet=192.168.38.0/24 \
--gateway=192.168.38.1 \
<NETWORK>
Connect a container
docker network connect \
--ip 192.168.38.14 \
<NETWORK> \
<CONTAINER>
Disconnect from original network
docker network disconnect <OLDNETWORK> <CONTAINER>
Example
Before: the container's eth0 is on the default bridge network
→ docker exec $CONTAINER ip ad sh
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
15: eth0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:04 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.4/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
Afterwards: eth1 has been added and eth0 is gone
→ docker exec $CONTAINER ip ad sh
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
17: eth1@if18: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:c0:a8:26:0e brd ff:ff:ff:ff:ff:ff
inet 192.168.38.14/24 brd 192.168.38.255 scope global eth1
valid_lft forever preferred_lft forever
You should also think about using Docker Compose. It creates a network automatically, with its own DNS, allowing the containers to reach each other by name.
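A quick check of what that gives you (a sketch; the network, container names, and images are arbitrary): containers on a user-defined network can resolve each other by name through Docker's embedded DNS, which the default bridge does not provide.
docker network create app-net
docker run -d --name web --network app-net nginx
docker run --rm --network app-net busybox ping -c1 web
# "web" resolves via the embedded DNS on the user-defined network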
