I have a bunch of docker containers running on the default bridge network, that need to communicate with each other.
I want to move some of the containers to a separate user defined network so I can specify their IP addresses.
Is there any way to do this without having to take down/replicate all the containers and move them to the other network, or is this the only way?
It's possible to create networks and connect containers while they are live. You may still need to stop/start a process if it is listening on a specific IP address rather than on all interfaces (* or ::).
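To see why a restart can be needed: a socket bound to one specific address will not pick up addresses added later, while a socket bound to the wildcard address covers every interface, including ones attached afterwards. A minimal sketch with plain Python sockets (203.0.113.5 is a documentation address, assumed not to be configured on the machine):

```python
import socket

# Binding to the wildcard address listens on every interface,
# including ones attached to the container later.
s_all = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s_all.bind(("0.0.0.0", 0))          # works regardless of which IPs exist
print(s_all.getsockname())

# Binding to a specific address only works if that address is
# currently configured on some interface; a process bound this way
# must be restarted to listen on a newly attached address.
s_one = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s_one.bind(("203.0.113.5", 0))  # not a local address -> OSError
except OSError as e:
    print("bind failed:", e)

s_all.close()
s_one.close()
```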
Create a network
docker network create \
--driver=bridge \
--subnet=192.168.38.0/24 \
--gateway=192.168.38.1 \
<NETWORK>
Connect a container
docker network connect \
--ip 192.168.38.14 \
<NETWORK> \
<CONTAINER>
Disconnect from original network
docker network disconnect <OLDNETWORK> <CONTAINER>
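Before running the commands above, you can sanity-check that the gateway and the static address actually fall inside the subnet, since Docker will reject values that don't. A quick sketch with Python's stdlib ipaddress module, using this example's addresses:

```python
from ipaddress import ip_address, ip_network

subnet = ip_network("192.168.38.0/24")

# Both the gateway and the container's static IP must lie in the
# subnet, or `docker network create` / `docker network connect`
# will reject them.
assert ip_address("192.168.38.1") in subnet    # gateway
assert ip_address("192.168.38.14") in subnet   # container --ip
```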
Example
Before, the container's eth0 is on the default bridge network
→ docker exec $CONTAINER ip ad sh
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
15: eth0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:04 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.4/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
Afterwards, eth1 has been added and eth0 is gone
→ docker exec $CONTAINER ip ad sh
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
17: eth1@if18: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:c0:a8:26:0e brd ff:ff:ff:ff:ff:ff
inet 192.168.38.14/24 brd 192.168.38.255 scope global eth1
valid_lft forever preferred_lft forever
You should also consider using Docker Compose. It creates a network automatically, with its own DNS, allowing the containers to reach each other by name.
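For example, a minimal Compose file (a sketch; the service and network names are made up) that defines a user-defined bridge network with a fixed subnet and pins a container's address:

```yaml
# docker-compose.yml (hypothetical names)
services:
  app:
    image: nginx:alpine
    networks:
      mynet:
        ipv4_address: 192.168.38.14

networks:
  mynet:
    driver: bridge
    ipam:
      config:
        - subnet: 192.168.38.0/24
          gateway: 192.168.38.1
```

Running `docker compose up -d` then creates the network and attaches the container with the given address in one step.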
Update
I tried the same setup using Ubuntu as the host. It works! And I noticed the interface info (ip a) in the host and in the container is the same on Ubuntu Docker, but different when using Docker Desktop for Windows.
So the question becomes: why does Docker Desktop for Windows give different network interfaces between host and container when using --net=host?
Original question
I open a container with --net=host. I want to connect, from inside the container, to a device that is on the same subnet as my host. Also, the container has a server running on port 3000.
Host (192.168.64.101/18)
Device (192.168.64.102/18)
Container (--net=host, server on port 3000)
The container can connect to the device at 192.168.64.102.
The container can ping the host at 192.168.64.101.
But I cannot access the container's server on port 3000 from the host. When I try curl localhost:3000, the connection is refused.
I thought --net=host would put the container on the same network as the host. Why can't I connect to the container's server using localhost?
ip a from container
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
3: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1000
link/sit 0.0.0.0 brd 0.0.0.0
4: services1@if5: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether a2:41:c9:a1:cd:4e brd ff:ff:ff:ff:ff:ff
inet 192.168.65.4 peer 192.168.65.5/32 scope global services1
valid_lft forever preferred_lft forever
inet6 fe80::a041:c9ff:fea1:cd4e/64 scope link
valid_lft forever preferred_lft forever
6: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN qlen 1000
link/ether 02:50:00:00:00:01 brd ff:ff:ff:ff:ff:ff
inet 192.168.65.3/24 brd 192.168.65.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::50:ff:fe00:1/64 scope link
valid_lft forever preferred_lft forever
7: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 02:42:fb:e9:2d:76 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:fbff:fee9:2d76/64 scope link
valid_lft forever preferred_lft forever
11: vethfd2c43f@if10: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue master docker0 state UP
link/ether d6:ee:fe:80:24:04 brd ff:ff:ff:ff:ff:ff
inet6 fe80::d4ee:feff:fe80:2404/64 scope link
valid_lft forever preferred_lft forever
So here is my problem:
I have a server with Debian 10 that runs Docker.
In a Docker container I run Pi-hole.
When I run the Pi-hole container, Docker sets its IP to 172.17.0.2.
Docker itself creates a network interface called docker0, whose IP is 172.17.0.1.
The problem is from outside the server: when I ping the Docker interface 172.17.0.1 it's fine, but when I ping the Docker container 172.17.0.2 it's not reachable.
Here is the ip a command output
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether ac:16:2d:12:30:71 brd ff:ff:ff:ff:ff:ff
inet 10.42.0.247/24 brd 10.42.0.255 scope global dynamic eno1
valid_lft 3152sec preferred_lft 3152sec
inet6 fe80::ae16:2dff:fe12:3071/64 scope link
valid_lft forever preferred_lft forever
3: wlp2s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether d0:37:45:80:81:0f brd ff:ff:ff:ff:ff:ff
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:55:80:15:34 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:55ff:fe80:1534/64 scope link
valid_lft forever preferred_lft forever
25: vethedcefcc@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether e2:02:56:8f:9b:22 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::e002:56ff:fe8f:9b22/64 scope link
valid_lft forever preferred_lft forever
What do I need to do? What do I have to configure?
Thanks,
~James Phoenix
You can't access a container's IP directly from outside the host.
If you want to access a service from outside, you need to forward (publish) the service's ports.
Example:
docker host IP → 192.168.0.111
container IP → 172.17.0.111
Run an nginx container and publish port 8080 so it can be reached from outside:
docker run --name some-nginx -d -p 8080:80 some-content-nginx
Here 8080 is the external port (accessible from outside),
and 80 is the internal port (accessible from containers on the same network).
Access to nginx:
curl http://localhost:8080
# or
curl http://192.168.0.111:8080
I just don't have enough networking knowledge to understand this.
On my laptop, I'm running both Docker and multiple vagrant VMs.
I want to connect to one of the Vagrant VMs from within a Docker container, but ping keeps hanging or spitting out "Destination Host Unreachable". I can ping the Vagrant VMs just fine from the host (i.e. outside the container).
Could you point me in the right direction to fix this? I basically want to install nginx on the Vagrant VMs but have some load balancers in Docker.
This means that docker containers need to be able to "see" the vagrant VMs.
Do I need a route table entry? Do I need a special network adapter? Do I need to create a bridge? I just don't know enough and would appreciate a nudge in the right direction.
Here are details from the container:
root@d755dbb8bbc9:/# ip route
default via 172.18.0.1 dev eth1
10.0.1.0/24 dev eth2 proto kernel scope link src 10.0.1.6
10.255.0.0/16 dev eth0 proto kernel scope link src 10.255.0.4
172.18.0.0/16 dev eth1 proto kernel scope link src 172.18.0.5
root@d755dbb8bbc9:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet 10.255.0.30/32 brd 10.255.0.30 scope global lo
valid_lft forever preferred_lft forever
inet 10.0.1.41/32 brd 10.0.1.41 scope global lo
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
link/ipip 0.0.0.0 brd 0.0.0.0
3: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN group default qlen 1
link/tunnel6 :: brd ::
29: eth0@if30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:0a:ff:00:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.255.0.4/16 brd 10.255.255.255 scope global eth0
valid_lft forever preferred_lft forever
35: eth1@if36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:12:00:05 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet 172.18.0.5/16 brd 172.18.255.255 scope global eth1
valid_lft forever preferred_lft forever
39: eth2@if40: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:0a:00:01:06 brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet 10.0.1.6/24 brd 10.0.1.255 scope global eth2
valid_lft forever preferred_lft forever
And here is some output from one of the Vagrant VMs:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:cf:1a:c3 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
valid_lft 67730sec preferred_lft 67730sec
inet6 fe80::a00:27ff:fecf:1ac3/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:ca:c7:a1 brd ff:ff:ff:ff:ff:ff
inet 172.17.8.101/16 brd 172.17.255.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:feca:c7a1/64 scope link
valid_lft forever preferred_lft forever
core@core-01 ~ $ ip route
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 1024
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 1024
172.17.0.0/16 dev eth1 proto kernel scope link src 172.17.8.101
When I ping 172.17.8.101 (the IP of the Vagrant VM I want to reach) from the Docker container, it just hangs. How can I get access to one of the VMs from one of the Docker containers?
https://docs.docker.com/network/network-tutorial-macvlan/#prerequisites
docker network create -d macvlan \
--subnet=172.16.86.0/24 \
--gateway=172.16.86.1 \
-o parent=eth0 \
my-macvlan-net
"Create a macvlan network called my-macvlan-net. Modify the subnet, gateway, and parent values to values that make sense in your environment."
I am a noob when it comes to networking. I have no idea what "values that make sense in your environment" means.
This is what I see in my host's network interfaces (ip addr):
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp6s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP qlen 1000
link/ether 00:25:b5:66:11:31 brd ff:ff:ff:ff:ff:ff
3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP qlen 1000
link/ether 00:25:b5:66:11:32 brd ff:ff:ff:ff:ff:ff
4: enp12s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP qlen 1000
link/ether 00:25:b5:66:11:33 brd ff:ff:ff:ff:ff:ff
inet 10.60.114.101/23 brd 10.60.115.255 scope global dynamic enp12s0
valid_lft 442187sec preferred_lft 442187sec
inet6 fd20:8b1e:b255:8136:225:b5ff:fe66:1133/64 scope global noprefixroute dynamic
valid_lft 2591830sec preferred_lft 604630sec
inet6 fe80::225:b5ff:fe66:1133/64 scope link
valid_lft forever preferred_lft forever
5: enp13s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP qlen 1000
link/ether 00:25:b5:66:11:34 brd ff:ff:ff:ff:ff:ff
inet 10.60.115.252/23 brd 10.60.115.255 scope global dynamic enp13s0
valid_lft 414540sec preferred_lft 414540sec
inet6 fd20:8b1e:b255:8136:607f:edd6:613a:41da/64 scope global noprefixroute dynamic
valid_lft 2591830sec preferred_lft 604630sec
inet6 fd20:8b1e:b255:8136:225:b5ff:fe66:1134/64 scope global deprecated mngtmpaddr dynamic
valid_lft 1720109sec preferred_lft 0sec
inet6 fe80::225:b5ff:fe66:1134/64 scope link
valid_lft forever preferred_lft forever
6: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:02:16:fb:be brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:2ff:fe16:fbbe/64 scope link
valid_lft forever preferred_lft forever
11: docker_gwbridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 02:42:bb:c4:b4:18 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/16 brd 172.18.255.255 scope global docker_gwbridge
valid_lft forever preferred_lft forever
inet6 fe80::42:bbff:fec4:b418/64 scope link
valid_lft forever preferred_lft forever
106: veth65ae6f8@if105: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP
link/ether 52:be:7f:de:e2:11 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::50be:7fff:fede:e211/64 scope link
valid_lft forever preferred_lft forever
How do I know which values make sense in my env?
ip route
default via 10.60.114.1 dev enp12s0 proto static metric 100
default via 10.60.114.1 dev enp13s0 proto static metric 101
10.60.114.0/23 dev enp12s0 proto kernel scope link src 10.60.114.101
10.60.114.0/23 dev enp13s0 proto kernel scope link src 10.60.115.252
10.60.114.0/23 dev enp12s0 proto kernel scope link src 10.60.114.101 metric 100
10.60.114.0/23 dev enp13s0 proto kernel scope link src 10.60.115.252 metric 101
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
172.18.0.0/16 dev docker_gwbridge proto kernel scope link src 172.18.0.1
"I am noob when it comes to network. I have no idea what it means the values which make sense in my env"
When you're creating a macvlan network, you are effectively making a "clone" of an existing network interface. In order for your containers to communicate on the associated network, they will generally need to be using the same ip address range and gateway used by other devices on the network.
For example, if you were to create a macvlan network associated with enp12s0 on your system, then you would need to use the 10.60.114.0/23 network range and whatever default gateway your system is using (you don't include this information in your question so I can't suggest a specific value).
That is (replacing the argument to --gateway with the correct value):
docker network create -d macvlan \
--subnet=10.60.114.0/23 \
--gateway=10.60.114.1 \
-o parent=enp12s0 \
my-macvlan-net
This by itself might not work, because it is likely that docker would assign ip addresses to containers that are already in use elsewhere on the network. You can avoid this by assigning docker a dedicated subset of addresses using the --ip-range option:
docker network create -d macvlan \
--subnet=10.60.114.0/23 \
--gateway=10.60.114.1 \
--ip-range=10.60.115.0/28 \
-o parent=enp12s0 \
my-macvlan-net
This would restrict docker to addresses between 10.60.115.0 and 10.60.115.15. Whether or not this actually makes sense in your environment is something only you would know (possibly by asking your network administrator if you are not responsible for the network configuration).
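The arithmetic behind that range can be checked with Python's stdlib ipaddress module (a sketch using the /23 network from the answer and the --ip-range from the command above):

```python
from ipaddress import ip_network

subnet = ip_network("10.60.114.0/23")     # the host's network
ip_range = ip_network("10.60.115.0/28")   # the --ip-range given to docker

# The --ip-range must be a sub-block of --subnet ...
assert ip_range.subnet_of(subnet)

# ... and a /28 gives docker 16 addresses: 10.60.115.0 - 10.60.115.15.
print(ip_range[0], ip_range[-1], ip_range.num_addresses)
# → 10.60.115.0 10.60.115.15 16
```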
I want to start a Docker container with three interfaces, all attached to a bridge on the host.
The only solution is providing my own network plugin. The interface below is invoked by the Docker daemon once a container is created, to configure its network:
func (d *Driver) Join(r *dknet.JoinRequest) (*dknet.JoinResponse, error)
but there is only one Endpoint object in the JoinRequest struct, so that invocation can only configure one container interface.
How can I create and configure three container interfaces?
You need to do it multiple times.
$ docker network create net1
bdc53c143e89d562761eedfd232620daf585968bc9ae022ba142d17601af6146
$ docker network create net2
d9a72a7a6ee6b61da3c6bb17e312e48888807a5a8c159fd42b6c99d219977559
$ docker network create net3
d2be9628f4fd60587d44967a5813e9ba7c730d24e953368b18d7872731a9478c
$ docker run -it --network net3 ubuntu:16.04 bash
root@cd70c7cbe367:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
90: eth0@if91: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:18:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.24.0.2/16 scope global eth0
valid_lft forever preferred_lft forever
Now your container is running with only the net3 network. You can attach net1 and net2 as well.
$ docker network connect net1 cd70c7cbe367
$ docker network connect net2 cd70c7cbe367
After that check in container
root@cd70c7cbe367:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
90: eth0@if91: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:18:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.24.0.2/16 scope global eth0
valid_lft forever preferred_lft forever
92: eth1@if93: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:16:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.22.0.2/16 scope global eth1
valid_lft forever preferred_lft forever
94: eth2@if95: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:17:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.23.0.2/16 scope global eth2
valid_lft forever preferred_lft forever
PS: the ip command is missing by default in the container, so I installed the iproute2 package to check.