I have some virtual development machines on my LAN that I use for testing out OpenVidu developments; the main server in question sits on 192.168.1.0/24 with IP 192.168.1.150.
I want my local Docker development environment (via Docker Compose) to be able to access this IP address, so I've set up a bridge network:
networks:
  my-net:
    name: my-net
  my-lan-access:
    name: my-lan-access
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: "192.168.1.0/24"
          gateway: "192.168.1.254"
Then I give the specific containers access to this network:
networks:
  my-net:
  my-lan-access:
    ipv4_address: "192.168.1.149"
I logged into one of the containers and attempted to ping 192.168.1.150, and I get:
From 192.168.1.149 icmp_seq=160 Destination Host Unreachable
The container has clearly joined the right network, since it has the 192.168.1.149 address, yet it is unable to see the virtual machine.
Note: from outside the container, on my Mac, I can ping 192.168.1.150 with no problem and access it via SSH.
UPDATE
After some reading I get why this doesn't work: the bridge adapter doesn't exist on my host machine.
The idea isn't to spend time creating a bridge adapter; if the compose file needs to be sent to another developer, we want to be able to just fire it all up without any hassle.
So I started to look at macvlan, which seems like a much better option, whereby I can connect specific containers directly to the LAN through the host adapter:
networks:
  my-net:
    name: my-net
  my-lan-access:
    name: my-lan-access
    driver: macvlan
    driver_opts:
      parent: en0
    ipam:
      config:
        - subnet: "192.168.1.0/24"
          gateway: "192.168.1.254"
This now brings up a new error, though:
ERROR: invalid subinterface vlan name en0, example formatting is eth0.10
This doesn't make any sense; I can clearly see the en0 interface (my Wi-Fi adapter) on my Mac with ifconfig:
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
options=6463<RXCSUM,TXCSUM,TSO4,TSO6,CHANNEL_IO,PARTIAL_CSUM,ZEROINVERT_CSUM>
ether b0:f1:d8:21:22:dd
inet6 fe80::1400:ad93:eea1:2818%en0 prefixlen 64 secured scopeid 0xe
inet 192.168.1.124 netmask 0xffffff00 broadcast 192.168.1.255
inet6 fdaa:bbcc:ddee:0:10a5:3e52:179e:aa31 prefixlen 64 autoconf secured
inet6 2a00:23c5:ef15:1101:45d:dabb:8af:43a3 prefixlen 64 autoconf secured
inet6 2a00:23c5:ef15:1101:419:9097:a2b3:5cf7 prefixlen 64 deprecated autoconf temporary
inet6 2a00:23c5:ef15:1101:edd5:a0e0:baa1:b3ae prefixlen 64 autoconf temporary
nd6 options=201<PERFORMNUD,DAD>
media: autoselect
status: active
So instead I tried en0.0, hoping it would understand that I don't have a sub-interface, which causes this error:
ERROR: -o parent interface was not found on the host: en0
It looks like this might be a bug in Docker or Docker Compose?
So maybe the only option I have is to create my own bridge adapter attached to my hardware interface?
Related
Instead of listening on a single IP address, e.g. localhost:
ports:
  - "127.0.0.1:80:80"
I want the container to listen only on a local network, e.g.:
ports:
  - "10.0.0.0/16:80:80"
ERROR: The Compose file './docker-compose.yml' is invalid because:
services.SERVICE.ports contains an invalid type, it should be a number, or an object
Is this possible?
I don't want to use things like swarm mode etc., yet.
If an IP range is not supported, maybe at least multiple IP addresses like 10.0.0.2 and 10.0.0.3?
ERROR: for CONTAINER Cannot start service SERVICE: driver failed programming external connectivity on endpoint CONTAINER (...): Error starting userland proxy: listen tcp 10.0.0.3:80: bind: cannot assign requested address
ERROR: for SERVICE Cannot start service SERVICE: driver failed programming external connectivity on endpoint CONTAINER (...): Error starting userland proxy: listen tcp 10.0.0.3:80: bind: cannot assign requested address
Or is listening on 10.0.0.3 not supported at all?
The host machine is connected to 10.0.0.0/16:
> ifconfig
ens10: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.0.0.2 netmask 255.255.255.255 broadcast 10.0.0.2
inet6 f**0::8**0:ff:f**9:b**7 prefixlen 64 scopeid 0x20<link>
ether **:00:00:**:**:** txqueuelen 1000 (Ethernet)
Thinking of it as listening to a network isn't quite right: the service listens at an IP address.
Let's say your VM has two network interfaces (ethernet cards):
Network 1 → subnet: 10.0.0.0/24 and IP 10.0.0.100
Network 2 → subnet: 10.0.1.0/24 and IP 10.0.1.200
If you set 127.0.0.1:80:80, that means your service is listening at 127.0.0.1 (localhost), port 80.
If you want to access the service from the 10.0.0.0/24 subnet, you should set 10.0.0.100:80:80 and use http://10.0.0.100:80 to connect to your container from external hosts.
If you want to access the service from multiple networks simultaneously, you can bind the container port to multiple host addresses, one entry per interface IP:
ports:
  - 10.0.0.100:80:80
  - 10.0.1.200:80:80
  - 127.0.0.1:80:80
And don't forget to open port 80 in the VM's firewall, if a firewall exists and restricts that network.
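For example, if the VM happens to use ufw, opening the port might look like the sketch below (note that Docker publishes ports through its own iptables rules, which can bypass ufw, so test from the remote subnet afterwards):

# hypothetical ufw rule: let the 10.0.0.0/24 subnet reach port 80
sudo ufw allow from 10.0.0.0/24 to any port 80 proto tcp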
I think you misunderstood this field.
When you map 127.0.0.1:80:80, you map the host's 127.0.0.1 interface to your container.
In the case of 127.0.0.1, you can only access it from inside your host.
When you map 10.0.0.3:80:80, you map the host's 10.0.0.3 interface to your container, and every IP that can reach 10.0.0.3 will have access to your mapping.
But in any case, this field will not do any filtering on who can access the container.
EDIT: After your modification I see that I misunderstood your question.
You want Docker to create a "bridge interface" so as not to share the IP of your host.
I don't think this is possible when using port mapping.
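If the underlying goal is to restrict which sources may reach a published port, that filtering is normally done in the firewall rather than in ports:. A sketch using Docker's DOCKER-USER iptables chain (eth0 is an assumed name for the host's external interface):

# drop traffic to containers unless it comes from 10.0.0.0/16
sudo iptables -I DOCKER-USER -i eth0 ! -s 10.0.0.0/16 -j DROP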
If you give Compose ports: (or docker run -p) an IP address, it must be a specific known IP address of a host interface, or 0.0.0.0 for "all interfaces". The Docker daemon gives this specific IP address to a bind(2) call, which takes an address and not a network, and follows the rules in ip(7) for IPv4.
With the output you show, you can only bind containers to 10.0.0.2. If you want to use other IP addresses on the same network, you also need to assign them to the host; see for example How can I (from CLI) assign multiple IP addresses to one interface? on Ask Ubuntu, and then you can bind a container to the newly-added address.
If your system is on multiple physical networks, you can have any number of ports: so long as the host address and host port are unique. In particular you can have multiple ports: that all forward to the same container port.
ports:
  # make this visible to the external load balancer on port 80
  - '192.168.17.2:80:3000'
  # also make this visible to the internal network, on port 80
  - '10.0.0.2:80:3000'
  # and the management network, but on port 3000
  - '10.99.0.36:3000:3000'
Again, the host must already have these IP addresses in the ifconfig output.
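For example, to make the 10.0.0.3:80:80 mapping from the question bind successfully, you would first add that address to the host NIC; a sketch, assuming ens10 from the ifconfig output above:

# add a secondary address to the existing interface (not persistent across reboots)
sudo ip addr add 10.0.0.3/16 dev ens10
# verify the address is now assigned
ip addr show ens10

After that, bind(2) has an address to attach to and the "cannot assign requested address" error should go away.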
I've moved my MongoDB from a container to a local service (it was really flaky when containerised). The problem is I cannot connect from a Node API to the locally running MongoDB service. I can get this working on my Mac, but not on Ubuntu. I've tried:
- DB_HOST=mongodb://172.17.0.1:27017/proto?authSource=admin
- DB_HOST=mongodb://localhost:27017/proto?authSource=admin
// this works locally, but not on my Ubuntu server
- DB_HOST=mongodb://host.docker.internal:27017/proto?authSource=admin
Tried adding this to my Dockerfile:
ip -4 route list match 0/0 | awk '{print $3 " host.docker.internal"}' >> /etc/hosts && \
Also tried a bridge network, to no avail. Example docker-compose:
version: '3.3'
services:
  search-api:
    build: ../search-api
    environment:
      - PORT=3333
      - DB_HOST=mongodb://host.docker.internal:27017/search?authSource=admin
      - DB_USER=dbuser
      - DB_PASS=password
    ports:
      - 3333:3333
    restart: always
The problem can be caused by MongoDB not listening on the correct IP address and therefore blocking your access.
Either make sure it's listening on a specific IP, or listening on all interfaces: 0.0.0.0
On Linux the config file is installed by default at /etc/mongod.conf
Configuration for a specific IP address:
net:
  bindIp: 172.17.0.1  # your host's address on the docker0 bridge
  port: 27017
Configuration open to all connections:
net:
  bindIp: 0.0.0.0
  port: 27017
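Either way, a quick sanity check (not part of the original config) is to confirm from the host which address mongod is actually bound to:

sudo ss -tlnp | grep 27017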
To get your host's IP address (from within a container):
On Docker for Mac and Docker for Windows you can use host.docker.internal.
On Linux you need to run ip route show inside the container.
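As an alternative on Linux, Docker Engine 20.10+ understands the special host-gateway value, which makes host.docker.internal resolvable there too; a sketch for the search-api service from the question:

services:
  search-api:
    extra_hosts:
      - "host.docker.internal:host-gateway"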
When running Docker natively on Linux, you can access host services using the IP address of the docker0 interface. From inside the container, this will be your default route.
For example, on my system:
$ ip addr show docker0
7: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::f4d2:49ff:fedd:28a0/64 scope link
valid_lft forever preferred_lft forever
And inside a container:
# ip route show
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 src 172.17.0.4
(copied from here: How to access host port from docker container)
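Putting that together, a sketch for deriving the host address inside the container and building the Mongo connection string (assumes iproute2 is available in the image):

# the default gateway inside the container is the host's docker0 address
HOST_IP=$(ip route show default | awk '{print $3}')
echo "mongodb://$HOST_IP:27017/proto?authSource=admin"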
I know I can't ping the macvlan interface from its own host, but I also can't ping my container's macvlan interface from hosts on a different subnet (even though they're connected via a router).
Host IP: 10.8.2.132/22
Macvlan container IP: 10.8.2.250/22
Other host IP: 10.4.16.141/22
Ping FROM 10.8.2.132 TO 10.4.16.141 is successful
Ping FROM 10.8.2.250 TO 10.4.16.141 is successful
Ping FROM 10.4.16.141 TO 10.8.2.132 is successful
Ping FROM 10.4.16.141 TO 10.8.2.250 fails with 100% packet loss
ip route get 10.8.2.250 shows that there is a known route:
10.8.2.250 via 10.4.16.1 dev eth0 src 10.4.16.141
cache mtu 1500 hoplimit 64
How can I go about debugging this?
The docker macvlan network was created with:
docker network create -d macvlan --subnet=10.8.0.0/22 --gateway=10.8.0.1 -o parent=em1 macnet
and when I run the container I specifically add "--ip=10.8.2.250"
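As a first debugging step, it may help to capture ICMP on the macvlan parent interface to see whether the echo requests arrive at all; a sketch, assuming em1 from the network create command above:

# run on the Docker host while pinging 10.8.2.250 from 10.4.16.141
sudo tcpdump -ni em1 icmp and host 10.4.16.141

If the requests show up on em1 but no replies go out, the problem is local delivery to the macvlan interface; if they never arrive, look at the router.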
I am running a Docker image on a Mac, and when I log into the container I see the IP address 172.17.0.2 (cat /etc/hosts).
How does Docker choose the IP?
Is there an IP range that Docker chooses from?
What if I run multiple containers on the same host? Will it be different?
/etc/resolv.conf gives some IP. What is that IP and where does it come from?
How to connect to the Docker service using the internal IP, say 172.17.0.2?
ping CONTAINER_ID -> returns the IP 172.17.0.2
How does it resolve the hostname?
I tried reading through the networking docs but they don't help.
Also, I am running my service on port 8443. Still, I am unable to connect.
I tried running:
docker run --net host -p 8443:8443 IMAGE
Still no luck.
I also tried the approach below:
docker run -p MY_MACHINE_IP:8080:8080 IMAGE
and tried connecting with:
http://MY_MACHINE_IP:8080
http://localhost:8080
None of the above works.
ifconfig output:
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
options=1203<RXCSUM,TXCSUM,TXSTATUS,SW_TIMESTAMP>
inet 127.0.0.1 netmask 0xff000000
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
nd6 options=201<PERFORMNUD,DAD>
gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280
stf0: flags=0<> mtu 1280
XHC20: flags=0<> mtu 0
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
ether 60:f8:1d:b2:cb:0c
inet6 fe80::49d:a511:dc4e:7960%en0 prefixlen 64 secured scopeid 0x5
inet 10.231.168.63 netmask 0xffe00000 broadcast 10.255.255.255
nd6 options=201<PERFORMNUD,DAD>
media: autoselect
status: active
p2p0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 2304
ether 02:f8:1d:b2:cb:0c
media: autoselect
status: inactive
awdl0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1484
ether 0a:71:96:61:e4:eb
inet6 fe80::871:96ff:fe61:e4eb%awdl0 prefixlen 64 scopeid 0x7
nd6 options=201<PERFORMNUD,DAD>
media: autoselect
status: active
en1: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
options=60<TSO4,TSO6>
ether 72:00:07:57:48:30
media: autoselect <full-duplex>
status: inactive
en2: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
options=60<TSO4,TSO6>
ether 72:00:07:57:48:31
media: autoselect <full-duplex>
status: inactive
bridge0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
options=63<RXCSUM,TXCSUM,TSO4,TSO6>
ether 72:00:07:57:48:30
Configuration:
id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0
maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200
root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0
ipfilter disabled flags 0x2
member: en1 flags=3<LEARNING,DISCOVER>
ifmaxaddr 0 port 8 priority 0 path cost 0
member: en2 flags=3<LEARNING,DISCOVER>
ifmaxaddr 0 port 9 priority 0 path cost 0
nd6 options=201<PERFORMNUD,DAD>
media: <unknown type>
status: inactive
utun0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 2000
inet6 fe80::3f17:8946:c18d:5d25%utun0 prefixlen 64 scopeid 0xb
nd6 options=201<PERFORMNUD,DAD>
utun2: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1380
inet6 fe80::20aa:76fd:d68:7fb2%utun2 prefixlen 64 scopeid 0xd
nd6 options=201<PERFORMNUD,DAD>
utun3: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1380
inet6 fe80::e42a:c616:4960:2c43%utun3 prefixlen 64 scopeid 0x10
nd6 options=201<PERFORMNUD,DAD>
utun1: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1342
inet 17...... --> 17.... netmask 0xff000000
inet6 fe80::93df:7780:862c:8a06%utun1 prefixlen 64 scopeid 0x12
nd6 options=201<PERFORMNUD,DAD>
For the first 4 questions you can find some information here; in general, Docker's networking layer is responsible for managing the network.
Usually I specify the ports like this:
docker run -p 8443:8443 IMAGE
and it works.
A reference to an existing topic is here.
1. How does Docker choose the IP?
When Docker is installed on your machine it creates the docker0 interface, and it gives an IP address to each container it launches.
You can verify the IP range of docker0 with the ifconfig command.
2. Is there any IP range that Docker chooses?
Yes, please refer to answer 1.
3. What if I run multiple containers on the same host? Will it be different?
Yes, each container gets a different address, drawn from the docker0 range unless you create your own network using docker network create (see the sketch below). For more, refer to Docker Networking.
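A minimal sketch of that (the network name and subnet are arbitrary examples):

docker network create --driver bridge --subnet 172.25.0.0/16 my-custom-net
# containers on this network get addresses from 172.25.0.0/16
docker run --rm --net my-custom-net alpine ip addr show eth0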
4. /etc/resolv.conf gives some IP. What is that IP and where does it come from?
It's the internal DNS of the Docker network. You can set your own DNS server by adding a --dns option in /etc/systemd/system/docker.service.d/docker.conf, on the ExecStart line like below:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock -g "/opt/docker_storage" --dns <replace-dns-ip>
5. How to connect to a Docker service using the internal IP, say 172.17.0.2?
You have to expose a port to connect, e.g. docker run -p 8443:8443 <image-name>
After that you can connect with telnet localhost 8443 or curl http://172.17.0.2:8443
Most important:
Add the following to /etc/sysctl.conf:
net.ipv4.ip_forward = 1
and apply the setting with:
sysctl -p /etc/sysctl.conf
Hope this will help.
Thank you!
Docker manages all of this internal networking machinery itself. This includes allocating IP(v4) addresses from a private range, a NAT setup for outbound connections, and a DNS service to allow containers to communicate with each other.
A stable, reasonable setup is:
Run docker network create mynet, once, to create a non-default network. (Docker Compose will do this for you automatically.)
Run your containers with --net mynet.
When containers need to communicate with each other, they can use other containers' --name as DNS names (you can connect to http://other-container-name).
If you need to reach a container from elsewhere, publish its service port using docker run -p or the Docker Compose ports: section. It can be reached using the host's DNS name or IP address and the published port.
Never ever use the container-private IP addresses (directly).
Never use localhost unless you're absolutely sure about what it means. (It's a correct way to reach a published port from a browser running on the host that's running the containers; it's almost definitely not what you mean from within a container.)
The problems I've seen with the container-private IP addresses tend to be around the second time you use them: because you relaunched the container and the IP address changed; because it worked from your local host and now you want to reach it from somewhere else.
To answer your initial questions briefly: (1-2) Docker assigns them itself from a network that can be configured but often defaults to 172.17.0.0/16; (3) different containers have different private IP addresses; (4-5) Docker provides its own DNS service and /etc/resolv.conf points there; (6) ICMP connectivity usually doesn't prove much and you don't need to ping containers (use dig or nslookup for DNS debugging, curl for actual HTTP requests).
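A minimal sketch of that recommended setup (web and the curl image are arbitrary placeholders):

docker network create mynet
docker run -d --net mynet --name web nginx
# the name "web" resolves through Docker's DNS; no container IP involved
docker run --rm --net mynet curlimages/curl -s http://web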
I have set up a macvlan network between 2 Docker hosts as follows:
Host Setup: HOST_1 ens192: 172.18.0.21
Create macvlan bridge interface
docker network create -d macvlan \
--subnet=172.18.0.0/22 \
--gateway=172.18.0.1 \
--ip-range=172.18.1.0/28 \
-o macvlan_mode=bridge \
-o parent=ens192 macvlan
Create macvlan interface in HOST_1
ip link add ens192.br link ens192 type macvlan mode bridge
ip addr add 172.18.1.0/28 dev ens192.br
ip link set dev ens192.br up
Host Setup: HOST_2 ens192: 172.18.0.23
Create macvlan bridge interface
docker network create -d macvlan \
--subnet=172.18.0.0/22 \
--gateway=172.18.0.1 \
--ip-range=172.18.1.16/28 \
-o macvlan_mode=bridge \
-o parent=ens192 macvlan
Create macvlan interface in HOST_2
ip link add ens192.br link ens192 type macvlan mode bridge
ip addr add 172.18.1.16/28 dev ens192.br
ip link set dev ens192.br up
Container Setup
Create containers in both host
HOST_1# docker run --net=macvlan -it --name macvlan_1 --rm alpine /bin/sh
HOST_2# docker run --net=macvlan -it --name macvlan_1 --rm alpine /bin/sh
CONTAINER_1 in HOST_1
24: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 02:42:ac:12:01:00 brd ff:ff:ff:ff:ff:ff
inet 172.18.1.0/22 brd 172.18.3.255 scope global eth0
valid_lft forever preferred_lft forever
CONTAINER_2 in HOST_2
21: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 02:42:ac:12:01:10 brd ff:ff:ff:ff:ff:ff
inet 172.18.1.16/22 brd 172.18.3.255 scope global eth0
valid_lft forever preferred_lft forever
Route table in CONTAINER_1 and CONTAINER_2
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.18.0.1 0.0.0.0 UG 0 0 0 eth0
172.18.0.0 0.0.0.0 255.255.252.0 U 0 0 0 eth0
Scenario
HOST_1 (172.18.0.21) <-> HOST_2 (172.18.0.23) = OK (Vice-versa)
HOST_1 (172.18.0.21) -> CONTAINER_1 (172.18.1.0) and CONTAINER_2 (172.18.1.16) = OK
HOST_2 (172.18.0.23) -> CONTAINER_1 (172.18.1.0) and CONTAINER_2 (172.18.1.16) = OK
CONTAINER_1 (172.18.1.0) -> HOST_2 (172.18.0.23) = OK
CONTAINER_2 (172.18.1.16) -> HOST_1 (172.18.0.21) = OK
CONTAINER_1 (172.18.1.0) <-> CONTAINER_2 (172.18.1.16) = OK (Vice-versa)
CONTAINER_1 (172.18.1.0) -> HOST_1 (172.18.0.21) = FAIL
CONTAINER_2 (172.18.1.16) -> HOST_2 (172.18.0.23) = FAIL
Question
I am very close to the solution I want to achieve, except for this one problem: how can I make a container connect to its own host? If there is a solution to this, I would like to know how to configure it from an ESXi virtualization perspective, and also on bare metal if there is any difference.
The question is a bit old; however, others might find it useful. There is a workaround described in the Host access section of Using Docker macvlan networks by Lars Kellogg-Stedman. I can confirm it's working.
Host access

With a container attached to a macvlan network, you will
find that while it can contact other systems on your local network
without a problem, the container will not be able to connect to your
host (and your host will not be able to connect to your container).
This is a limitation of macvlan interfaces: without special support
from a network switch, your host is unable to send packets to its own
macvlan interfaces.
Fortunately, there is a workaround for this problem: you can create
another macvlan interface on your host, and use that to communicate
with containers on the macvlan network.
First, I’m going to reserve an address from our network range for use
by the host interface by using the --aux-address option to docker
network create. That makes our final command line look like:
docker network create -d macvlan -o parent=eno1 \
--subnet 192.168.1.0/24 \
--gateway 192.168.1.1 \
--ip-range 192.168.1.192/27 \
--aux-address 'host=192.168.1.223' \
mynet
This will prevent Docker from assigning that address to a container.
Next, we create a new macvlan interface on the host. You can call it
whatever you want, but I’m calling this one mynet-shim:
ip link add mynet-shim link eno1 type macvlan mode bridge
Now we need to configure the interface with the address we reserved
and bring it up:
ip addr add 192.168.1.223/32 dev mynet-shim
ip link set mynet-shim up
The last thing we need to do is to tell our host to use that interface
when communicating with the containers. This is relatively easy
because we have restricted our containers to a particular CIDR subset
of the local network; we just add a route to that range like this:
ip route add 192.168.1.192/27 dev mynet-shim
With that route in place, your host will automatically use the mynet-shim interface when communicating with containers on the mynet network.
Note that the interface and routing configuration presented here is not persistent – you will lose it if you reboot your host. How to make it persistent is distribution-dependent.
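For example, on a systemd-based distribution one way to persist it is a oneshot unit that recreates the shim at boot; a sketch reusing the names from the quoted post (the unit name is arbitrary, and the path to ip may differ per distribution):

# /etc/systemd/system/mynet-shim.service
[Unit]
Description=macvlan shim interface for host-to-container traffic
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/ip link add mynet-shim link eno1 type macvlan mode bridge
ExecStart=/usr/sbin/ip addr add 192.168.1.223/32 dev mynet-shim
ExecStart=/usr/sbin/ip link set mynet-shim up
ExecStart=/usr/sbin/ip route add 192.168.1.192/27 dev mynet-shim

[Install]
WantedBy=multi-user.target

Enable it once with systemctl enable --now mynet-shim.service.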
This is defined behavior for macvlan and is by design. See the Docker macvlan documentation:
When using macvlan, you cannot ping or communicate with the default namespace IP address. For example, if you create a container and try to ping the Docker host’s eth0, it will not work. That traffic is explicitly filtered by the kernel modules themselves to offer additional provider isolation and security.
A macvlan subinterface can be added to the Docker host, to allow traffic between the Docker host and containers. The IP address needs to be set on this subinterface and removed from the parent address.
In my situation, I added one more network to the container,
so CONTAINER_1 -> HOST_1 can be reached via a different IP (10.123.0.2),
while CONTAINER_2 and HOST_2 can still reach it at 172.18.1.0.
The following docker-compose sample shows this; hope it can serve as a workaround.
version: "3"
services:
macvlan_1:
image: alpine
container: macvlan_1
command: ....
restart: always
networks:
macvlan:
ipv4_address: 172.18.1.0
internalbr:
ipv4_address: 10.123.0.2
networks:
macvlan:
driver: macvlan
driver_opts:
parent: ens192
macvlan_mode: bridge
ipam:
driver: default
config:
- subnet: 172.18.0.0/22
gateway: 172.18.0.1
ip_range: 172.18.1.0/28
internalbr:
driver: bridge
ipam:
config:
- subnet: 10.123.0.0/24
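With this in place, the container and its own host can reach each other over the internal bridge while everything else keeps using the macvlan address; for example:

# from HOST_1: reach the container via the bridge network
ping -c 1 10.123.0.2
# from CONTAINER_1: reach HOST_1 via the bridge gateway (10.123.0.1 by default)
ping -c 1 10.123.0.1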