I'm trying to containerize an application that communicates with nodes on an IPv6 WPAN network. When I run my application on the bare OS, I can TX and RX on the network, and the IP addresses in the packets I receive match nodes on the WPAN. When run in a container, I can receive messages from the nodes, but the IP addresses and port numbers in the messages don't match any nodes, and attempts to TX or ping nodes on the WPAN come back unreachable. The Docker documentation was light on IPv6, so I am not sure if I have something configured wrong.
My network config in the docker-compose.yml:
networks:
  default:
    driver: bridge
    enable_ipv6: true
    ipam:
      driver: default
      config:
        - subnet: 2001:3984:4989::/64
          gateway: 2001:3984:4989::1
Ports mapping for the container:
ports:
  - "5000:5000"
  - "5683:5683/udp"
RX snip from a CoAP message received by the server running in the container:
rsinfo: {
  address: '::ffff:192.168.80.1',
  family: 'IPv6',
  port: 51028,
  size: 13
},
I'm not sure if this is the preferred way (since it does not actually bridge the WPAN and Docker networks), but I was able to connect by removing my "networks.default" section and adding "network_mode: host". This caused a bit of an issue, though, because I couldn't resolve other container names unless I added "network_mode: host" to each container.
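For reference, that workaround looks roughly like this (a minimal sketch; the service name app is hypothetical):
services:
  app:
    # shares the host's network stack, so WPAN addresses arrive unmodified
    network_mode: host
    # note: published ports and the Compose default network are ignored in
    # host mode, which is why inter-container name resolution broke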
Related
For a testing situation, I've created a docker-compose file which stands up two containers:
version: "3.8"
services:
foo:
ports:
- 5000:5000/tcp
networks:
- default
...
bar:
networks:
- default
...
networks:
default:
ipam:
driver: internal
config:
- subnet: 172.100.0.0/16
For the purposes of my tests, I need bar to be unreachable from the host. Unfortunately, when I run docker-compose up, ip addr shows that my host is on that network at 172.100.0.1.
How do I change my YAML file to accomplish this?
Caveat: the address range you've selected (172.100.0.0/16) is a routable address range that apparently belongs to Charter Communications. This means that attempts to reach addresses in that range -- if there's not a specific route to your containers -- will get routed via your default gateway and potentially to actual machines somewhere on the internet.
Anyway, to your question:
I don't believe this is going to be possible using only Docker. For a bridged network, Docker will (a) always create a host bridge and (b) always assign an IP address to that bridge, which means the containers will always be routable from the host.
You can create containers that have no network connectivity by using the none network:
version: "3.8"
services:
foo:
network_mode: none
image: docker.io/alpinelinux/darkhttpd
bar:
network_mode: none
image: docker.io/alpinelinux/darkhttpd
And then manually wire them up to a bridge using ip commands:
# create a bridge on the host and bring it up
ip link add br-internal type bridge
ip link set br-internal up

# find each container's PID so we can reach its network namespace
foo_pid="$(docker inspect project_foo_1 --format '{{ .State.Pid }}')"
bar_pid="$(docker inspect project_bar_1 --format '{{ .State.Pid }}')"

# expose the container namespaces to the ip netns machinery
mkdir -p /run/netns
ln -s /proc/$foo_pid/ns/net /run/netns/foo
ln -s /proc/$bar_pid/ns/net /run/netns/bar

# create a veth pair for foo: one end inside the container, the other on the bridge
ip link add foo-ext type veth peer name foo-int netns foo
ip -n foo addr add 172.100.0.10/26 dev foo-int
ip -n foo link set foo-int up
ip link set foo-ext up master br-internal

# and the same for bar
ip link add bar-ext type veth peer name bar-int netns bar
ip -n bar addr add 172.100.0.11/26 dev bar-int
ip -n bar link set bar-int up
ip link set bar-ext up master br-internal
Now we can ping bar from foo:
$ docker compose exec foo ping -c1 172.100.0.11
PING 172.100.0.11 (172.100.0.11): 56 data bytes
64 bytes from 172.100.0.11: seq=0 ttl=42 time=0.080 ms
--- 172.100.0.11 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.080/0.080/0.080 ms
And foo from bar:
$ docker compose exec bar ping -c1 172.100.0.10
PING 172.100.0.10 (172.100.0.10): 56 data bytes
64 bytes from 172.100.0.10: seq=0 ttl=42 time=0.047 ms
--- 172.100.0.10 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.047/0.047/0.047 ms
But our host doesn't have a route to either of these addresses:
$ ip route get 172.100.0.10
172.100.0.10 via 192.168.1.1 dev eth0 src 192.168.1.175 uid 1000
cache
That shows that attempts to reach addresses on this network will get routed via our default gateway, rather than to the containers.
Related
Instead of listening on a single IP address, e.g. localhost:
ports:
  - "127.0.0.1:80:80"
I want the container to listen only on a local network, e.g.:
ports:
  - "10.0.0.0/16:80:80"
ERROR: The Compose file './docker-compose.yml' is invalid because:
services.SERVICE.ports contains an invalid type, it should be a number, or an object
Is this possible?
I don't want to use things like swarm mode etc., yet.
If an IP range is not supported, maybe at least multiple IP addresses, like 10.0.0.2 and 10.0.0.3?
ERROR: for CONTAINER Cannot start service SERVICE: driver failed programming external connectivity on endpoint CONTAINER (...): Error starting userland proxy: listen tcp 10.0.0.3:80: bind: cannot assign requested address
ERROR: for SERVICE Cannot start service SERVICE: driver failed programming external connectivity on endpoint CONTAINER (...): Error starting userland proxy: listen tcp 10.0.0.3:80: bind: cannot assign requested address
Or is it not even supported to listen on 10.0.0.3?
The host machine is connected to 10.0.0.0/16:
> ifconfig
ens10: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.0.0.2  netmask 255.255.255.255  broadcast 10.0.0.2
        inet6 f**0::8**0:ff:f**9:b**7  prefixlen 64  scopeid 0x20<link>
        ether **:00:00:**:**:**  txqueuelen 1000  (Ethernet)
Listening on just a single IP address seems not correct; the service is then listening at that one address only.
Let's say your VM has two network interfaces (Ethernet cards):
Network 1 → subnet: 10.0.0.0/24 and IP 10.0.0.100
Network 2 → subnet: 10.0.1.0/24 and IP 10.0.1.200
If you set 127.0.0.1:80:80, that means your service is listening at 127.0.0.1's (localhost) port 80.
If you want to access the service from the 10.0.0.0/24 subnet, you should set 10.0.0.100:80:80 and use the address http://10.0.0.100:80 to connect to your container from external hosts.
If you want to access the service from multiple networks simultaneously, you can bind the container port to multiple host addresses (the IP in each mapping is the host interface address the connection arrives on):
ports:
  - 10.0.0.100:80:80
  - 10.0.1.200:80:80
  - 127.0.0.1:80:80
And don't forget to open port 80 in the VM's firewall, if a firewall exists and restricts that network.
I think you misunderstood this field.
When you map 127.0.0.1:80:80, you map the interface 127.0.0.1 from your host to your container. In the case of 127.0.0.1, you can only access it from inside your host.
When you map 10.0.0.3:80:80, you map the interface 10.0.0.3 from your host to your container, and anything that can reach 10.0.0.3 will have access to your container's mapping.
But in any case, this field will not do any filtering of who can access the container.
EDIT: After your modification, I see that I misunderstood your question. You want Docker to create a bridge interface so that the container doesn't share your host's IP. I don't think this is possible when using port mapping.
If you give Compose ports: (or docker run -p) an IP address, it must be a specific known IP address of a host interface, or 0.0.0.0 for "all interfaces". The Docker daemon gives this specific IP address to a bind(2) call, which takes an address and not a network, and follows the rules in ip(7) for IPv4.
With the output you show, you can only bind containers to 10.0.0.2. If you want to use other IP addresses on the same network, you also need to assign them to the host; see for example How can I (from CLI) assign multiple IP addresses to one interface? on Ask Ubuntu, and then you can bind a container to the newly-added address.
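For example, a sketch with iproute2 (ens10 is taken from the ifconfig output above; the extra address 10.0.0.3 is hypothetical):
# add a second IPv4 address to the existing host interface
sudo ip addr add 10.0.0.3/16 dev ens10
# after this, a mapping like "10.0.0.3:80:80" can bind successfully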
If your system is on multiple physical networks, you can have any number of ports: so long as the host address and host port are unique. In particular you can have multiple ports: that all forward to the same container port.
ports:
  # make this visible to the external load balancer on port 80
  - '192.168.17.2:80:3000'
  # also make this visible to the internal network, again on port 80
  - '10.0.0.2:80:3000'
  # and to the management network, but on port 3000
  - '10.99.0.36:3000:3000'
Again, the host must already have these IP addresses in the ifconfig output.
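A quick way to check (a sketch using standard iproute2):
# list the addresses currently assigned to each host interface
ip -brief addr show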
Related
I have the following setup:
A Raspi with Docker and multiple containers connected to my router. Some containers are on a MACVLAN network and receive regular IP addresses in my LAN (e.g. Pihole, Unbound, etc.); some are on bridged networks and expose certain ports (Portainer, nginx, etc.).
Router LAN (192.x.y.0/24)
|Raspi (192.x.y.5)
|Pihole (192.x.y.11)
|Webserver (192.x.y.20)
|Wireguard (192.x.y.13) - (VPN: 10.x.y.0/32, DNS 192.x.y.11) - (Allowed IPs: 192.x.y.0/24)
|
|Portainer (bridged - exposing 8000, 9000, 9443)
|NGINX (bridged - exposing 81, 80, 443)
When I connect a client through Wireguard:
- I can access the internet (Pihole on 192.x.y.11 works as DNS; ad blocking works)
- I can access Pihole's web UI on 192.x.y.11
- I can access my webserver on 192.x.y.20
NOT working:
- I cannot access the Portainer UI or NGINX UI on their respective forwarded IP:ports, e.g. 192.x.y.5:81 for NGINX
What is missing in my config? I found nothing solving this issue -- please help!
Related
In a project there are three services in the docker-compose.yml:
A VPN.
A container (named first) connected to that VPN using network_mode.
A container (named second) not connected to that VPN.
From first I can get second's IP using the container name (second), but the opposite does not work.
"first" and "second" are simple python scripts sending data to each other using socket.
I can send data from "second" to "first" if I use the IP address instead of the container name, but that is not a solution I can use in the project.
This is the .yml I'm using:
version: '3.9'
services:
  vpn:
    build: ./vpn
    container_name: vpn
    env_file:
      - ss.env
    cap_add:
      - NET_ADMIN
      - NET_RAW
    devices:
      - /dev/net/tun:/dev/net/tun
    dns:
      - 1.1.1.1
  first:
    build: ./first
    container_name: first
    depends_on:
      - vpn
    network_mode: service:vpn
  second:
    build: ./second
    container_name: second
    depends_on:
      - vpn
The relevant part of the Python scripts:
# first.py (socket setup assumed: a UDP socket, given the use of sendto)
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(bytes('message from first', encoding='utf8'), ('second', 37021))

# second.py
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(bytes('message from second', encoding='utf8'), ('first', 37020))
Also, the vpn log:
vpn | 2021-10-20 00:44:21 TUN/TAP device tun0 opened
vpn | 2021-10-20 00:44:21 /sbin/ip link set dev tun0 up mtu 1500
vpn | 2021-10-20 00:44:21 /sbin/ip link set dev tun0 up
vpn | 2021-10-20 00:44:21 /sbin/ip addr add dev tun0 10.8.8.2/24
vpn | 2021-10-20 00:44:21 /sbin/ip route add 104.111.100.109/32 via 192.168.144.1
vpn | 2021-10-20 00:44:21 /sbin/ip route add 0.0.0.0/1 via 10.8.8.1
vpn | 2021-10-20 00:44:21 /sbin/ip route add 128.0.0.0/1 via 10.8.8.1
Your problems come from a misconfiguration of the networks.
First of all, when you start services with docker-compose up -d, you create a default network named after the folder the Compose file is located in. You can check that with docker network ls.
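For instance (illustrative output; the name varies with your project folder):
$ docker network ls
NETWORK ID     NAME            DRIVER    SCOPE
...            myapp_default   bridge    local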
All your services connect to that network by default, unless you define a different default network or change the network mode, as is the case for first.
Basically, let's suppose you have your Compose file under a directory called myapp.
When you start your containers, Docker Compose creates a network named myapp_default.
Your services vpn and second join that network, but first uses the network stack of vpn.
Since first is using the same network namespace as vpn, it can discover second without any problem using the service name.
But since first isn't in the default network, second cannot discover it.
If you want first to be discoverable by second, you shouldn't use the network stack of vpn; just let first join the default network created by Compose (or create another network yourself and have all three services join it).
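A minimal sketch of that last suggestion, showing only the changed service (note this also means first's traffic no longer goes through the vpn container):
  first:
    build: ./first
    container_name: first
    depends_on:
      - vpn
    # network_mode: service:vpn   <- removed: first now joins the default
    # network, so second can resolve it by name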
Related
I have an HTTP health check in my service, exposed on localhost:35000/health. At the moment it always returns 200 OK. The configuration for the health check is done programmatically via the HTTP API rather than with a service config, but in essence it is:
set id: service-id
set name: health check
set notes: consul does a GET to '/health' every 30 seconds
set http: http://127.0.0.1:35000/health
set interval: 30s
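For reference, registering that check through the HTTP API looks roughly like this (a sketch against Consul's /v1/agent/check/register endpoint):
curl -X PUT http://localhost:8500/v1/agent/check/register \
  -d '{
        "ID": "service-id",
        "Name": "health check",
        "Notes": "consul does a GET to /health every 30 seconds",
        "HTTP": "http://127.0.0.1:35000/health",
        "Interval": "30s"
      }'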
When I run consul in dev mode (consul agent -dev -ui) on my host machine directly the health check passes without any problem. However, when I run consul in a docker container, the health check fails with:
2017/07/08 09:33:28 [WARN] agent: http request failed 'http://127.0.0.1:35000/health': Get http://127.0.0.1:35000/health: dial tcp 127.0.0.1:35000: getsockopt: connection refused
The Docker container launches Consul, as far as I am aware, in exactly the same state as the host version:
version: '2'
services:
  consul-dev:
    image: "consul:latest"
    container_name: "net-sci_discovery-service_consul-dev"
    hostname: "consul-dev"
    ports:
      - "8400:8400"
      - "8500:8500"
      - "8600:8600"
    volumes:
      - ./etc/consul.d:/etc/consul.d
    command: "agent -dev -ui -client=0.0.0.0 -config-dir=/etc/consul.d"
I'm guessing the problem is that Consul is trying to make the GET request to the container's loopback interface rather than what I am intending, which is the loopback interface of the host. Is that a correct assumption? More importantly, what do I need to do to correct the problem?
So it transpires that there was a bug in some previous versions of macOS that prevented use of the docker0 network. While the bug is fixed in newer versions, Docker for Mac still supports those older versions, and so it doesn't currently support docker0. See this discussion for details.
The workaround is to create an alias to the loopback interface on the host machine, set the service to listen on either that alias or 0.0.0.0, and configure Consul to send the health check GET request to the alias.
To set the alias (choose a private IP address that's not being used for anything else; I chose a class A address but that's irrelevant):
sudo ifconfig lo0 alias 10.200.10.1/24
To remove the alias:
sudo ifconfig lo0 -alias 10.200.10.1
From the service definition above, the HTTP line should now read:
set http: http://10.200.10.1:35000/health
And the HTTP server listening for the health check requests also needs to be listening on either 10.200.10.1 or 0.0.0.0. This latter option is suggested in the discussion, but I've only tried it with the alias.
I've updated the title of the question to more accurately reflect the problem, now I know the solution. Hope it helps somebody else too.