Docker isolated network receives packets from outside

I set up a Docker bridge network (on Linux) for the purpose of testing what the network traffic of individual applications (containers) looks like. A key requirement for the network is therefore that it is completely isolated from traffic that originates from other applications or devices.
A simple example I created with Compose is a ping container that sends ICMP packets to another container, with a third container running tcpdump to collect the traffic:
version: '3'
services:
  ping:
    image: 'detlearsom/ping'
    environment:
      - HOSTNAME=blank
      - TIMEOUT=2
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=1
    networks:
      - capture
  blank:
    image: 'alpine'
    command: sleep 300
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=1
    networks:
      - capture
  tcpdump:
    image: 'detlearsom/tcpdump'
    volumes:
      - '$PWD/data:/data'
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=1
    network_mode: 'service:ping'
    command: -v -w "/data/dump-011-ping2-${CAPTURETIME}.pcap"
networks:
  capture:
    driver: "bridge"
    internal: true
Note that I have set the network to internal, and I have also disabled IPv6. However, when I run it and collect the traffic, in addition to the expected ICMP packets I get IPv6 packets:
10:42:40.863619 IP6 fe80::42:2aff:fe42:e303 > ip6-allrouters: ICMP6, router solicitation, length 16
10:42:43.135167 IP6 fe80::e437:76ff:fe9e:36b4.mdns > ff02::fb.mdns: 0 [2q] PTR (QM)? _ipps._tcp.local. PTR (QM)? _ipp._tcp.local.
10:42:37.875646 IP6 fe80::e437:76ff:fe9e:36b4.mdns > ff02::fb.mdns: 0*- [0q] 2/0/0 (Cache flush) PTR he...F.local., (Cache flush) AAAA fe80::e437:76ff:fe9e:36b4 (161)
What is even stranger is that I receive UDP packets from port 57621:
10:42:51.868199 IP 172.25.0.1.57621 > 172.25.255.255.57621: UDP, length 44
This port is used by Spotify for local discovery, so this traffic most likely originates from the Spotify application running on the host machine.
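Even with internal: true, Docker still creates a Linux bridge on the host and assigns it the gateway address (172.25.0.1 here), so broadcast and multicast generated by the host can show up inside the network. A diagnostic sketch to confirm this on the host (the interface name assumes Docker's usual br-<network-id-prefix> naming, i.e. br-35512f852332 for the network inspect output below):

# Which host interface owns the gateway address of the capture network?
ip -br addr show | grep 172.25.0.1

# Which host process is bound to UDP 57621 (Spotify discovery is the usual suspect)?
sudo ss -ulpn | grep 57621

# Capture on the host side of the bridge to confirm the broadcast/multicast
# traffic is injected by the host rather than by the containers.
sudo tcpdump -ni br-35512f852332 'udp port 57621 or ip6'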
My question: Why do I see this traffic in my network that is supposed to be isolated?
For anyone interested, here is the network configuration:
[
{
"Name": "capture-011-ping2_capture",
"Id": "35512f852332351a9f677f75b522982aa6bd288e813a31a3c36477baa005c0fd",
"Created": "2018-08-07T10:42:31.610178964+01:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.25.0.0/16",
"Gateway": "172.25.0.1"
}
]
},
"Internal": true,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"dac25cb8810b2c786735a76c9b8387d1cfb4d6006dbb7549f5c7c3f381d884c2": {
"Name": "capture-011-ping2_tcpdump_1",
"EndpointID": "2463a46cf00a35c8c77ff9f224ff052aea7f061684b7a24b41dab150496f5c3d",
"MacAddress": "02:42:ac:19:00:02",
"IPv4Address": "172.25.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "capture",
"com.docker.compose.project": "capture-011-ping2",
"com.docker.compose.version": "1.22.0"
}
}
]

Related

docker does not connect to bridge network

I have a container and I want to connect to a DB. The Docker host machine has the IP X.X.2.26 and the database X.X.2.27. I tried connecting the network in bridge mode, but I can't connect to the database from the container. The host machine itself can reach the database. (A basic reachability check is sketched after the inspect output below.)
This is my docker-compose.yml
version: '3.7'

networks:
  sfp:
    name: sfp
    driver: bridge

services:
  sfpapi:
    image: st/sfp-api:${VERSION-latest}
    container_name: "sfp-api"
    restart: always
    ports:
      - "8082:8081"
    networks:
      - sfp
    environment:
      - TZ=America/Mexico_City
      - SPRING_DATASOURCE_URL
      - SPRING_DATASOURCE_USERNAME
      - SPRING_DATASOURCE_PASSWORD

  app:
    image: st/sfp-app:${VERSION-latest}
    container_name: "app"
    restart: always
    ports:
      - "8081:80"
    networks:
      - sfp
    environment:
      - API_HOST
If I check the networks, I can see that it was created successfully.
docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
86a58ac8a053   bridge    bridge    local
1890c6433c09   host      host      local
bab0a88222a3   none      null      local
01a411ad42df   sfp       bridge    local
But if I inspect the network, I can't see any added containers:
docker network inspect bridge
[
{
"Name": "bridge",
"Id": "86a58ac8a05398bb827252b2dbe4c99e52aedf0896be6aa6c4358c41cf0e766e",
"Created": "2022-04-06T12:50:09.922881204-05:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "false",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
This is the inspect of the container:
docker inspect --format "{{ json .NetworkSettings.Networks }}" sfp-api
{"sfp":{"IPAMConfig":null,"Links":null,"Aliases":["sfpapi","bab30efe892b"],"NetworkID":"2076ee845b06df6ace975e1cf3fd360eb174ee97a9ae608911c243b08e98aa42","EndpointID":"3837a6f55449a59267aea7bbafc754d0fab6fedad282e280cce9d880d0c299a7","Gateway":"172.26.0.1","IPAddress":"172.26.0.3","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:1a:00:03","DriverOpts":null}}

get the IP address to use for a docker container on a network

My NAS is where I run my containers. It sits on 192.168.1.23 on my network.
I am running a few containers inside a user-defined network. Here is the docker network inspect output (I've removed the containers manually):
[
{
"Name": "traefik2_proxy",
"Id": "fb2924fe59fbb0436c72f11cb028df832a473a165162ecf08b7e3a946cfa2d3c",
"Created": "2020-05-13T23:23:16.16424119+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.90.0/24",
"Gateway": "192.168.90.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {},
"Labels": {}
}
]
I have a specific container on that network at IP address 192.168.90.16, for which I have exposed port 9118 using the following in my docker-compose:
ports:
  - target: 9118
    published: 9118
    protocol: tcp
This is the Portainer screenshot (not included here):
I was expecting to be able to connect to that container using 192.168.1.23:9118, but to no avail.
What am I missing? Which setting do I need to change for that container to be reachable at that port on my NAS IP address?
The port that the container was listening on was incorrect. I needed to change the ports configuration to:
ports:
  - target: 9117
    published: 9118
    protocol: tcp
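In other words, published is the host-side port and target must match what the process inside the container actually listens on. A quick way to check both sides (a sketch; the container name is a placeholder, and netstat may not be present in every image):

# What Docker publishes on the host:
docker port <container-name>

# What the process inside the container is actually listening on
# (works only if the image ships netstat or ss):
docker exec <container-name> netstat -tlnp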

Docker network not using all NICs

My Docker container is not reachable on all host network interfaces.
My host server has 2 network interfaces (and IP addresses). When I run my Docker container without a specifically defined Docker network, it works and the container is reachable on both IP addresses.
But when I run it with a self-defined Docker network and add that network to the docker-compose file, only 1 IP works; the other times out. Why does this happen? (A diagnostic sketch follows the network inspect output below.)
Docker-compose file
version: '3.7'

services:
  servicename-1:
    #network_mode: "host"
    image: nginxdemos/hello
    init: true
    ports:
      - 8081:80
    volumes:
      omitted
    environment:
      omitted
    networks:
      - a-netwerk-1

networks:
  a-netwerk-1:
    external:
      name: a-network-1
docker inspect network:
[
{
"Name": "a-network-1",
"Id": "df4ab5e3285c75b71f8f88f66c4c5d85ad8f2f9b17e66f960b11778007810b96",
"Created": "2020-01-30T10:55:14.853289976+01:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.29.0.0/16",
"Gateway": "172.29.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"2f2d5b2e22b3066085246ea53d1ca2c9f963b5e9138ae7202d8382be98428476": {
"Name": "test_testservicename_1",
"EndpointID": "c750b0d9d6ae82fec109da15d385b936f79f09bf814dd3b8d03642a2f03d46e2",
"MacAddress": "02:42:ac:1d:00:02",
"IPv4Address": "172.29.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
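One way to narrow down where the timeout comes from (a diagnostic sketch; 8081 matches the compose file above, and the two host addresses are placeholders for your NICs):

# Check which address the published port is bound to on the host.
# With "8081:80" Docker should bind 0.0.0.0:8081, i.e. all interfaces.
sudo ss -tlnp | grep 8081

# Check the NAT rules Docker installed for the published port.
sudo iptables -t nat -L DOCKER -n | grep 8081

# Try the container via each host address.
curl -s -o /dev/null -w '%{http_code}\n' http://<first-nic-ip>:8081
curl -s -o /dev/null -w '%{http_code}\n' http://<second-nic-ip>:8081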

Docker Swarm Overlay Network Not Working Between Nodes

I am trying to connect my Docker services together in Docker Swarm.
The network is made up of 2 Raspberry Pis.
I can create an overlay network called test-overlay, and I can see that services on either Raspberry Pi node can connect to the network.
My problem: I cannot link to services between nodes with the overlay network.
Given the following configuration of nodes and services, service1 can use the address http://service2 to connect to service2, but it does NOT work for http://service3. However, http://service3 is accessible from service4.
node1:
- service1
- service2
node2:
- service3
- service4
I am new to Docker Swarm and any help is appreciated.
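For what it's worth, a quick check of whether this is a DNS problem or a connectivity problem can be done from inside a container on node1 (a sketch; service names follow the layout above, and nslookup/wget depend on what the service images ship):

# On node1, find the container backing service1 and get a shell in it
docker ps --filter name=service1
docker exec -it <container-id> sh

# Inside the container: does swarm DNS resolve the service on the other node?
nslookup service3

# And can we actually reach it over the overlay?
wget -qO- http://service3 || echo "request failed"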
Inspecting the overlay
I have run the command sudo docker inspect network test-overlay on both nodes.
On the master node this returns the following:
[
{
"Name": "test-overlay",
"Id": "skxhz8sb3f82dhh9jt9t3j5yl",
"Created": "2018-04-15T20:31:20.629719732Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"3acb436a0cc9a4d584d537edb1546988d334afa4793cc4fae4dd6ac9b48828ea": {
"Name": "docker-registry.1.la1myuodpkq0x5h39pqo6lt7f",
"EndpointID": "66887fb1f5f253c6cbec149aa51ab85168903fdd2290719f26d2bcd8d6c68dc8",
"MacAddress": "02:42:0a:00:00:04",
"IPv4Address": "10.0.0.4/24",
"IPv6Address": ""
},
"786e1fee538f81fe41ccd082800c646a0e191b0fd912e5c15530e61c248e81ac": {
"Name": "portainer.1.qyvvlcdqo5sewuku3eiykaplz",
"EndpointID": "0d29e5452c208ed637ae2e7dcec026f39d2431e8e0e20765a9e0e6d6dfdc60ca",
"MacAddress": "02:42:0a:00:00:15",
"IPv4Address": "10.0.0.21/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4101"
},
"Labels": {},
"Peers": [
{
"Name": "d049fc8f8ae1",
"IP": "192.168.1.2"
},
{
"Name": "6c0da128f308",
"IP": "192.168.1.3"
}
]
}
]
On the worker node this returns the following:
[
{
"Name": "test-overlay",
"Id": "skxhz8sb3f82dhh9jt9t3j5yl",
"Created": "2018-04-20T14:04:57.870696195Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"4cb50161119e4b58a472e1b5c380c301bbb00a23fc99fc2e0712a8c4bde6d9d4": {
"Name": "minio.1.fo2su2quv8herbmnxqfi3g8w2",
"EndpointID": "3e85786304ed08f02c09b8e1ed6a153a3b4c2ef7afe503a1b0ca6cf341521645",
"MacAddress": "02:42:0a:00:00:d6",
"IPv4Address": "10.0.0.214/24",
"IPv6Address": ""
},
"ce99b3788a4f9438e276e0f52a8f4d29fa09179e3e93b31b14f45339ce3c5315": {
"Name": "load-balancer.1.j64h1eecsc05b7d397ejvedv3",
"EndpointID": "3b7e73d27fe30151f2dc2a0ba8a5afc7f74fd283159a03a592be10e297f58d51",
"MacAddress": "02:42:0a:00:00:d0",
"IPv4Address": "10.0.0.208/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4101"
},
"Labels": {},
"Peers": [
{
"Name": "d049fc8f8ae1",
"IP": "192.168.1.2"
},
{
"Name": "6c0da128f308",
"IP": "192.168.1.3"
}
]
}
]
It seems this problem was caused by the nodes not being able to connect to each other on the required ports:
TCP port 2377 for cluster management communications
TCP and UDP port 7946 for communication among nodes
UDP port 4789 for overlay network traffic
Before you open all of those ports, though, consider that a better and simpler solution is to use the docker image portainer/agent. As the documentation says,
The Portainer Agent is a workaround for a Docker API limitation when using the Docker API to manage a Docker environment.
https://portainer.readthedocs.io/en/stable/agent.html
I hope this helps anyone else experiencing this problem.
I am not able to leave a comment yet, but I managed to solve this issue with the solution provided by X0r0N, and I am leaving this answer to help people in my position find a solution in the future.
I was deploying 10 Droplets in DigitalOcean with the default Docker image provided by Docker. Its description says that it closes all ports except those related to Docker, but this clearly does not cover Swarm use cases.
After allowing ports 2377, 4789 and 7946 in ufw, Docker Swarm is now working as expected.
To make this answer stand on its own, the ports map to the following functionality:
TCP port 2377: Cluster Management Communication
TCP and UDP port 7946: Communication between nodes
UDP port 4789: Overlay Network Traffic
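If you are using ufw as in the answer above, the rules would look roughly like this (a sketch, to be run on every node; tighten the source addresses if your setup allows):

sudo ufw allow 2377/tcp   # cluster management (manager nodes)
sudo ufw allow 7946/tcp   # node-to-node communication
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp   # VXLAN overlay network traffic
sudo ufw reload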
Check whether your nodes have the ports the swarm needs to operate opened properly, as described under "Prerequisites" here https://docs.docker.com/network/overlay/:
TCP port 2377 for cluster management communications
TCP and UDP port 7946 for communication among nodes
UDP port 4789 for overlay network traffic
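A quick way to verify those ports are actually reachable between nodes before digging into Docker itself (a sketch; replace the address with the other node's IP, and keep in mind that UDP checks with nc are unreliable since there is no handshake):

# From one node, probe the swarm ports on the other node
nc -zv <other-node-ip> 2377
nc -zv <other-node-ip> 7946
nc -zvu <other-node-ip> 7946
nc -zvu <other-node-ip> 4789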

Docker swarm network not recognizing service/container on worker node. Using Traefik

I'm trying to test out a Traefik load balanced Docker Swarm and added a blank Apache service to the compose file.
For some reason I'm unable to place this Apache service on a worker node. I get a 502 bad gateway error unless it's on the manager node. Did I configure something wrong in the YML file?
networks:
  proxy:
    external: true

configs:
  traefik_toml_v2:
    file: $PWD/infra/traefik.toml

services:
  traefik:
    image: traefik:1.5-alpine
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 5s
      labels:
        - traefik.enable=true
        - traefik.docker.network=proxy
        - traefik.frontend.rule=Host:traefik.example.com
        - traefik.port=8080
        - traefik.backend.loadbalancer.sticky=true
        - traefik.frontend.passHostHeader=true
      placement:
        constraints:
          - node.role == manager
      restart_policy:
        condition: on-failure
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - $PWD/infra/acme.json:/acme.json
    networks:
      - proxy
    ports:
      - target: 80
        protocol: tcp
        published: 80
        mode: ingress
      - target: 443
        protocol: tcp
        published: 443
        mode: ingress
      - target: 8080
        protocol: tcp
        published: 8080
        mode: ingress
    configs:
      - source: traefik_toml_v2
        target: /etc/traefik/traefik.toml
        mode: 444

  server:
    image: bitnami/apache:latest
    networks:
      - proxy
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == worker
      restart_policy:
        condition: on-failure
      labels:
        - traefik.enable=true
        - traefik.docker.network=proxy
        - traefik.port=80
        - traefik.backend=nerdmercs
        - traefik.backend.loadbalancer.swarm=true
        - traefik.backend.loadbalancer.sticky=true
        - traefik.frontend.passHostHeader=true
        - traefik.frontend.rule=Host:www.example.com
You'll see I've enabled swarm and everything
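A quick way to confirm where the Apache task actually landed and whether it is running at all (a sketch; <stack> is a placeholder for the name the stack was deployed under):

# Which node is the apache task scheduled on, and is it running?
docker service ps <stack>_server

# Recent task logs, in case the container itself is failing
docker service logs --tail 50 <stack>_server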
The proxy network is an overlay network and I'm able to see it in the worker node:
ubuntu#staging-worker1:~$ sudo docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
f91525416b42   bridge            bridge    local
7c3264136bcd   docker_gwbridge   bridge    local
7752e312e43f   host              host      local
epaziubbr9r1   ingress           overlay   swarm
4b50618f0eb4   none              null      local
qo4wmqsi12lc   proxy             overlay   swarm
ubuntu#staging-worker1:~$
And when I inspect that network ID
$ docker network inspect qo4wmqsi12lcvsqd1pqfq9jxj
[
{
"Name": "proxy",
"Id": "qo4wmqsi12lcvsqd1pqfq9jxj",
"Created": "2018-02-06T09:40:37.822595405Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"1860b30e97b7ea824ffc28319747b23b05c01b3fb11713fa5a2708321882bc5e": {
"Name": "proxy_visualizer.1.dc0elaiyoe88s0mp5xn96ipw0",
"EndpointID": "d6b70d4896ff906958c21afa443ae6c3b5b6950ea365553d8cc06104a6274276",
"MacAddress": "02:42:0a:00:00:09",
"IPv4Address": "10.0.0.9/24",
"IPv6Address": ""
},
"3ad45d8197055f22f5ce629d896236419db71ff5661681e39c50869953892d4e": {
"Name": "proxy_traefik.1.wvsg02fel9qricm3hs6pa78xz",
"EndpointID": "e293f8c98795d0fdfff37be16861afe868e8d3077bbb24df4ecc4185adda1afb",
"MacAddress": "02:42:0a:00:00:18",
"IPv4Address": "10.0.0.24/24",
"IPv6Address": ""
},
"735191796dd68da2da718ebb952b0a431ec8aa1718fe3be2880d8110862644a9": {
"Name": "proxy_portainer.1.xkr5losjx9m5kolo8kjihznvr",
"EndpointID": "de7ef4135e25939a2d8a10b9fd9bad42c544589684b30a9ded5acfa751f9c327",
"MacAddress": "02:42:0a:00:00:07",
"IPv4Address": "10.0.0.7/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4102"
},
"Labels": {},
"Peers": [
{
"Name": "be4fb35c80f8",
"IP": "manager IP"
},
{
"Name": "4281cfd9ca73",
"IP": "worker IP"
}
]
}
]
You'll see Traefik, Portainer, and Visualizer all present, but not the Apache container on the worker node.
Inspecting the network on the worker node
$ sudo docker network inspect qo4wmqsi12lc
[
{
"Name": "proxy",
"Id": "qo4wmqsi12lcvsqd1pqfq9jxj",
"Created": "2018-02-06T19:53:29.104259115Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"c5725a332db5922a16b9a5e663424548a77ab44ab021e25dc124109e744b9794": {
"Name": "example_site.1.pwqqddbhhg5tv0t3cysajj9ux",
"EndpointID": "6866abe0ae2a64e7d04aa111adc8f2e35d876a62ad3d5190b121e055ef729182",
"MacAddress": "02:42:0a:00:00:3c",
"IPv4Address": "10.0.0.60/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4102"
},
"Labels": {},
"Peers": [
{
"Name": "be4fb35c80f8",
"IP": "manager IP"
},
{
"Name": "4281cfd9ca73",
"IP": "worker IP"
}
]
}
]
It shows up in the network's container list but the manager node containers are not there either.
Portainer is unable to see the apache site when it's on the worker node as well.
This problem is related to this: Creating new docker-machine instance always fails validating certs using openstack driver
Basically the answer is
It turns out my hosting service locked down everything other than 22, 80, and 443 on the Open Stack Security Group Rules. I had to add 2376 TCP Ingress for docker-machine's commands to work.
It helps explain why docker-machine ssh worked but not docker-machine env.
You should look at this https://docs.docker.com/datacenter/ucp/2.2/guides/admin/install/system-requirements/#ports-used and make sure they're all open.
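If your provider uses OpenStack security groups as in the linked issue, the missing rules can be added from the CLI along these lines (a sketch; the security group name and the exact port list are assumptions for your environment):

# Allow the Docker/Swarm management ports into the group
for p in 2376 2377 7946; do
  openstack security group rule create --protocol tcp --dst-port "$p" --ingress <security-group>
done
openstack security group rule create --protocol udp --dst-port 7946 --ingress <security-group>
openstack security group rule create --protocol udp --dst-port 4789 --ingress <security-group>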
