I am trying to get my head around networking with Docker.
First I create a Docker image (based on Ubuntu 22.04) with netcat installed:
FROM ubuntu:22.04
RUN apt-get update -y && apt-get upgrade -y
RUN apt-get install -y netcat-openbsd
Build using:
docker build -f foobar.Dockerfile -t foobar .
I then run this as a container with a static IP (on RHEL8) as follows:
docker network create --driver bridge --subnet 172.20.0.0/16 foobar-net &&
docker run -v /home:/home -it --name foobar --network foobar-net --ip 172.20.0.2 -p 1781:1781 foobar /bin/bash
Inside the container I get netcat to listen for UDP packets on port 1781 using:
nc -l -u -p 1781
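Before involving the host firewall at all, it can be worth confirming the listener is reachable container-to-container. A minimal sketch, assuming the listener above is still running and the foobar-net network exists:

```shell
# Send a UDP datagram from a throwaway sibling container on the same network:
docker run --rm --network foobar-net foobar \
  sh -c 'echo "hello from a sibling" | nc -u -w1 172.20.0.2 1781'
```

If the message shows up in the listening container, the container-to-container path is fine and the problem is confined to the host-side port publishing and firewall rules.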
From the host I can see the IP address:
>docker inspect foobar
"NetworkSettings": {
"Bridge": "",
"SandboxID": "b6019cec78bac4536f1ff66159dc5c724c9f5e205ef68f64fa23b41223a9e66c",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"1781/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "1781"
},
{
"HostIp": "::",
"HostPort": "1781"
}
]
},
"SandboxKey": "/var/run/docker/netns/b6019cec78ba",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"dcs-net": {
"IPAMConfig": {
"IPv4Address": "172.20.0.2"
},
"Links": null,
"Aliases": [
"71efaf0b1ec4"
],
"NetworkID": "84fc84937c72e2461751eb5edb74183c3ce9a5bad83c3310ba81847cd052327d",
"EndpointID": "28f447f842036eb30e90574a4b6a4d94c826224afc42e2ffa68678c51f0d3331",
"Gateway": "172.20.0.1",
"IPAddress": "172.20.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:14:00:02",
"DriverOpts": null
}
}
}
I want to expose this port to the network.
To do this I am trying:
firewall-cmd --zone=external --add-port=1781/udp
firewall-cmd --zone=public --add-forward-port=port=1781:proto=udp:toport=1781:toaddr=172.20.0.2
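Note that firewall-cmd without --permanent only changes the runtime configuration, which is lost on reload or reboot. A sketch of persisting the same rules (using the public zone throughout is an assumption here):

```shell
# Repeat the rules with --permanent, then reload so the runtime and
# permanent configurations stay in sync:
firewall-cmd --permanent --zone=public --add-port=1781/udp
firewall-cmd --permanent --zone=public \
  --add-forward-port=port=1781:proto=udp:toport=1781:toaddr=172.20.0.2
firewall-cmd --reload
```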
But I am having trouble connecting to it from the local machine:
>nc -u 127.0.0.1 -p 1781
hello world!
Ncat: Connection refused.
>nc -u 127.0.0.20 -p 1781
hello world!
Ncat: Connection refused.
>nc -u <local machines IP> -p 1781
hello world!
Ncat: Connection refused.
I can see the rules I think are relevant using iptables-save:
-A DOCKER -p udp -m udp --dport 1781 -j DNAT --to-destination 172.20.0.2:1781
-A DOCKER ! -i br-84fc84937c72 -p tcp -m tcp --dport 1781 -j DNAT --to-destination 172.20.0.2:1781
-A POSTROUTING -s 172.20.0.2/32 -d 172.20.0.2/32 -p tcp -m tcp --dport 1781 -j MASQUERADE
or:
>firewall-cmd --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: eno2np1
sources:
services: cockpit dhcpv6-client ssh
ports: 9200/tcp 1781/udp
protocols:
forward: no
masquerade: no
forward-ports:
port=1781:proto=udp:toport=1781:toaddr=172.20.0.2
source-ports:
icmp-blocks:
rich rules:
I do not entirely grok what I am doing here. Either my firewalld or docker network or both are wrong. Please help me grok.
This is a very common use case and so likely a duplicate but I cannot find anything which helps me identify what I'm missing so far.
By default, when you use the -p option in Docker, it publishes a TCP port.
You should use -p 1781:1781/udp:
docker run -v /home:/home -it --name foobar --network foobar-net --ip 172.20.0.2 -p 1781:1781/udp foobar /bin/bash
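Once the container is recreated with the UDP mapping, you can confirm the protocol of the published port, for example:

```shell
docker port foobar
# should now list a UDP mapping along the lines of:
# 1781/udp -> 0.0.0.0:1781
```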
Overview
I am using a course to learn how to Dockerize my ASP.NET Core application. I have a networking issue with the token server I am trying to use in my configuration.
The ASP.NET Core Web application (webmvc) allows authorization through a token server (tokenserver).
docker-compose for the services
tokenserver
tokenserver:
build:
context: .\src\Services\TokenServiceApi
dockerfile: Dockerfile
image: shoes/token-service
environment:
- ASPNETCORE_ENVIRONMENT=ContainerDev
- MvcClient=http://localhost:5500
container_name: tokenserviceapi
ports:
- "5600:80"
networks:
- backend
- frontend
depends_on:
- mssqlserver
The tokenserver knows about the webmvc URL.
webmvc
webmvc:
build:
context: .\src\Web\WebMvc
dockerfile: Dockerfile
environment:
- ASPNETCORE_ENVIRONMENT=ContainerDev
- CatalogUrl=http://catalog
- IdentityUrl=http://10.0.75.1:5600
container_name: webfront
ports:
- "5500:80"
networks:
- frontend
depends_on:
- catalog
- tokenserver
Running the container confirms that webmvc will try to reach the identity server at http://10.0.75.1:5600.
By running ipconfig in my Windows machine I confirm that DockerNAT is using 10.0.75.1:
Ethernet adapter vEthernet (DockerNAT):
Connection-specific DNS Suffix . :
IPv4 Address. . . . . . . . . . . : 10.0.75.1
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . :
http://10.0.75.1:5600/ is not accessible from the host machine, while http://localhost:5600 is.
However, I have to rely on the DockerNAT IP because webmvc must access the service from its own container, where localhost:5600 does not make sense:
docker exec -it webfront bash
root@be382eb4608b:/app# curl -i -X GET http://10.0.75.1:5600
HTTP/1.1 404 Not Found
Date: Fri, 03 Jan 2020 08:55:48 GMT
Server: Kestrel
Content-Length: 0
root@be382eb4608b:/app# curl -i -X GET http://localhost:5600
curl: (7) Failed to connect to localhost port 5600: Connection refused
Token service container inspect (relevant parts)
"HostConfig": {
"Binds": [],
....
"NetworkMode": "shoesoncontainers_backend",
"PortBindings": {
"80/tcp": [
{
"HostIp": "",
"HostPort": "5600"
}
]
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "6637a47944251a4dc59205dc6e03670bc4b03f8bf38a7be0dc11b72adf6a3afa",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"80/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "5600"
}
]
},
"SandboxKey": "/var/run/docker/netns/6637a4794425",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"shoesoncontainers_backend": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"tokenserver",
"d31d9b5f4ec7"
],
"NetworkID": "a50a9cee66e6a65a2bb90a7035bae4d9716ce6858a17d5b22e147dfa8e33d686",
"EndpointID": "405b1beb5e20636bdf0d019b36494fd85ece86cfbb8c2d57283d64cc20e5d760",
"Gateway": "172.28.0.1",
"IPAddress": "172.28.0.4",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:1c:00:04",
"DriverOpts": null
},
"shoesoncontainers_frontend": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"tokenserver",
"d31d9b5f4ec7"
],
"NetworkID": "b7b3e8599cdae7027d0bc871858593f41fa9b938c13f906b4b29f8538f527ca0",
"EndpointID": "e702b29016b383b7d5872f8c55cad0f189d6f58f2631316cf0313f3df30331c0",
"Gateway": "172.29.0.1",
"IPAddress": "172.29.0.3",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:1d:00:03",
"DriverOpts": null
}
}
}
I have also created an inbound rule for port 5600 in Windows Defender Firewall with Advanced Security.
Question: How to access Docker container through DockerNAT IP address on Windows 10?
I think you are looking for host.docker.internal. It's a special DNS name which allows you to connect from a container to a service running on the host, or to a container whose port is exposed on the host.
See the official documentation.
You can find longer explanations here.
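For example, to reach the token server from the webmvc container through the port published on the host (container name webfront and port 5600 taken from the compose file above):

```shell
# From inside the webmvc container, use the special host name instead of
# the DockerNAT IP:
docker exec -it webfront curl -i http://host.docker.internal:5600
```

In the compose file this would mean setting IdentityUrl=http://host.docker.internal:5600.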
I am not sure why it does not work as expected, but using the information provided here I was able to figure out how to make it work.
You can try adding an incoming rule in the firewall:
Example:
protocol: any/tcp/udp/...
program: any
action: allow
local port: any
remote port: any
local address: 10.0.75.1
remote address: 10.0.75.0/24
Or you can try using the address 10.0.75.2 instead of 10.0.75.1.
For me, the second solution worked.
I have run Ubuntu Docker containers (mysql and a Node.js server app) on Windows:
docker run -d --network bridge --name own -p 80:3000 own:latest
docker run -d --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=12345678 mysql:5
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3ce966e43414 own:latest "docker-entrypoint.s…" 5 minutes ago Up 5 minutes 0.0.0.0:80->3000/tcp own
ed10cfc93dd5 mysql:5 "docker-entrypoint.s…" 20 minutes ago Up 20 minutes 0.0.0.0:3306->3306/tcp, 33060/tcp mysql
When I run the server app directly via cmd (NOT via the Docker VM) and open localhost:3000, all is good: I see a successful connection to the docker container at 0.0.0.0:3306. But when I run:
docker start own
and check the browser at 0.0.0.0:80, I see Error: connect ECONNREFUSED 127.0.0.1:3306
docker network ls
NETWORK ID NAME DRIVER SCOPE
019f0886d253 bridge bridge local
fa1842bad14c host host local
85e7d1e38e14 none null local
docker inspect bridge
[
{
"Name": "bridge",
"Id": "019f0886d253091c1367863e38a199fe9b539a72ddb7575b26f40d0d1b1f78dc",
"Created": "2019-11-19T09:15:53.2096944Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"a79ec12c4cc908326c54abc2b47f80ffa3da31c5e735bf5ff2755f23b9d562dd": {
"Name": "own",
"EndpointID": "2afc225e29138ff9f1da0f557e9f7659d3c4ccaeb5bfaa578df88a672dac003f",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
},
"ed10cfc93dd5eda7cfb8a26e5e4b2a8ccb4e9db7a4957b3d1048cb93f5137fd4": {
"Name": "mysql",
"EndpointID": "ea23d009f959d954269c0554cecf37d01f8fe71481965077f1372df27f05208a",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
Could I somehow attach the own container to the bridge network in the same way as mysql, so that the mysql container becomes reachable from the own container? Please help, what should I do?
#create network mynetwork
docker network create --subnet 172.17.0.0/16 mynetwork
#create own container (without starting it)
docker create -d --name own -p 80:3000 own:latest
#add own container to the network mynetwork
docker network connect --ip 172.17.0.2 mynetwork own
#start container own
docker start own
#same as above but with different ip
docker create -d --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=12345678 mysql:5
docker network connect --ip 172.17.0.3 mynetwork mysql
docker start mysql
When you stop and remove your containers, you may remove the network this way:
docker network rm mynetwork
Or, if you don't remove it, there is no need to create it again as above; just connect your new/other containers to it.
In your application you should use 172.17.0.3 as the MySQL address.
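As an aside, on a user-defined network Docker's embedded DNS also resolves container names, so the hard-coded IP can usually be replaced by the name mysql. A quick check, assuming getent is available in the app image:

```shell
# Resolve the mysql container by name from inside the own container:
docker exec -it own getent hosts mysql
```

A connection string such as mysql://root:12345678@mysql:3306 then keeps working even if the container's IP changes.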
I am sure the answer here is something really obvious that I am missing. I have Docker for Windows installed on a Win 10 Pro machine. The Windows machine is on the 192.168.40.0/24 network.
I pull and run RabbitMQ as follows:
docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3-management
And I can see that it is running successfully:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3cabceeade6e rabbitmq:3-management "docker-entrypoint.s…" 7 minutes ago Up 7 minutes 4369/tcp, 5671-5672/tcp, 15671-15672/tcp, 25672/tcp some-rabbit
However I cannot telnet to either 5671 or 15672 on 127.0.0.1. I have also tried disabling the Windows firewall with no luck.
Not sure how this relates but Docker is configured with the following networking settings:
EDIT: The IP address information is:
"NetworkSettings": {
"Bridge": "",
"SandboxID": "707c66b726b25c80abfebb1712d3bb0ae588dd77c996013bb528de7ac061edd4",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"15671/tcp": null,
"15672/tcp": null,
"25672/tcp": null,
"4369/tcp": null,
"5671/tcp": null,
"5672/tcp": null
},
"SandboxKey": "/var/run/docker/netns/707c66b726b2",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "6e5ba9a4596967d98def608e18c9fd925a6ce036a84cd9d616f9f35d561ce68d",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "02:42:ac:11:00:02",
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "38f30e8dcf669b9419be3a03f1f296e0bed71d970516c4a1e581d37772bd1b55",
"EndpointID": "6e5ba9a4596967d98def608e18c9fd925a6ce036a84cd9d616f9f35d561ce68d",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:02",
"DriverOpts": null
}
}
}
So what have I missed here that is preventing me from accessing the web management interface at http://127.0.0.1:15672? I can see the server is running on 172.17.0.2, but that is clearly not on my network.
So I finally figured out my stupidity:
I was adding the port mappings at the end of the command:
docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3-management -p 15672:15672 -p 5672:5672
instead of before the image name:
docker run -d --hostname my-rabbit -p 15672:15672 -p 5672:5672 --name some-rabbit rabbitmq:3-management
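The reason the first form fails is that everything after the image name is passed to the container as its command, so the -p flags were never seen by docker run at all. A small illustration (using the alpine image as an arbitrary example):

```shell
# Flags placed after the image name become arguments of the container's
# command, not options parsed by docker run:
docker run --rm alpine echo -p 15672:15672
# simply prints the flags rather than publishing any port
```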
I have a problem where docker-compose containers aren't able to reach the internet. Manually created containers via the docker cli or kubelet work just fine.
This is on an AWS EC2 node created using Kops with Calico overlay (I think that may be unrelated, however).
Here's the docker-compose:
version: '2.1'
services:
app:
container_name: app
image: "debian:jessie"
command: ["sleep", "99999999"]
app2:
container_name: app2
image: "debian:jessie"
command: ["sleep", "99999999"]
This fails (the ping hangs with no replies):
# docker exec -it app ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
docker-compose container<->container works (as expected):
# docker exec -it app ping app2
PING app2 (172.19.0.2): 56 data bytes
64 bytes from 172.19.0.2: icmp_seq=0 ttl=64 time=0.098 ms
Manually created container works fine:
# docker run -it -d --name app3 debian:jessie sh -c "sleep 99999999"
# docker exec -it app3 ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: icmp_seq=0 ttl=37 time=9.972 ms
So it seems like docker-compose containers can't reach the internet.
Here's the NetworkSettings from app3, which works:
"NetworkSettings": {
"Bridge": "",
"SandboxID": "54168ea912b9caa842b208f36dac80a588ebdc63501a700379fb1b732a41d3ac",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": "/var/run/docker/netns/54168ea912b9",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "cdddee0f3e25e7861a98ba6aff33652619a3970c061d0ed2a5dc5bd2b075b30d",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "02:42:ac:11:00:02",
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "46e8bc586d48c9a57e2886f7f35f7c2c8396f8084650fcc2bf1e74788df09e3f",
"EndpointID": "cdddee0f3e25e7861a98ba6aff33652619a3970c061d0ed2a5dc5bd2b075b30d",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:02"
}
}
}
From one of the docker-compose containers (fails):
"NetworkSettings": {
"Bridge": "",
"SandboxID": "6b79a6b45f099c65f89adf59eb50eadff2362942f316b05cf20ae1959ca9b88b",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": "/var/run/docker/netns/6b79a6b45f09",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"root_default": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"app2",
"4f48647ba5bb"
],
"NetworkID": "ffb540b2b9e2945908477a755a43d3505aea6ed94ef5fd944909a91fb104ce8e",
"EndpointID": "48aff2f00bb4bd670b5178b459a353ac45f7d3efbfb013c1026064022e7c4e59",
"Gateway": "172.19.0.1",
"IPAddress": "172.19.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:13:00:02"
}
}
}
So it seems like the major difference is that the docker-compose containers aren't created with a top-level IPAddress or Gateway (those fields are only populated under Networks).
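One concrete thing to compare is the NAT configuration for the two bridges. The default docker0 bridge normally gets a MASQUERADE rule in the nat table; a diagnostic sketch checking whether the compose network (172.19.0.0/16 and br-ffb540b2b9e2 from the output above) got one too:

```shell
# List POSTROUTING rules in the nat table and look for both subnets:
iptables -t nat -S POSTROUTING | grep -E '172\.(17|19)\.0\.0/16'
# A rule like the following should exist for the compose network; if it is
# missing, traffic from those containers is never NATed out to the internet:
# -A POSTROUTING -s 172.19.0.0/16 ! -o br-ffb540b2b9e2 -j MASQUERADE
```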
Some background info:
# docker version
Client:
Version: 1.12.6
API version: 1.24
Go version: go1.6.4
Git commit: 78d1802
Built: Tue Jan 10 20:17:57 2017
OS/Arch: linux/amd64
Server:
Version: 1.12.6
API version: 1.24
Go version: go1.6.4
Git commit: 78d1802
Built: Tue Jan 10 20:17:57 2017
OS/Arch: linux/amd64
# docker-compose version
docker-compose version 1.15.0, build e12f3b9
docker-py version: 2.4.2
CPython version: 2.7.13
OpenSSL version: OpenSSL 1.0.1t 3 May 2016
# ip route
default via 10.20.128.1 dev eth0
10.20.128.0/20 dev eth0 proto kernel scope link src 10.20.140.184
100.104.10.64/26 via 10.20.136.0 dev eth0 proto bird
100.109.150.192/26 via 10.20.152.115 dev tunl0 proto bird onlink
100.111.225.192 dev calic6f21d462fc scope link
blackhole 100.111.225.192/26 proto bird
100.111.225.193 dev calief8dddb6a0d scope link
100.111.225.195 dev cali8ca1dd867c3 scope link
100.111.225.196 dev cali34426885f86 scope link
100.111.225.197 dev cali6cae60de42a scope link
100.111.225.231 dev calibd569acd2f3 scope link
100.115.17.64/26 via 10.20.148.89 dev tunl0 proto bird onlink
100.115.237.64/26 via 10.20.167.9 dev tunl0 proto bird onlink
100.117.246.128/26 via 10.20.150.249 dev tunl0 proto bird onlink
100.118.80.0/26 via 10.20.162.215 dev tunl0 proto bird onlink
100.119.204.0/26 via 10.20.135.183 dev eth0 proto bird
100.123.178.128/26 via 10.20.170.43 dev tunl0 proto bird onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
172.18.0.0/16 dev br-bd6445b00ccf proto kernel scope link src 172.18.0.1
172.19.0.0/16 dev br-ffb540b2b9e2 proto kernel scope link src 172.19.0.1
The iptables rules are a bit long, so I am not posting them for now (if they were the problem, I would expect them to also interfere with the containers not created by docker-compose, so I think iptables are unrelated).
Anyone know what's going on?
I have a container with EXPOSE 27017 in its Dockerfile, a public IP, and published port 27017. Here is the related info from the ice inspect output:
...
"PortBindings": {
"27017/tcp": [
{
"HostPort": "27017"
}
]
},
...
"NetworkSettings": {
"Bridge": "",
"Gateway": "",
"IpAddress": "172.31.0.16",
"IpPrefixLen": 0,
"PortMapping": null,
"Ports": {
"27017/tcp": [
{
"HostIp": "134.168.18.146",
"HostPort": "27017"
}
]
},
"PublicIpAddress": "134.168.18.146"
},
Still, I can't connect to the database using the public IP, and nmap shows port 27017 as filtered.
Are there additional steps needed to expose the container's port?
This is because when nmap sends a packet with the SYN flag to that port, the server does not respond, whereas a SYN sent to some other, closed port gets a response with the RST,ACK flags. Port 27017 therefore shows as filtered: the firewall (or the server itself) is blocking either the probe or its response.
You can check it with hping:
$ hping3 -c 3 -S -p 27017 IP
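For comparison, probing a port you expect to be closed (27018 here is just a hypothetical example) should elicit an RST,ACK reply from an unfiltered host:

```shell
$ hping3 -c 3 -S -p 27018 IP
```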