Firewalld And Container Published Ports

On a KVM guest of my RHEL 8 host, running CentOS 7, I was expecting firewalld to block outside access by default to an ephemeral port published by a Docker container running nginx. To my surprise, the access ISN'T blocked.
To restate the setup: the host (myhost) is running RHEL 8, and it has a KVM guest (myguest) running CentOS 7.
The firewalld configuration on myguest is standard, nothing fancy:
[root@myguest ~]# firewall-cmd --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: eth0 eth1
sources:
services: http https ssh
ports:
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
Here are the eth0 and eth1 interfaces that fall under the firewalld public zone:
[root@myguest ~]# ip a s dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:96:9c:fc brd ff:ff:ff:ff:ff:ff
inet 192.168.100.111/24 brd 192.168.100.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe96:9cfc/64 scope link noprefixroute
valid_lft forever preferred_lft forever
[root@myguest ~]# ip a s dev eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:66:6c:a1 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.111/24 brd 192.168.1.255 scope global noprefixroute eth1
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe66:6ca1/64 scope link noprefixroute
valid_lft forever preferred_lft forever
On myguest I'm running Docker, and the nginx container publishes its port 80 to an ephemeral host port:
[me@myguest ~]$ docker container ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
06471204f091 nginx "/docker-entrypoint.…" About an hour ago Up About an hour 0.0.0.0:49154->80/tcp focused_robinson
Notice that in the firewall-cmd output above I was not permitting access to this ephemeral TCP port 49154 (or to any other ephemeral port, for that matter). So I was expecting that unless I did so, outside access to nginx would be blocked. But to my surprise, I was able to access it from another host on the home network running Windows:
C:\Users\me>curl http://myguest:49154
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
... etc etc
If a container publishes its container port to an ephemeral one on the host (myguest in this case), shouldn't the host firewall utility protect access to that port in the same manner as it would a standard port? Am I missing something?
I also noticed that the nginx container is in fact listening on a tcp6 socket:
[root@myguest ~]# netstat -tlpan | grep 49154
tcp6 0 0 :::49154 :::* LISTEN 23231/docker-proxy
It seems, then, that firewalld may not be blocking tcp6 sockets? I'm confused.
This is obviously not a production issue, nor something to lose sleep over. I'd just like to make sense of it. Thanks.

The integration between Docker and firewalld has changed over the years, but based on your OS versions and CLI output I think you can get the behavior you expect by setting AllowZoneDrifting=no in /etc/firewalld/firewalld.conf on the RHEL 8 host.
Due to zone drifting, it is possible for packets received in a zone with --set-target=default (e.g. the public zone) to drift into a zone with --set-target=ACCEPT (e.g. the trusted zone). This means FORWARDed packets received in zone public can be forwarded to zone trusted. If your docker containers are using a real bridge interface, then this issue may apply to your setup. Docker defaults to SNAT, so usually this problem stays hidden.
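To see whether a drift path actually exists on your guest, the zone assignments are easy to inspect before changing anything (a diagnostic sketch; docker0 is Docker's default bridge name, so adjust if you use a custom bridge):
firewall-cmd --get-active-zones
firewall-cmd --get-zone-of-interface=docker0
firewall-cmd --list-all --zone=trusted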
Newer firewalld releases have removed this behavior entirely because, as you found, it is both unexpected and a security issue.
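A minimal sketch of the change (assuming a firewalld release that still has the AllowZoneDrifting option; note that edits to firewalld.conf take effect on a daemon restart):
grep AllowZoneDrifting /etc/firewalld/firewalld.conf
sed -i 's/^AllowZoneDrifting=.*/AllowZoneDrifting=no/' /etc/firewalld/firewalld.conf
systemctl restart firewalld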

Related

Can't Connect to Docker container within local network

I'm trying to run QuakeJS within a Docker container. I'm new to Docker (and networking). I couldn't connect, so I decided to start simpler and ran nginxdemos/hello. I still can't connect to the server (which is running Ubuntu Server).
Tried:
docker run -d -p 8080:80 nginxdemos/hello
Probably relevant ip addr:
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 18:03:73:be:70:eb brd ff:ff:ff:ff:ff:ff
altname enp0s25
inet 10.89.233.61/20 metric 100 brd 10.89.239.255 scope global dynamic eno1
valid_lft 27400sec preferred_lft 27400sec
inet6 fe80::1a03:73ff:febe:70eb/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:7c:bb:47 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe7c:bb47/64 scope link
valid_lft forever preferred_lft forever
Here's docker network ls:
NETWORK ID NAME DRIVER SCOPE
5671ad4b57fe bridge bridge local
a9348e40fb3c host host local
fdb16382afbd none null local
And ufw status:
To Action From
-- ------ ----
8080 ALLOW Anywhere
8080 (v6) ALLOW Anywhere (v6)
Anywhere ALLOW OUT 172.17.0.0/16 on docker0
But when I try to access it in a web browser (Chrome and Firefox) at 172.17.0.0:8080 (or many other permutations) I just end up with a timeout. I'm sure this is something stupid, but I'm very stuck.
UPDATE
I installed a basic apache server and it worked fine. So it's something with Docker. I think.
UPDATE AGAIN
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a7bbfee83954 nginxdemos/hello "/docker-entrypoint.…" About an hour ago Up About an hour 0.0.0.0:8080->80/tcp, :::8080->80/tcp relaxed_morse
I can run curl localhost:8080 and see the nginx page.
I was playing with ufw but have disabled it; I'm not worried about network security. I tried ufw-docker too.
FINAL UPDATE
Restarting Docker worked :|
When you publish a port with the -p option to docker run, the syntax is -p <host port>:<container port>, and you are saying: "expose the service running in the container on port <container port> on the Docker host as port <host port>."
So when you run:
docker run -d -p 8080:80 nginxdemos/hello
You could open a browser on your host and connect to http://localhost:8080 (because port 8080 is the <host port>). If you have the address of the container, you could also connect to http://<container_ip>:80, but you almost never want to do that, because every time you start a new container it receives a new IP address.
We publish ports to the host so that we don't need to muck about finding container IP addresses.
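To confirm what Docker actually published, docker port prints the live mappings for a container (a sketch using the container name from your docker ps output; the lines below are the kind of output to expect, not captured from your machine):
docker port relaxed_morse
80/tcp -> 0.0.0.0:8080
80/tcp -> [::]:8080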
running 172.17.0.0:8080 (0.1, 0.2) or 10.89.233.61:8080 results in a timeout
172.17.0.0:8080 doesn't make any sense: 172.17.0.0 is the network address of the docker0 subnet, not the address of any host.
Both 172.17.0.1:8080 and 10.89.233.61:8080 ought to work (as should any other address assigned to a host interface). Some diagnostics to try (a combined command sketch follows the list):
Is the container actually running (docker ps)?
On the docker host are you able to connect to localhost:8080?
It looks like you're using UFW. Have you made any recent changes to the firewall?
If you restart docker (systemctl restart docker), do you see any changes in behavior?
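As a sketch, the whole checklist fits in a few commands run on the Docker host (the publish filter assumes a reasonably recent Docker; the last two need root):
docker ps --filter publish=8080   # is the container running, with 8080 published?
curl -v http://localhost:8080     # does the service answer locally?
sudo ufw status verbose           # what is the firewall actually doing?
sudo systemctl restart docker     # re-creates Docker's iptables rules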

Allow docker swarm service to bind host ip interface

I wrote a Go script that switches between the IPs linked to my eno1 interface.
In essence, the program performs a curl --interface "51.15.xx.xx" ... and rotates between my 3 IPs.
Below is my interface:
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether ac:1f:6b:27:xx:xx brd ff:ff:ff:ff:ff:ff
inet 51.15.xx.xx/24 brd 51.15.21.255 scope global eno1
valid_lft forever preferred_lft forever
inet 212.83.xx.x1/24 brd 212.83.154.255 scope global eno1
valid_lft forever preferred_lft forever
inet 212.83.xx.x2/24 brd 212.83.154.255 scope global secondary eno1
valid_lft forever preferred_lft forever
My program runs perfectly when it runs directly on the host, and it also works under
docker run --network=host.
But in production with a Docker swarm (deployed on 1 node), I can't bind to the host network, and my program fails with the error: bind failed with errno 99: Address not available
I just want to bind the host network with local scope on a swarm service, or otherwise allow my swarm service to bind a host IP.
PS: I already tried to bind the host network, but when I execute docker stack deploy, Docker selects a host network with scope swarm.
Best regards
Did you publish a port when declaring your swarm service or stack?
According to the documentation:
If your container or service publishes no ports, host networking has no effect.
See https://docs.docker.com/network/host/
Try using a direct command to create the service, for example:
Code
docker service create --network host caa06d9c/echo
The Docker image itself does not expose any ports.
Check
curl localhost:8080
Output
{
"method": "GET",
"path": "/",
"ip": "127.0.0.1"
}
So the port is attached to the host and can be accessed from any interface.
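If you need the same behavior from docker stack deploy, a stack file can reference the pre-existing host network as an external network. A sketch, assuming Compose file format 3.5+ (the service name myapp is made up):
version: "3.5"
services:
  myapp:
    image: caa06d9c/echo
    networks:
      - hostnet
networks:
  hostnet:
    external: true
    name: host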

Weblogic port binding in Docker container

I manually installed WebLogic and configured a domain inside a Docker container [overlay network - Docker swarm].
[root@host ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
e3f32e71cc16 bridge bridge local
26628a46774c docker_gwbridge bridge local
20c80427519f host host local
9ejgeett1y4y ingress overlay swarm
52d14f492cda none null local
f628wowngc6z myoverlay overlay swarm
After starting the Admin Server, I see that its port 7001 is bound only to the overlay interface and not to the ANY address 0.0.0.0. So even though I exposed port 7001 when creating the container, I am not able to access it from the external public network.
[root@host /]# netstat -anp | grep 7001
tcp 0 0 10.0.0.5:7001 0.0.0.0:* LISTEN 8168/java
[root@host /]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
265: eth0@if266: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:0a:00:00:05 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.0.0.5/24 scope global eth0
valid_lft forever preferred_lft forever
267: eth1@if268: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:13:00:06 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet 172.19.0.6/16 scope global eth1
valid_lft forever preferred_lft forever
I also started sshd in the same container, and it was bound to the 0.0.0.0 ANY address:
[root@host /]# netstat -anp | grep 22
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1/sshd
I need the WebLogic admin server console to be accessible to the public network, not just within the overlay network containers. So, how do I bind the WebLogic admin server port to all interfaces (0.0.0.0)?
The problem lies in the WebLogic configuration: if the Listen Address is left empty, WebLogic itself binds the port to all interfaces. This solved my problem.
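If the fix worked, the socket should look like the sshd one above, i.e. bound to the ANY address. Illustrative expected output, not captured from a real run:
[root@host /]# netstat -anp | grep 7001
tcp 0 0 0.0.0.0:7001 0.0.0.0:* LISTEN 8168/java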

cannot access Internet inside docker container when docker0 has non-default address

Problem: the Internet isn't accessible within a docker container.
on my bare metal Ubuntu 17.10 box...
$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=52 time=10.8 ms
but...
$ docker run --rm debian:latest ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
92 bytes from 7911d89db6a4 (192.168.220.2): Destination Host Unreachable
I think the root cause is that I had to set up a non-default network for docker0, because the default one (172.17.0.1) was already in use within my organization.
My /etc/docker/daemon.json file needs to look like this in order for docker to start successfully.
$ cat /etc/docker/daemon.json
{
"bip": "192.168.220.1/24",
"fixed-cidr": "192.168.220.0/24",
"fixed-cidr-v6": "0:0:0:0:0:ffff:c0a8:dc00/120",
"mtu": 1500,
"default-gateway": "192.168.220.10",
"default-gateway-v6": "0:0:0:0:0:ffff:c0a8:dc0a",
"dns": ["10.0.0.69","10.0.0.70","10.1.1.11"],
"debug": true
}
Note that the default-gateway setting looks wrong. However, if I correct it to read 192.168.220.1, the docker service fails to start. Running dockerd directly at the command line produces the most helpful logging, thus:
With "default-gateway": 192.168.220.1 in daemon.json...
$ sudo dockerd
-----8<-----
many lines removed
----->8-----
Error starting daemon: Error initializing network controller: Error creating default "bridge" network: failed to allocate secondary ip address (DefaultGatewayIPv4:192.168.220.1): Address already in use
Here's the info for docker0...
$ ip addr show docker0
10: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:10:bc:66:fd brd ff:ff:ff:ff:ff:ff
inet 192.168.220.1/24 brd 192.168.220.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:10ff:febc:66fd/64 scope link
valid_lft forever preferred_lft forever
And routing table...
$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.62.131.1 0.0.0.0 UG 100 0 0 enp14s0
10.62.131.0 0.0.0.0 255.255.255.0 U 100 0 0 enp14s0
169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 enp14s0
192.168.220.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
Is this the root cause? How do I achieve the seemingly mutually exclusive states of:
docker0 interface address is x.x.x.1
gateway address is same, x.x.x.1
dockerd runs ok
?
Thanks!
Longer answer to Wedge Martin's question. I made the changes to daemon.json as you suggested:
{
"bip": "192.168.220.2/24",
"fixed-cidr": "192.168.220.0/24",
"fixed-cidr-v6": "0:0:0:0:0:ffff:c0a8:dc00/120",
"mtu": 1500,
"default-gateway": "192.168.220.1",
"default-gateway-v6": "0:0:0:0:0:ffff:c0a8:dc0a",
"dns": ["10.0.0.69","10.0.0.70","10.1.1.11"],
"debug": true
}
so at least the daemon starts, but I still don't have internet access within a container...
$ docker run -it --rm debian:latest bash
root@bd9082bf70a0:/# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
15: eth0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:dc:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.220.3/24 brd 192.168.220.255 scope global eth0
valid_lft forever preferred_lft forever
root@bd9082bf70a0:/# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
92 bytes from bd9082bf70a0 (192.168.220.3): Destination Host Unreachable
It turned out that less is more. Simplifying daemon.json to the following resolved my issues.
{
"bip": "192.168.220.2/24"
}
If you don't set the gateway, Docker will set it to the first non-network address in the network, i.e. the .1, but if you set it explicitly, Docker will conflict when allocating the bridge because the .1 address is already in use. You should only set default-gateway if it's outside of the network range.
Now bip can tell Docker to use a different address than the .1, so setting bip can avoid the conflict, but I am not sure it will end up doing what you want. It will probably cause routing issues, since the non-network route will point at an address where no host is responding.
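For what it's worth, the usual way to move docker0 to a different subnet without the gateway conflict is to give bip the .1 address itself and leave default-gateway unset entirely, e.g. (a sketch based on the subnet from the question):
{
"bip": "192.168.220.1/24"
}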

Docker connect to mocked service on host port

I am using Docker to run my web app on my local machine, and I have created a mocked web service using SoapUI on the host machine.
The mocked service is accessible through localhost:8099 and IP 127.0.0.1:8099 (verified using telnet); I am, however, unable to access it from the running Docker container.
I have read some articles about discovering host IP address through
ip addr show docker0
with results:
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:e3:36:43:5b brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:e3ff:fe36:435b/64 scope link
valid_lft forever preferred_lft forever
When I ping IP 172.17.0.1 from the Docker container I get responses just fine, but when my web app tries to call the mocked web service I get No route to host.
I have also tried modifying iptables with iptables -A INPUT -i docker0 -j ACCEPT, but with no success.
Is there any other setting that I am missing?
Any help is appreciated.
Thanks, shimon
If I have read your question right, your local and host machines are not the same machine. In that case you won't be able to access your mocked service on the host machine using localhost (unless you have set up a tunnel on localhost:8099), because localhost will resolve to your local IP (on your local machine).
What you need to do is make sure both machines can talk to each other, and then use the host machine's IP instead of localhost.
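One thing worth ruling out (an assumption on my part, not something the question confirms): if the machines are in fact the same and SoapUI bound the mock only to 127.0.0.1, a container can reach the host's 172.17.0.1 but nothing will be listening there. A quick check on the host:
ss -tlnp | grep 8099
If that shows 127.0.0.1:8099 rather than 0.0.0.0:8099 (or *:8099), rebind the mock service to 0.0.0.0 so it is reachable from the docker0 network.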
