Ubuntu: using Docker from snap with the firewall enabled is a problem

On Ubuntu 21.10, I am using Docker installed as a snap (snap install docker). With my firewall enabled, I have a problem resolving DNS:
$ docker run bash ping www.google.com
==> error
$ docker run bash ping 8.8.8.8
==> ok
If I disable the firewall (sudo ufw disable), all is OK:
$ docker run bash ping www.google.com
==> ok
My ufw status is like:
Status: active
To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
OpenSSH                    ALLOW       Anywhere
Samba                      ALLOW       192.168.100.0/24
22/tcp (v6)                ALLOW       Anywhere (v6)
OpenSSH (v6)               ALLOW       Anywhere (v6)
I suspect that the docker container has no access to the internet over TCP (and probably UDP).
What would be the correct config to use to allow docker/snap to pass through the firewall?
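One common cause is that UFW filters the forwarded traffic and the DNS queries coming from the container bridge. A sketch of rules that usually helps, assuming the default docker0 bridge on 172.17.0.0/16 (check with ip addr show docker0; the snap may use a different subnet):

```shell
# Let forwarded container traffic through UFW's FORWARD chain:
sudo ufw route allow from 172.17.0.0/16
sudo ufw route allow to 172.17.0.0/16

# If containers resolve DNS via a resolver on the host, also accept
# queries arriving on the bridge interface:
sudo ufw allow in on docker0 to any port 53

sudo ufw reload
```

An alternative is setting DEFAULT_FORWARD_POLICY="ACCEPT" in /etc/default/ufw, but the route rules above are narrower.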

Related

How do I allow external connections to my Ubuntu server and its docker containers?

I have an Ubuntu server with a few docker containers running using docker-compose.
I mapped the MySQL port as 3306:3306 and the Adminer port as 8080:8080.
Neither worked at first, but when I tried mapping 80:8080, Adminer was reachable and could connect to the MySQL db.
It seems no ports other than the ssh, http, and https ports allow any connections from outside.
I tried using ufw allow ..., but even with the rule clearly present in ufw status, it still doesn't allow me to connect on port 8080.
What command am I supposed to run to make this work?
EDIT: this is how the ufw rules look now, after I asked the person who set up this server to open the ports for me. The 8080 and 3306 ports work now, but he didn't tell me anything about what he did. I'm guessing there was some kind of other firewall only he had access to :/
Status: active
To Action From
-- ------ ----
22/tcp ALLOW Anywhere
8080/tcp ALLOW Anywhere
3306/tcp ALLOW Anywhere
80/tcp ALLOW Anywhere
443/tcp ALLOW Anywhere
8080 ALLOW Anywhere
80:61000/tcp ALLOW Anywhere
172.17.0.1 8081/tcp ALLOW Anywhere
OpenSSH ALLOW Anywhere
8081/tcp ALLOW Anywhere
22/tcp (v6) ALLOW Anywhere (v6)
8080/tcp (v6) ALLOW Anywhere (v6)
3306/tcp (v6) ALLOW Anywhere (v6)
80/tcp (v6) ALLOW Anywhere (v6)
443/tcp (v6) ALLOW Anywhere (v6)
8080 (v6) ALLOW Anywhere (v6)
80:61000/tcp (v6) ALLOW Anywhere (v6)
OpenSSH (v6) ALLOW Anywhere (v6)
8081/tcp (v6) ALLOW Anywhere (v6)
8080/tcp ALLOW FWD Anywhere
8080/tcp (v6) ALLOW FWD Anywhere (v6)
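The two "ALLOW FWD" lines at the end are likely the ones that made the difference: traffic to a published container port goes through the FORWARD chain, not INPUT, so a plain ufw allow 8080/tcp is not enough. A hypothetical reconstruction of what the admin probably ran (a guess based on the rule listing, not confirmed by him):

```shell
# Forwarding rules show up in `ufw status` as "ALLOW FWD":
sudo ufw route allow proto tcp from any to any port 8080
```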

UFW to allow traffic from Docker

I have a development server with this UFW config:
$ sudo ufw status
Status: active
To Action From
-- ------ ----
22/tcp LIMIT Anywhere
22/tcp (v6) LIMIT Anywhere (v6)
123/udp ALLOW OUT Anywhere
DNS ALLOW OUT Anywhere
80/tcp ALLOW OUT Anywhere
443/tcp ALLOW OUT Anywhere
22/tcp ALLOW OUT Anywhere
123/udp (v6) ALLOW OUT Anywhere (v6)
DNS (v6) ALLOW OUT Anywhere (v6)
80/tcp (v6) ALLOW OUT Anywhere (v6)
443/tcp (v6) ALLOW OUT Anywhere (v6)
22/tcp (v6) ALLOW OUT Anywhere (v6)
My problem is that this also blocks traffic internally from Docker.
I run a Docker container that maps 8000:8000 for http, and if I disable UFW I can make requests as expected. However, when UFW is enabled, I can't reach port 8000 even internally.
How do I allow this traffic for internal use? I want to access via ssh -L 8000:127.0.0.1:8000 example.com, so I don't want to open port 8000 for external access.
UPDATE:
Thinking that the problem might be that UFW also applies the rules to the loopback interface, I added these new rules:
To Action From
-- ------ ----
Anywhere on lo ALLOW Anywhere
Anywhere on 127.0.0.1 ALLOW Anywhere
Anywhere (v6) on lo ALLOW Anywhere (v6)
Anywhere (v6) on 127.0.0.1 ALLOW Anywhere (v6)
Anywhere ALLOW OUT Anywhere on lo
Anywhere ALLOW OUT Anywhere on 127.0.0.1
Anywhere (v6) ALLOW OUT Anywhere (v6) on lo
Anywhere (v6) ALLOW OUT Anywhere (v6) on 127.0.0.1
This does not solve the problem.
To allow traffic for an application profile, use:
ufw allow from <some_address> to any app <app_name>
The man page states not to enter a port number:
You should not specify the protocol with either syntax, and with the extended syntax, use app in place of the port clause.
This means UFW opens whichever ports <app_name>'s profile declares.
Other commands which might be useful:
ufw app info <app_name>
Which lists the information on <app_name>'s profile.
ufw app update <app_name>
Which updates <app_name>'s profile. You can use all to update all application profiles.
You can use
ufw app update --add-new <app_name>
to add a new profile for <app_name> and update it, following the policy you set with ufw app default <policy>.
App profiles are stored in /etc/ufw/applications.d and sometimes /etc/services.
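For illustration, here is a minimal, hypothetical profile (the name "MyApp" and the port are examples, not anything shipped with UFW):

```shell
sudo tee /etc/ufw/applications.d/myapp <<'EOF'
[MyApp]
title=My example service
description=Example application profile
ports=8080/tcp
EOF

# Register and allow it:
sudo ufw app update MyApp
sudo ufw allow MyApp
```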
For more information, view the ufw man page:
man ufw
Update: Docker uses a private bridge interface called docker0; you can reference it in UFW rules to control Docker's access to your host system.
For example, to allow outgoing traffic on that interface from the container network:
ufw allow out on docker0 from 172.17.0.0/16
You can make the rule stricter by also specifying a port and protocol, for example:
ufw allow out on docker0 from 172.17.0.0/16 port 80 proto tcp
Docker creates this interface for container networking; you can inspect it with the ifconfig command:
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether 02:42:a4:5e:e9:9c txqueuelen 0 (Ethernet)
RX packets 87 bytes 17172 (17.1 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 117 bytes 14956 (14.9 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Container traffic is routed through this interface, on the 172.17.0.0/16 network.
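For the original goal here (reaching a published port from the host itself via ssh -L, without opening it externally), allowing traffic on the bridge interface in both directions is usually enough. A sketch, assuming the default docker0 bridge:

```shell
# Permit host<->container traffic on the bridge only; the port stays
# closed on external interfaces:
sudo ufw allow in on docker0 from 172.17.0.0/16
sudo ufw allow out on docker0 to 172.17.0.0/16
```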

Can connect to docker container when it runs apache, but not when it runs netcat on port 80

I am running a Debian server (stable) with the docker.io package. This is the package distributed by Debian, not the one from the Docker developers. Since docker.io is only available in sid, I installed it from there (apt install -t unstable docker.io).
My firewall does allow connections to/from docker containers:
$ sudo ufw status
(...)
Anywhere ALLOW 172.17.0.0/16
172.17.0.0/16 ALLOW Anywhere
I also have this in /etc/ufw/before.rules:
*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 172.17.0.0/16 -o eth0 -j MASQUERADE
COMMIT
So -- I have created an image with
$ sudo debootstrap stable ./stable-chroot http://deb.debian.org/debian > /dev/null
$ sudo tar -C stable-chroot -c . | docker import - debian-stable
Then started a container and installed apache2 and netcat. Port 1111 on the host machine will be redirected to port 80 on the container:
$ docker run -ti -p 1111:80 debian-stable bash
root@dc4996de9fe6:/# apt update
(... usual output from apt update ...)
root@dc4996de9fe6:/# apt install apache2 netcat
(... expected output, installation successful ...)
root@dc4996de9fe6:/# service apache2 start
root@dc4996de9fe6:/# service apache2 status
[ ok ] apache2 is running.
And from the host machine I can connect to the apache server:
$ curl 127.0.0.1:1111
(... HTML from the Debian apache placeholder page ...)
$ telnet 127.0.0.1 1111
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
And it waits for me to type (if I type GET / I get the Debian apache placeholder page). Ok. And if I stop apache inside the container,
root@06da401a5724:/# service apache2 stop
[ ok ] Stopping Apache httpd web server: apache2.
root@06da401a5724:/# service apache2 status
[FAIL] apache2 is not running ... failed!
Then connections to port 1111 on the host will be rejected (as expected):
$ telnet 127.0.0.1 1111
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Connection closed by foreign host.
Now, if I start netcat on the container, listening on port 80:
root@06da401a5724:/# nc -l 172.17.0.2 80
Then I cannot connect from the host!
$ telnet localhost 1111
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Connection closed by foreign host.
The same happens if I try nc -l 127.0.0.1 80 in the container.
What could be happening? Both apache and netcat were listening on port 80. What have I missed?
I'd appreciate any hints...
Update: if I try this:
root@12b8fd142e00:/# nc -vv -l -p 80
listening on [any] 80 ...
172.17.0.1: inverse host lookup failed: Unknown host
invalid connection to [172.17.0.2] from (UNKNOWN) [172.17.0.1] 54876
Then it works!
Now it's weird... ifconfig inside the container tells me it has IP 172.17.0.2, but I can only use netcat binding to 172.17.0.1:
root@12b8fd142e00:/# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.2 netmask 255.255.0.0 broadcast 0.0.0.0
inet6 fe80::42:acff:fe11:2 prefixlen 64 scopeid 0x20<link>
And Apache seems to want 172.17.0.2 instead:
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
but it actually uses 172.17.0.1:
root@12b8fd142e00:/# netstat
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 12b8fd142e00:http 172.17.0.1:54942 TIME_WAIT
tcp 0 0 12b8fd142e00:39528 151.101.48.204:http TIME_WAIT
Apache is not listening on 172.17.0.1; that's the address of the host (on the docker bridge).
In the netstat output, the local address has been resolved to 12b8fd142e00. Use the -n option with netstat to see unresolved (numeric) addresses (for example netstat -plnet to see listening sockets). 172.17.0.1 is the foreign address that connected to Apache (and it's indeed the host).
The last line in the netstat output shows that some process made a connection to 151.101.48.204:80, probably to make an HTTP request. You can see the PID/name of the process with netstat -p.
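As for why netcat behaved differently: a likely explanation (an assumption based on netcat-traditional, which is what Debian's netcat package provides) is that with -l, netcat-traditional takes the listen port from -p, while a positional host/port pair is treated as a restriction on the remote peer it will accept, not as an address to bind:

```shell
# netcat-traditional syntax (nc.traditional on Debian):
nc -l -p 80                # listen on port 80, any interface -- works
nc -l -s 172.17.0.2 -p 80  # listen on port 80, bound to 172.17.0.2
nc -l 172.17.0.2 80        # listen, but only accept a peer matching
                           # 172.17.0.2:80 -- the host connects from
                           # 172.17.0.1, so the connection is refused
```

This would explain why nc -l 172.17.0.2 80 drops connections from the host while nc -vv -l -p 80 works.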

Docker Swarm container reachable although port is not open?

I followed these instructions here to build a 3 node Docker Swarm cluster.
In the beginning I opened multiple ports with ufw in order to communicate between the docker nodes:
# ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp ALLOW IN Anywhere
2376/tcp ALLOW IN Anywhere
2377/tcp ALLOW IN Anywhere
7946/tcp ALLOW IN Anywhere
7946/udp ALLOW IN Anywhere
4789/udp ALLOW IN Anywhere
22/tcp (v6) ALLOW IN Anywhere (v6)
2376/tcp (v6) ALLOW IN Anywhere (v6)
2377/tcp (v6) ALLOW IN Anywhere (v6)
7946/tcp (v6) ALLOW IN Anywhere (v6)
7946/udp (v6) ALLOW IN Anywhere (v6)
4789/udp (v6) ALLOW IN Anywhere (v6)
As you can see, port 80 is not open.
So, at the end of the tutorial I deployed the official nginx docker image to the cluster:
docker service create -p 80:80 --name webserver nginx
I was able to enter the IP address of my server and was presented with the nginx welcome page.
Now I am wondering, why am I able to reach the webserver although port 80 is not open?
Docker sets iptables rules itself, interfering with UFW.
Try running the docker daemon with the additional command-line option --iptables=false.
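If you go this route, the flag needs to reach the daemon persistently; on a modern install that is done via /etc/docker/daemon.json. A sketch (note the caveats: with iptables disabled you must set up NAT for containers yourself, and Swarm overlay networking may break):

```shell
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "iptables": false
}
EOF
sudo systemctl restart docker
```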

Accessing external hosts from docker container

I am trying to dockerize my application. I have two servers, say server1 and server2. Server1 uses a web service hosted on server2. I have this in /etc/default/docker on server1:
DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4 --iptables=false"
As I understand it, this prevents Docker from making any changes to iptables that would override UFW settings. The UFW status shows this:
Status: active
To Action From
-- ------ ----
22 ALLOW Anywhere
443 ALLOW Anywhere
2375/tcp ALLOW Anywhere
22 (v6) ALLOW Anywhere (v6)
443 (v6) ALLOW Anywhere (v6)
2375/tcp (v6) ALLOW Anywhere (v6)
Now the trouble is that I am not able to access server2 from my app, which runs in a container on server1. If I don't use the --iptables=false flag then I can access server2. What can I do to access server2 from the container without having to sacrifice UFW?
If it matters, both server1 and server2 are on DigitalOcean and have private networking enabled.
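With --iptables=false, Docker no longer installs the NAT rules containers need to reach external hosts, so outbound traffic to server2 silently fails. The usual companion configuration (a sketch, assuming the default 172.17.0.0/16 bridge subnet and eth0 as the outbound interface) is to let UFW do the masquerading and forwarding instead:

```shell
# 1. Allow forwarding in UFW's defaults:
sudo sed -i 's/DEFAULT_FORWARD_POLICY="DROP"/DEFAULT_FORWARD_POLICY="ACCEPT"/' /etc/default/ufw

# 2. Add a NAT section at the top of /etc/ufw/before.rules:
#    *nat
#    :POSTROUTING ACCEPT [0:0]
#    -A POSTROUTING -s 172.17.0.0/16 -o eth0 -j MASQUERADE
#    COMMIT

# 3. Make sure the kernel forwards packets (in /etc/ufw/sysctl.conf):
#    net/ipv4/ip_forward=1

sudo ufw reload
```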
