ufw seems not to block all ports (Ubuntu with Docker)

There is a server running Ubuntu 20. It has Docker installed, and several containers are running. The reverse proxy is an Nginx container that should take traffic on ports 80 and 443 and route it to the other containers. It works perfectly. But now I wanted to block all other traffic (apart from 80, 443 and SSH) with ufw.
Somehow, traffic on HTTP ports 3000, 3001, 8081 and 15672 (ports published by containers) still gets through.
Why? How can I block all traffic using ufw?
ufw configuration
www@broowqh:~$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip
To                 Action      From
--                 ------      ----
22/tcp             ALLOW IN    Anywhere
9000               ALLOW IN    Anywhere
3001               DENY IN     Anywhere
3001/tcp           DENY IN     Anywhere
3001/udp           DENY IN     Anywhere
22/tcp (v6)        ALLOW IN    Anywhere (v6)
9000 (v6)          ALLOW IN    Anywhere (v6)
3001 (v6)          DENY IN     Anywhere (v6)
3001/tcp (v6)      DENY IN     Anywhere (v6)
3001/udp (v6)      DENY IN     Anywhere (v6)
docker ps -a
CONTAINER ID   IMAGE                               COMMAND                  CREATED        STATUS        PORTS                                                                      NAMES
48709042d67f   nginx:1.23-alpine                   "/docker-entrypoint.…"   10 hours ago   Up 10 hours   0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp   reverseproxy
401d6576b3e0   adminer:4.8.1                       "entrypoint.sh docke…"   10 hours ago   Up 10 hours   0.0.0.0:8081->8080/tcp, :::8081->8080/tcp                                  adminer
c47966cae717   postgres:14.1-alpine                "docker-entrypoint.s…"   10 hours ago   Up 10 hours   5432/tcp                                                                   db
1c3709a07fb0   www:current                         "docker-entrypoint.s…"   15 hours ago   Up 10 hours   0.0.0.0:3001->3001/tcp, :::3001->3001/tcp                                  www
db252e2833bc   postgrest/postgrest:v10.0.0         "/bin/postgrest"         18 hours ago   Up 10 hours   0.0.0.0:3000->3000/tcp, :::3000->3000/tcp                                  api
68396bebcaa8   rabbitmq:3.9.13-management-alpine   "docker-entrypoint.s…"   19 hours ago   Up 10 hours   0.0.0.0:5672->5672/tcp, 0.0.0.0:15672->15672/tcp                           broker
Nginx configuration
upstream www {
    server www:3001;
}
upstream api {
    server api:3000;
}
upstream adminer {
    server adminer:8080;
}
upstream rabbit {
    server broker:15672;
}
server {
    listen 80;
    listen [::]:80;
    server_name example.com;
    location / {
        return 301 https://example.com$request_uri;
    }
}
server {
    listen 443 default_server ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com;
    ssl_certificate /etc/nginx/ssl/live/smartplaylist.me/example.crt;
    ssl_certificate_key /etc/nginx/ssl/live/smartplaylist.me/example.key;
    location /adminer/ {
        proxy_pass http://adminer/;
    }
    location /rabbit/ {
        proxy_pass http://rabbit/;
    }
    location /api/ {
        proxy_pass http://api/;
    }
    location / {
        proxy_pass http://www/;
    }
}

Docker bypasses the UFW rules, so the published ports can be accessed from outside. You can publish a port onto a specific interface, e.g. 127.0.0.1:8080:80, which publishes port 8080 on the host's loopback interface (127.0.0.1) and connects it to the container's port 80; the loopback interface is not externally accessible, but a reverse proxy on the same host (or in the same Docker network) can still reach the container.
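A hedged sketch of what that loopback-only binding might look like in a docker-compose file (the service name and image are taken from the docker ps output above; everything else is an assumption):

```yaml
services:
  adminer:
    image: adminer:4.8.1
    ports:
      # Bind the published port to the host's loopback interface only.
      # nginx can still reach the container over the shared Docker network,
      # but port 8081 is no longer reachable from outside the host.
      - "127.0.0.1:8081:8080"
```

The same pattern applies to the other published ports (3000, 3001, 15672) since nginx proxies to them over the Docker network by service name, not via the host ports.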
With UFW you are modifying the INPUT chain, but Docker adds DNAT rules for published ports to the PREROUTING chain of the nat table. The translated traffic is then forwarded to the container, so it traverses the FORWARD chain (where Docker inserts its own ACCEPT rules) and never reaches INPUT; that is why your filter rules in INPUT never match and the traffic bypasses them entirely.
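If you want to keep the ports published on all interfaces but still filter them, Docker reserves the DOCKER-USER chain in the filter table for user rules, which are evaluated before Docker's own FORWARD rules. A sketch (the interface name eth0 is an assumption; use your external interface):

```shell
# Drop forwarded traffic to the published container ports when it arrives
# on the external interface; traffic from the Docker networks themselves
# is unaffected because it comes in on a different interface.
sudo iptables -I DOCKER-USER -i eth0 -p tcp \
    -m multiport --dports 3000,3001,8081,15672 -j DROP
```

Note this rule is not managed by UFW, so it must be made persistent separately (e.g. with iptables-persistent).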

Related

accessing my container via my ip:host not working anymore

I use docker/docker-compose and nginx on my own server.
I was able to access my container via an external port,
like my_adress:8080.
Then I made a redirect via nginx:
server {
    listen 80;
    server_name my_adress;
    return 301 https://my_adress:8080;
}
and then I removed the nginx conf.
I restarted the nginx service,
but now I can't access http://my_adress:8080 anymore;
there is an automatic 301 redirect to https://my_adress, without port 8080.
I searched online for how to remove the nginx cache or something similar, but didn't find it.
I looked at https://serverfault.com/questions/825331/nginx-still-redirects-even-though-i-removed-the-rule-from-the-conf
but didn't find a solution.
When I do service docker status,
I get in CGroup: /system.slice/docker.service
/usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 8080 -container-ip 172.19.0.3 -container-port 80
Any ideas why it doesn't work anymore?
I found where the problem was.
I was using the image https://hub.docker.com/r/onlyoffice/documentserver
and I had set up the HTTPS config;
see "Running ONLYOFFICE Docs using HTTPS" on this page:
https://helpcenter.onlyoffice.com/installation/docs-community-install-docker.aspx
With that configuration the image itself automatically redirects HTTP to HTTPS, so the redirect was not coming from the nginx conf on my server; it happened inside the container.
So the two solutions I found:
remove the HTTPS configuration, so the service is available over plain HTTP,
or bind host port 443 (HTTPS) to port 443 of the ONLYOFFICE
container, so the redirect works.
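A hedged sketch of the second option, assuming the stock onlyoffice/documentserver image (any flags beyond the port mappings are left out):

```shell
# Publish both HTTP and HTTPS, so the container's internal HTTP-to-HTTPS
# redirect lands on a port that is actually reachable from outside.
docker run -d -p 80:80 -p 443:443 onlyoffice/documentserver
```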

Need help troubleshooting custom docker image for nginx

I want to install a simple web service to browse a file directory tree on an internal server, and to comply with company policy it needs to use TLS ("https://...").
First I tried several images including davralin/nginx-autoindex and mounted the directory I want this service to share. It worked like a charm, but it didn't use a TLS connection.
To get something to work with TLS, I started from scratch and created my own default.conf file for nginx:
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name localhost;
    ssl_certificate /etc/ssl/certs/my-cert.crt;
    ssl_certificate_key /etc/ssl/certs/server.key;
    location / {
        root /usr/share/nginx/html;
        autoindex on;
    }
    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
Then I created the following Dockerfile:
FROM nginx:stable-alpine
MAINTAINER lsiden at gmail.com
COPY default.conf /etc/nginx/conf.d
COPY my-cert.crt /etc/ssl/certs/
COPY server.key /etc/ssl/certs/
Then I build it:
docker build -t lsiden/nginx-autoindex-tls .
Then I run it:
docker run -dt -v /var/www/data/files:/usr/share/nginx/html:ro -p 3453:80 lsiden/nginx-autoindex-tls
However, I can't reach it even from the host machine. I tried:
$ telnet localhost 3453
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Connection closed by foreign host.
I tried to read log messages:
docker logs <container-id>
Silence.
I've already confirmed that the docker proxy is listening to the port:
tcp6 0 0 :::3453 :::* LISTEN 14828/docker-proxy
The port shows up under tcp6 but not tcp (IPv4), but I read here that netstat will show only the IPv6 socket even when it accepts IPv4 connections too. To be sure, I verified:
sudo sysctl net.ipv6.bindv6only
net.ipv6.bindv6only = 0
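That dual-stack behaviour can be demonstrated directly: with bindv6only = 0, a single IPv6 listener accepts plain IPv4 clients as well, which is exactly why the listener appears only under tcp6 in netstat. A minimal sketch in Python (not from the original question; the port is chosen by the OS):

```python
import socket

# With net.ipv6.bindv6only = 0, an IPv6 socket bound with IPV6_V6ONLY off
# accepts IPv4 clients too -- which is why netstat shows only a tcp6 entry.
srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
srv.bind(("::", 0))              # all interfaces, OS-assigned port
srv.listen(1)
port = srv.getsockname()[1]

# A plain IPv4 client can connect to the same listener.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
conn, addr = srv.accept()
print(addr[0])                   # IPv4-mapped address, e.g. ::ffff:127.0.0.1
cli.close(); conn.close(); srv.close()
```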
To be thorough, I already opened this port in iptables, although iptables can't be playing a role here if I can't even get to it from the same machine via localhost.
I'm hoping someone with good networking chops can tell me where to look next. I can't figure out what I missed.
In case the configuration you shared is complete, you are not listening on port 80 inside your container at all.
Change your configuration to something like this if you want to redirect incoming traffic on port 80 to 443:
server {
    listen 80;
    listen [::]:80;
    location / {
        return 301 https://$server_name$request_uri;
    }
}
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name localhost;
    ssl_certificate /etc/ssl/certs/my-cert.crt;
    ssl_certificate_key /etc/ssl/certs/server.key;
    location / {
        root /usr/share/nginx/html;
        autoindex on;
    }
    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
If you don't want to do that, just change your docker run command so the published port maps to 443 (where the container is actually listening) instead of 80:
docker run -dt -v /var/www/data/files:/usr/share/nginx/html:ro -p 3453:443 lsiden/nginx-autoindex-tls

nginx responds to HTTPS but not HTTP

I am using the dockerized Nextcloud as shown here: https://github.com/nextcloud/docker/tree/master/.examples/docker-compose/with-nginx-proxy-self-signed-ssl/mariadb/fpm
I set this up with port 80 mapped to 12345 and port 443 mapped to 12346. When I go to https://mycloud.example.com:12346, I get the self-signed certificate prompt, but otherwise everything is fine and I see the Nextcloud web UI. But when I go to http://mycloud.example.com:12345, nginx (the proxy container) gives the error "503 Service Temporarily Unavailable". The error also shows up in the proxy's logs.
How can I diagnose the issue? Why is HTTPS working but not HTTP?
Can you provide your docker command starting Nextcloud, or your docker-compose file?
Diagnosis is as usual with Docker: get the id of the currently running container
docker ps
Then check the logs
docker logs [id or name of your container]
docker-compose logs [name of your service]
Connect to the container
docker exec -ti [id or name of your container] [bash, or ash if alpine-based container]
There, read the nginx conf files involved. In your case I'd check the redirect being made from HTTP to HTTPS; most likely it's something like below, with no specific port given for HTTPS, hence port 443, hence not working:
server {
    listen 80;
    server_name my.domain.com;
    return 301 https://$server_name$request_uri;   # <======== no port = 443
}
server {
    listen 443 ssl;
    server_name my.domain.com;
    # add Strict-Transport-Security to prevent man-in-the-middle attacks
    add_header Strict-Transport-Security "max-age=31536000" always;
    [....]
}
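If that redirect is indeed the culprit, one possible fix (a sketch, assuming the port mappings from the question) is to make the redirect carry the externally published HTTPS port explicitly:

```nginx
server {
    listen 80;
    server_name my.domain.com;
    # Redirect to the externally published HTTPS port (12346 in the
    # question's setup), not the implicit default of 443.
    return 301 https://$server_name:12346$request_uri;
}
```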

spring boot application behind nginx with https

I have two docker containers:
One container runs my Spring Boot application, which listens on port 8080.
This container exposes port 8080 to other docker containers.
Its IP in the docker network is 172.17.0.2.
The other container runs nginx, which publishes port 80.
I can successfully put my Spring Boot app behind nginx with the following conf in my nginx container:
server {
    server_name <my-ip>;
    listen 80;
    location / {
        proxy_pass http://172.17.0.2:8080/;
    }
}
Doing a GET request to my REST API (http://my-ip/context-url) works fine.
I am now trying to put my application behind nginx with HTTPS. My nginx conf is as follows:
server {
    server_name <my-ip>;
    listen 80;
    return 301 https://$server_name$request_uri;
}
server {
    server_name <my-ip>;
    listen 443;
    ssl on;
    ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;
    location / {
        proxy_pass http://172.17.0.2:8080/;
    }
}
However, I cannot access my application now, either through HTTP or HTTPS.
HTTP redirects to HTTPS, and the result is ERR_CONNECTION_REFUSED.
The problem was that I was publishing only port 80 when running the nginx container, not port 443. The nginx configuration itself is right.
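For reference, a sketch of a docker run invocation that publishes both ports (the image name and any other flags are placeholders):

```shell
# Publish both 80 (for the redirect) and 443 (for the TLS endpoint),
# so the 301 to https:// lands on a reachable port.
docker run -d -p 80:80 -p 443:443 my-nginx-image
```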

Accessing external hosts from docker container

I am trying to dockerize my application. I have two servers, say server1 and server2. Server1 uses a web service hosted on server2. I have this in
/etc/default/docker on server1:
DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4 --iptables=false"
As I understand it, this prevents Docker from making any changes to iptables that would override the UFW settings. The UFW status shows this:
Status: active
To                 Action      From
--                 ------      ----
22                 ALLOW       Anywhere
443                ALLOW       Anywhere
2375/tcp           ALLOW       Anywhere
22 (v6)            ALLOW       Anywhere (v6)
443 (v6)           ALLOW       Anywhere (v6)
2375/tcp (v6)      ALLOW       Anywhere (v6)
Now the trouble is that I am not able to access server2 from my app, which runs
in a container on server1. If I don't use the --iptables=false flag, then I can access server2. What can I do to access server2 from the container without having to sacrifice UFW?
If it matters, both server1 and server2 are on DigitalOcean and have private networking enabled.
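This question was left without an answer in the thread, but a commonly suggested workaround (an assumption, not from the original discussion) is to leave Docker's iptables integration enabled and instead tell UFW to permit forwarded traffic, since container traffic traverses the FORWARD chain rather than INPUT:

```shell
# In /etc/default/ufw, change the forward policy from DROP to ACCEPT,
# then reload UFW. Containers can then reach external hosts while UFW
# keeps filtering inbound traffic to the host itself.
sudo sed -i 's/DEFAULT_FORWARD_POLICY="DROP"/DEFAULT_FORWARD_POLICY="ACCEPT"/' /etc/default/ufw
sudo ufw reload
```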
