Traefik and Nginx with HTTPS on Docker / 400 Bad Request - docker

I'm trying to build a stack with Traefik and Nginx based on Docker. Without HTTPS everything is fine, but I get an error as soon as I enable the HTTPS configuration.
I'm getting this error from Nginx on example.com: 400 Bad Request / The plain HTTP request was sent to HTTPS port. In the address bar I can see the green lock saying the connection is secure.
Certbot works fine, so I have a real SSL certificate inside the proper folder.
I can get to the Traefik dashboard when I visit traefik.example.com, but I have to accept the browser's SSL warning, and the dashboard also works without HTTPS.
docker-compose.yml
version: '3.4'
services:
  traefik:
    image: traefik:latest
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik/traefik.toml:/etc/traefik/traefik.toml
      - ../letsencrypt:/etc/letsencrypt
    labels:
      - traefik.backend=traefik
      - traefik.frontend.rule=Host:traefik.example.com
      - traefik.port=8080
    networks:
      - traefik
  nginx:
    image: nginx:latest
    volumes:
      - ../www:/var/www
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
      - ../letsencrypt:/etc/letsencrypt
    labels:
      - traefik.backend=nginx
      - traefik.frontend.rule=Host:example.com
      - traefik.port=80
      - traefik.port=443
    networks:
      - traefik
networks:
  traefik:
    driver: overlay
    external: true
    attachable: true
traefik.toml
defaultEntryPoints = ["http", "https"]

[web]
address = ":8080"

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
      certFile = "/etc/letsencrypt/live/example.com/fullchain.pem"
      keyFile = "/etc/letsencrypt/live/example.com/privkey.pem"

[docker]
domain = "example.com"
watch = true
exposedByDefault = true
swarmMode = false
nginx.conf
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://www.example.com$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.com;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    return 301 https://www.example.com$request_uri;
}

server {
    listen 443 ssl http2;
    server_name www.example.com;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    root /var/www/public;
    index index.html;
}
Thanks for your help.

First, there is no need to have the SSL redirection configured in both Traefik and Nginx. Also, the Traefik frontend rule matches only the non-www variant, but the backend app expects www. Finally, the Traefik web provider is deprecated, so the newer api provider should be used instead.
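A minimal sketch of those three changes against the configuration above (Traefik 1.7 syntax, not a tested drop-in config): in docker-compose.yml, let Nginx serve plain HTTP only and match both host variants in the frontend rule:
  nginx:
    labels:
      - traefik.backend=nginx
      - traefik.frontend.rule=Host:example.com,www.example.com
      - traefik.port=80
In traefik.toml, keep the http-to-https redirect but replace the deprecated [web] section with:
[api]
dashboard = true
In nginx.conf, drop the listen 443 ssl server blocks and the redirect, so Nginx only serves the site on port 80 and Traefik is the only place where TLS is terminated.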

As I just stumbled upon a similar problem with Traefik v2
400 Bad Request / The plain HTTP request was sent to HTTPS port
with an Nginx error log stating
400 client sent plain HTTP request to HTTPS port while reading client request headers
and scratching my head over it, I finally found the source of that error. It's not that the TLS certs were invalid or something in the transport was broken, but that the wiring between routers, services and port mappings was off.
Previously I had not noticed that the Docker Compose stack had an Nginx container listening only on 80/tcp. I assumed everything was fine, since I had attached the ports to Traefik load balancers, one service per HTTP/HTTPS endpoint with separate routers. This somehow did not work:
- "traefik.http.services.proxy.loadbalancer.server.port=80"
- "traefik.http.services.proxy-secure.loadbalancer.server.port=443"
As an intermediate workaround I opened the ports 8008:80 and 8443:443 on the host and got it working, and I am still investigating what is wrong with the Traefik ports, as those should be exposed by default. This is not a real solution, since those ports are now reachable from the outside world, but I am leaving this explanation here because I could not find anything on this topic that pointed me in the right direction; hopefully it is helpful for someone else later on.
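For reference, a sketch of what the wiring could look like without publishing extra host ports, assuming the Nginx container only listens on 80/tcp (the router/service names follow the snippet above; the Host rule and the websecure entrypoint name are placeholders, not taken from the original stack):
  - "traefik.http.routers.proxy-secure.rule=Host(`example.com`)"
  - "traefik.http.routers.proxy-secure.entrypoints=websecure"
  - "traefik.http.routers.proxy-secure.tls=true"
  - "traefik.http.services.proxy-secure.loadbalancer.server.port=80"
The point is that loadbalancer.server.port has to be a port the container actually listens on, not the port of the public entrypoint.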

Related

How to fix 502 Bad Gateway error in nginx?

I have a server running docker-compose. The compose file has 2 services: nginx (as a reverse proxy) and back (an API that handles 2 requests). In addition, there is a database that is not located on the server but is hosted separately (database as a service).
Requests processed by back:
get('/api') - the back service simply replies "API is running" to it
get('/db') - the back service sends a simple query to an external database ('SELECT random() as random, current_database() as db')
Request 1 works fine; on request 2 the back service crashes, nginx keeps running, and a 502 Bad Gateway error appears in the console.
The nginx service logs show the error: upstream prematurely closed connection while reading response header from upstream.
The back service logs show: connection terminated due to connection timeout.
These are both rather vague errors, and I don't know how else to approach them, given that the same code, run outside a container, without Nginx, and against the same database, works as it should.
What I have tried:
increase the number of cores and RAM (now 2 cores and 4 GB of RAM);
add/remove/change the proxy_read_timeout, proxy_send_timeout and proxy_connect_timeout parameters (see the sketch after this list);
test the www.test.com/db request via Postman and curl (fails with the same error);
run the code on my local machine without a container or compose, connecting to the same database using the same pool and the same IP (everything is OK, both requests work and return what they should);
change the worker_processes parameter (tested with a value of 1 and auto);
add/remove the proxy_set_header Host $http_host directive, replace $http_host with "www.test.com".
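For context, this is roughly where those timeout directives would sit in the location block of the nginx.conf shown below; the values here are only illustrative, not the exact ones tried:
location / {
    root /usr/share/nginx/html;
    resolver 121.0.0.11;
    proxy_pass http://back-stream;
    # illustrative timeout values, not taken from the question
    proxy_connect_timeout 60s;
    proxy_send_timeout 120s;
    proxy_read_timeout 120s;
}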
Question:
What else can I try to fix the error and make the db request work?
My nginx.conf:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    upstream back-stream {
        server back:8080;
    }

    server {
        listen 80;
        listen [::]:80;
        server_name test.com www.test.com;

        location / {
            root /usr/share/nginx/html;
            resolver 121.0.0.11;
            proxy_pass http://back-stream;
        }
    }
}
My docker-compose.yml:
version: '3.9'
services:
  nginx-proxy:
    image: nginx:stable-alpine
    container_name: nginx-proxy
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    networks:
      - network
  back:
    image: "mycustomimage"
    container_name: back
    restart: unless-stopped
    ports:
      - '81:8080'
    networks:
      - network
networks:
  network:
    driver: bridge
I can upload other files if needed. Given that the code works correctly outside the container but not inside it, the problem is most likely in the container setup.
I will be grateful for any help.
Code of the back: here
The reason for the error was this: I forgot to add my server's IP to the list of allowed addresses in the database cluster.
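For anyone debugging the same symptom, a quick way to check whether the server can reach the database at all (a sketch, assuming a PostgreSQL database-as-a-service and that the psql client is installed on the Docker host; host, port, user and database name are placeholders):
psql "host=<db-host> port=<db-port> dbname=<db-name> user=<db-user> sslmode=require" -c "SELECT 1;"
If this hangs or times out from the server but works from elsewhere, an allowlist or firewall rule on the database side is the likely cause.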

How to reverse proxy a docker image at domain root and another image at a subdomain with swag?

I'm using "linuxserver"'s swag image to reverse-proxy my docker-compose images. I want to serve one of my images as my domain root and the other one as a subdomain (e.g. root-image # mysite.com and subdomain-image # staging.mysite.com). Here are the steps I went through:
redirected my domain and subdomain names to my server in Cloudflare (pinging them shows my server's IP and this is step OK! )
Configured Cloudflare DNS config for swag (this working OK!)
Configured docker-compose file:
swag:
  image: linuxserver/swag:version-1.14.0
  container_name: swag
  cap_add:
    - NET_ADMIN
  environment:
    - PUID=1000
    - PGID=1000
    - URL=mysite.com
    - SUBDOMAINS=www,staging
    - VALIDATION=dns
    - DNSPLUGIN=cloudflare #optional
    - EMAIL=me@mail.com #optional
    - ONLY_SUBDOMAINS=false #optional
    - STAGING=false #optional
  volumes:
    - /docker-confs/swag/config:/config
  ports:
    - 443:443
    - 80:80 #optional
  restart: unless-stopped
root-image:
  image: ghcr.io/me/root-image
  container_name: root-image
  restart: unless-stopped
subdomain-image:
  image: ghcr.io/me/subdomain-image
  container_name: subdomain-image
  restart: unless-stopped
Defined a proxy conf for my root-image (at swag/config/nginx/proxy-confs/root-image.subfolder.conf)
location / {
    include /config/nginx/proxy.conf;
    resolver 127.0.0.11 valid=30s;
    set $upstream_app root-image;
    set $upstream_port 3000;
    set $upstream_proto http;
    proxy_pass $upstream_proto://$upstream_app:$upstream_port;
}
Commented out nginx's default location / {} block (at swag/config/nginx/site-confs/default)
Defined a proxy conf for my subdomain-image (at swag/config/nginx/proxy-confs/subdomain-image.subdomain.conf)
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name staging.*;

    include /config/nginx/ssl.conf;
    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app subdomain-image;
        set $upstream_port 443;
        set $upstream_proto https;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }
}
Both images expose port 3000. Now my root image is working fine (OK!), but I'm getting a 502 error for my subdomain image. I checked the nginx error log, but it doesn't show anything meaningful to me:
*35 connect() failed (111: Connection refused) while connecting to upstream, client: xxx.xxx.xxxx.xxx, server: staging.*, request: "GET /favicon.ico HTTP/2.0", upstream: "https://xxx.xxx.xxx:443/favicon.ico", host: "staging.mysite.com", referrer: "https://staging.mysite.com"
Docker logs for all the 3 containers are also fine (without showing any warnings or errors).
Which step did I get wrong, or is there anything I missed? Thanks for helping.

Hosted server returning localhost/web instead

I have a website hosted on a DigitalOcean server that is behaving weirdly.
The website is written in Flask, deployed in Docker, and served through a reverse proxy combined with Let's Encrypt.
The website's domain is mes.th3pl4gu3.com.
If I go on mes.th3pl4gu3.com/web/ the website appears and works normal.
If I go on mes.th3pl4gu3.com/web it gives me http://localhost/web/ in the URL instead and the connection fails.
However, when I run it locally, it works fine.
I've checked my nginx logs: when I browse mes.th3pl4gu3.com/web/ the access log shows success, but when I use mes.th3pl4gu3.com/web nothing appears in the log.
Does anyone have any idea what might be causing this ?
Below are some codes that might help in troubleshooting.
server {
    server_name mes.th3pl4gu3.com;

    location / {
        access_log /var/log/nginx/mes/access_mes.log;
        error_log /var/log/nginx/mes/error_mes.log;
        proxy_pass http://localhost:9003; # The mes_pool nginx vip
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/..........
    ssl_certificate_key /etc/letsencrypt/........
    include /etc/letsencrypt/.........
    ssl_dhparam /etc/letsencrypt/......
}

server {
    if ($host = mes.th3pl4gu3.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name mes.th3pl4gu3.com;
    return 404; # managed by Certbot
}
Docker Instances:
7121492ad994  docker.pkg.github.com/mervin16/mauritius-emergency-services-api/mes:1.2.5  "uwsgi app.ini"  4 weeks ago  Up 4 weeks  0.0.0.0:9002->5000/tcp  mes-instace-2
f4dc063e33b8  docker.pkg.github.com/mervin16/mauritius-emergency-services-api/mes:1.2.5  "uwsgi app.ini"  4 weeks ago  Up 4 weeks  0.0.0.0:9001->5000/tcp  mes-instace-1
fb269ed2229a  nginx  "/docker-entrypoint.…"  4 weeks ago  Up 4 weeks  0.0.0.0:9003->80/tcp  nginx_mes
2ad5afe0afd1  docker.pkg.github.com/mervin16/mauritius-emergency-services-api/mes:1.2.5  "uwsgi app.ini"  4 weeks ago  Up 4 weeks  0.0.0.0:9000->5000/tcp  mes-backup
docker-compose-instance.yml
version: "3.8"
# Contains all Production instances
# Should always stay up
# In case both instances fails, backup instance will takeover
services:
mes-instace-1:
container_name: mes-instace-1
image: "docker.pkg.github.com/mervin16/mauritius-emergency-services-api/mes:${MES_VERSION}"
networks:
- mes_net
volumes:
- ./data:/app/data
env_file:
- secret.env
ports:
- "9001:5000"
restart: always
environment:
- MES_VERSION=${MES_VERSION}
mes-instace-2:
container_name: mes-instace-2
image: "docker.pkg.github.com/mervin16/mauritius-emergency-services-api/mes:${MES_VERSION}"
networks:
- mes_net
volumes:
- ./data:/app/data
env_file:
- secret.env
ports:
- "9002:5000"
restart: always
environment:
- MES_VERSION=${MES_VERSION}
networks:
mes_net:
name: mes_network
driver: bridge
docker-compose.yml
version: "3.8"
# Contains the backup instance and the nginx server
# This should ALWAYS stay up
services:
mes-backup:
container_name: mes-backup
image: "docker.pkg.github.com/mervin16/mauritius-emergency-services-api/mes:${MES_VERSION}"
networks:
- mes_net
volumes:
- ./data:/app/data
env_file:
- secret.env
ports:
- "9000:5000"
restart: always
environment:
- MES_VERSION=${MES_VERSION}
nginx_mes:
image: nginx
container_name: nginx_mes
ports:
- "9003:80"
networks:
- mes_net
volumes:
- ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
- ./log/nginx:/var/log/nginx
depends_on:
- mes-backup
restart: always
networks:
mes_net:
name: mes_network
driver: bridge
I have multiple instances for load balancing across apps.
Can someone please help, or does anyone have a clue why this might be happening?
As far as I tested, the page https://mes.th3pl4gu3.com/web worked fine with or without the trailing / at the end (Firefox version 87 on Ubuntu).
Maybe there is a bug or problem with your web browser or some kind of VPN / proxy you are running. Make sure all of them are off.
Plus, on Nginx you can take care of the trailing / using a rewrite rule,
e.g.
location = /stream {
    rewrite ^/stream /stream/;
}
which tells Nginx to treat stream as if it were stream/.
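Applied to the path from the question, that would look something like this (untested sketch):
location = /web {
    rewrite ^/web /web/;
}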
And to make sure you are not facing any issue caused by cached data, disable and clear all caches. In your web browser hit F12, go to the console tab, hit F1 and disable the cache there. On Nginx, set "no cache" headers, e.g.
add_header Last-Modified $date_gmt;
add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
if_modified_since off;
expires off;
etag off;
I tested your site with Chrome, Safari, and curl, and I can't see that issue.
Try clearing your cache.
Method 1: Ctrl-Shift-R
Method 2: DevTool -> Application/Storage -> Clear site data
My guess is it is related to your Flask SERVER_NAME
As you said, locally your SERVER_NAME might be set to localhost:8000.
However, on production, it would need to be something like
SERVER_NAME = "th3pl4gu3.com"
Your issue is that the host for generated URLs is being pulled from the Flask SERVER_NAME variable, so it ends up as https://localhost/web instead of your desired URL.
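For illustration, a minimal sketch of what that production setting could look like in a Flask config object (the class name and the PREFERRED_URL_SCHEME line are assumptions, not taken from the linked code):
# hypothetical production config for the Flask app
class ProductionConfig:
    SERVER_NAME = "mes.th3pl4gu3.com"   # public hostname instead of localhost
    PREFERRED_URL_SCHEME = "https"      # so externally generated URLs use https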

Keycloak docker Administration Console shows blank page with reverse proxy

I am using HAProxy with Keycloak. The welcome page shows fine, but each time I enter the Administration Console it shows me a blank page with no info and status code 200.
I am using a Let's Encrypt SSL certificate. Here are my HAProxy config and docker-compose.
HAProxy config:
global
    log stdout local0 debug
    daemon
    maxconn 4096

defaults
    mode http
    option httplog
    option dontlognull
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    log global
    log-format {"type":"haproxy","timestamp":%Ts,"http_status":%ST,"http_request":"%r","remote_addr":"%ci","bytes_read":%B,"upstream_addr":"%si","backend_name":"%b","retries":%rc,"bytes_uploaded":%U,"upstream_response_time":"%Tr","upstream_connect_time":"%Tc","session_duration":"%Tt","termination_state":"%ts"}

frontend public
    bind *:80
    bind *:443 ssl crt /usr/local/etc/haproxy/haproxy.pem alpn h2,http/1.1
    http-request redirect scheme https unless { ssl_fc }
    default_backend web_servers

backend web_servers
    option forwardfor
    server auth1 auth:8080
docker-compose.yaml:
version: "3"
networks:
internal-network:
services:
reverse-proxy:
build: ./reverse-proxy/.
image: reverseproxy:latest
ports:
- "80:80"
- "443:443"
networks:
- internal-network
depends_on:
- auth
auth:
image: quay.io/keycloak/keycloak:latest
networks:
internal-network:
environment:
PROXY_ADDRESS_FORWARDING: "true"
KEYCLOAK_USER: ***
KEYCLOAK_PASSWORD: ***
# Uncomment the line below if you want to specify JDBC parameters. The parameter below is just an example, and it shouldn't be used in production without knowledge. It is highly recommended that you read the PostgreSQL JDBC driver documentation in order to use it.
#JDBC_PARAMS: "ssl=false"
the URL to the page I am trying to access is https:///auth/admin/master/console/
Note: when I remove SSL from HAProxy, Keycloak opens a page with the error "https required".
One obvious issue (there can be more issues, so fixing this one may still not fix everything):
https://www.keycloak.org/docs/latest/server_installation/index.html#_setting-up-a-load-balancer-or-proxy
Configure your reverse proxy or loadbalancer to properly set X-Forwarded-For and X-Forwarded-Proto HTTP headers.
You didn't configure this part in your HAProxy frontend section. You need something like this:
http-request set-header X-Forwarded-Port %[dst_port]
http-request set-header X-Forwarded-For %[src]
http-request set-header X-Forwarded-Proto https
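Placed in the frontend section from the question, that would look roughly like this:
frontend public
    bind *:80
    bind *:443 ssl crt /usr/local/etc/haproxy/haproxy.pem alpn h2,http/1.1
    http-request redirect scheme https unless { ssl_fc }
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request set-header X-Forwarded-For %[src]
    http-request set-header X-Forwarded-Proto https
    default_backend web_servers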
HTTPS is required for OIDC, so "https required" is the correct response when you reach Keycloak over plain HTTP.

gitea in docker behind jwilder/nginx-proxy and jrcs/letsencrypt-nginx-proxy-companion

I am stuck deploying the docker image gitea/gitea:1 behind the reverse proxy jwilder/nginx-proxy with jrcs/letsencrypt-nginx-proxy-companion for automatic certificate updates.
gitea is running and I can connect to it over plain HTTP on port 3000.
The proxy is also running, as I have multiple other apps and services, e.g. sonarqube, working well behind it.
This is my docker-compose.yml:
version: "2"
services:
server:
image: gitea/gitea:1
environment:
- USER_UID=998
- USER_GID=997
- DB_TYPE=mysql
- DB_HOST=172.17.0.1:3306
- DB_NAME=gitea
- DB_USER=gitea
- DB_PASSWD=mysqlpassword
- ROOT_URL=https://gitea.myhost.de
- DOMAIN=gitea.myhost.de
- VIRTUAL_HOST=gitea.myhost.de
- LETSENCRYPT_HOST=gitea.myhost.de
- LETSENCRYPT_EMAIL=me#web.de
restart: always
ports:
- "3000:3000"
- "222:22"
expose:
- "3000"
- "22"
networks:
- frontproxy_default
volumes:
- /mnt/storagespace/gitea_data:/data
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
networks:
frontproxy_default:
external: true
default:
When I call https://gitea.myhost.de the result is
502 Bad Gateway (nginx/1.17.6)
This is the log entry:
2020/09/13 09:57:30 [error] 14323#14323: *15465 no live upstreams while connecting to upstream, client: 77.20.122.169, server: gitea.myhost.de, request: "GET / HTTP/2.0", upstream: "http://gitea.myhost.de/", host: "gitea.myhost.de"
and this is the relevant entry in nginx/conf/default.conf:
# gitea.myhost.de
upstream gitea.myhost.de {
    ## Can be connected with "frontproxy_default" network
    # gitea_server_1
    server 172.23.0.10 down;
}

server {
    server_name gitea.myhost.de;
    listen 80;
    access_log /var/log/nginx/access.log vhost;

    # Do not HTTPS redirect Let'sEncrypt ACME challenge
    location /.well-known/acme-challenge/ {
        auth_basic off;
        allow all;
        root /usr/share/nginx/html;
        try_files $uri =404;
        break;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    server_name gitea.myhost.de;
    listen 443 ssl http2;
    access_log /var/log/nginx/access.log vhost;
    ssl_session_timeout 5m;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;
    ssl_certificate /etc/nginx/certs/gitea.myhost.de.crt;
    ssl_certificate_key /etc/nginx/certs/gitea.myhost.de.key;
    ssl_dhparam /etc/nginx/certs/gitea.myhost.de.dhparam.pem;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/nginx/certs/gitea.myhost.de.chain.pem;
    add_header Strict-Transport-Security "max-age=31536000" always;
    include /etc/nginx/vhost.d/default;

    location / {
        proxy_pass http://gitea.myhost.de;
    }
}
Maybe it's a problem that I used a gitea backup for this container, as suggested in https://docs.gitea.io/en-us/backup-and-restore/
What can I do to get this running? I have read this https://docs.gitea.io/en-us/reverse-proxies/ but maybe I missed something. The main point is to get letsencrypt-nginx-proxy-companion automatically managing the certificates.
Any help and tips are highly appreciated.
I believe all you are missing is the VIRTUAL_PORT setting in your gitea container's environment. This tells the reverse proxy container which port to connect to when routing incoming requests for your VIRTUAL_HOST domain, effectively adding ":3000" to the upstream server entry in the generated nginx conf. This is needed even when all your containers are on the same host. By default, the reverse proxy container connects to port 80 on a proxied service, but since the gitea docker container uses a different default port, 3000, you essentially need to tell the reverse proxy container about it. See below, using the snippet from your compose file.
services:
  server:
    image: gitea/gitea:1
    environment:
      - USER_UID=998
      - USER_GID=997
      - DB_TYPE=mysql
      - DB_HOST=172.17.0.1:3306
      - DB_NAME=gitea
      - DB_USER=gitea
      - DB_PASSWD=mysqlpassword
      - ROOT_URL=https://gitea.myhost.de
      - DOMAIN=gitea.myhost.de
      - VIRTUAL_HOST=gitea.myhost.de
      - VIRTUAL_PORT=3000    # <--- *** Add this line ***
      - LETSENCRYPT_HOST=gitea.myhost.de
      - LETSENCRYPT_EMAIL=me@web.de
    restart: always
    ports:
      - "3000:3000"
      - "222:22"
    expose:
      - "3000"
      - "22"
    networks:
      - frontproxy_default
    volumes:
      - /mnt/storagespace/gitea_data:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
networks:
  frontproxy_default:
    external: true
  default:
P.S.: Publishing the ports is not required if all containers are on the same host, and there was no other reason for it here than trying to get this to work.
