I have a website hosted on a DigitalOcean server that is behaving weirdly.
The website is written in Flask, deployed in Docker, and served through an nginx reverse proxy combined with Let's Encrypt.
The website's domain is mes.th3pl4gu3.com.
If I go to mes.th3pl4gu3.com/web/ the website appears and works normally.
If I go to mes.th3pl4gu3.com/web the URL changes to http://localhost/web/ instead and the connection fails.
However, when I run it locally, it works fine.
I've checked my nginx logs: when I browse mes.th3pl4gu3.com/web/ the access log records a success, but when I use mes.th3pl4gu3.com/web nothing reaches the log at all.
Does anyone have any idea what might be causing this?
Below is the configuration that might help with troubleshooting.
server {
    server_name mes.th3pl4gu3.com;

    location / {
        access_log /var/log/nginx/mes/access_mes.log;
        error_log /var/log/nginx/mes/error_mes.log;
        proxy_pass http://localhost:9003; # The mes_pool nginx vip
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/..........
    ssl_certificate_key /etc/letsencrypt/........
    include /etc/letsencrypt/.........
    ssl_dhparam /etc/letsencrypt/......
}
server {
    if ($host = mes.th3pl4gu3.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name mes.th3pl4gu3.com;
    return 404; # managed by Certbot
}
Docker Instances:
7121492ad994  docker.pkg.github.com/mervin16/mauritius-emergency-services-api/mes:1.2.5  "uwsgi app.ini"  4 weeks ago  Up 4 weeks  0.0.0.0:9002->5000/tcp  mes-instace-2
f4dc063e33b8  docker.pkg.github.com/mervin16/mauritius-emergency-services-api/mes:1.2.5  "uwsgi app.ini"  4 weeks ago  Up 4 weeks  0.0.0.0:9001->5000/tcp  mes-instace-1
fb269ed2229a  nginx  "/docker-entrypoint.…"  4 weeks ago  Up 4 weeks  0.0.0.0:9003->80/tcp  nginx_mes
2ad5afe0afd1  docker.pkg.github.com/mervin16/mauritius-emergency-services-api/mes:1.2.5  "uwsgi app.ini"  4 weeks ago  Up 4 weeks  0.0.0.0:9000->5000/tcp  mes-backup
docker-compose-instance.yml
version: "3.8"
# Contains all Production instances
# Should always stay up
# In case both instances fails, backup instance will takeover
services:
mes-instace-1:
container_name: mes-instace-1
image: "docker.pkg.github.com/mervin16/mauritius-emergency-services-api/mes:${MES_VERSION}"
networks:
- mes_net
volumes:
- ./data:/app/data
env_file:
- secret.env
ports:
- "9001:5000"
restart: always
environment:
- MES_VERSION=${MES_VERSION}
mes-instace-2:
container_name: mes-instace-2
image: "docker.pkg.github.com/mervin16/mauritius-emergency-services-api/mes:${MES_VERSION}"
networks:
- mes_net
volumes:
- ./data:/app/data
env_file:
- secret.env
ports:
- "9002:5000"
restart: always
environment:
- MES_VERSION=${MES_VERSION}
networks:
mes_net:
name: mes_network
driver: bridge
docker-compose.yml
version: "3.8"
# Contains the backup instance and the nginx server
# This should ALWAYS stay up
services:
mes-backup:
container_name: mes-backup
image: "docker.pkg.github.com/mervin16/mauritius-emergency-services-api/mes:${MES_VERSION}"
networks:
- mes_net
volumes:
- ./data:/app/data
env_file:
- secret.env
ports:
- "9000:5000"
restart: always
environment:
- MES_VERSION=${MES_VERSION}
nginx_mes:
image: nginx
container_name: nginx_mes
ports:
- "9003:80"
networks:
- mes_net
volumes:
- ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
- ./log/nginx:/var/log/nginx
depends_on:
- mes-backup
restart: always
networks:
mes_net:
name: mes_network
driver: bridge
I have multiple instances of the app for load balancing.
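Roughly, the nginx_mes container balances across those instances with an upstream block along these lines (a simplified sketch, not the actual file; the upstream name, the backup flag and the use of proxy_pass rather than uwsgi_pass are assumptions):

upstream mes_pool {
    server mes-instace-1:5000;
    server mes-instace-2:5000;
    server mes-backup:5000 backup;  # assumed: only used if both instances fail
}

server {
    listen 80;

    location / {
        proxy_pass http://mes_pool;
    }
}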
Can someone please help, or does anyone have a clue why this might be happening?
As far as I tested, the page https://mes.th3pl4gu3.com/web worked fine with or without the trailing slash (Firefox 87 on Ubuntu).
Maybe there is a bug or problem with your web browser, or with a VPN or proxy you are running. Make sure they are all off.
Also, on nginx you can get rid of the trailing-slash difference with a rewrite rule, e.g.
location = /stream {
    rewrite ^/stream /stream/;
}
which tells nginx to process a request for /stream as if it were /stream/.
And to make sure you are not facing an issue because of cached data, disable and clear all caches. In your web browser hit F12, go to the Console tab, hit F1, and disable the cache there. On nginx, set "no cache" headers, e.g.:
add_header Last-Modified $date_gmt;
add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
if_modified_since off;
expires off;
etag off;
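In context, these directives would typically sit in the same block that proxies the app, e.g. (a minimal sketch; the upstream address is a placeholder):

location / {
    proxy_pass http://localhost:9003;
    add_header Last-Modified $date_gmt;
    add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
    if_modified_since off;
    expires off;
    etag off;
}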
I tested your site with Chrome, Safari, and curl, and I can't reproduce the issue.
Try clearing your cache.
Method 1: Ctrl-Shift-R
Method 2: DevTools -> Application/Storage -> Clear site data
My guess is that it is related to your Flask SERVER_NAME.
As you said, locally your SERVER_NAME might be set to localhost:8000.
On production, however, it would need to be something like
SERVER_NAME = "th3pl4gu3.com"
Your issue is that Flask builds the redirect URL from its SERVER_NAME setting, so it ends up as https://localhost/web instead of your desired URL.
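A minimal sketch of what that difference looks like on the Flask side (illustrative config only; the class names and the local value are assumptions):

# config.py - hedged sketch, not the poster's actual settings
class DevConfig:
    SERVER_NAME = "localhost:8000"       # local development

class ProdConfig:
    SERVER_NAME = "mes.th3pl4gu3.com"    # production domain

With SERVER_NAME still pointing at localhost in production, Flask builds the /web -> /web/ redirect against localhost, which matches the behaviour described above.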
Hope someone can help me with an error I could not resolve.
Scenario:
Changing the default 1 MB upload limit with client_max_body_size 10m; works without any problem when nginx runs outside a Docker container. But with the nginx:latest Docker container, the setting configured for the container is not reflected, and 413 Request Entity Too Large is returned when uploading files larger than 1 MB (nginx's default limit).
There are three containers
nginx
mariadb
express pm2 app using axios
P.S. The upload limit is already handled in the Express middleware:
app.use(express.json({ extended: true, limit: '10mb' }));
app.use(express.urlencoded({ extended: true, limit: '10mb' }));
docker-compose.yml
services:
  nginx:
    image: nginx:latest
    container_name: app_nginx
    restart: always
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./nginx/conf.d/app.conf:/etc/nginx/conf.d/app.conf
      - ./nginx/ssl:/etc/nginx/ssl
      - ./nginx/logs:/var/log/nginx
app.conf
server {
    listen 443 ssl;
    ssl_certificate /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;

    location / {
        ...
        client_max_body_size 10m;
        ...
    }
}
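For reference, client_max_body_size is also valid at the server (or http) level, where it covers every location in the block - a minimal sketch of that alternative placement (same certificate lines as above, locations abbreviated):

server {
    listen 443 ssl;
    ssl_certificate /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    client_max_body_size 10m;   # applies to all locations below

    location / {
        ...
    }
}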
I have a server running docker-compose. The compose file has two services: nginx (as a reverse proxy) and back (an API that handles two requests). In addition, there is a database that is not located on the server but hosted separately (database as a service).
Requests handled by back:
get('/api') - the back service simply replies "API is running"
get('/db') - the back service sends a simple query to the external database ('SELECT random() as random, current_database() as db')
Request 1 works fine; on request 2 the back service crashes, nginx keeps running, and a 502 Bad Gateway error appears in the console.
The nginx service logs show: upstream prematurely closed connection while reading response header from upstream.
The back service logs show: connection terminated due to connection timeout.
These are both rather vague errors, and I don't know how else to approach them, given that the same code, outside a container, without nginx, and with the same database, works as it should.
What I have tried:
increase the number of cores and RAM (now 2 cores and 4 GB of RAM);
add/remove/change the proxy_read_timeout, proxy_send_timeout and proxy_connect_timeout parameters (placed roughly as in the sketch after this list);
test the www.test.com/db request via Postman and curl (it fails with the same error);
run the code on my local machine without a container or compose, connecting to the same database with the same pool and the same IP (everything is OK, both requests work and return what they should);
change the worker_processes parameter (tested with values of 1 and auto);
add/remove the proxy_set_header Host $http_host directive, and replace $http_host with "www.test.com".
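For reference, the timeout directives mentioned in the list were placed roughly like this (a sketch of the placement only; the values shown are placeholders, not the exact ones tested):

location / {
    proxy_pass http://back-stream;
    proxy_connect_timeout 60s;
    proxy_send_timeout 60s;
    proxy_read_timeout 60s;
}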
Question:
What else can I try to fix the error and make the db request work?
My nginx.conf:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    upstream back-stream {
        server back:8080;
    }

    server {
        listen 80;
        listen [::]:80;
        server_name test.com www.test.com;

        location / {
            root /usr/share/nginx/html;
            resolver 121.0.0.11;
            proxy_pass http://back-stream;
        }
    }
}
My docker-compose.yml:
version: '3.9'

services:
  nginx-proxy:
    image: nginx:stable-alpine
    container_name: nginx-proxy
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    networks:
      - network

  back:
    image: "mycustomimage"
    container_name: back
    restart: unless-stopped
    ports:
      - '81:8080'
    networks:
      - network

networks:
  network:
    driver: bridge
I can upload other files if needed. Given that the code only misbehaves when it runs in the container, the problem is most likely in the container setup.
I will be grateful for any help.
Code of the back service: here
The reason for the error was this: I forgot to add my server's IP to the list of allowed addresses in the database cluster.
I'm using "linuxserver"'s swag image to reverse-proxy my docker-compose images. I want to serve one of my images as my domain root and the other one as a subdomain (e.g. root-image # mysite.com and subdomain-image # staging.mysite.com). Here are the steps I went through:
redirected my domain and subdomain names to my server in Cloudflare (pinging them shows my server's IP and this is step OK! )
Configured Cloudflare DNS config for swag (this working OK!)
Configured docker-compose file:
swag:
  image: linuxserver/swag:version-1.14.0
  container_name: swag
  cap_add:
    - NET_ADMIN
  environment:
    - PUID=1000
    - PGID=1000
    - URL=mysite.com
    - SUBDOMAINS=www,staging
    - VALIDATION=dns
    - DNSPLUGIN=cloudflare #optional
    - EMAIL=me@mail.com #optional
    - ONLY_SUBDOMAINS=false #optional
    - STAGING=false #optional
  volumes:
    - /docker-confs/swag/config:/config
  ports:
    - 443:443
    - 80:80 #optional
  restart: unless-stopped

root-image:
  image: ghcr.io/me/root-image
  container_name: root-image
  restart: unless-stopped

subdomain-image:
  image: ghcr.io/me/subdomain-image
  container_name: subdomain-image
  restart: unless-stopped
Defined a proxy conf for my root-image (at swag/config/nginx/proxy-confs/root-image.subfolder.conf)
location / {
    include /config/nginx/proxy.conf;
    resolver 127.0.0.11 valid=30s;
    set $upstream_app root-image;
    set $upstream_port 3000;
    set $upstream_proto http;
    proxy_pass $upstream_proto://$upstream_app:$upstream_port;
}
Commented out nginx's default location / {} block (at swag/config/nginx/site-confs/default)
Defined a proxy conf for my subdomain-image (at swag/config/nginx/proxy-confs/subdomain-image.subdomain.conf)
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name staging.*;
    include /config/nginx/ssl.conf;
    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app subdomain-image;
        set $upstream_port 443;
        set $upstream_proto https;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }
}
Both images expose port 3000. Now my root image is working fine, but I'm getting a 502 error for my subdomain image. I checked the nginx error log, but it doesn't show anything meaningful to me:
*35 connect() failed (111: Connection refused) while connecting to upstream, client: xxx.xxx.xxxx.xxx, server: staging.*, request: "GET /favicon.ico HTTP/2.0", upstream: "https://xxx.xxx.xxx:443/favicon.ico", host: "staging.mysite.com", referrer: "https://staging.mysite.com"
Docker logs for all three containers are also fine (no warnings or errors).
Which step did I get wrong, or is there anything I missed? Thanks for helping.
I am stuck deploying the Docker image gitea/gitea:1 behind a jwilder/nginx-proxy reverse proxy with jrcs/letsencrypt-nginx-proxy-companion for automatic certificate renewal.
Gitea is running and I can connect to it over HTTP on port 3000.
The proxy is also running, as I have multiple other apps and services, e.g. SonarQube, working well behind it.
This is my docker-compose.yml:
version: "2"
services:
server:
image: gitea/gitea:1
environment:
- USER_UID=998
- USER_GID=997
- DB_TYPE=mysql
- DB_HOST=172.17.0.1:3306
- DB_NAME=gitea
- DB_USER=gitea
- DB_PASSWD=mysqlpassword
- ROOT_URL=https://gitea.myhost.de
- DOMAIN=gitea.myhost.de
- VIRTUAL_HOST=gitea.myhost.de
- LETSENCRYPT_HOST=gitea.myhost.de
- LETSENCRYPT_EMAIL=me#web.de
restart: always
ports:
- "3000:3000"
- "222:22"
expose:
- "3000"
- "22"
networks:
- frontproxy_default
volumes:
- /mnt/storagespace/gitea_data:/data
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
networks:
frontproxy_default:
external: true
default:
When I call https://gitea.myhost.de the result is
502 Bad Gateway (nginx/1.17.6)
This is the log entry:
2020/09/13 09:57:30 [error] 14323#14323: *15465 no live upstreams while connecting to upstream, client: 77.20.122.169, server: gitea.myhost.de, request: "GET / HTTP/2.0", upstream: "http://gitea.myhost.de/", host: "gitea.myhost.de"
and this is the relevant entry in nginx/conf/default.conf:
# gitea.myhost.de
upstream gitea.myhost.de {
    ## Can be connected with "frontproxy_default" network
    # gitea_server_1
    server 172.23.0.10 down;
}

server {
    server_name gitea.myhost.de;
    listen 80;
    access_log /var/log/nginx/access.log vhost;

    # Do not HTTPS redirect Let'sEncrypt ACME challenge
    location /.well-known/acme-challenge/ {
        auth_basic off;
        allow all;
        root /usr/share/nginx/html;
        try_files $uri =404;
        break;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    server_name gitea.myhost.de;
    listen 443 ssl http2;
    access_log /var/log/nginx/access.log vhost;
    ssl_session_timeout 5m;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;
    ssl_certificate /etc/nginx/certs/gitea.myhost.de.crt;
    ssl_certificate_key /etc/nginx/certs/gitea.myhost.de.key;
    ssl_dhparam /etc/nginx/certs/gitea.myhost.de.dhparam.pem;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/nginx/certs/gitea.myhost.de.chain.pem;
    add_header Strict-Transport-Security "max-age=31536000" always;
    include /etc/nginx/vhost.d/default;

    location / {
        proxy_pass http://gitea.myhost.de;
    }
}
Maybe it's a problem that I restored a gitea backup into this container, as suggested in https://docs.gitea.io/en-us/backup-and-restore/.
What can I do to get this running? I have read https://docs.gitea.io/en-us/reverse-proxies/ but maybe I missed something. The main point is to have letsencrypt-nginx-proxy-companion manage the certificates automatically.
Any help and tips are highly appreciated.
I believe all you are missing is the VIRTUAL_PORT setting in your gitea container's environment. This tells the reverse-proxy container which port to connect to when routing incoming requests for your VIRTUAL_HOST domain, effectively appending ":3000" to the upstream server in the generated nginx conf. This is needed even when your containers are all on the same host. By default, the reverse-proxy container talks to a service on port 80, but since the gitea Docker container defaults to port 3000, you need to tell the reverse proxy about it. See below, using a snippet from your compose file.
services:
  server:
    image: gitea/gitea:1
    environment:
      - USER_UID=998
      - USER_GID=997
      - DB_TYPE=mysql
      - DB_HOST=172.17.0.1:3306
      - DB_NAME=gitea
      - DB_USER=gitea
      - DB_PASSWD=mysqlpassword
      - ROOT_URL=https://gitea.myhost.de
      - DOMAIN=gitea.myhost.de
      - VIRTUAL_HOST=gitea.myhost.de
      - VIRTUAL_PORT=3000 # <--- Add this line
      - LETSENCRYPT_HOST=gitea.myhost.de
      - LETSENCRYPT_EMAIL=me@web.de
    restart: always
    ports:
      - "3000:3000"
      - "222:22"
    expose:
      - "3000"
      - "22"
    networks:
      - frontproxy_default
    volumes:
      - /mnt/storagespace/gitea_data:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro

networks:
  frontproxy_default:
    external: true
  default:
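With VIRTUAL_PORT set, the upstream block that nginx-proxy generates in default.conf should point at port 3000 instead of marking the backend down - roughly like this (a sketch of the expected result; the container IP is whatever Docker assigns):

upstream gitea.myhost.de {
    ## Can be connected with "frontproxy_default" network
    server 172.23.0.10:3000;
}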
P.S.: Exposing the ports is not required if all containers are on the same host, unless there was another reason for it beyond trying to get this working.
To get the hang of nginx with Docker, I have a very simple nginx.conf plus a docker-compose file, running two containers for one service (the service itself plus its db).
What I want:
localhost --> show a static page
localhost/pics --> show another static page
localhost/wekan --> proxy to my container, which is running on port 3001.
The last part (the proxy to the Docker container) does not work. The app can be reached at localhost:3001, though.
My nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    server {
        listen 80;

        location / {
            root /home/user/serverTest/up1; # index.html is here
        }

        location /wekan {
            proxy_pass http://localhost:3001;
            rewrite ^/wekan(.*)$ $1 break; # this didn't help either
        }

        location /pics {
            proxy_pass http://localhost/example.jpg;
        }

        location ~ \.(gif|jpg|png)$ {
            root /home/user/serverTest/data/images;
        }
    }
}
docker-compose.yml:
version: '2'

services:
  wekandb:
    image: mongo:3.2.21
    container_name: wekan-db
    restart: always
    command: mongod --smallfiles --oplogSize 128
    networks:
      - wekan-tier
    expose:
      - 27017
    volumes:
      - /home/user/wekan/wekan-db:/data/db
      - /home/user/wekan/wekan-db-dump:/dump

  wekan:
    image: quay.io/wekan/wekan
    container_name: wekan-app
    restart: always
    networks:
      - wekan-tier
    ports:
      # Docker outsideport:insideport
      - 127.0.0.1:3001:8080
    environment:
      - MONGO_URL=mongodb://wekandb:27017/wekan
      - ROOT_URL=http://localhost
Looking at the nginx error logs, I get this:
2018/12/17 11:57:16 [error] 9060#9060: *124 open() "/home/user/serverTest/up1/31fb090e9e6464a4d62d3588afc742d2e11dc1f6.js" failed (2: No such file or directory),
client: 127.0.0.1, server: ,
request: "GET /31fb090e9e6464a4d62d3588afc742d2e11dc1f6.js?meteor_js_resource=true HTTP/1.1", host: "localhost",
referrer: "http://localhost/wekan"
So I guess this makes sense: in my understanding, nginx now resolves those asset requests against the root configured at location /, but that is clearly not where the container is serving them from.
How do I prevent that?
Your nginx cannot reach the local network interface of your Docker composition.
Try binding wekan's port like this:
wekan:
  ports:
    - 127.0.0.1:3001:8080
Mind the 127.0.0.1
See https://docs.docker.com/compose/compose-file/#ports
The problem was within the docker-compose configuration.
For anyone wondering, all you need is a proxy_pass to addr:port or addr:port/; the second form does the same thing as the rewrite rule, so the rewrite can be skipped.
Apart from that, I had to add /wekan to the ROOT_URL in my docker-compose (see the sketch below).
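For anyone who wants it spelled out, a minimal sketch of the trailing-slash form (address and port taken from the question; whether the /wekan prefix should actually be stripped depends on how the app builds its asset URLs once ROOT_URL includes /wekan):

location /wekan/ {
    # the trailing slash on proxy_pass replaces the matched /wekan/ prefix,
    # which is what the rewrite rule was doing before
    proxy_pass http://127.0.0.1:3001/;
}

And in docker-compose the environment entry becomes ROOT_URL=http://localhost/wekan.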