Run qgis server with nginx in docker container (docker-compose) - docker

I need to run QGIS Server together with NGINX, and I have to set up the environment using docker-compose. I am using the docker-compose file as mentioned in the comment.
My nginx.conf is as below:
events {
    worker_connections 4096;
}
http {
    # error_log /etc/nginx/error/error.log warn; #./nginx/error.log warn;
    client_max_body_size 20m;
    proxy_cache_path /etc/nginx/cache keys_zone=one:500m max_size=1000m;

    server {
        listen 80;
        server_name xx.xx.xx.xxx;
        # return 301 https://localhost:80$request_uri;
        return 301 https://$server_name$request_uri;
        # return 301 https://localhost:8008;
    }

    server {
        listen 443 ssl http2;
        server_name xx.xx.xx.xxx; # localhost;
        ssl_session_cache shared:SSL:50m;
        ssl_session_timeout 1d;
        ssl_session_tickets off;
        #ssl_certificate /etc/nginx/ssl.crt;
        #ssl_certificate_key /etc/nginx/ssl.key;
        ssl_protocols TLSv1.2;
        ssl_ciphers EECDH+AESGCM:EDH+AESGCM:EECDH:EDH:!MD5:!RC4:!LOW:!MEDIUM:!CAMELLIA:!ECDSA:!DES:!DSS:!3DES:!NULL;
        ssl_prefer_server_ciphers on;
        keepalive_timeout 70;

        location /qgis/ {
            proxy_pass http://qgis:8080;
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
        }
    }
}
After docker-compose up, the nginx container is always in a restarting state. The docker-compose logs are as below:
web_server_1 | 2021/05/12 16:53:45 [emerg] 1#1: host not found in upstream "qgis" in /etc/nginx/nginx.conf:40
web_server_1 | nginx: [emerg] host not found in upstream "qgis" in /etc/nginx/nginx.conf:40
Thanks in advance!!

Use something like this as your docker-compose.yml:
services:
  web_server:
    image: nginx
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./mime.types:/etc/nginx/conf/mime.types
      - ./public:/data/www
      - ./tile_cache:/tile_cache
      - ./logs:/logs
    ports:
      - "80:80"
      - "443:443"
    restart: always
    networks:
      tile_network:
        aliases:
          - webserver
  qgis_server:
    image: camptocamp/qgis-server
    volumes:
      - ./qgisserver:/etc/qgisserver/
    restart: always
    environment:
      - QGIS_PROJECT_FILE=/etc/qgisserver/project.qgs
    networks:
      tile_network:
        aliases:
          - qgis
networks:
  tile_network:   # the shared network must also be declared at the top level
Add the following location to your nginx.conf:
location /qgis/ {
    proxy_pass http://qgis/;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
}
So you have a web server that hides the QGIS Server and exposes it at the URL localhost/qgis.
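A quick smoke test, sketched under the assumption that the certificate files referenced in nginx.conf are actually in place and the QGIS project loads: first check that the nginx container resolves the qgis alias on tile_network, then request the WMS capabilities through the proxy.
# From the host: can the nginx container resolve the "qgis" alias?
docker-compose exec web_server getent hosts qgis

# Through the proxy (-k because the certificate may be self-signed):
curl -k "https://localhost/qgis/?SERVICE=WMS&VERSION=1.3.0&REQUEST=GetCapabilities"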

Related

Attempting to use docker / docker compose to setup nginx to proxy requests to different ports on localhost - what am I missing?

I am attempting to forward requests this way:
https://xxx.domain1.com -> http://localhost:3000
https://yyy.domain2.com -> http://localhost:3001
To make it easier to get nginx up and running, I'm using docker. Here is my docker-compose.yml:
version: '3.7'
services:
  proxy:
    image: nginx:alpine
    container_name: proxy
    ports:
      - '443:443'
      - '80:80'
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./.cert/cert.pem:/etc/nginx/.cert/cert.pem
      - ./.cert/key.pem:/etc/nginx/.cert/key.pem
    restart: 'unless-stopped'
    networks:
      - backend
networks:
  backend:
    driver: bridge
And here is my nginx.conf:
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    server {
        listen 80;
        server_name yyy.domain2.com;
        chunked_transfer_encoding on;
        location / {
            proxy_pass http://localhost:3001/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
    server {
        listen 80;
        server_name xxx.domain1.com;
        chunked_transfer_encoding on;
        location / {
            proxy_pass http://localhost:3000/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
stream {
    map $ssl_preread_server_name $name {
        xxx.domain1.com backend;
        yyy.domain2.com frontend;
    }
    upstream backend {
        server localhost:3000;
    }
    upstream frontend {
        server localhost:3001;
    }
    server {
        listen 443;
        listen [::]:443;
        proxy_pass $name;
        ssl_preread on;
        ssl_certificate ./.cert/cert.pem;
        ssl_certificate_key ./.cert/key.pem;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS";
    }
}
I can access my services locally if I just open http://localhost:3000/test and http://localhost:3001/test, no problem.
But if I attempt to access with https://xxx.domain1.com/test, it spins for a while and then fails with ERR_CONNECTION_TIMED_OUT.
What am I missing?
UPDATE: I tried setting up the nginx service with a host network, but same result so far. I tried:
services:
  proxy:
    image: nginx:alpine
    # ports:
    #   - '443:443'
    #   - '80:80'
    ...
    extra_hosts:
      - "host.docker.internal:host-gateway"
and
services:
  proxy:
    image: nginx:alpine
    ports:
      - '443:443'
      - '80:80'
    ...
    network_mode: "host"
But no luck...
I think I'm missing how to tell nginx to forward the request to the host instead of to localhost inside its own container.
But how do I fix that?
Thanks,
Eduardo
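One common fix, sketched here on the assumption that the two apps really do listen on the host's ports 3000 and 3001: keep the extra_hosts: "host.docker.internal:host-gateway" entry from the update above and point nginx at that name instead of localhost, because inside the proxy container localhost is the container itself.
# http block: proxy to the Docker host, not to the proxy container
location / {
    proxy_pass http://host.docker.internal:3001/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}

# stream block: the same substitution in the upstreams
upstream backend {
    server host.docker.internal:3000;
}
upstream frontend {
    server host.docker.internal:3001;
}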

NginX in docker: 2nd domain returns 502 bad gateway

I have 2 domains pointing to one virtual private server (Ubuntu 21). The first domain (running on port 3000) works as expected; the second domain (running on port 4000 in the container and 5000 on the host) does not and returns nginx 502 Bad Gateway. I have added port 4000 pointing to 80 on the nginx container.
I have configured it like below:
docker-compose.yml:
version: '3'
services:
nginx:
image: nginx:stable-alpine
ports:
- "3000:80" # nginx listen on 80
- "4000:80"
volumes:
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
pwm-node:
build: .
image: my_acc/pwm-node
environment:
- PORT=3000
depends_on:
- mongo
mongo:
image: mongo
volumes:
- mongo-db:/data/db
redis:
image: redis
volumes:
mongo-db:
nginx conf:
server {
    listen 80;
    server_name first_domain.com www.first_domain.com;
    # Redirect http to https
    location / {
        return 301 https://first_domain.com$request_uri;
    }
}
server {
    listen 80;
    server_name second_domain.com www.second_domain.com;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://localhost:4000;
        proxy_redirect off;
    }
}
server {
    listen 443 ssl http2;
    server_name first_domain.com www.first_domain.com;
    ssl on;
    server_tokens off;
    ssl_certificate /etc/nginx/ssl/live/first_domain.com/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/live/first_domain.com/privkey.pem;
    ssl_dhparam /etc/nginx/dhparam/dhparam-2048.pem;
    ssl_buffer_size 8k;
    ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://pwm-node:3000;
        proxy_redirect off;
    }
}
It looks like nginx does not accept http://localhost:4000. I may have to add node-app-4000 to docker-compose.yml as a service and replace localhost with node-app-4000.
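That is the usual direction; a minimal sketch, assuming the second app listens on port 4000 inside its container and node-app-4000 is just an illustrative service name with a hypothetical image:
# docker-compose.yml, alongside the existing services:
  node-app-4000:
    image: my_acc/second-app   # hypothetical image for the second domain
    environment:
      - PORT=4000

# default.conf, second_domain.com server block:
    location / {
        proxy_set_header Host $http_host;
        proxy_pass http://node-app-4000:4000;   # service name instead of localhost
    }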

How to pass SSL processing from local NginX to Docker NginX?

Hi there. I have a docker nginx reverse proxy configured with SSL by a configuration like this:
server {
    listen 80;
    server_name example.com;
    location / {
        return 301 https://$host$request_uri;
    }
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
}
server {
    listen 443 ssl;
    server_name example.com;
    root /var/www/example.com;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    # some locations location ... { ... }
}
The certificates are configured with certbot and working pretty fine.
All the containers are up and running. But I have multiple websites running on my server. They are managed by local NginX. So I set ports for the docker NginX like this:
nginx:
  image: example/example_nginx:latest
  container_name: example_nginx
  ports:
    - "8123:80"
    - "8122:443"
  volumes:
    - ${PROJECT_ROOT}/data/certbot/conf:/etc/letsencrypt
    - ${PROJECT_ROOT}/data/certbot/www:/var/www/certbot
  command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
The docker port 80 maps to local port 8123 (http). The docker port 443 maps to local port 8122 (https). To pass the request from local NginX to docker container NginX I use the following config:
server {
    listen 80;
    server_name example.com;
    location / {
        access_log off;
        proxy_pass http://localhost:8123;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_redirect off;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
server {
    listen 443 ssl;
    server_name example.com;
    location / {
        access_log off;
        proxy_pass https://localhost:8122;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_redirect off;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
When I open the website it works, but the certificate seems to be broken and my WebSockets crash.
My question is: how can I pass SSL processing from the local NginX to the docker NginX so that it works as expected?
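One way to hand the TLS handshake to the container, sketched on the assumption that SNI is enough to tell example.com apart from the other sites: let the local NginX pass the raw TCP stream on port 443 through to container port 8122 with the stream module and ssl_preread (the same pattern as in the stream block earlier on this page), so the docker NginX terminates TLS with the Let's Encrypt certificate it already holds. The WebSocket crashes are a separate point: passing only an Upgrade header is not enough, the proxied hop also needs proxy_http_version 1.1 and a Connection "upgrade" header.
# Local NginX, nginx.conf (stream {} sits next to http {}, not inside it);
# the local HTTPS vhosts would have to move off 443, e.g. to 4443 (hypothetical port)
stream {
    map $ssl_preread_server_name $tls_backend {
        example.com  dockerized;    # raw TLS for this name goes to the container untouched
        default      local_sites;   # everything else stays with the local vhosts
    }
    upstream dockerized  { server 127.0.0.1:8122; }
    upstream local_sites { server 127.0.0.1:4443; }

    server {
        listen 443;
        ssl_preread on;
        proxy_pass $tls_backend;
    }
}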

Nginx for multiple Docker Containers

I want to use different ports for different docker containers. If a request comes in on port 8080, it should go to the nginx container, and so on.
My docker-compose file:
version: "3"
services:
  ngnix:
    image: nginx
    ports:
      - "8080:80"
    volumes:
      - ./nginx.conf/:/etc/nginx/nginx.conf
    command: [nginx-debug, "-g", "daemon off;"]
  apache:
    image: httpd
    command: bash -c "httpd -D FOREGROUND"
    ports:
      - "8081:80"
And my nginx.conf file is below:
worker_processes 1;
events { worker_connections 1024; }
http {
    sendfile on;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $server_name;

    server {
        listen 8080;
        server_name localhost;
        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_redirect off;
        }
    }
    server {
        listen 8081;
        server_name localhost;
        location / {
            proxy_pass http://127.0.0.1:8081;
            proxy_redirect off;
        }
    }
}
But it does not work. Where is my mistake? I have spent all my time on this.
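Two details stand out, so here is a minimal sketch of the usual layout, assuming the goal is for nginx (reached via host port 8080) to forward to the apache container: inside its container nginx receives traffic on port 80 (that is what "8080:80" maps to), and other containers are reached by their compose service name, never by 127.0.0.1, which is the nginx container itself.
worker_processes 1;
events { worker_connections 1024; }
http {
    sendfile on;
    server {
        listen 80;                        # container port; host port 8080 is mapped onto it
        server_name localhost;
        location / {
            proxy_pass http://apache:80;  # compose service name on the default network
            proxy_set_header Host $host;
            proxy_redirect off;
        }
    }
}
Note that requests to host port 8081 never pass through nginx at all; they hit the apache container directly through its own "8081:80" mapping, so the second server block has no effect as written.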

Docker compose network confused the nodes

Sometimes the nginx server forwards a request to the wrong docker-compose service.
I have this docker-compose config:
version: "3.0"
services:
  proj-reader:
    image: repository:5000/my-company.com/proj-reader
    ports:
      - "28090:8080"
      - "28095:5005"
  proj-helpdesk:
    image: repository:5000/my-company.com/proj-helpdesk
    ports:
      - "29080:8080"
      - "29085:5005"
  proj-frontend:
    image: repository:5000/my-company.com/proj-frontend
    ports:
      - "80:80"
    links:
      - "proj-helpdesk:backend"
      - "proj-reader:reader"
...
The frontend is an nginx container with a NodeJS application, and we have the following nginx configuration:
upstream backend {
    server backend:8080;
    keepalive 30;
}
upstream reader {
    server reader:8080;
    keepalive 30;
}
server {
    listen 80;
    client_max_body_size 2m;
    server_name localhost;
    root /usr/share/nginx/html;

    location /api/ {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://backend;
        proxy_read_timeout 600;
    }
    location /web-callback/ {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://reader;
    }
}
And sometimes I can see that a request for /web-callback/ is received by another service, one that isn't even pointed to in the links section of the frontend service. At first I thought it happened after I had restarted the reader service, but yesterday the situation repeated and I know the reader had not been restarted.
What can it be? And how can I prevent this situation in the future?
You should try to use:
location ^~ /api/ {
    # Matches queries beginning with /api/ and then stops searching.
}
location ^~ /web-callback/ {
    # Matches queries beginning with /web-callback/ and then stops searching.
}
https://www.keycdn.com/support/nginx-location-directive/
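Applied to the configuration above, a sketch of the two locations with the ^~ modifier in place (everything else unchanged):
location ^~ /api/ {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://backend;
    proxy_read_timeout 600;
}
location ^~ /web-callback/ {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://reader;
}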
