Best practice: NGINX reverse proxy and a Docker container depending on it - docker

I need your help. I tried several searches, but since my problem seems fairly specific, I did not find a solution, so I am asking directly.
My situation is (or rather should be) as follows:
I have a GitLab instance with a container registry and a containerized backend service running on the same server; access is managed by an NGINX reverse proxy (also running on the same server, as a Docker container).
Right now, the backend service and the reverse proxy are built and run from a single docker-compose file.
The problem is that this does not work: the backend-service image has to be pulled from the GitLab container registry through the reverse proxy, so I cannot start the backend service without the reverse proxy running. But I cannot start the reverse proxy either, because its config declares a proxy_pass and an upstream pointing at the backend service:
nginx: [emerg] host not found in upstream "xxx:yyyy" in /etc/nginx/conf.d/upstream.conf:6
So I am generally interested in best practices here:
I think I should separate the reverse proxy and the backend service. Right?
How can I start the proxy when the upstream it references is not yet running at start time? I tried the resolver directive and a static IP, approaches I found here in the forums, but they do not work for me. depends_on does not help either.
Is it possible to start the proxy without the backend service's part of the configuration and copy that NGINX config into the container when the backend starts? Or is there a better best practice?
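Regarding the first point (separating the two), a common pattern is to run the reverse proxy and the backend as two separate compose projects that join a pre-created external Docker network; the proxy can then start on its own, and the backend attaches to the same network whenever its image can be pulled. What follows is only a minimal sketch with a placeholder network name (edge) and most details trimmed:
docker network create edge
proxy/docker-compose.yml (sketch)
services:
  nginx_reverse_proxy:
    image: nginx:mainline-alpine
    ports:
      - "80:80"
      - "443:443"
    networks:
      - edge
networks:
  edge:
    external: true
backend/docker-compose.yml (sketch)
services:
  backend-service:
    image: xxx.xxx.de:port/group/backend-service
    networks:
      - edge
networks:
  edge:
    external: true
Because the network is external, neither project owns it, so either side can be started, stopped, or rebuilt independently, and containers on it still resolve each other by service name.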
docker-compose.yml
version: '3.6'
services:
nginx_reverse_proxy:
image: nginx:mainline-alpine
container_name: nginx_reverse_proxy
restart: unless-stopped
ports:
- "80:80"
- "443:443"
volumes:
- web-root:/var/www/html
- ./nginx-conf:/etc/nginx/conf.d
- certbot-etc:/etc/letsencrypt
- certbot-var:/var/lib/letsencrypt
- dhparam:/etc/ssl/certs
- logs:/var/log/nginx
networks:
- app-network
backend-service:
image: xxx.xxx.de:port/group/backend-service
container_name: backend-service
restart: unless-stopped
ports:
- "4000:4000"
depends_on:
- nginx_reverse_proxy
networks:
- app-network
... [some certbot instructions]
volumes:
certbot-etc:
certbot-var:
web-root:
driver: local
driver_opts:
type: none
device: xxx
o: bind
dhparam:
driver: local
driver_opts:
type: none
device: xxx
o: bind
logs:
driver: local
driver_opts:
type: none
device: xxx
o: bind
networks:
app-network:
driver: bridge
backend-service.conf (excerpt)
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name xxx.xxx.de;
server_tokens off;
ssl_certificate /etc/letsencrypt/live/xxx.xxx.de/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/xxx.xxx.de/privkey.pem;
ssl_buffer_size 8k;
ssl_dhparam /etc/ssl/certs/dhparam-2048.pem;
ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
ssl_ecdh_curve secp384r1;
ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;
location / {
try_files $uri @backend-service;
}
location @backend-service {
resolver 127.0.0.11 valid=30s;
set $upstream_backend_service backend-service;
proxy_pass http://$upstream_backend_service;
add_header Access-Control-Allow-Origin https://xx.xx.de;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
# add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
# enable strict transport security only if you understand the implications
}
}
upstream.conf
...
upstream backend-service {
server backend-service:4000;
}
...
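Regarding the second point, the emerg error comes from the static upstream block: nginx resolves the server names in an upstream block once, at startup, so it refuses to start while backend-service does not exist. The resolver-plus-variable approach already used in backend-service.conf sidesteps this, but only if the static upstream block is removed entirely. A minimal sketch, assuming the service is reachable as backend-service on port 4000 as in the compose file:
location @backend-service {
    resolver 127.0.0.11 valid=30s;
    # a variable defers DNS resolution to request time, so nginx starts
    # even while backend-service is not running yet
    set $backend http://backend-service:4000;
    proxy_pass $backend;
}
With this in place, upstream.conf can be dropped; requests made while the backend is down simply return 502 instead of preventing nginx from starting.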
I appreciate your help and look forward to your answers, thank you in advance.

Related

File permissions for mounted volumes in docker

Currently using WSL2 Ubuntu with Docker Desktop for Windows with WSL integration.
docker-compose.yml file
version: '3.9'
services:
wordpress:
# default port 9000 (FastCGI)
image: wordpress:6.1.1-fpm
container_name: wp-wordpress
env_file:
- .env
restart: unless-stopped
networks:
- wordpress
depends_on:
- database
volumes:
- ${WORDPRESS_LOCAL_HOME}:/var/www/html
- ${WORDPRESS_UPLOADS_CONFIG}:/usr/local/etc/php/conf.d/uploads.ini
# - /path/to/repo/myTheme/:/var/www/html/wp-content/themes/myTheme
environment:
- WORDPRESS_DB_HOST=${WORDPRESS_DB_HOST}
- WORDPRESS_DB_NAME=${WORDPRESS_DB_NAME}
- WORDPRESS_DB_USER=${WORDPRESS_DB_USER}
- WORDPRESS_DB_PASSWORD=${WORDPRESS_DB_PASSWORD}
database:
# default port 3306
image: mysql:latest
container_name: wp-database
env_file:
- .env
restart: unless-stopped
networks:
- wordpress
environment:
- MYSQL_DATABASE=${MYSQL_DATABASE}
- MYSQL_USER=${MYSQL_USER}
- MYSQL_PASSWORD=${MYSQL_PASSWORD}
- MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
volumes:
- ${MYSQL_LOCAL_HOME}:/var/lib/mysql
command:
- '--default-authentication-plugin=mysql_native_password'
nginx:
# default ports 80, 443 - expose mapping as needed to host
image: nginx:latest
container_name: wp-nginx
env_file:
- .env
restart: unless-stopped
networks:
- wordpress
depends_on:
- wordpress
ports:
- 8080:80 # http
- 8443:443 # https
volumes:
- ${WORDPRESS_LOCAL_HOME}:/var/www/html
- ${NGINX_CONF}:/etc/nginx/conf.d/default.conf
- ${NGINX_SSL_CERTS}:/etc/nginx/certs
- ${NGINX_LOGS}:/var/log/nginx
adminer:
# default port 8080
image: adminer:latest
container_name: wp-adminer
restart: unless-stopped
networks:
- wordpress
depends_on:
- database
ports:
- "9000:8080"
networks:
wordpress:
name: wp-wordpress
driver: bridge
I'm just starting out with development using Docker. The files on local storage (in the Linux file system) were initially owned by www-data, so I changed the owner to my Linux username with sudo chown -R username:username wordpress/ because they weren't writeable. But doing this doesn't allow me to upload files (from the WordPress interface) or write to files inside the nginx container unless the ownership is changed back to www-data:www-data.
Things I've tried:
Starting a bash session inside the nginx container with docker exec -it <cname> bash and changing the ownership of the uploads directory and its files to my username (after adding the user with adduser username).
Changing the nginx user to my username from within the bash session using the user username username; directive.
I don't know what else to try except sudo chmod -R a+rwx on the main directory.
default.conf:
# default.conf
# redirect to HTTPS
server {
listen 80;
listen [::]:80;
server_name wordpress-docker.test;
location / {
# update port as needed for host mapped https
rewrite ^ https://wordpress-docker.test:8443$request_uri? permanent;
}
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name wordpress-docker.test;
index index.php index.html index.htm;
root /var/www/html;
server_tokens off;
client_max_body_size 75M;
# update ssl files as required by your deployment
ssl_certificate /etc/nginx/certs/localhost+2.pem;
ssl_certificate_key /etc/nginx/certs/localhost+2-key.pem;
# logging
access_log /var/log/nginx/wordpress.access.log;
error_log /var/log/nginx/wordpress.error.log;
# some security headers ( optional )
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
location / {
try_files $uri $uri/ /index.php$is_args$args;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass wordpress:9000;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
}
location ~ /\.ht {
deny all;
}
location = /favicon.ico {
log_not_found off; access_log off;
}
location = /favicon.svg {
log_not_found off; access_log off;
}
location = /robots.txt {
log_not_found off; access_log off; allow all;
}
location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
expires max;
log_not_found off;
}
}
Folder structure:
|-config
|--uploads.ini
|-dbdata
|-logs
|-nginx
|--certs
|--default.conf
|-wordpress
|-.env
|-docker-compose.yml
Referring to this answer, this is how I resolved my issue:
Add your user to the www-data group
sudo usermod -a -G www-data username
Give rw permissions to the www-data group (the -type f flag applies the permissions only to files and leaves directories untouched)
sudo find wordpress -type f -exec chmod g+rw {} +
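A common companion step (an assumption on my part, not part of the answer above) is to make the directories group-writable as well and set the setgid bit so files created later keep the www-data group, then refresh the group membership:
sudo find wordpress -type d -exec chmod g+rwxs {} +
# pick up the new group membership without logging out
newgrp www-data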

502 Error on Production Deployment Django & Nginx using a docker compose file

I am using docker-compose to build the containers and to serve the frontend of my website at https://example.com and the backend at a subdomain, https://api.example.com. The SSL certificates for both the root and the subdomain are working properly, and I can access the live site (static files served by Nginx) at https://example.com, so at least half of the configuration is working. The problem occurs when the frontend tries to communicate with the backend: all calls are met with a "No 'Access-Control-Allow-Origin'" 502 error in the console logs. In the logs of the docker container, this is the error response.
Docker Container Error
2022/03/09 19:01:21 [error] 30#30: *7 connect() failed (111: Connection refused) while connecting
to upstream, client: xxx.xx.xxx.xxx, server: api.example.com, request: "GET /api/services/images/
HTTP/1.1", upstream: "http://127.0.0.1:8000/api/services/images/",
host: "api.example.com", referrer: "https://example.com/"
I think it's likely that something is wrong with my Nginx or docker-compose configuration. When setting SECURE_SSL_REDIRECT, SECURE_HSTS_INCLUDE_SUBDOMAINS, and SECURE_HSTS_SECONDS to False or None (in the Django settings), I am able to hit http://api.example.com:8000/api/services/images/ and get the data I am looking for. So the backend is running and hooked up, just not taking requests from where I want it to. I've attached the Nginx configuration and the docker-compose.yml. Please let me know if you need more info; I would greatly appreciate any input, and thanks in advance for the help.
Nginx-custom.conf
# Config for the frontend application under example.com
server {
listen 80;
server_name example.com www.example.com;
if ($host = www.example.com) {
return 301 https://$host$request_uri;
}
if ($host = example.com) {
return 301 https://$host$request_uri;
}
return 404;
}
server {
server_name example.com www.example.com;
index index.html index.htm;
add_header Access-Control-Allow-Origin $http_origin;
add_header Access-Control-Allow-Credentials true;
add_header Access-Control-Allow-Headers $http_access_control_request_headers;
add_header Access-Control-Allow-Methods $http_access_control_request_method;
location / {
root /usr/share/nginx/html;
try_files $uri /index.html =404;
}
listen 443 ssl;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
##### Config for the backend server at api.example.com
server {
listen 80;
server_name api.example.com;
return 301 https://$host$request_uri;
}
server {
server_name api.example.com;
add_header Access-Control-Allow-Origin $http_origin;
add_header Access-Control-Allow-Credentials true;
add_header Access-Control-Allow-Headers $http_access_control_request_headers;
add_header Access-Control-Allow-Methods $http_access_control_request_method;
location / {
proxy_pass http://127.0.0.1:8000/; #API Server
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto https;
proxy_redirect off;
}
listen 443 ssl;
ssl_certificate /etc/letsencrypt/live/api.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
Docker-Compose File
version: '3.9'
# services that make up the development env
services:
# DJANGO BACKEND
backend:
container_name: example-backend
restart: unless-stopped
image: example-backend:1.0.1
build:
context: ./backend/src
dockerfile: Dockerfile
command: gunicorn example.wsgi:application --bind 0.0.0.0:8000
ports:
- 8000:8000
environment:
- SECRET_KEY=xxx
- DEBUG=0
- ALLOWED_HOSTS=example.com,api.example.com,xxx.xxx.xxx.x
- DB_HOST=postgres-db
- DB_NAME=xxx
- DB_USER=xxx
- DB_PASS=xxx
- EMAIL_HOST_PASS=xxx
# sets a dependency on the db container and there should be a network connection between the two
networks:
- db-net
- shared-network
links:
- postgres-db:postgres-db
depends_on:
- postgres-db
# POSTGRES DATABASE
postgres-db:
container_name: postgres-db
image: postgres
restart: always
volumes:
- example-data:/var/lib/postgresql/data
ports:
- 5432:5432
environment:
- POSTGRES_DB=exampledb
- POSTGRES_USER=user
- POSTGRES_PASSWORD=pass
networks:
- db-net
# ANGULAR & NGINX FRONTEND
frontend:
container_name: example-frontend
build:
context: ./frontend
ports:
- "80:80"
- "443:443"
networks:
- shared-network
links:
- backend
depends_on:
- backend
networks:
shared-network:
driver: bridge
db-net:
volumes:
example-data:
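A likely culprit, judging from the connection-refused log: inside the frontend container, 127.0.0.1 refers to the nginx container itself, not to the Django container, so proxy_pass http://127.0.0.1:8000/ has nothing to connect to. A sketch of the api.example.com location block pointing at the compose service name instead (assuming nginx runs in the frontend container and shares shared-network with the backend, as the compose file suggests):
location / {
    # "backend" is the compose service name, resolved by Docker's embedded DNS
    proxy_pass http://backend:8000/;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Proto https;
    proxy_redirect off;
}
With that change, the 8000:8000 host port mapping on the backend service is no longer needed for nginx and can be removed if the API should not be reachable directly.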

Certbot command in docker-compose issues SSL certificate with invalid CA

The problem
I'm trying to use certbot to auto-generate a TLS certificate for Nginx in my multi-container Docker configuration. Everything works as expected except the Certificate Authority (CA) is invalid.
When I visit my site, I see that the certificate was issued by Fake LE Intermediate X1, an invalid authority.
My setup
Here is the docker-compose.yml file where I call certbot to generate the certificate:
version: '2'
services:
apollo:
restart: always
networks:
- app-network
build: .
ports:
- '1337:1337'
certbot:
image: certbot/certbot
container_name: certbot
volumes:
- certbot-etc:/etc/letsencrypt
- certbot-var:/var/lib/letsencrypt
- web-root:/var/www/html
depends_on:
- webserver
command: certonly --noninteractive --keep-until-expiring --webroot --webroot-path=/var/www/html --email myemail@example.com --agree-tos --no-eff-email -d mydomain.com
webserver:
image: nginx:latest
container_name: webserver
restart: unless-stopped
ports:
- "80:80"
- "443:443"
volumes:
- web-root:/var/www/html
- ./nginx.conf:/etc/nginx/nginx.conf
- certbot-etc:/etc/letsencrypt
- certbot-var:/var/lib/letsencrypt
- dhparam:/etc/ssl/certs
depends_on:
- apollo
networks:
- app-network
volumes:
postgres: ~
certbot-etc:
certbot-var:
dhparam:
driver: local
driver_opts:
type: none
device: /home/user/project_name/dhparam/
o: bind
web-root:
networks:
app-network:
I don't think that Nginx is the issue because the HTTP -> HTTPS redirect works, and the browser receives a certificate. But just in case it's relevant: here's the nginx.conf where I refer to the certificate and configure an HTTP -> HTTPS redirect.
events {}
http {
server {
listen 80;
listen [::]:80;
server_name mydomain.com;
location ~ /.well-known/acme-challenge {
allow all;
root /var/www/html;
}
location / {
rewrite ^ https://$host$request_uri? permanent;
}
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name mydomain.com;
server_tokens off;
ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;
ssl_buffer_size 8k;
ssl_dhparam /etc/ssl/certs/dhparam-2048.pem;
ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
ssl_ecdh_curve secp384r1;
ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8;
location / {
try_files $uri @apollo;
}
location @apollo {
proxy_pass http://apollo:1337;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
}
root /var/www/html;
index index.html index.htm index.nginx-debian.html;
}
}
What I've tried
Initially, I called certonly with the --staging argument in the certbot container definition in docker-compose.yml. This could definitely cause the invalid-CA problem. However, I have since tried revoking the certificate and re-running the command multiple times, but no luck.
I have tried removing the --keep-until-expiring flag in the certbot container definition of docker-compose.yml. This causes certbot to generate a new certificate, but it did not resolve the CA issue.
Visiting crt.sh, I can see that certbot did issue valid certificates for my domain.
So the problem seems to lie not in the generation of these certificates, but in the way my docker-compose/certbot configuration is referring to them.
You can try to add the --force-renewal flag:
command: >-
certonly
--webroot
--webroot-path=/var/www/html
--email myemail@example.com
--agree-tos
--no-eff-email
--force-renewal
-d mydomain.com
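If the browser still shows the staging issuer after changing the command, recreating only the certbot service and then reloading nginx is usually enough; a sketch using the service names from the compose file above:
docker-compose up --force-recreate --no-deps certbot
docker-compose exec webserver nginx -s reload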

how to add subdomain in letsencrypt? I am using docker, nginx, wordpress

I just added a subdomain and tried to expand my existing certificate, but there was an error while renewing it. I already added the subdomain to my DNS records; I've tried both CNAME and A records, but neither works. Do I need to try AAAA?
certbot | Saving debug log to /var/log/letsencrypt/letsencrypt.log
certbot | Plugins selected: Authenticator webroot, Installer None
certbot | Renewing an existing certificate
certbot | Performing the following challenges:
certbot | http-01 challenge for edu.mrtrobotics.com
certbot | Using the webroot path /var/www/html for all unmatched domains.
certbot | Waiting for verification...
certbot | Challenge failed for domain edu.mrtrobotics.com
certbot | http-01 challenge for edu.mrtrobotics.com
certbot | Cleaning up challenges
certbot | Some challenges have failed.
certbot | IMPORTANT NOTES:
certbot | - The following errors were reported by the server:
certbot |
certbot | Domain: edu.mrtrobotics.com
certbot | Type: unauthorized
certbot | Detail: Invalid response from
certbot | https://www.mrtrobotics.com/content-18/ [149.28.180.33]: "
certbot | html>\n\n\n
certbot | charset=\"UTF-8\">\ncontent - MRT Robotics | Coding,
certbot | Robotics, and STEM Edu"
certbot |
certbot | To fix these errors, please make sure that your domain name was
certbot | entered correctly and the DNS A/AAAA record(s) for that domain
certbot | contain(s) the right IP address.
nginx.conf
server {
listen 80;
listen [::]:80;
server_name mrtrobotics.com www.mrtrobotics.com;
location ~ /.well-known/acme-challenge {
allow all;
root /var/www/html;
}
location / {
rewrite ^ https://$host$request_uri? permanent;
}
}
server {
listen 80;
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name edu.mrtrobotics.com;
ssl_certificate /etc/letsencrypt/live/mrtrobotics.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mrtrobotics.com/privkey.pem;
include /etc/nginx/conf.d/options-ssl-nginx.conf;
location / {
proxy_pass https://127.0.0.1/edu$request_uri;
proxy_set_header Host mrtrobotics.com;
}
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name mrtrobotics.com www.mrtrobotics.com;
index index.php index.html index.htm;
root /var/www/html;
server_tokens off;
ssl_certificate /etc/letsencrypt/live/mrtrobotics.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mrtrobotics.com/privkey.pem;
include /etc/nginx/conf.d/options-ssl-nginx.conf;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
# add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
# enable strict transport security only if you understand the implications
location / {
try_files $uri $uri/ /index.php$is_args$args;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass wordpress:9000;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
}
location ~ /\.ht {
deny all;
}
location = /favicon.ico {
log_not_found off; access_log off;
}
location = /robots.txt {
log_not_found off; access_log off; allow all;
}
location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
expires max;
log_not_found off;
}
}
# Set client upload size - 100Mbyte
client_max_body_size 100M;
# to avoid 504 timeout errors - default is 60s
proxy_send_timeout 180s;
proxy_read_timeout 180s;
fastcgi_send_timeout 180s;
fastcgi_read_timeout 180s;
docker-compose.yml
version: '3'
services:
db:
image: mysql:8.0
container_name: db
restart: unless-stopped
env_file: .env
environment:
- MYSQL_DATABASE=wordpress
volumes:
- ./db-data:/var/lib/mysql
command: '--default-authentication-plugin=mysql_native_password'
networks:
- app-network
wordpress:
depends_on:
- db
image: wordpress:5.1.1-fpm-alpine
container_name: wordpress
restart: unless-stopped
env_file: .env
environment:
- WORDPRESS_DB_HOST=db:3306
- WORDPRESS_DB_USER=$MYSQL_USER
- WORDPRESS_DB_PASSWORD=$MYSQL_PASSWORD
- WORDPRESS_DB_NAME=wordpress
volumes:
- ./wordpress/:/var/www/html
- ./wordpress/php.ini:/usr/local/etc/php/conf.d/uploads.ini
networks:
- app-network
phpmyadmin:
depends_on:
- db
image: phpmyadmin/phpmyadmin:latest
restart: unless-stopped
ports:
- '8080:80'
env_file: .env
environment:
- PMA_HOST=db
- MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD
volumes:
- ./wordpress/php.ini:/usr/local/etc/php/php.ini
networks:
- app-network
webserver:
depends_on:
- wordpress
image: nginx:1.15.12-alpine
container_name: webserver
restart: unless-stopped
ports:
- "80:80"
- "443:443"
volumes:
- ./wordpress:/var/www/html
- ./nginx-conf:/etc/nginx/conf.d
- ./certbot-etc:/etc/letsencrypt
networks:
- app-network
certbot:
depends_on:
- webserver
image: certbot/certbot
container_name: certbot
volumes:
- ./certbot-etc:/etc/letsencrypt
- ./wordpress:/var/www/html
command: certonly --webroot --webroot-path=/var/www/html --email elearning@wemakerobot.com --agree-tos --no-eff-email --expand -d mrtrobotics.com -d www.mrtrobotics.com -d edu.mrtrobotics.com
volumes:
certbot-etc:
wordpress:
db-data:
networks:
app-network:
driver: bridge
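A likely culprit, judging from the certbot output: the http-01 request for edu.mrtrobotics.com is caught by the server block that proxies everything for that host to the main site, so Let's Encrypt never reaches the challenge file. A sketch (untested) of a dedicated port-80 block for the subdomain that serves the challenge from the shared webroot, analogous to the existing block for the main domain:
server {
    listen 80;
    listen [::]:80;
    server_name edu.mrtrobotics.com;

    # serve the ACME challenge directly instead of proxying it
    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/html;
    }

    location / {
        rewrite ^ https://$host$request_uri? permanent;
    }
}
The listen 80; line would then need to be removed from the existing edu.mrtrobotics.com server block so it no longer catches plain-HTTP challenge requests; after that, the --expand run should be able to validate the subdomain.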

502 Bad Gateway with pgadmin4 behind nginx?

I have an api running in docker with:
nginx
nodejs api
postgresql
pgadmin4
certbot
When I try to add an endpoint for pgadmin so I can work with the database, I get a 502 Bad Gateway regardless of how I set up the ports for proxy_pass or for the pgadmin container in docker-compose.
docker-compose.yml
version: "3"
services:
webserver:
image: nginx:mainline-alpine
container_name: webserver
restart: unless-stopped
ports:
- "80:80"
- "443:443"
volumes:
- web-root:/var/www/html
- ./nginx-conf:/etc/nginx/conf.d
- certbot-etc:/etc/letsencrypt
- certbot-var:/var/lib/letsencrypt
- dhparam:/etc/ssl/certs
depends_on:
- api-graphql
- api-postgres-pgadmin
networks:
- app-network
certbot:
image: certbot/certbot
container_name: certbot
volumes:
- certbot-etc:/etc/letsencrypt
- certbot-var:/var/lib/letsencrypt
- web-root:/var/www/html
depends_on:
- webserver
command: certonly --force-renewal --webroot --expand --webroot-path=/var/www/html --email contact@name.dev --agree-tos --no-eff-email -d api.name.dev
api-graphql:
container_name: api-graphql
restart: always
build: .
depends_on:
- api-postgres
networks:
- app-network
api-postgres-pgadmin:
container_name: api-postgres-pgadmin
image: dpage/pgadmin4:latest
networks:
- app-network
ports:
- "8080:8080"
environment:
- PGADMIN_DEFAULT_EMAIL=name@gmail.com
- PGADMIN_DEFAULT_PASSWORD=pass
depends_on:
- api-postgres
api-postgres:
container_name: api-postgres
image: postgres:10
volumes:
- ./data:/data/db
networks:
- app-network
environment:
- POSTGRES_PASSWORD=pass
networks:
app-network:
driver: bridge
volumes:
certbot-etc:
certbot-var:
web-root:
driver: local
driver_opts:
type: none
device: /home/name/api/data
o: bind
dhparam:
driver: local
driver_opts:
type: none
device: /home/name/api/dhparam
o: bind
nginx.conf
server {
listen 80;
listen [::]:80;
server_name api.name.dev;
location ~ /.well-known/acme-challenge {
allow all;
root /var/www/html;
}
location / {
rewrite ^ https://$host$request_uri? permanent;
}
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name api.name.dev;
server_tokens off;
ssl_certificate /etc/letsencrypt/live/api.name.dev/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/api.name.dev/privkey.pem;
ssl_buffer_size 8k;
ssl_dhparam /etc/ssl/certs/dhparam-2048.pem;
ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
ssl_ecdh_curve secp384r1;
ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8;
location / {
try_files $uri @api-graphql;
}
location @api-graphql {
proxy_pass http://api-graphql:8080;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
# add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
# enable strict transport security only if you understand the implications
}
location /pg {
try_files $uri @api-postgres-pgadmin;
}
location @api-postgres-pgadmin {
proxy_pass http://api-postgres-pgadmin:8080;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
# add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
# enable strict transport security only if you understand the implications
}
root /var/www/html;
index index.html index.htm index.nginx-debian.html;
}
Is this just not going to work with a path like http://something.com/stuff for pgadmin? Do we have to use a separate subdomain like stuff.something.com?
Bad gateway means nginx cannot reach the backend service you've pointed it at. That could be a DNS problem (which doesn't appear to be the case here), containers on different networks (again, not an issue here), or nginx talking to a port the container isn't listening on.
Checking the pgadmin image docs, this image appears to listen on port 80, not port 8080. So you'll need to adjust nginx to connect to that port instead:
location @api-postgres-pgadmin {
# adjust the next line, removing port 8080, port 80 is the default for http
proxy_pass http://api-postgres-pgadmin;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
# add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
# enable strict transport security only if you understand the implications
}
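On the question about http://something.com/stuff: serving pgadmin under a path prefix generally does work, but pgadmin has to be told about the prefix. The dpage/pgadmin4 documentation describes an X-Script-Name header for exactly this; a sketch (verify the header name against the docs for your image version):
location /pg/ {
    proxy_set_header X-Script-Name /pg;
    proxy_set_header Host $host;
    proxy_pass http://api-postgres-pgadmin;   # port 80, per the note above
    proxy_redirect off;
}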
