502 Bad Gateway with pgadmin4 behind nginx? - docker

I have an API running in Docker with:
nginx
a Node.js API
PostgreSQL
pgAdmin 4
certbot
When I try to add an endpoint for pgAdmin so I can work with the database, I get a 502 Bad Gateway regardless of how I set up the ports for proxy_pass or in docker-compose for the pgAdmin container itself.
docker-compose.yml
version: "3"
services:
webserver:
image: nginx:mainline-alpine
container_name: webserver
restart: unless-stopped
ports:
- "80:80"
- "443:443"
volumes:
- web-root:/var/www/html
- ./nginx-conf:/etc/nginx/conf.d
- certbot-etc:/etc/letsencrypt
- certbot-var:/var/lib/letsencrypt
- dhparam:/etc/ssl/certs
depends_on:
- api-graphql
- api-postgres-pgadmin
networks:
- app-network
certbot:
image: certbot/certbot
container_name: certbot
volumes:
- certbot-etc:/etc/letsencrypt
- certbot-var:/var/lib/letsencrypt
- web-root:/var/www/html
depends_on:
- webserver
command: certonly --force-renewal --webroot --expand --webroot-path=/var/www/html --email contact#name.dev --agree-tos --no-eff-email -d api.name.dev
api-graphql:
container_name: api-graphql
restart: always
build: .
depends_on:
- api-postgres
networks:
- app-network
api-postgres-pgadmin:
container_name: api-postgres-pgadmin
image: dpage/pgadmin4:latest
networks:
- app-network
ports:
- "8080:8080"
environment:
- PGADMIN_DEFAULT_EMAIL=name#gmail.com
- PGADMIN_DEFAULT_PASSWORD=pass
depends_on:
- api-postgres
api-postgres:
container_name: api-postgres
image: postgres:10
volumes:
- ./data:/data/db
networks:
- app-network
environment:
- POSTGRES_PASSWORD=pass
networks:
app-network:
driver: bridge
volumes:
certbot-etc:
certbot-var:
web-root:
driver: local
driver_opts:
type: none
device: /home/name/api/data
o: bind
dhparam:
driver: local
driver_opts:
type: none
device: /home/name/api/dhparam
o: bind
nginx.conf
server {
    listen 80;
    listen [::]:80;
    server_name api.name.dev;
    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/html;
    }
    location / {
        rewrite ^ https://$host$request_uri? permanent;
    }
}
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name api.name.dev;
    server_tokens off;
    ssl_certificate /etc/letsencrypt/live/api.name.dev/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.name.dev/privkey.pem;
    ssl_buffer_size 8k;
    ssl_dhparam /etc/ssl/certs/dhparam-2048.pem;
    ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
    ssl_ecdh_curve secp384r1;
    ssl_session_tickets off;
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8;
    location / {
        try_files $uri @api-graphql;
    }
    location @api-graphql {
        proxy_pass http://api-graphql:8080;
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-XSS-Protection "1; mode=block" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header Referrer-Policy "no-referrer-when-downgrade" always;
        add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
        # add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
        # enable strict transport security only if you understand the implications
    }
    location /pg {
        try_files $uri @api-postgres-pgadmin;
    }
    location @api-postgres-pgadmin {
        proxy_pass http://api-postgres-pgadmin:8080;
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-XSS-Protection "1; mode=block" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header Referrer-Policy "no-referrer-when-downgrade" always;
        add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
        # add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
        # enable strict transport security only if you understand the implications
    }
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
}
Is this just not going to work with a path like http://something.com/stuff for pgAdmin? Do we have to use a separate subdomain like stuff.something.com instead?

Bad gateway means nginx cannot reach the backend service you've pointed it at. That could be a DNS problem (which doesn't appear to be the issue here), containers on different networks (again, not the issue), or proxying to a port the container isn't listening on.
Checking the pgAdmin image docs, this image appears to listen on port 80, not port 8080. So you'll need to adjust nginx to connect to that port instead:
location @api-postgres-pgadmin {
    # adjust the next line, removing port 8080; port 80 is the default for http
    proxy_pass http://api-postgres-pgadmin;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "no-referrer-when-downgrade" always;
    add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
    # add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
    # enable strict transport security only if you understand the implications
}
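If you also want pgAdmin reachable under a path like /pg rather than its own subdomain, that should be possible; the pgAdmin container docs describe telling the app about its mount point through the X-Script-Name header. A minimal sketch under those assumptions (the /pg prefix and container name come from your config; double-check the header name against the docs for your image version):

location /pg/ {
    proxy_set_header X-Script-Name /pg;
    proxy_set_header Host $host;
    # port 80 is the image default, so no port needed here
    proxy_pass http://api-postgres-pgadmin/;
    proxy_redirect off;
}

With something like this in place, the try_files block for /pg above shouldn't be necessary; requests to https://api.name.dev/pg/ would be proxied straight to the pgAdmin container.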

Related

Best Practice NGINX reverse proxy and Docker container depending on it

I need your help. I tried several searches, but since my problem seems fairly specific, I did not find a solution, so I'm asking directly.
My situation is (or rather should be) as follows:
I have a GitLab instance with a container registry and a containerized backend service "running" on the same server; access is managed by an NGINX reverse proxy (on the same server, also as a Docker container).
Right now, the backend service and the reverse proxy are built and run from a single docker-compose file.
The problem is that this does not work, because the backend service's image has to be pulled through the reverse proxy from the GitLab container registry. So I cannot start the backend service without the reverse proxy running, but I also cannot start the reverse proxy, because its config declares a proxy_pass and upstream pointing at the backend service:
nginx: [emerg] host not found in upstream "xxx:yyyy" in /etc/nginx/conf.d/upstream.conf:6
So I am generally interested in best practices here:
I think I should separate the reverse proxy and the backend service. Right?
How can I start the proxy with a config that references a backend that isn't running yet at start time? I tried something like resolver and a static IP, found here in the forums, but it does not work for me. depends_on does not work either.
Is it possible to start the proxy without the backend service's configuration and copy that NGINX config in when the backend container starts? Or is there a better practice?
docker-compose.yml
version: '3.6'
services:
nginx_reverse_proxy:
image: nginx:mainline-alpine
container_name: nginx_reverse_proxy
restart: unless-stopped
ports:
- "80:80"
- "443:443"
volumes:
- web-root:/var/www/html
- ./nginx-conf:/etc/nginx/conf.d
- certbot-etc:/etc/letsencrypt
- certbot-var:/var/lib/letsencrypt
- dhparam:/etc/ssl/certs
- logs:/var/log/nginx
networks:
- app-network
backend-service:
image: xxx.xxx.de:port/group/backend-service
container_name: backend-service
restart: unless-stopped
ports:
- "4000:4000"
depends_on:
- nginx_reverse_proxy
networks:
- app-network
... [some certbot instructions]
volumes:
certbot-etc:
certbot-var:
web-root:
driver: local
driver_opts:
type: none
device: xxx
o: bind
dhparam:
driver: local
driver_opts:
type: none
device: xxx
o: bind
logs:
driver: local
driver_opts:
type: none
device: xxx
o: bind
networks:
app-network:
driver: bridge
backend-service.conf (excerpt)
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name xxx.xxx.de;
server_tokens off;
ssl_certificate /etc/letsencrypt/live/xxx.xxx.de/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/xxx.xxx.de/privkey.pem;
ssl_buffer_size 8k;
ssl_dhparam /etc/ssl/certs/dhparam-2048.pem;
ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
ssl_ecdh_curve secp384r1;
ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;
location / {
try_files $uri @backend-service;
}
location @backend-service {
resolver 127.0.0.11 valid=30s;
set $upstream_backend-service backend-service;
proxy_pass http://$upstream_backend-service;
add_header Access-Control-Allow-Origin https://xx.xx.de;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
# add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
# enable strict transport security only if you understand the implications
}
}
upstream.conf
...
upstream backend-service {
server backend-service:4000;
}
...
I appreciate your help and look forward to your answers, thank you in advance.
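A hedged sketch of the usual workaround for the start-up ordering problem: let nginx resolve the upstream lazily through Docker's embedded DNS (127.0.0.11) and a variable, so the proxy can start even while the backend image is still being pulled. Note that, as far as I know, nginx variable names may only contain letters, digits and underscores, so the hyphenated $upstream_backend-service above probably isn't doing what you expect; the names below are placeholders taken from the question:

location @backend-service {
    # re-resolve the service name at request time instead of at startup
    resolver 127.0.0.11 valid=30s;
    set $upstream_backend http://backend-service:4000;
    proxy_pass $upstream_backend;
}

This avoids the upstream {} block entirely, so nginx no longer needs backend-service to exist when the config is loaded.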

Why is https not working for my site hosted in docker?

I have a site running in Docker with 4 containers: a React front end, a .NET back end, a SQL database, and an nginx server. My docker-compose file looks like this:
version: '3'
services:
sssfe:
image: mydockerhub:myimage-fe-1.3
ports:
- 9000:9000
volumes:
- sssfev:/usr/share/nginx/html
depends_on:
- sssapi
sssapi:
image: mydockerhub:myimage-api-1.3
environment:
- SQL_CONNECTION=myconnection
ports:
- 44384:44384
depends_on:
- jbdatabase
jbdatabase:
image: mcr.microsoft.com/mssql/server:2019-latest
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=mypass
volumes:
- dbdata:/var/opt/mssql
ports:
- 1433:1433
reverseproxy:
build:
context: ./nginx
dockerfile: Dockerfile
ports:
- "80:80"
- "443:443"
volumes:
- example_certbot-etc:/etc/letsencrypt
links :
- sssfe
certbot:
depends_on:
- reverseproxy
image: certbot/certbot
container_name: certbot
volumes:
- example_certbot-etc:/etc/letsencrypt
- sssfev:/usr/share/nginx/html
command: certonly --webroot --webroot-path=/usr/share/nginx/html --email myemail --agree-tos --no-eff-email --force-renewal -d example.com -d www.example.com
volumes:
example_certbot-etc:
external: true
dbdata:
sssfev:
I was following this link and am using certbot and Let's Encrypt for the certificate. My nginx conf file is this:
server {
listen 80;
listen [::]:80;
server_name example.com www.example.com;
location / {
rewrite ^ https://$host$request_uri? permanent;
}
location ~ /.well-known/acme-challenge {
allow all;
root /usr/share/nginx/html;
}
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name example.com www.example.com;
index index.html index.htm;
root /usr/share/nginx/html;
server_tokens off;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
include /etc/nginx/conf.d/options-ssl-nginx.conf;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
# add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
# enable strict transport security only if you understand the implications
location = /favicon.ico {
log_not_found off; access_log off;
}
location = /robots.txt {
log_not_found off; access_log off; allow all;
}
location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
expires max;
log_not_found off;
}
}
My issue is that https doesn't work for my site. When I hit https://example.com, I get ERR_CONNECTION_REFUSED. The non-https site resolves and works fine, however. I can't figure out what's going on. It looks like the SSL port is open and nginx is listening on it:
ss -tulpn | grep LISTEN
tcp LISTEN 0 128 *:9000 *:* users:(("docker-proxy",pid=18336,fd=4))
tcp LISTEN 0 128 *:80 *:* users:(("docker-proxy",pid=18464,fd=4))
tcp LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=420,fd=4))
tcp LISTEN 0 128 *:1433 *:* users:(("docker-proxy",pid=18152,fd=4))
tcp LISTEN 0 128 *:443 *:* users:(("docker-proxy",pid=18452,fd=4))
tcp LISTEN 0 128 *:44384 *:* users:(("docker-proxy",pid=18243,fd=4))
And my containers:
reverseproxy 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp
sssfe 80/tcp, 0.0.0.0:9000->9000/tcp
sssapi 0.0.0.0:44384->44384/tcp
database 0.0.0.0:1433->1433/tcp
I'm assuming it's an issue with my nginx config, but I'm new to this and not sure where to go from here.
If you need to support SSL, please do this:
mkdir /opt/docker/nginx/conf.d -p
touch /opt/docker/nginx/conf.d/nginx.conf
mkdir /opt/docker/nginx/cert -p
then
vim /opt/docker/nginx/conf.d/nginx.conf
If you need to force the redirection to https when accessing http:
server {
listen 443 ssl;
server_name example.com www.example.com; # domain
# Pay attention to the file location, starting from /etc/nginx/
ssl_certificate 1_www.example.com_bundle.crt;
ssl_certificate_key 2_www.example.com.key;
ssl_session_timeout 5m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:HIGH:!aNULL:!MD5:!RC4:!DHE;
ssl_prefer_server_ciphers on;
client_max_body_size 1024m;
location / {
proxy_set_header HOST $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
#Intranet address
proxy_pass http://172.17.0.8:9090; #change it
}
}
server {
listen 80;
server_name example.com www.example.com; # The domain name of the binding certificate
#HTTP domain name request is converted to https
return 301 https://$host$request_uri;
}
docker run -itd --name nginx -p 80:80 -p 443:443 -v /opt/docker/nginx/conf.d/nginx.conf:/etc/nginx/conf.d/nginx.conf -v /opt/docker/nginx/cert:/etc/nginx -m 100m nginx
After startup, run docker ps to check whether the container started successfully, and docker logs nginx to view the logs.
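If port 443 is published but connections are still refused, it can also help to check whether nginx inside the reverseproxy container actually loaded the SSL server block and can see the certificate files. A few diagnostic commands, assuming the container and certificate names from the question:

# does the configuration parse inside the container?
docker exec reverseproxy nginx -t
# are the Let's Encrypt files mounted where ssl_certificate points?
docker exec reverseproxy ls /etc/letsencrypt/live/example.com/
# any startup errors, e.g. a missing certificate or failed bind?
docker logs reverseproxy

If nothing inside the container is listening on 443 (for example because the SSL server block isn't being included), the published port has nothing to forward to and the browser sees ERR_CONNECTION_REFUSED even though port 80 works.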

Docker Nginx + NextCloud without domain - just IP

I am trying to set up Docker containers that would run NextCloud and Nginx (plus Collabora Office afterwards). I am trying to access the Docker container on a local network on an Ubuntu server.
Is it possible to set up NextCloud + Nginx without a domain name? I am facing a lot of trouble with the reverse proxying and setup - probably because $host in the nginx container returns an empty string - I can only provide the IP of the Ubuntu server.
I have tried many docker-compose.yml and nginx.conf setups but still can't get it to work.
This is the sample from the Nextcloud Docker Hub page, which I believe should work out of the box, but it seems that a domain name is a must:
# docker-compose.yml
version: '3'
volumes:
nextcloud:
db:
services:
db:
image: mariadb
restart: always
command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
volumes:
- db:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=geslo
- MYSQL_PASSWORD=geslo
- MYSQL_DATABASE=nextcloud
- MYSQL_USER=nextcloud
app:
image: nextcloud:fpm
restart: always
links:
- db
volumes:
- nextcloud:/var/www/html
environment:
- MYSQL_PASSWORD=geslo
- MYSQL_DATABASE=nextcloud
- MYSQL_USER=nextcloud
- MYSQL_HOST=db
web:
image: nginx
restart: always
ports:
- 80:80
links:
- app
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf:ro
- nextcloud:/var/www/html:ro
# nginx.conf
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
set_real_ip_from 10.0.0.0/8;
set_real_ip_from 172.16.0.0/12;
set_real_ip_from 192.168.0.0/16;
real_ip_header X-Real-IP;
#gzip on;
upstream php-handler {
server app:9000;
}
server {
listen 80;
# Add headers to serve security related headers
# Before enabling Strict-Transport-Security headers please read into this
# topic first.
# add_header Strict-Transport-Security "max-age=15768000;
# includeSubDomains; preload;";
#
# WARNING: Only add the preload option once you read about
# the consequences in https://hstspreload.org/. This option
# will add the domain to a hardcoded list that is shipped
# in all major browsers and getting removed from this list
# could take several months.
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
add_header X-Robots-Tag none;
add_header X-Download-Options noopen;
add_header X-Permitted-Cross-Domain-Policies none;
add_header Referrer-Policy no-referrer;
root /var/www/html;
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
# The following 2 rules are only needed for the user_webfinger app.
# Uncomment it if you're planning to use this app.
#rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
#rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json
# last;
location = /.well-known/carddav {
return 301 $scheme://$host/remote.php/dav;
}
location = /.well-known/caldav {
return 301 $scheme://$host/remote.php/dav;
}
# set max upload size
client_max_body_size 10G;
fastcgi_buffers 64 4K;
# Enable gzip but do not remove ETag headers
gzip on;
gzip_vary on;
gzip_comp_level 4;
gzip_min_length 256;
gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;
# Uncomment if your server is build with the ngx_pagespeed module
# This module is currently not supported.
#pagespeed off;
location / {
rewrite ^ /index.php$request_uri;
}
location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
deny all;
}
location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) {
deny all;
}
location ~ ^/(?:index|remote|public|cron|core/ajax/update|status|ocs/v[12]|updater/.+|ocs-provider/.+)\.php(?:$|/) {
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
# fastcgi_param HTTPS on;
#Avoid sending the security headers twice
fastcgi_param modHeadersAvailable true;
fastcgi_param front_controller_active true;
fastcgi_pass php-handler;
fastcgi_intercept_errors on;
fastcgi_request_buffering off;
}
location ~ ^/(?:updater|ocs-provider)(?:$|/) {
try_files $uri/ =404;
index index.php;
}
# Adding the cache control header for js and css files
# Make sure it is BELOW the PHP block
location ~ \.(?:css|js|woff2?|svg|gif)$ {
try_files $uri /index.php$request_uri;
add_header Cache-Control "public, max-age=15778463";
# Add headers to serve security related headers (It is intended to
# have those duplicated to the ones above)
# Before enabling Strict-Transport-Security headers please read into
# this topic first.
# add_header Strict-Transport-Security "max-age=15768000;
# includeSubDomains; preload;";
#
# WARNING: Only add the preload option once you read about
# the consequences in https://hstspreload.org/. This option
# will add the domain to a hardcoded list that is shipped
# in all major browsers and getting removed from this list
# could take several months.
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
add_header X-Robots-Tag none;
add_header X-Download-Options noopen;
add_header X-Permitted-Cross-Domain-Policies none;
add_header Referrer-Policy no-referrer;
# Optional: Don't log access to assets
access_log off;
}
location ~ \.(?:png|html|ttf|ico|jpg|jpeg)$ {
try_files $uri /index.php$request_uri;
# Optional: Don't log access to other assets
access_log off;
}
}
}
You can set up NextCloud in Docker without any domain; you should be able to access it with your server's IP in the browser, or localhost if it's on the same computer. You just need to add the different domains or IPs to your Nextcloud trusted domains: https://help.nextcloud.com/t/howto-add-a-new-trusted-domain/26
Also, check that your firewall isn't blocking port 80 (iptables -L or ufw status verbose on Ubuntu).
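As a concrete, hedged example of the trusted-domain step (assuming the app service name from the compose file above, that index 1 in the trusted_domains array is still free, and 192.168.1.50 as a placeholder for your server's LAN IP):

# add the server's LAN IP as a trusted domain via occ
docker-compose exec -u www-data app php occ config:system:set trusted_domains 1 --value=192.168.1.50
# alternatively, append the IP to the 'trusted_domains' array in config/config.php

After that, Nextcloud should accept requests addressed to the bare IP instead of rejecting them as coming from an untrusted domain.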

Certbot command in docker-compose issues SSL certificate with invalid CA

The problem
I'm trying to use certbot to auto-generate a TLS certificate for Nginx in my multi-container Docker configuration. Everything works as expected, except that the Certificate Authority (CA) is invalid.
When I visit my site, I see that Fake LE Intermediate X1, an invalid authority, issued the certificate.
My setup
Here is the docker-compose.yml file where I call certbot to generate the certificate:
version: '2'
services:
apollo:
restart: always
networks:
- app-network
build: .
ports:
- '1337:1337'
certbot:
image: certbot/certbot
container_name: certbot
volumes:
- certbot-etc:/etc/letsencrypt
- certbot-var:/var/lib/letsencrypt
- web-root:/var/www/html
depends_on:
- webserver
command: certonly --noninteractive --keep-until-expiring --webroot --webroot-path=/var/www/html --email myemail@example.com --agree-tos --no-eff-email -d mydomain.com
webserver:
image: nginx:latest
container_name: webserver
restart: unless-stopped
ports:
- "80:80"
- "443:443"
volumes:
- web-root:/var/www/html
- ./nginx.conf:/etc/nginx/nginx.conf
- certbot-etc:/etc/letsencrypt
- certbot-var:/var/lib/letsencrypt
- dhparam:/etc/ssl/certs
depends_on:
- apollo
networks:
- app-network
volumes:
postgres: ~
certbot-etc:
certbot-var:
dhparam:
driver: local
driver_opts:
type: none
device: /home/user/project_name/dhparam/
o: bind
web-root:
networks:
app-network:
I don't think that Nginx is the issue because the HTTP -> HTTPS redirect works, and the browser receives a certificate. But just in case it's relevant: here's the nginx.conf where I refer to the certificate and configure an HTTP -> HTTPS redirect.
events {}
http {
server {
listen 80;
listen [::]:80;
server_name mydomain.com;
location ~ /.well-known/acme-challenge {
allow all;
root /var/www/html;
}
location / {
rewrite ^ https://$host$request_uri? permanent;
}
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name mydomain.com;
server_tokens off;
ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;
ssl_buffer_size 8k;
ssl_dhparam /etc/ssl/certs/dhparam-2048.pem;
ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
ssl_ecdh_curve secp384r1;
ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8;
location / {
try_files $uri @apollo;
}
location @apollo {
proxy_pass http://apollo:1337;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
}
root /var/www/html;
index index.html index.htm index.nginx-debian.html;
}
}
What I've tried
Initially, I called certonly with the --staging argument in the certbot container definition in docker-compose.yml. That could definitely cause the invalid-CA problem. However, I have since tried revoking the certificate and re-running the command multiple times, but no luck.
I have tried removing the --keep-until-expiring flag in the certbot container definition of docker-compose.yml. This causes certbot to generate a new certificate, but it did not resolve the CA issue.
Visiting crt.sh, I can see that certbot did issue valid certificates for my domain.
So the problem seems to lie not in the generation of these certificates, but in the way my docker-compose/certbot configuration refers to them.
You can try to add the --force-renewal flag:
command: >-
certonly
--webroot
--webroot-path=/var/www/html
--email myemail@example.com
--agree-tos
--no-eff-email
--force-renewal
-d mydomain.com
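Fake LE Intermediate X1 is the issuer used by the Let's Encrypt staging environment, so after re-running certonly without --staging (and with --force-renewal as above), it's worth confirming which certificate nginx is actually serving. A quick check from any machine, using the placeholder domain from the question:

# print the issuer and validity dates of the certificate currently served on 443
openssl s_client -connect mydomain.com:443 -servername mydomain.com </dev/null 2>/dev/null | openssl x509 -noout -issuer -dates

If this still shows the fake issuer, nginx is most likely still holding the old files from the certbot-etc volume and needs a reload or restart after the renewal.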

How to add a subdomain in Let's Encrypt? I am using docker, nginx, wordpress

I just added a subdomain and tried to expand my existing certificate, but there was an error while renewing it. I already added the subdomain to my DNS records; I've tried both CNAME and A records, but neither works. Do I need to try AAAA?
certbot | Saving debug log to /var/log/letsencrypt/letsencrypt.log
certbot | Plugins selected: Authenticator webroot, Installer None
certbot | Renewing an existing certificate
certbot | Performing the following challenges:
certbot | http-01 challenge for edu.mrtrobotics.com
certbot | Using the webroot path /var/www/html for all unmatched domains.
certbot | Waiting for verification...
certbot | Challenge failed for domain edu.mrtrobotics.com
certbot | http-01 challenge for edu.mrtrobotics.com
certbot | Cleaning up challenges
certbot | Some challenges have failed.
certbot | IMPORTANT NOTES:
certbot | - The following errors were reported by the server:
certbot |
certbot | Domain: edu.mrtrobotics.com
certbot | Type: unauthorized
certbot | Detail: Invalid response from
certbot | https://www.mrtrobotics.com/content-18/ [149.28.180.33]: "
certbot | html>\n\n\n
certbot | charset=\"UTF-8\">\ncontent - MRT Robotics | Coding,
certbot | Robotics, and STEM Edu"
certbot |
certbot | To fix these errors, please make sure that your domain name was
certbot | entered correctly and the DNS A/AAAA record(s) for that domain
certbot | contain(s) the right IP address.
nginx.conf
server {
listen 80;
listen [::]:80;
server_name mrtrobotics.com www.mrtrobotics.com;
location ~ /.well-known/acme-challenge {
allow all;
root /var/www/html;
}
location / {
rewrite ^ https://$host$request_uri? permanent;
}
}
server {
listen 80;
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name edu.mrtrobotics.com;
ssl_certificate /etc/letsencrypt/live/mrtrobotics.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mrtrobotics.com/privkey.pem;
include /etc/nginx/conf.d/options-ssl-nginx.conf;
location / {
proxy_pass https://127.0.0.1/edu$request_uri;
proxy_set_header Host mrtrobotics.com;
}
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name mrtrobotics.com www.mrtrobotics.com;
index index.php index.html index.htm;
root /var/www/html;
server_tokens off;
ssl_certificate /etc/letsencrypt/live/mrtrobotics.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mrtrobotics.com/privkey.pem;
include /etc/nginx/conf.d/options-ssl-nginx.conf;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
# add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
# enable strict transport security only if you understand the implications
location / {
try_files $uri $uri/ /index.php$is_args$args;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass wordpress:9000;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
}
location ~ /\.ht {
deny all;
}
location = /favicon.ico {
log_not_found off; access_log off;
}
location = /robots.txt {
log_not_found off; access_log off; allow all;
}
location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
expires max;
log_not_found off;
}
}
# Set client upload size - 100Mbyte
client_max_body_size 100M;
# to avoid 504 time out error - defalut is 60s
proxy_send_timeout 180s;
proxy_read_timeout 180s;
fastcgi_send_timeout 180s;
fastcgi_read_timeout 180s;
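The certbot output above shows the http-01 challenge for edu.mrtrobotics.com being answered by the main WordPress site, which suggests no plain-HTTP server block is serving /.well-known/acme-challenge for that hostname; instead the request is proxied back to the www site. A hedged sketch of such a block, reusing the webroot that certbot already writes to (server names and paths taken from the question):

server {
    listen 80;
    listen [::]:80;
    server_name edu.mrtrobotics.com;
    # let certbot's webroot challenge through over plain HTTP
    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/html;
    }
    location / {
        rewrite ^ https://$host$request_uri? permanent;
    }
}

The existing edu server block could then drop its listen 80 line so port-80 requests for the subdomain are no longer proxied to the main site.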
docker-compose.yml
version: '3'
services:
db:
image: mysql:8.0
container_name: db
restart: unless-stopped
env_file: .env
environment:
- MYSQL_DATABASE=wordpress
volumes:
- ./db-data:/var/lib/mysql
command: '--default-authentication-plugin=mysql_native_password'
networks:
- app-network
wordpress:
depends_on:
- db
image: wordpress:5.1.1-fpm-alpine
container_name: wordpress
restart: unless-stopped
env_file: .env
environment:
- WORDPRESS_DB_HOST=db:3306
- WORDPRESS_DB_USER=$MYSQL_USER
- WORDPRESS_DB_PASSWORD=$MYSQL_PASSWORD
- WORDPRESS_DB_NAME=wordpress
volumes:
- ./wordpress/:/var/www/html
- ./wordpress/php.ini:/usr/local/etc/php/conf.d/uploads.ini
networks:
- app-network
phpmyadmin:
depends_on:
- db
image: phpmyadmin/phpmyadmin:latest
restart: unless-stopped
ports:
- '8080:80'
env_file: .env
environment:
- PMA_HOST=db
- MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD
volumes:
- ./wordpress/php.ini:/usr/local/etc/php/php.ini
networks:
- app-network
webserver:
depends_on:
- wordpress
image: nginx:1.15.12-alpine
container_name: webserver
restart: unless-stopped
ports:
- "80:80"
- "443:443"
volumes:
- ./wordpress:/var/www/html
- ./nginx-conf:/etc/nginx/conf.d
- ./certbot-etc:/etc/letsencrypt
networks:
- app-network
certbot:
depends_on:
- webserver
image: certbot/certbot
container_name: certbot
volumes:
- ./certbot-etc:/etc/letsencrypt
- ./wordpress:/var/www/html
command: certonly --webroot --webroot-path=/var/www/html --email elearning@wemakerobot.com --agree-tos --no-eff-email --expand -d mrtrobotics.com -d www.mrtrobotics.com -d edu.mrtrobotics.com
volumes:
certbot-etc:
wordpress:
db-data:
networks:
app-network:
driver: bridge
