Self-signed certificate not working on docker nginx container

I can't get an SSL certificate to work on my local setup. Everything starts OK, but browsers report "connection not secure". I've added the cert to the macOS keychain and selected "always trust". I have dnsmasq in place so that all .test domains resolve to my local setup, but I've also tried adding the domain to /etc/hosts. Nothing works.
This is my nginx Dockerfile:
FROM nginx:latest
COPY ./default.conf /etc/nginx/conf.d/default.conf
RUN apt-get update && \
    apt-get install -y openssl && \
    mkdir /etc/nginx/ssl && \
    openssl dhparam -out "/etc/nginx/ssl/dh.pem" 2048 && \
    openssl req -x509 -newkey rsa:2048 -keyout "/etc/nginx/ssl/key.pem" -out "/etc/nginx/ssl/cert.pem" -days 365 -nodes \
        -subj "/C=HR/ST=Zagreb/L=Zagreb/O=XOO/OU=IT Department/CN=domain.test"
This is the main part of default.conf
server {
    listen 80 default_server;
    root /var/www/html;
    index index.html index.php;
    listen 443 ssl;
    server_name default_server;
    ssl_certificate /etc/nginx/ssl/cert.pem;
    ssl_certificate_key /etc/nginx/ssl/key.pem;
    ssl_dhparam /etc/nginx/ssl/dh.pem;
    charset utf-8;

    # only this domain
    add_header Strict-Transport-Security "max-age=31536000";
    # apply also on subdomains
    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";

    # protocols
    ssl_protocols TLSv1.2 TLSv1.3;

    # ciphers
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;

    # SSL stapling
    ssl_stapling on;
    ssl_stapling_verify on;
And finally, this is part of my docker-compose.yml:
version: '2'
services:
  nginx:
    build: ./docker/nginx/
    ports:
      - 80:80
      - 443:443
    links:
      - php
    volumes_from:
      - app
    networks:
      - default
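For what it's worth, two things in the config above stand out. Modern browsers ignore the CN field and require a subjectAltName entry matching the domain, so a certificate carrying only CN=domain.test is rejected even when it is trusted in the keychain; and server_name default_server; declares a literal server name rather than a catch-all (default_server is a flag on the listen directive, not a server name). A sketch of a fix, assuming OpenSSL 1.1.1+ in the image (the stock nginx:latest base ships it):

# Regenerate the certificate with a SAN matching the domain
openssl req -x509 -newkey rsa:2048 -keyout "/etc/nginx/ssl/key.pem" -out "/etc/nginx/ssl/cert.pem" -days 365 -nodes \
    -subj "/C=HR/ST=Zagreb/L=Zagreb/O=XOO/OU=IT Department/CN=domain.test" \
    -addext "subjectAltName=DNS:domain.test,DNS:*.domain.test"

and in default.conf:

server_name domain.test;

After rebuilding, the new cert.pem has to be re-imported into the macOS keychain, since the trust setting is tied to the exact certificate.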

Related

nginx cannot connect to uwsgi in other container with docker-compose

I use docker-compose and there are two containers, one for uwsgi and one for nginx. But it seems that nginx fails to connect to uwsgi.
Here is the environment.
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.1 LTS
Release: 20.04
Codename: focal
$ docker --version
Docker version 20.10.3, build 48d30b5
$ docker-compose --version
docker-compose version 1.24.0, build 0aa59064
Strangely, if I log in to the nginx container and try to connect to uwsgi manually, it succeeds, as follows.
$ docker-compose ps --service
python
nginx
$ docker-compose exec nginx /bin/bash
# curl python:8001
success!
However, when I try to access uwsgi via nginx, it fails.
# curl localhost:8000/s
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.15.3</center>
</body>
</html>
Here are my configs. What is wrong with them? How can I fix this problem?
docker-compose.yml
version: '3'
services:
  nginx:
    image: nginx:1.15.3
    ports:
      - "80:8000"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./python/uwsgi_params:/etc/nginx/uwsgi_params
      - .:/code
    depends_on:
      - python
  python:
    build: ./python
    ports:
      - "8001:8001"
    volumes:
      - ./proj:/code/proj
    command: bash -c "ls -l && cd proj && pwd && uwsgi --http :8001 --module fargate.wsgi --logto uwsgilog.txt"
nginx/nginx.conf
events {
    worker_connections 768;
}
http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    gzip on;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;

    upstream django {
        server python:8001;
    }

    server {
        listen 8000;
        server_name localhost;
        root /code/nginx/html;
        charset utf-8;
        include /etc/nginx/default.d/*.conf;
        client_max_body_size 100M;

        location /static {
            alias /code/proj/static;
        }

        location ~ ^/s/(.*)$ {
            uwsgi_pass django;
            include /code/python/uwsgi_params;
            uwsgi_param SCRIPT_NAME /s;
            uwsgi_param PATH_INFO /$1;
        }
    }
}
python/Dockerfile
FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
RUN mkdir /code/python
WORKDIR /code
ADD . /code/python/
RUN pip install -r python/requirements.txt
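Two details stand out here, for what it's worth. First, the 404 page in the output is served by nginx itself (note the nginx/1.15.3 footer): the regex ^/s/(.*)$ does not match the bare path /s, so curl localhost:8000/s never reaches uwsgi_pass; curl localhost:8000/s/ (with the trailing slash) would exercise the intended location. Second, uwsgi_pass speaks the binary uwsgi protocol, while uwsgi --http :8001 serves plain HTTP (which is why curl python:8001 succeeds). A sketch of one matching pair, keeping everything else as posted:

# docker-compose.yml: have uwsgi speak the uwsgi protocol instead of HTTP
command: bash -c "cd proj && uwsgi --socket :8001 --module fargate.wsgi --logto uwsgilog.txt"

Alternatively, keep --http and switch nginx from uwsgi_pass django; to proxy_pass http://django;.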

Certbot command in docker-compose issues SSL certificate with invalid CA

The problem
I'm trying to use certbot to auto-generate a TLS certificate for Nginx in my multi-container Docker configuration. Everything works as expected except the Certificate Authority (CA) is invalid.
When I visit my site, I see that Fake LE Intermediate X1, an invalid authority, issued the certificate.
My setup
Here is the docker-compose.yml file where I call certbot to generate the certificate:
version: '2'
services:
  apollo:
    restart: always
    networks:
      - app-network
    build: .
    ports:
      - '1337:1337'
  certbot:
    image: certbot/certbot
    container_name: certbot
    volumes:
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
      - web-root:/var/www/html
    depends_on:
      - webserver
    command: certonly --noninteractive --keep-until-expiring --webroot --webroot-path=/var/www/html --email myemail@example.com --agree-tos --no-eff-email -d mydomain.com
  webserver:
    image: nginx:latest
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - web-root:/var/www/html
      - ./nginx.conf:/etc/nginx/nginx.conf
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
      - dhparam:/etc/ssl/certs
    depends_on:
      - apollo
    networks:
      - app-network
volumes:
  postgres: ~
  certbot-etc:
  certbot-var:
  dhparam:
    driver: local
    driver_opts:
      type: none
      device: /home/user/project_name/dhparam/
      o: bind
  web-root:
networks:
  app-network:
I don't think that Nginx is the issue because the HTTP -> HTTPS redirect works, and the browser receives a certificate. But just in case it's relevant: here's the nginx.conf where I refer to the certificate and configure an HTTP -> HTTPS redirect.
events {}
http {
    server {
        listen 80;
        listen [::]:80;
        server_name mydomain.com;

        location ~ /.well-known/acme-challenge {
            allow all;
            root /var/www/html;
        }
        location / {
            rewrite ^ https://$host$request_uri? permanent;
        }
    }
    server {
        listen 443 ssl http2;
        listen [::]:443 ssl http2;
        server_name mydomain.com;
        server_tokens off;

        ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;
        ssl_buffer_size 8k;
        ssl_dhparam /etc/ssl/certs/dhparam-2048.pem;
        ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
        ssl_prefer_server_ciphers on;
        ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
        ssl_ecdh_curve secp384r1;
        ssl_session_tickets off;
        ssl_stapling on;
        ssl_stapling_verify on;
        resolver 8.8.8.8;

        location / {
            try_files $uri @apollo;
        }
        location @apollo {
            proxy_pass http://apollo:1337;
            add_header X-Frame-Options "SAMEORIGIN" always;
            add_header X-XSS-Protection "1; mode=block" always;
            add_header X-Content-Type-Options "nosniff" always;
            add_header Referrer-Policy "no-referrer-when-downgrade" always;
            add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
        }

        root /var/www/html;
        index index.html index.htm index.nginx-debian.html;
    }
}
What I've tried
Initially, I called certonly with the --staging argument in the certbot container definition in docker-compose.yml. This could definitely cause the invalid-CA problem. However, I have since tried revoking the certificate and re-running the command multiple times, but no luck.
I have tried removing the --keep-until-expiring flag from the certbot container definition in docker-compose.yml. This causes certbot to generate a new certificate, but it did not resolve the CA issue.
Visiting crt.sh, I can see that certbot did issue valid certificates for my domain.
So, the problem seems to lie not in the generation of these certificates, but in the way my docker-compose/certbot configuration refers to them.
You can try adding the --force-renewal flag, so that certbot replaces the previously issued (staging) certificate instead of keeping it until it expires:
command: >-
  certonly
  --webroot
  --webroot-path=/var/www/html
  --email myemail@example.com
  --agree-tos
  --no-eff-email
  --force-renewal
  -d mydomain.com
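One follow-up worth noting (my addition, not part of the original answer): if the webserver container is already running when the new certificate is issued, nginx keeps serving the old one from memory until it is reloaded:

docker-compose exec webserver nginx -s reload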

Reverse proxy on windows using docker - nginx is not forwarding https to IIS

I have a Win10 box where I run Docker with two Windows containers: one runs nginx and acts as a reverse proxy to the other, which runs IIS.
It works fine for HTTP, but the redirection from nginx to IIS fails for HTTPS.
Each container accepts HTTPS on its own, so I know the certificates are installed correctly. I use self-signed certificates.
I suspect there is a setting in nginx.conf that I am not aware of that is causing this.
Here is what works and what fails:
+---------------------------+--------------------------+------+
| https://localhost         | points to nginx          | OK   |
+---------------------------+--------------------------+------+
| https://localhost:5003    | points to iis            | OK   |
+---------------------------+--------------------------+------+
| https://localhost/mysite  | points to iis via nginx  | FAIL |
+---------------------------+--------------------------+------+
There are similar questions (e.g. this and this), but they refer to HTTP only.
There is a tutorial on DigitalOcean that describes how to set up nginx with HTTPS, which I have largely followed, but it still doesn't work.
Nginx - access.log:
"GET /mysite HTTP/1.1" 504 585 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36"
Nginx - error.log:
*5 upstream timed out (10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond) while connecting to upstream, client: 172.18.0.1, server: localhost, request: "GET /mysite HTTP/1.1", upstream: "https://172.18.0.2:5003/", host: "localhost"
IIS Logs:
C:\inetpub\logs is empty
Question
How can I make nginx forward HTTPS to the IIS container?
Setup
Setting up docker network:
docker network create -d nat --subnet=172.18.0.0/16 nginx-proxy-network
Build commands:
cd nginx-proxy
docker build -t nginx-proxy .
Cd ..\iis
Docker build -t iis .
Starting nginx container:
docker run -d -p 80:80 -p 443:443 --network nginx-proxy-network --ip 172.18.0.3 nginx-proxy
Starting iis container:
docker run -d -p 5002:80 -p 5003:443 --network nginx-proxy-network --ip 172.18.0.2 iis
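As a sanity check (my suggestion, not part of the original setup), it may be worth confirming that both containers actually received the static IPs the config assumes, with <nginx-container-id> being whatever docker ps reports:

docker network inspect nginx-proxy-network
docker exec <nginx-container-id> ping 172.18.0.2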
Nginx
Generate certificates for nginx:
C:\openssl\openssl.exe genrsa -des3 -out localhost.key 2048
C:\openssl\openssl.exe req -new -key localhost.key -out localhost.csr -config C:\openssl\openssl.conf
C:\openssl\openssl.exe x509 -req -days 365 -in localhost.csr -signkey localhost.key -out localhost.crt
It asks for a password, which I then store in a txt file.
Nginx.conf:
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name localhost;

        location /mysite {
            proxy_pass http://172.18.0.2/;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
        location / {
            root html;
            index index.html index.htm;
        }

        # redirect server error pages to the static page /50x.html
        #
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }

    # HTTPS server
    server {
        listen *:443 ssl;
        server_name localhost;

        ssl on;
        ssl_password_file C:\cert\pwdcert.txt;
        ssl_certificate C:\cert\localhost.crt;
        ssl_certificate_key C:\cert\localhost.key;
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout 5m;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
        add_header X-Frame-Options DENY;
        add_header X-Content-Type-Options nosniff;
        add_header X-Frame-Options "SAMEORIGIN";

        location /mysite {
            proxy_pass https://172.18.0.2:5003/;
            # proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
        location / {
            root html;
            index index.html index.htm;
        }
    }
}
Nginx - Dockerfile:
FROM microsoft/windowsservercore
COPY nginx/ /nginx
RUN mkdir "C:\\cert"
COPY *.crt /cert
COPY *.key /cert
COPY pwdcert.txt /cert
WORKDIR /nginx
CMD ["nginx"]
IIS
IIS Dockerfile:
FROM microsoft/aspnet
COPY iisscripts.ps1 /
RUN powershell -noexit "C:\iisscripts.ps1"
COPY mysite/ /inetpub/wwwroot/
iisscripts.ps1:
$cert = New-SelfSignedCertificate -DnsName "localhost" -CertStoreLocation cert:\LocalMachine\My
New-WebBinding -Name "Default Web Site" -IP "*" -Port 443 -Protocol https
New-Item -Path IIS:\SslBindings\0.0.0.0!443 -Value $cert
Are you able to curl the IIS URL from the container running nginx?
Exec into the nginx container (e.g. with docker exec) and then try:
curl https://[my domain or IP address]
Just as I thought. Since the certificate installed in your IIS container is not trusted by the container running NGINX, it will not proxy the connection. You either need to tell NGINX not to verify the upstream certificate by adding the following to your NGINX configuration:
proxy_ssl_verify off;
or use a trusted certificate.
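Given that error.log shows a connect timeout (10060) rather than a TLS handshake failure, it may also be worth testing raw reachability from inside the nginx container before touching certificate settings; on a Windows container, something along these lines (<nginx-container-id> is a placeholder):

docker exec -it <nginx-container-id> powershell
Test-NetConnection 172.18.0.2 -Port 5003

If TcpTestSucceeded comes back False, the problem is network-level rather than certificate trust.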

Running NGinX with SSL on Docker container

I am trying to run NGinX in a docker container and access it over HTTPS. The container starts and works fine on port 80 but fails on 443. Please help!
Here are the steps I followed:
Default.conf
server {
    listen 443;
    server_name localhost;

    ssl on;
    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    #ssl_ciphers HIGH:!aNULL:!MD5;
    #ssl_prefer_server_ciphers on;
    ssl_prefer_server_ciphers off;
    #ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
    ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
    #ssl_ecdh_curve auto;
    #ssl_session_cache shared:SSL:10m;
    ssl_session_cache none;
    ssl_session_tickets off;
    #ssl_stapling on;
    #ssl_stapling_verify on;
    #ssl_stapling_verify off;

    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;

    keepalive_timeout 1200;
    client_body_timeout 1200;
    client_header_timeout 1200;
    proxy_read_timeout 300;
    client_max_body_size 200M;
    client_body_buffer_size 200M;
    proxy_request_buffering off;
    proxy_max_temp_file_size 0;
    proxy_set_header X-Forwarded-For $proxy_protocol_addr;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
    ssi on;
}
The Dockerfile is:
FROM nginx
ENV LANG C.UTF-8
RUN apt-get update; apt-get install -y \
    openssl
COPY default.conf /etc/nginx/conf.d/
COPY index.html /usr/share/nginx/html/
CMD ["echo", "Moved conf files"]
ADD cert.sh /cert.sh
RUN chmod a+x /cert.sh
CMD ["echo", "Added cert.sh"]
RUN ./cert.sh;
EXPOSE 443
CMD ["nginx", "-g", "daemon off;"]
I build the Docker image with docker build -t nginx . and run the container with docker run --name mynginx4 -P -d nginx
I can see /etc/nginx/ssl/nginx.crt and /etc/nginx/ssl/nginx.key are present in the container. But when I curl, I get this error:
BANL151bc2453:docker-nginx-ssl-secure pgupta14$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a2dc5dd90ef0 nginx "nginx -g 'daemon of…" 48 seconds ago Up 35 seconds 0.0.0.0:32817->80/tcp, 0.0.0.0:32816->443/tcp mynginx4
curl https://localhost:32816
curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to localhost:32816
Output of curl -vvv
curl https://localhost:32816 -vvv
* Rebuilt URL to: https://localhost:32816/
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 32816 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to localhost:32816
* Closing connection 0
curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to localhost:32816
In the output of curl -vvv, I see it is using /etc/ssl/cert.pem. Shouldn't it be using my certificates, which are present at /etc/nginx/ssl/?

Docker nginx error: openssl: command not found

I am using nginx as a proxy to forward requests to other components (servers).
Each component, including nginx, is implemented as a Docker container: I have containers for 'nginx-proxy', 'dashboard-server', 'backend-server' (REST API), and 'landing-server' (landing page). The latter three components are all NodeJS Express servers and work properly. When I run docker-compose build there are no errors, and when I start the containers with docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d the NodeJS containers work fine, but the nginx container gives me this error in docker-compose logs nginx-proxy:
Attaching to docker_nginx-proxy_1
nginx-proxy_1 | /start.sh: line 5: openssl: command not found
nginx-proxy_1 | Creating dhparams…\c
nginx-proxy_1 | ok
nginx-proxy_1 | Starting nginx…
nginx-proxy_1 | 2017/08/23 23:27:20 [emerg] 6#6:
BIO_new_file("/etc/letsencrypt/live/admin.domain.com/fullchain.pem")
failed (SSL: error:02001002:system library:fopen:No such file or directory:
fopen('/etc/letsencrypt/live/admin.domain.com/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
nginx-proxy_1 | nginx: [emerg]
BIO_new_file("/etc/letsencrypt/live/admin.domain.com/fullchain.pem") failed (SSL: error:02001002:system library:fopen:
No such file or directory:fopen('/etc/letsencrypt/live/admin.domain.com/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
I am using Let's Encrypt for the SSL certificates; however, the command certbot certonly --webroot -w /var/www/letsencrypt -d admin.domain.com -d api.domain.com -d www.domain.com -d domain.com results in the error Connection refused, because the nginx server never starts to handle the requests.
My nginx Dockerfile (nginx-proxy/Dockerfile):
FROM nginx:1.12
COPY start.sh /start.sh
RUN chmod u+x /start.sh
COPY conf.d /etc/nginx/conf.d
COPY sites-enabled /etc/nginx/sites-enabled
ENTRYPOINT ["/start.sh"]
My start.sh file (nginx-proxy/start.sh):
#!/bin/bash
if [ ! -f /etc/nginx/ssl/dhparam.pem ]; then
    echo "Creating dhparams…\c"
    openssl dhparam -out /etc/nginx/ssl/dhparam.pem 2048
    echo "ok"
fi
echo "Starting nginx…"
nginx -g 'daemon off;'
My default.conf file (nginx-proxy/conf.d/default.conf):
include /etc/nginx/sites-enabled/*.conf;
My api.conf file (the others are similar) (nginx-proxy/sites-enabled/api.conf):
server {
    listen 80;
    server_name api.domain.com;

    location ^~ /.well-known/acme-challenge/ {
        default_type "text/plain";
        root /var/www/letsencrypt;
    }
    location = /.well-known/acme-challenge/ {
        return 404;
    }
    return 301 https://$host$request_uri;
}
server {
    listen 443;
    server_name api.domain.com;

    ssl on;
    ssl_certificate /etc/letsencrypt/live/api.domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.domain.com/privkey.pem;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:1m;
    ssl_dhparam /etc/nginx/ssl/dhparam.pem;

    client_max_body_size 0;
    chunked_transfer_encoding on;

    location ^~ /.well-known/acme-challenge/ {
        default_type "text/plain";
        root /var/www/letsencrypt;
    }
    location = /.well-known/acme-challenge/ {
        return 404;
    }
    location / {
        proxy_read_timeout 900;
        proxy_pass_header Server;
        proxy_cookie_path ~*^/.* /;
        proxy_pass http://backend-server:3000;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
    }
}
Any ideas?
I found the solution.
In my nginx Dockerfile, I had to use
FROM nginx:1.12-alpine
RUN apk update \
    && apk add openssl
...
Then the openssl command worked properly.
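If you'd rather stay on the Debian-based nginx:1.12 image instead of switching to alpine, the equivalent (a sketch, assuming the stock Debian openssl package) would be:

FROM nginx:1.12
RUN apt-get update \
    && apt-get install -y --no-install-recommends openssl \
    && rm -rf /var/lib/apt/lists/*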
First, you can try editing your start.sh file to call openssl by its full path, /usr/bin/openssl. Does /usr/bin/openssl exist in the container?
Second, your nginx server will not start until the files /etc/letsencrypt/live/api.domain.com/fullchain.pem and /etc/letsencrypt/live/api.domain.com/privkey.pem exist.
So delete or comment out the server block that handles port 443 and keep the server block that handles port 80. Your api.conf then becomes:
server {
    listen 80;
    server_name api.domain.com;

    location ^~ /.well-known/acme-challenge/ {
        default_type "text/plain";
        root /var/www/letsencrypt;
    }
    location = /.well-known/acme-challenge/ {
        return 404;
    }
    return 301 https://$host$request_uri;
}
Then start your nginx server and retry installing the Let's Encrypt certificate.
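Once the challenge succeeds and /etc/letsencrypt/live/api.domain.com/ is populated, the 443 server block can be restored and the proxy rebuilt, e.g. (using the service name from the question's logs):

docker-compose build nginx-proxy && docker-compose up -d nginx-proxy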
