I have hit a roadblock where I am not able to get SSL certificates from Let's Encrypt.
I am using Nginx and Certbot, trying to get SSL running for my site with a Node backend.
I tried to follow this post, as my knowledge is limited. Any help or pointers would be highly appreciated.
https://medium.com/@pentacent/nginx-and-lets-encrypt-with-docker-in-less-than-5-minutes-b4b8a60d3a71
As the post mentions, I first tried to run the script to get a dummy certificate. I have modified the script to point to my domain.
But I get this error:
Failed authorization procedure. example.org (http-01):
urn:ietf:params:acme:error:connection :: The server could not connect to the client to verify the domain ::
Fetching http://example.org/.well-known/acme-challenge/Jca_rbXSDHEmXz8-y3bKKckD8g0lsuoQJgAxeSEz5Jo: Connection refused

IMPORTANT NOTES:
 - The following errors were reported by the server:
   Domain: example.org
   Type: connection
   Detail: Fetching http://example.org/.well-known/acme-challenge/Jca_rbXSDHEmXz8-y3bKKckD8g0lsuoQJgAxeSEz5Jo: Connection refused
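For reference, the Let's Encrypt validation server simply makes a plain HTTP request to port 80 of the domain, so the failure can be reproduced from any machine outside the network (the path below is just a placeholder file name, not a real challenge token):

    # If this also says "Connection refused", port 80 of example.org is not reachable from
    # the internet at all (firewall, missing Docker port mapping, or nginx not listening),
    # independent of certbot itself.
    curl -v http://example.org/.well-known/acme-challenge/test-file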
This is my nginx configuration
upstream app {
    server app:3000;
}

server {
    listen 80;
    server_name example.org;

    location / {
        proxy_pass http://app/;
    }

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    # location /api/ {
    #     proxy_pass http://app/;
    #     proxy_redirect off;
    #     proxy_set_header Host $host;
    #     proxy_set_header X-Real-IP $remote_addr;
    #     proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    #     proxy_set_header X-Forwarded-Host $server_name;
    # }
}

server {
    listen 443 ssl;
    server_name example.org;

    ssl_certificate /etc/letsencrypt/live/example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        proxy_pass http://example.org; # for demo purposes
    }
}
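The key coupling in this setup is that certbot's webroot has to be the same directory the acme-challenge location above serves as its root (in the Docker setup from the post, the same volume mounted into both the nginx and certbot containers). A hypothetical manual run with matching paths would look like this:

    # The challenge file gets written under /var/www/certbot/.well-known/acme-challenge/
    # and must be served back by the nginx location block above.
    certbot certonly --webroot -w /var/www/certbot -d example.org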
Related
Is there a "proper" structure for the directives of an NGINX Reverse Proxy? I have seen 2 main differences when looking for examples of an NGINX reverse proxy.
http directive is used to house all server directives. Servers with data are listed in a pool within the upstream directive.
server directives are listed directly within the main directive.
Is there any reason for this or is this just a syntactical sugar difference?
Example of #1 within ./nginx.conf file:
http {
    upstream docker-registry {
        server registry:5000;
    }

    server {
        listen 80;
        listen [::]:80;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 default_server;
        ssl on;
        ssl_certificate external/cert.pem;
        ssl_certificate_key external/key.pem;

        # set HSTS-Header because we only allow https traffic
        add_header Strict-Transport-Security "max-age=31536000;";

        proxy_set_header Host $http_host;        # required for Docker client sake
        proxy_set_header X-Real-IP $remote_addr; # pass on real client IP

        location / {
            auth_basic "Restricted";
            auth_basic_user_file external/docker-registry.htpasswd;
            proxy_pass http://docker-registry; # the docker container is the domain name
        }

        location /v1/_ping {
            auth_basic off;
            proxy_pass http://docker-registry;
        }
    }
}
Example of #2 within ./nginx.conf file:
server {
    listen 80;
    listen [::]:80;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    error_log /var/log/nginx/error.log info;
    access_log /var/log/nginx/access.log main;

    ssl_certificate /etc/ssl/private/{SSL_CERT_FILENAME};
    ssl_certificate_key /etc/ssl/private/{SSL_CERT_KEY_FILENAME};

    location / {
        proxy_pass http://app1;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $remote_addr; # could also be `$proxy_add_x_forwarded_for`
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Request-Start $msec;
    }
}
I don't quite understand your question, but it seems to me that the second example is missing the http {} block, and I don't think nginx will start without it,
unless your example #2 file is included somehow in an nginx.conf that already has the http {} context.
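As a rough illustration of what that inclusion usually looks like (the paths are the common defaults, not necessarily yours):

    # Minimal top-level nginx.conf skeleton; files containing bare server{} blocks,
    # like example #2, normally live in conf.d/ and are pulled in inside the http{} context.
    events {
    }
    http {
        include /etc/nginx/conf.d/*.conf;
    }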
I have a Shopify-like application, so my customers get a sub-domain when they create a store (e.g. customer1.myShopify.com).
To handle this case of dynamic sub-domains with nginx:
server {
    listen 443 ssl;
    server_name admin.myapp.com;

    ssl_certificate /etc/letsencrypt/live/myapp/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myapp/privkey.pem;

    location / {
        proxy_pass http://admin-front-end:80/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
    }
}

server {
    listen 443 ssl;
    server_name *.myapp.com;

    ssl_certificate /etc/letsencrypt/live/myapp/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myapp/privkey.pem;

    location / {
        proxy_pass http://app-front-end:80/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
    }
}
This works great: if you visit admin.myapp.com you'll see the admin application, and if you visit any xxx.myapp.com you'll see the shop front-end application.
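A quick way to check that the SNI-based routing behaves as expected (the shop name here is just a placeholder, and 12.12.12.3 is the nginx IP mentioned further down) is to pin resolution to the proxy with curl:

    # The wildcard block should answer for any shop subdomain, while admin.myapp.com
    # should be served by the admin front-end.
    curl -I --resolve shop1.myapp.com:443:12.12.12.3 https://shop1.myapp.com/
    curl -I --resolve admin.myapp.com:443:12.12.12.3 https://admin.myapp.com/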
The Problem
I want to allow my customers to connect their own domains, so I told them to set up a CNAME and an A record:
A record => @   => 12.12.12.3 (my root nginx IP)
CNAME    => www => their.myapp.com
Now each request to customer.com will be resolved by my nginx, so I added this configuration to catch all other server_name requests:
server {
    listen 80;
    server_name ~^.*$;

    location / {
        proxy_pass http://app-front-end:80/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
    }
}
And it works fine.
But how can I handle SSL for this case? It could be any domain name; I don't know in advance what the customer's domain will be.
How can I give them the ability to add an SSL certificate automatically, without creating it manually?
This server block should work, since variables are supported in the ssl_certificate and ssl_certificate_key directives.
http {
    map "$ssl_server_name" $domain_name {
        ~(.*)\.(.*)\.(.*)$ $2.$3;
    }

    server {
        listen 443 ssl;
        server_name ~^.*$;

        ssl_certificate /path/to/cert/files/$domain_name.crt;
        ssl_certificate_key /path/to/cert/keys/$domain_name.key;

        location / {
            proxy_pass http://app-front-end:80/;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header Host $host;
        }
    }
}
P.S. Using variables here compromises performance, because nginx then has to load the certificate files on each SSL handshake.
Ref: https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_certificate
In my view, instead of compromising nginx performance, we should use a cron job (say, a Let's Encrypt bot) which fetches a certificate for the user-requested domain; you can then add the certificate to the nginx conf and reload the server.
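A rough sketch of that cron-based approach, assuming the webroot layout from the question; the script name and the new-domains.txt "queue" file are hypothetical:

    # crontab entry: check for newly connected customer domains once an hour
    0 * * * * /usr/local/bin/issue-new-certs.sh

    # /usr/local/bin/issue-new-certs.sh (sketch)
    #!/bin/sh
    # Issue a certificate for every domain queued in the (assumed) new-domains.txt,
    # then reload nginx so it picks up the new files.
    while read -r domain; do
        certbot certonly --webroot -w /var/www/certbot -n -d "$domain"
    done < /etc/nginx/new-domains.txt
    nginx -s reload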
Bonus:
I have used Traefik, which is a Kubernetes-based solution; it loads configs on the fly, without a restart.
On a VM you can use certbot for managing the SSL/TLS certificates.
Note that if you are using the HTTP-01 method to verify your domain, you won't be able to get a wildcard certificate.
I would suggest using the DNS-01 method in certbot for domain verification; with it you can get a wildcard certificate and use it.
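For example, a manual DNS-01 run for a wildcard might look like this (in practice you would use a DNS plugin for your provider instead of --manual so it can run unattended):

    # DNS-01 validation: certbot prints a TXT record (_acme-challenge.myapp.com) to create,
    # and the resulting certificate covers the apex and all first-level subdomains.
    certbot certonly --manual --preferred-challenges dns -d myapp.com -d '*.myapp.com'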
Add the certificate to the Nginx config using:
ssl_certificate /path/to/cert/files/tls.crt;
ssl_certificate_key /path/to/cert/keys/tls.key;
If you are using certbot, it can also auto-inject the SSL config lines above into the configuration file for you.
For different (non-wildcard) domains you can also run the job, or certbot with the HTTP-01 method, and you will get the certificates.
If you are on Kubernetes you can use cert-manager, which will manage the SSL/TLS certificates for you.
I suggest using separate server blocks with different SSL certificates; that's the only solution that worked for me:
server {
    listen 443 ssl;
    root /var/www/html/example1.com;
    index index.html;
    server_name example1.com;

    ssl_certificate /etc/letsencrypt/live/example1.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example1.com/privkey.pem;

    location / {
        try_files $uri $uri/ =404;
    }
}

server {
    listen 443 ssl;
    root /var/www/html/example2.com;
    index index.html;
    server_name example2.com;

    ssl_certificate /etc/letsencrypt/live/example2.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example2.com/privkey.pem;

    location / {
        try_files $uri $uri/ =404;
    }
}
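For completeness, the per-domain certificates referenced above can be obtained with certbot's nginx plugin, assuming it is installed and nginx is already serving both names over HTTP:

    # One run per site; certbot validates via HTTP-01, installs the certificate,
    # and updates the matching server block.
    certbot --nginx -d example1.com
    certbot --nginx -d example2.com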
Working with a fresh install of Zentyal 6.1, how do I remove /SOGo from the default webmail login? Currently users need to access https://mail.mydomain.com/SOGo (case-sensitive), when I would like them to be able to access it at https://mail.mydomain.com/.
I have tried adding the below to /etc/apache2/sites-enabled/default-ssl.conf. The result is that the text on the page loads, but all includes such as CSS and JS just return net::ERR_ABORTED 403 (Forbidden).
ServerName mail.domain.com
ProxyPass / http://127.0.0.1/SOGo/ retry=0
ProxyPassReverse / http://127.0.0.1/SOGo/
Ideally I would like all users to access mail.domain.com to reach webmail. Also, I would like Outlook clients to use mail.domain.com for the server. This will be set up with multiple virtual mail domains for end users, such as joe@domain1.com or jane@domain2.com.
You should try using Apache's mod_rewrite to redirect the URL you want to the SOGo URL. See this:
https://httpd.apache.org/docs/2.4/rewrite/remapping.html
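For instance, a minimal version inside the mail host's VirtualHost might look like this sketch (it assumes mod_rewrite is enabled and only redirects the bare root, leaving every other path untouched):

    # Send "/" to the SOGo login page and let all other URLs pass through unchanged.
    RewriteEngine On
    RewriteRule ^/$ /SOGo/ [R=301,L]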
BR.
I ended up scrapping Zentyal and switched to MailCow in Docker; SOGo is accessed via mail.domain.com. Below is the NGINX reverse proxy that sits outside of Docker.
#######
### NGINX Reverse-Proxy to mailcow and SOGo
### Redirects root to SOGo and /setup to mailcow control panel
### Handles all SSL security
#######
## HTTP catch-all for invalid domain names (e.g. root domain "example.com")
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # Have NGINX drop the connection (return no-data)
    return 444;
}

## Redirect HTTP to HTTPS for valid domain names on this server
## (e.g. mail.example.com, webmail.example.com)
server {
    listen 80;
    listen [::]:80;
    server_name mail.domain-name.com
                autodiscover.domain-name.com
                autoconfig.domain-name.com;

    location ^~ /.well-known/acme-challenge/ {
        allow all;
        default_type "text/plain";
        # Path can be used for cert-validation on all domains
        root /var/www/html/;
        break;
    }

    # Redirect to a properly formed HTTPS request
    return 301 https://$host$request_uri;
}

## HTTPS catch-all site for invalid domains that generate a certificate
## mismatch but the user proceeds anyway
server {
    listen 443 default_server ssl http2;
    listen [::]:443 default_server ssl http2;

    # SSL settings in another file (see my 'mozModern_ssl' file as an example)
    # include /etc/nginx/mozModern_ssl.conf;

    # SSL certificates for this connection
    ssl_certificate /etc/letsencrypt/live/mail.domain-name.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mail.domain-name.com/privkey.pem;

    # Have NGINX drop the connection (return no-data)
    return 444;
}

## Proxy primary server and webmail subdomain to mailcow
## Go to SOGo after typing root address only (default browsing action)
## Go to mailcow admin panel after typing /setup subdirectory
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name mail.domain-name.com
                autodiscover.domain-name.com;

    location ^~ /.well-known/acme-challenge/ {
        allow all;
        default_type "text/plain";
        # Path can be used for cert-validation on all domains
        root /var/www/html/;
        break;
    }

    # SSL settings in another file (see my 'mozModern_ssl' file as an example)
    # include /etc/nginx/mozModern_ssl.conf;

    # SSL certificates for this connection
    ssl_certificate /etc/letsencrypt/live/mail.domain-name.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mail.domain-name.com/privkey.pem;

    location /Microsoft-Server-ActiveSync {
        proxy_pass http://127.0.0.1:8080/Microsoft-Server-ActiveSync;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_connect_timeout 75;
        proxy_send_timeout 3650;
        proxy_read_timeout 3650;
        proxy_buffers 64 256k;
        client_body_buffer_size 512k;
        client_max_body_size 0;
    }

    # Redirect root to SOGo. Rewrite rule changes / to /SOGo
    location / {
        rewrite ^/$ /SOGo;
        proxy_pass https://127.0.0.1:8443;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        client_max_body_size 100m;
    }

    # Redirect /setup to the mailcow admin panel.
    # Note the trailing / after setup and the trailing / after the proxy URL.
    # This makes sure that NGINX doesn't try to go to proxyURL/setup, which
    # would result in a 404.
    # Recent updates result in loops if you try to use 'admin' here.
    location ^~ /setup/ {
        proxy_pass https://127.0.0.1:8443/;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        client_max_body_size 100m;
    }
}
I have the following nginx configuration:
server {
    listen 80;
    server_name _;
    server_tokens off;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name _;
    server_tokens off;

    ssl_certificate /etc/letsencrypt/live/example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        proxy_pass http://example.org;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
I am running nginx in a Docker container and it starts properly. When I execute curl localhost I receive 301 Moved Permanently, but when I try to call it from an external source with curl publicIP, the request times out.
My first thought was that the problem is the firewall, but if I start an nginx container without any configuration, curl publicIP works properly and I receive 200.
Can you help me figure out whether it's a problem with the nginx configuration?
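One thing worth comparing against the actual run command (which isn't shown here) is the port publishing; a hypothetical equivalent would be something like:

    # Both 80 and 443 have to be published on the host for external clients to reach
    # nginx inside the container; the config file path below is just an assumed bind mount.
    docker run -d \
      -p 80:80 -p 443:443 \
      -v /path/to/nginx.conf:/etc/nginx/conf.d/default.conf:ro \
      nginx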
Thanks
I have this configuration:
upstream frontend_upstream {
    # FrontEnd part based on `frontend` container with React app.
    server frontend:3000;
}

server {
    ...
    listen 80;
    server_name stage.example.com;
    server_tokens off;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        # Define the location of the proxy server to send the request to
        # Web it's a name of Docker container with a frontend.
        proxy_pass https://frontend_upstream;
        ...
    }

    # Setup communication with API container.
    location /api {
        proxy_pass http://api:9002;
        rewrite "^/api/(.*)$" /$1 break;
        proxy_redirect off;
    }
}

server {
    listen 443 ssl;
    server_name stage.example.com;
    server_tokens off;

    ssl_certificate /etc/letsencrypt/live/stage.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/stage.example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        proxy_pass http://frontend_upstream;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
I want to be able to connect to my application via HTTP and HTTPS, but SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream is raised.
What is wrong with this configuration?
There are a lot of similar issues out there, but none of them helped me.
location / {
    # Define the location of the proxy server to send the request to
    # Web it's a name of Docker container with a frontend.
    proxy_pass http://frontend_upstream;
    ...
}
Try this.
Your upstream most likely speaks plain HTTP, not HTTPS; the "wrong version number" error is nginx failing to complete a TLS handshake with a backend that only answers in plain HTTP.