I have taken over a project built with Laravel 4.2 and PHP 5.6, and have set up the server with Docker.
The server uses Nginx, with certificates from Let’s Encrypt managed by Certbot.
These are the Nginx configuration files:
nginx.conf
server {
index index.php index.html;
error_log /var/log/nginx/error.log;
access_log /var/log/nginx/access.log;
root /var/www/public;
server_name example.com.py www.example.com.py;
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass app:9000;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
}
location / {
try_files $uri $uri/ /index.php?$query_string;
gzip_static on;
}
listen [::]:443 ssl http2 ipv6only=on; # managed by Certbot
listen 443 ssl http2; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/example.com.py/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/example.com.py/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
and options-ssl-nginx.conf
# This file contains important security parameters. If you modify this file
# manually, Certbot will be unable to automatically provide future security
# updates. Instead, Certbot will print and log an error message with a path to
# the up-to-date file that you will need to refer to when manually updating
# this file.
ssl_session_cache shared:le_nginx_SSL:40m; #holds approx 40 x 4000 sessions
ssl_session_timeout 2h;
ssl_session_tickets off;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
ssl_ciphers "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384";
The project is an e-commerce site that connects with a local credit-card payments processor.
Everything works fine until the payments processor sends a request to a URL on my server for payment confirmation.
Here is what my server responds to the payments processor when it sends the request to the payment confirmation URL:
[Status 500]
OpenSSL::SSL::SSLError: Received fatal alert: handshake_failure
There is no trace of any error in my laravel.log or my nginx logs.
I copied the same request and reproduced it with Postman, and it works fine, returning the correct 200 response.
The payment processor told me that I need to accept only TLS 1.2 and above, and as you can see in my options-ssl-nginx.conf
file, that is already the case.
Does anyone know what could be missing?
Thanks in advance.
UPDATE 1:
I raised the log level for the nginx error logs, and I got the following errors:
This one with a test:
SSL_do_handshake() failed (SSL: error:1417A0C1:SSL routines:tls_post_process_client_hello:no shared cipher) while SSL handshaking, client: 190.128.218.209, server: 0.0.0.0:443
I also got this one some hours before:
SSL_do_handshake() failed (SSL: error:14209102:SSL routines:tls_early_post_process_client_hello:unsupported protocol) while SSL handshaking, client: 3.236.110.87, server: 0.0.0.0:443
ssl_ciphers "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384";
Your cipher set is very restrictive: there are only 5 TLS 1.2 ciphers for RSA certificates (the ECDSA ones are likely irrelevant, since you probably don't use an ECC certificate). It might well be that the specific client used by the payment processor supports neither these TLS 1.2 ciphers nor TLS 1.3.
It might be useful to simply comment out the ssl_ciphers line so that nginx falls back to the default settings, which (with current versions of OpenSSL) are usually still secure but broader.
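To see concretely how narrow that set is, you can expand the configured cipher string locally with the standard `openssl ciphers` command (the string below is copied from the config above; no connection to the server is needed):

```shell
# Expand the nginx cipher string into the individual ciphers it permits.
# With OpenSSL >= 1.1.1 the built-in TLS 1.3 suites are also listed.
openssl ciphers -v 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384'
```

Only the five `*-RSA-*` suites in the output apply to an RSA certificate; a client that supports none of them (and no TLS 1.3) fails the handshake exactly as in the "no shared cipher" log above.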
I have a Shiny Server app using AWS EC2 and Route 53, with nginx and Certbot for SSL. Right now my domain name is used by the app.
I would like to have a static homepage to welcome users and offer access to log in to the app.
The purpose is to have a homepage intro that can be indexed by Google.
Can I use one domain for both the app and the webpage?
How should I define and manage my domain to do so?
I hope I made my question clear enough.
Thanks in advance.
I forgot to mention that my static website is in an AWS S3 bucket (not on the EC2 + nginx server).
I'm not sure about the syntax for nginx.conf. The following is the nginx.conf that is currently working fine:
server {
listen 80;
listen [::]:80;
# redirect all HTTP requests to HTTPS with a 301 Moved Permanently response.
return 301 https://$host$request_uri;
}
server {
# listen 443 means the Nginx server listens on the 443 port.
listen 443 ssl http2;
listen [::]:443 ssl http2;
# certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
ssl_certificate /etc/letsencrypt/live/app.mydomain/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/app.mydomain/privkey.pem;
ssl_session_timeout 1d;
ssl_session_cache shared:MozSSL:10m; # about 40000 sessions
ssl_session_tickets off;
# curl https://ssl-config.mozilla.org/ffdhe2048.txt > /path/to/dhparam.pem
ssl_dhparam /etc/nginx/snippets/dhparam.pem;
# intermediate configuration
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES12>
ssl_prefer_server_ciphers off;
# HSTS (ngx_http_headers_module is required) (63072000 seconds)
add_header Strict-Transport-Security "max-age=63072000" always;
# OCSP stapling
ssl_stapling on;
ssl_stapling_verify on;
# verify chain of trust of OCSP response using Root CA and Intermediate certs
ssl_trusted_certificate /etc/letsencrypt/live/app.mydomain/chain.pem;
# Replace it with your (sub)domain name.
server_name app.mydomain;
# The reverse proxy, keep this unchanged:
location / {
proxy_pass http://localhost:3838;
proxy_redirect http://localhost:3838/ $scheme://$host/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_read_timeout 20d;
proxy_buffering off;
}
}
and if I understood @AlexStoneham correctly, I need to add something like this:
server{
server_name mydomain;
location / {
proxy_pass $scheme://$host.s3-website-eu-central-1.amazonaws.com$request_uri;
}
}
but that addition doesn't work. Should I add the 443 listener block to it and add the SSL certificate all over again?
app.mydomain is for the Shiny app and is working fine now.
mydomain should point to the S3 static webpage.
Thanks.
Use nginx server blocks in your nginx conf,
and subdomains in your Route 53 conf.
Use a subdomain like app.yourdomain.com for the Shiny app, configured with nginx to serve the Shiny app in one server block. Set up another subdomain like www.yourdomain.com for the static pages, configured with nginx to serve the static pages in another server block.
See:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-routing-traffic-for-subdomains.html
for the route53 details
and:
https://www.nginx.com/resources/wiki/start/topics/examples/server_blocks/
for the nginx details
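A hedged sketch of the two server blocks described above (domain names, the web root, and the Shiny port are illustrative placeholders; the TLS directives are elided for brevity):

```nginx
# Sketch only: one block per subdomain, both on 443.
server {
    listen 443 ssl http2;
    server_name app.yourdomain.com;
    # ssl_certificate / ssl_certificate_key for app.yourdomain.com here
    location / {
        proxy_pass http://localhost:3838;   # the Shiny app
    }
}
server {
    listen 443 ssl http2;
    server_name www.yourdomain.com;
    # ssl_certificate / ssl_certificate_key for www.yourdomain.com here
    root /var/www/homepage;                 # the static pages
    index index.html;
}
```

nginx picks the block whose server_name matches the requested host, so both subdomains can share one IP address.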
The nginx.conf was fine and didn't need anything added, because the static webpage is in an S3 bucket, not on nginx/EC2.
The issue was that in one of my many tries I had created a Certbot certificate for "mydomain", which was the same name as the S3 bucket.
That clashed and caused the problem when trying to link my S3 bucket with that domain name through Route 53 (the S3 endpoint is HTTP, not HTTPS).
The solution was to delete that specific ssl certificate from my ec2 server(with nginx on it):
$ sudo certbot certificates # list the existing certificates
$ sudo certbot delete # choose the certificate to delete, in my case: "mydomain"
I have a fully dockerised application:
nginx as proxy
a backend server (express.js)
a database (mongodb)
a frontend server (express js)
goaccess for logging
The problem is that when I hit my backend endpoint with a POST request, the response is never sent to the client. A 499 code is logged by nginx, along with this message:
epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while sending request to upstream,
The client is the browser, there is no doubt about it.
The error arises after 1 minute of processing in Firefox and 5 minutes in Chrome. As far as I know, these times match the timeout settings of these browsers. I could increase the timeout in Firefox, but that is not a viable solution.
When I get rid of the proxy, the request completes and the client gets the response in about 15 minutes. So I think there is a problem with the nginx configuration, but I don't know what.
So far I have tried increasing every timeout you can imagine, but that didn't change anything.
I also tried setting proxy_ignore_client_abort in nginx, but it does not help in my case. Indeed, the connection between nginx and my backend stays alive and the request completes after 15 minutes (code 200 in the nginx logs), but the UI is not updated because the client has terminated its connection with nginx.
I think the browser assumes nginx is dead, because it doesn't receive any data, so it closes the TCP connection.
I'll try later to "stimulate" this TCP connection while the request is still processing, by switching between my website's pages (so the browser should not close the connection), but if I have to do weird tricks to get my backend's result, it is not a viable solution.
There should be a way to process long requests without hitting these browser timeouts, but I don't know how.
Any help would be appreciated :)
My nginx configuration:
user nginx;
pid /run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 65535;
events {
multi_accept on;
worker_connections 65535;
}
http {
charset utf-8;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
server_tokens off;
log_not_found off;
types_hash_max_size 2048;
types_hash_bucket_size 64;
client_max_body_size 16M;
# mime
include mime.types;
default_type application/octet-stream;
# logging
log_format my_log '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" ';
access_log /var/log/nginx/access.log my_log;
error_log /var/log/nginx/error.log info;
# limits
limit_req_log_level warn;
limit_req_zone $binary_remote_addr zone=main:10m rate=10r/s;
# SSL
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
# Mozilla Intermediate configuration
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
# OCSP
ssl_stapling on;
ssl_stapling_verify on;
resolver 1.1.1.1 1.0.0.1 8.8.8.8 8.8.4.4 208.67.222.222 208.67.220.220 valid=60s;
resolver_timeout 2s;
# Connection header for WebSocket reverse proxy
map $http_upgrade $connection_upgrade {
default upgrade;
"" close;
}
map $remote_addr $proxy_forwarded_elem {
# IPv4 addresses can be sent as_is
~^[0-9.]+$ "for=$remote_addr";
# IPv6 addresses need to be bracketed and quoted
~^[0-9A-Fa-f:.]+$ "for=\"[$remote_addr]\"";
# Unix domain socket names cannot be represented in RFC 7239 syntax
default "for=unknown";
}
map $http_forwarded $proxy_add_forwarded {
# If the incoming Forwarded header is syntactically valid, append to it
"~^(,[ \\t]*)*([!#$%&'*+.^_`|~0-9A-Za-z-]+=([!#$%&'*+.^_`|~0-9A-Za-z-]+|\"([\\t \\x21\\x23-\\x5B\\x5D-\\x7E\\x80-\\xFF]|\\\\[\\t \\x21-\\x7E\\x80-\\xFF])*\"))?(;([!#$%&'*+.^_`|~0-9A-Za-z-]+=([!#$%&'*+.^_`|~0-9A-Za-z-]+|\"([\\t \\x21\\x23-\\x5B\\x5D-\\x7E\\x80-\\xFF]|\\\\[\\t \\x21-\\x7E\\x80-\\xFF])*\"))?)*([ \\t]*,([ \\t]*([!#$%&'*+.^_`|~0-9A-Za-z-]+=([!#$%&'*+.^_`|~0-9A-Za-z-]+|\"([\\t \\x21\\x23-\\x5B\\x5D-\\x7E\\x80-\\xFF]|\\\\[\\t \\x21-\\x7E\\x80-\\xFF])*\"))?(;([!#$%&'*+.^_`|~0-9A-Za-z-]+=([!#$%&'*+.^_`|~0-9A-Za-z-]+|\"([\\t \\x21\\x23-\\x5B\\x5D-\\x7E\\x80-\\xFF]|\\\\[\\t \\x21-\\x7E\\x80-\\xFF])*\"))?)*)?)*$" "$http_forwarded, $proxy_forwarded_elem";
# Otherwise, replace it
default "$proxy_forwarded_elem";
}
# Load configs
include /etc/nginx/conf.d/localhost.conf;
}
and localhost.conf
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name localhost;
root /usr/share/nginx/html;
ssl_certificate /etc/nginx/live/localhost/cert.pem;
ssl_certificate_key /etc/nginx/live/localhost/key.pem;
include /etc/nginx/conf.d/security.conf;
include /etc/nginx/conf.d/proxy.conf;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log info;
# nginx render files or proxy the request
location / {
try_files $uri @front;
}
location @front {
proxy_pass http://frontend:80;
}
location ^~ /api/v1 {
proxy_read_timeout 30m; # because an inference with SIMP can take some time
proxy_send_timeout 30m;
proxy_connect_timeout 30m;
proxy_pass http://backend:4000;
}
location = /report.html {
root /usr/share/goaccess/html/;
}
location ^~ /ws {
proxy_pass http://goaccess:7890;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_read_timeout 7d;
proxy_connect_timeout 3600;
}
include /etc/nginx/conf.d/general.conf;
}
EDIT:
The request is sent via the Angular HttpClient; maybe this module is built in a way that aborts requests if a response is not sent within a short time frame. I'll try to investigate that.
OK, I think I can answer my own question.
HTTP requests are not designed for long-running work. When a request is issued, a response should be delivered as quickly as possible.
When you have a long processing job, you should use a workers-and-messages architecture (or event-driven architecture) with tools like RabbitMQ or Kafka. You can also use polling (though it is not the most efficient solution).
So in my POST handler, what I should do when the data arrives is send a message to my broker and then immediately issue an appropriate response (such as "request is processing").
A worker subscribes to a queue, receives the previously delivered message, does the job, and then replies back to my backend. We can then use a STOMP (WebSocket) plugin to route the result to the front end.
I am trying to install an SSL certificate that I recently acquired from GoDaddy. My web application is on Rails 4.2.6 and I am using an Ubuntu Server 14.04. I am also using Phusion Passenger 5.0.28 and Nginx. I don’t know if it makes any difference, but I launched the instance using AWS EC2.
I created a combined file using the two .crt files sent by GoDaddy.
When I add the following to my application.rb file:
config.force_ssl = true
I receive the following error:
ERR_CONNECTION_TIMED_OUT
There are two files that I have tried editing, without success so far:
nginx.conf. The server block currently looks like this:
server {
listen 443 ssl;
server_name localhost;
ssl_certificate /var/www/primeraraiz5/primeraraiz_combined.crt;
ssl_certificate_key /var/www/primeraraiz5/primeraraiz.com.key;
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 5m;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
location / {
root html;
index index.html index.htm;
}
}
include /etc/nginx/sites-enabled/*;
rails.conf (in a sites-available directory, which is symlinked into the sites-enabled directory). The server block looks like this:
server {
listen 443 ssl;
passenger_enabled on;
passenger_app_env production;
root /var/www/primeraraiz5/public;
server_name 52.39.200.205 primeraraiz.com;
}
server {
server_name www.primeraraiz.com;
return 301 $scheme://primeraraiz.com$request_uri;
}
I don’t know if I am doing something wrong in these files or if I should change any settings at AWS or with the company that currently hosts my domain.
Thanks a lot for your help!
There are a couple of things to do to your configuration.
The first is the server block containing the redirect. Since you haven't provided us with a server that's listening on port 80, I assume that you want to redirect all requests to www.primeraraiz.com to HTTPS. If so, replace $scheme with https so that your block looks as follows:
server {
server_name www.primeraraiz.com;
return 301 https://primeraraiz.com$request_uri;
}
Next, the SSL offloading needs to happen in the server block from which you're actually serving. In your case, you're offloading SSL for server name localhost, and not for primeraraiz.com, which is what I assume you're trying to do. So copy the SSL parameters of your first server block to the one that has server name primeraraiz.com, to end up with:
server {
listen 443 ssl;
server_name 52.39.200.205 primeraraiz.com;
ssl_certificate /var/www/primeraraiz5/primeraraiz_combined.crt;
ssl_certificate_key /var/www/primeraraiz5/primeraraiz.com.key;
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 5m;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
passenger_enabled on;
passenger_app_env production;
root /var/www/primeraraiz5/public;
}
I'm trying to change the settings for Nginx ssl_protocols, but the changes aren't reflected on the server.
At first I thought it was because we were using Ubuntu 12.04, but we have now updated to 14.04.
Nginx version:
nginx version: nginx/1.10.1
built by gcc 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.3)
built with OpenSSL 1.0.1f 6 Jan 2014
TLS SNI support enabled
configure arguments: --sbin-path=/usr/local/sbin/nginx --with-http_ssl_module --with-http_stub_status_module --with-http_gzip_static_module
Openssl version:
OpenSSL 1.0.1f 6 Jan 2014
nginx.conf:
http {
include /usr/local/nginx/conf/mime.types;
default_type application/octet-stream;
tcp_nopush on;
keepalive_timeout 65;
tcp_nodelay off;
log_format main '$remote_addr - $remote_user [$time_local] $status '
'"$request" $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
error_log /var/log/nginx/error.log debug;
open_file_cache max=1000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
client_body_timeout 10;
client_header_timeout 10;
sendfile on;
# output compression
gzip on;
gzip_min_length 1100;
gzip_buffers 4 8k;
gzip_proxied any;
gzip_types text/plain text/html text/css text/js application/x-javascript application/javascript application/json;
# include config for each site here
include /etc/nginx/sites/*;
}
/etc/nginx/sites/site.conf:
server {
listen 443 ssl;
server_name server_name;
root /home/deploy/server_name/current/public;
access_log /var/log/nginx/server_name.access.log main;
ssl_certificate /usr/local/nginx/conf/ssl/wildcard.server_name.com.crt;
ssl_certificate_key /usr/local/nginx/conf/ssl/wildcard.server_name.com.key.unsecure;
ssl_client_certificate /usr/local/nginx/conf/ssl/geotrust.crt;
ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-SHA;
ssl_prefer_server_ciphers on;
location ~ ^/assets/ {
expires max;
add_header Cache-Control public;
add_header ETag "";
break;
}
location / {
try_files $uri @server_name;
proxy_set_header X-Forwarded-Proto https;
}
location @server_name {
include proxy.conf;
proxy_pass http://server_name;
proxy_set_header X-Forwarded-Proto https;
}
# stats url
location /nginx_stats {
stub_status on;
access_log off;
}
}
The config files are loaded properly and are both being used as intended. If it has any relevance, the server is running Ruby on Rails with Unicorn.
Does anyone have an idea what could be wrong?
Description
I had a similar problem. My changes were being applied (nginx -t would warn about duplicate and invalid values), but TLSv1.0 and TLSv1.1 would still be accepted. The line in my sites-enabled/ file reads:
ssl_protocols TLSv1.2 TLSv1.3;
I ran grep -R 'protocol' /etc/nginx/* to find other mentions of ssl_protocols, but I only found the main configuration file /etc/nginx/nginx.conf and my own site config.
Underlying problem
The problem was caused by a file included by certbot/letsencrypt, at /etc/letsencrypt/options-ssl-nginx.conf. In certbot 0.31.0 (certbot --version) the file includes this line:
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
This somewhat sneakily enabled these versions of TLS.
I was tipped off by Libre Software.
0.31.0 is the most up-to-date version I was able to get for Ubuntu 18.04 LTS.
Solution
TLS versions below 1.2 were disabled by default in the Certbot nginx config starting from certbot v0.37.0 (thank you mnordhoff). I copied the file from there into the Let's Encrypt config (options-ssl-nginx.conf), added a note for myself and subsequent maintainers, and everything was right with the world again.
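For reference, the relevant lines of the newer Certbot-shipped file look roughly like this (hedged: approximate contents from certbot ≥ 0.37; check your copy against the up-to-date file Certbot points you to):

```nginx
# /etc/letsencrypt/options-ssl-nginx.conf (certbot >= 0.37, approximate)
ssl_session_cache shared:le_nginx_SSL:10m;
ssl_session_timeout 1440m;
ssl_session_tickets off;

ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
```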
How to not get into this mess in the first place
grepping one level higher (/etc/* instead of /etc/nginx/*) would have allowed me to find the culprit. But a more reliable and powerful tool is nginx -T, which prints out all the configuration files that are considered.
Other useful commands:
nginx -s reload after you change configs
nginx -v to find out your nginx version. To enable TLS version 1.3, you need version 1.13.0+.
openssl version: you need at least OpenSSL 1.1.1 "built with TLSv1.3 support"
curl -I -v --tlsv<major.minor> <your_site> for testing whether a certain version of TLS is in fact enabled
journalctl -u nginx --since "10 minutes ago" to make absolutely sure something else isn't going on.
I want to add another (somewhat obscure) possibility, since the Certbot one didn't cover my case. Mine applies only to nginx installs with multiple domains. Basically, the settings you specify for your specific domain may be overridden by the server default. That default is taken from the first server name encountered when reading the config (essentially alphabetical). Details on this page:
http://nginx.org/en/docs/http/configuring_https_servers.html
A common issue arises when configuring two or more HTTPS servers listening on a single IP address:
server {
listen 443 ssl;
server_name www.example.com;
ssl_certificate www.example.com.crt;
...
}
server {
listen 443 ssl;
server_name www.example.org;
ssl_certificate www.example.org.crt;
...
}
With this configuration a browser receives the default server’s certificate, i.e. www.example.com regardless of the requested server name. This is caused by SSL protocol behaviour. The SSL connection is established before the browser sends an HTTP request and nginx does not know the name of the requested server. Therefore, it may only offer the default server’s certificate.
The issue wasn't in the server itself, but in the AWS Load Balancer having the wrong SSL ciphers selected.
I'm trying to test my SSL setup before migrating over a site from Heroku to a (Digital Ocean) VPS, so I'm using a self-signed certificate as per these instructions.
I used the following command to create the certificates, and they are present in the appropriate directory:
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/nginx.key -out /etc/nginx/ssl/nginx.crt
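As a sanity check on that step, you can generate and inspect such a pair with standard openssl commands (hedged: a temp directory and an explicit -subj stand in for the interactive prompts and the /etc/nginx/ssl paths above):

```shell
# Generate a throwaway self-signed pair, then inspect it.
dir=$(mktemp -d)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -subj "/CN=migration.my_domain.com" \
  -keyout "$dir/nginx.key" -out "$dir/nginx.crt"
# Print the subject and validity window of the certificate:
openssl x509 -in "$dir/nginx.crt" -noout -subject -dates
# Confirm the key and certificate actually belong together
# (the two modulus digests must match):
openssl x509 -in "$dir/nginx.crt" -noout -modulus | openssl md5
openssl rsa  -in "$dir/nginx.key" -noout -modulus | openssl md5
```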
Here are the relevant lines from the server block on my nginx.conf:
server {
listen 80 default_server;
listen 443 ssl;
server_name migration.my_domain.com;
ssl_certificate /etc/nginx/ssl/nginx.crt;
ssl_certificate_key /etc/nginx/ssl/nginx.key;
Additionally, in production.rb I've used the following line:
config.force_ssl = true
my_domain (not the actual domain name) and its subdomain migration are both set up in my DNS and correctly point to my server's IP address. At present, when I access the site via http://migration.my_domain.com, pages get served up. But when I access it via https://migration.my_domain.com, I get an error in Chrome:
This site can’t be reached
migration.my_domain.com refused to connect.
Try:
Reloading the page
Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_REFUSED
Any idea what I'm missing here?
Figured it out. First, I'm deploying via Capistrano, which I had mistakenly thought was restarting nginx after each deploy. It turns out it doesn't, so I needed to do that manually. I then deployed with this at the beginning of my server block:
server {
listen 443 ssl default_server deferred;
server_name migration.my_domain.com;
ssl_certificate /etc/nginx/ssl/nginx.crt;
ssl_certificate_key /etc/nginx/ssl/nginx.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
I deployed that, but after restarting nginx I first got the warning from Chrome about a self-signed certificate and the site not being trusted (which is fine and expected). After moving past that, I got a message about too many redirects. It turned out that this line from my production.rb file:
config.force_ssl = true
was causing the problem. I saw this, which from what I can tell means that what nginx sends through to puma does not indicate whether the request was SSL or not, so puma redirects everything, even the HTTPS requests, because it just doesn't know what it's getting. So now I have two near-duplicate server blocks. The first one, which handles HTTP requests, has the following relevant statements:
server {
listen 80;
server_name migration.my_domain.com;
# ...bunch of non-relevant config...
location @puma {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://puma;
}
}
This handles all the port-80 requests, all of which puma will redirect to SSL (I believe) thanks to config.force_ssl = true in production.rb. Nginx will then receive an HTTPS request for the same URL, which is handled by this block:
server {
listen 443 ssl;
server_name migration.my_domain.com;
ssl_certificate /etc/nginx/ssl/nginx.crt;
ssl_certificate_key /etc/nginx/ssl/nginx.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
# ...bunch of non-relevant config...
location @puma {
proxy_set_header X-Forwarded-Proto https; # IMPORTANT!! I believe this tells puma that everything sent through via this block is https. Thus, puma no longer redirects.
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://puma;
}
}
This seems to work correctly, albeit with all the appropriate warnings one should get in the browser when receiving a self-signed certificate. I'm confident that once I switch in my actual certificates, I will have a functioning SSL setup.
Thanks again @doon!