Nginx upstream server works with IP address but not with DNS - docker

Sorry for any mistakes; I am new to Nginx.
I have my application deployed on Docker Engine.
There are five Docker images in total, but two matter most here:
1st: backend (a Django DRF application running under gunicorn)
2nd: frontend (a React app served by Nginx)
I proxy the backend through Nginx, so the nginx.conf file defines two locations:
"/" for the frontend
"/api" for the backend (declared as an upstream so it can be proxied).
I can start my containers and they "talk" to each other as long as I use the IP address in my browser: the backend receives requests and returns responses.
Now I have bought a domain name and added SSL certificates (Let's Encrypt; I still have to add a browser exception, but that is a separate question). If I reach my site via the domain name, the frontend works but the backend does not.
Here is the failing request using the domain name, and the successful request using the IP address (screenshots omitted).
Here is my nginx.conf
user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    # include /etc/nginx/conf.d/*.conf;

    upstream backend {
        server api:8000;
    }

    server {
        listen 443 ssl http2;
        listen [::]:443 ssl http2;

        ssl_certificate /etc/nginx/ssl/live/site.org/fullchain.pem;
        ssl_certificate_key /etc/nginx/ssl/live/site.org/privkey.pem;

        location /api {
            if ($request_method = 'OPTIONS') {
                add_header 'Access-Control-Allow-Origin' '*';
                #
                # Om nom nom cookies
                #
                add_header 'Access-Control-Allow-Credentials' 'true';
                add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, OPTIONS';
                #
                # Custom headers and headers various browsers *should* be OK with but aren't
                #
                add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
                #
                # Tell client that this pre-flight info is valid for 20 days
                #
                add_header 'Access-Control-Max-Age' 1728000;
                add_header 'Content-Type' 'text/plain charset=UTF-8';
                add_header 'Content-Length' 0;
                return 204;
            }

            # Tried this ipv6=off
            resolver 1.1.1.1 ipv6=off valid=30s;
            set $empty "";
            proxy_pass http://backend$empty;
            # proxy_pass http://backend;

            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_redirect off;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_cache_bypass $http_upgrade;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_read_timeout 3600;
            proxy_headers_hash_max_size 512;
            proxy_headers_hash_bucket_size 128;
            proxy_set_header Content-Security-Policy upgrade-insecure-requests;
        }

        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
            try_files $uri $uri/ /index.html;
        }

        # location /static/ {
        #     alias /home/app/web/staticfiles/;
        # }
    }

    server {
        listen 80;
        listen [::]:80;

        location / {
            return 301 https://$host$request_uri;
        }

        location ~ /.well-known/acme-challenge/ {
            root /var/www/certbot;
        }
    }
}

This HTTP 400 Bad Request error looks like the one coming from Django's request validation, since your two requests differ only in the value of the Host HTTP request header. You should add every domain name you use to the ALLOWED_HOSTS list in Django's settings.py. Domain names should be specified as they appear in the Host header (excluding any port number); a wildcard-like entry such as .example.com is allowed and matches example.com plus every subdomain. The special value * skips Host header validation entirely (not recommended unless you perform that validation at some other level of request processing).
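For example, a minimal settings.py sketch (the domain and IP below are placeholders):

# settings.py
ALLOWED_HOSTS = [
    "example.com",    # exact domain, as it appears in the Host header
    ".example.com",   # wildcard entry: example.com and every subdomain
    "203.0.113.10",   # keep the bare IP if you still access the site by address
]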

Related

multi-website host behind nginx reverse proxy

I have followed what seems like countless guides on how to set up a reverse proxy with nginx.
I am going to run several websites in docker containers on an EC2 instance. The instance is in a target group behind an ALB - SSL termination at the ALB.
I have created sites A and B:
sitea.conf
server {
    root /var/www/html;
    server_name sitea.com;

    location / {
        proxy_pass http://127.0.0.1:9090;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
siteb.conf
server {
    root /var/www/html;
    server_name siteb.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
This was a default install of nginx on an Amazon Linux 2 AMI.
I put both sitea.conf and siteb.conf in the
/etc/nginx/sites-available
directory and then created the symlink
ln -s /etc/nginx/sites-available/* /etc/nginx/sites-enabled
What I am expecting is routing by nginx based on the Host header.
What is happening is that sitea.com is getting ALL of the traffic.
Even the load balancer health checks are being routed by nginx to sitea. Tailing the logs on the container
docker logs --follow sitea
I see all of the health check requests coming in (and getting redirected, because it is a WordPress container).
Nginx is not routing any traffic based on the Host header (the load balancer health checks being the telltale indicator).
Obviously something is wrong with my configuration, but I thought this was all there was to it. Where else do I need to configure nginx for a multi-site reverse proxy?
EDIT:
Including the /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 4096;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/sites-enabled/*.conf;

    server {
        listen 80;
        listen [::]:80;
        server_name _;
        root /usr/share/nginx/html;

        # Load configuration files for the default server block.
        # include /etc/nginx/default.d/*.conf;

        error_page 404 /404.html;
        location = /404.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
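For what it's worth, one nginx behaviour is directly relevant here: when the Host header matches no server_name, nginx hands the request to the default server for that listen port, and when no block is marked default_server, the first server block defined wins. That fits the symptom of sitea receiving everything, since the ALB health checks send the instance IP rather than a domain as Host. A minimal catch-all sketch (the 200 response is illustrative) to keep unmatched hosts from falling through to sitea:

server {
    listen 80 default_server;
    server_name _;

    # Answer requests whose Host matches neither sitea.com nor siteb.com
    # (e.g. ALB health checks addressed to the instance IP) instead of
    # letting them fall through to the first configured site.
    location / {
        default_type text/plain;
        return 200 'ok';
    }
}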

Nginx refuses connections to flask app, flask app without nginx works fine

I have two Docker containers deployed using docker compose.
One is nginx and the other is my Flask application. I am only using nginx as a static server for Let's Encrypt certification.
If I deploy my Flask app without nginx, I can successfully curl / ping my server. However, the moment nginx is introduced, I am no longer able to connect.
What I want is to at least access my server via its numeric external IP, e.g. xx.xx.xx.xx, and then via my domain, which points to the same IP. (My domain is actually a subdomain, e.g. api.domain.com.)
My docker compose is:
services:
  nginx:
    build:
      context: ./nginx
      dockerfile: Dockerfile
      args:
        DOMAIN: ${DOMAIN}
        FLASK: application
    ports:
      - 80:80
      - 443:443
    volumes:
      - /etc/letsencrypt:/etc/letsencrypt
    depends_on:
      - application
  application:
    build:
      context: ./flask_app
      dockerfile: Dockerfile
    ports:
      - 5000:5000
nginx.conf
user nginx;
worker_processes auto;
worker_rlimit_nofile 8192;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    charset utf-8;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    server_tokens off;
    log_not_found off;
    types_hash_max_size 2048;
    client_max_body_size 16M;

    include mime.types;
    default_type application/octet-stream;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ecdh_curve X25519:sect571r1:secp521r1:secp384r1;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;
    ssl_ciphers 'TLS13+AESGCM+AES128:TLS13+AESGCM+AES256:TLS13+CHACHA20:EECDH+AESGCM:EECDH+CHACHA20';
    ssl_stapling on;
    ssl_stapling_verify on;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    include conf.d/*.conf;
}
flask_app.conf
server {
    listen 80;
    listen [::]:80;
    server_name www.${DOMAIN} ${DOMAIN};

    location ^~ /.well-known/acme-challenge/ {
        root /var/www/_letsencrypt;
    }

    location / {
        return 301 https://${DOMAIN}${DOLLAR}request_uri;
    }
}

server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    server_name www.${DOMAIN} ${DOMAIN};

    ssl_certificate /etc/letsencrypt/live/${DOMAIN}/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/${DOMAIN}/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/${DOMAIN}/chain.pem;

    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "no-referrer-when-downgrade" always;
    # You might want to change the CSP policy to fit your needs - see https://content-security-policy.com/
    add_header Content-Security-Policy "default-src 'self'; script-src 'self'; connect-src 'self'; img-src 'self'; style-src 'self';";
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
    add_header Allow "GET, POST, HEAD" always;

    access_log /var/log/nginx/${DOMAIN}.access.log;
    error_log /var/log/nginx/${DOMAIN}.error.log warn;

    location / {
        proxy_http_version 1.1;
        proxy_cache_bypass ${DOLLAR}http_upgrade;
        proxy_hide_header X-Powered-By;
        proxy_hide_header Server;
        proxy_hide_header X-AspNetMvc-Version;
        proxy_hide_header X-AspNet-Version;
        proxy_set_header Proxy "";
        proxy_set_header Upgrade ${DOLLAR}http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host ${DOLLAR}host;
        proxy_set_header X-Real-IP ${DOLLAR}remote_addr;
        proxy_set_header X-Forwarded-For ${DOLLAR}proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto ${DOLLAR}scheme;
        proxy_set_header X-Forwarded-Host ${DOLLAR}host;
        proxy_set_header X-Forwarded-Port ${DOLLAR}server_port;
        proxy_pass http://application:5000;
    }

    location ~* \.(?:css|cur|js|jpe?g|gif|htc|ico|png|html|xml|otf|ttf|eot|woff|woff2|svg)${DOLLAR} {
        expires 7d;
        add_header Pragma public;
        add_header Cache-Control public;
        proxy_pass http://application:5000;
    }

    if ( ${DOLLAR}request_method !~ ^(GET|POST|HEAD)${DOLLAR} ) {
        return 405;
    }

    if (${DOLLAR}http_user_agent ~* LWP::Simple|BBBike|wget) {
        return 403;
    }

    location ~ /\.(?!well-known) {
        deny all;
    }

    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types text/plain text/css text/xml application/json application/javascript application/rss+xml application/atom+xml image/svg+xml;
}
I'm not sure why your nginx config contains ${DOLLAR} in multiple places. I don't think this is valid syntax, and I can't find any documentation relating to it. Lines like:
proxy_set_header Host ${DOLLAR}host;
should actually be:
proxy_set_header Host $host;
As for using ${DOMAIN} in the nginx conf, I would avoid this and opt for a simpler configuration. Just specify the domain directly in the nginx config file:
server_name www.example.com example.com;
I'd suggest familiarising yourself with the official nginx image docs: under "Complex configuration" they show how to copy a working config out of a running container, which you can then modify to your needs.
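That copy-out step looks roughly like this (a sketch based on the image docs; adjust the host path to taste):

docker run --name tmp-nginx-container -d nginx
# copy the stock config out of the container to the host
docker cp tmp-nginx-container:/etc/nginx/nginx.conf ./nginx.conf
docker rm -f tmp-nginx-container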
Once you have this working, if you really want to specify the domain in your docker-compose file and treat your nginx config as a template that is rendered at container start, you could read the section "Using environment variables in nginx configuration", which shows a workaround using envsubst to achieve this. This is probably not required for single-site deployments, however.
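A minimal sketch of that mechanism (supported by the official image since 1.19; the paths below are the image defaults, the compose fragment is illustrative): any *.template file under /etc/nginx/templates is rendered with envsubst to /etc/nginx/conf.d at container start.

# docker-compose.yml (fragment)
nginx:
  image: nginx:1.25
  environment:
    - DOMAIN=example.com
  volumes:
    # default.conf.template may contain e.g. "server_name www.${DOMAIN} ${DOMAIN};"
    # and is rendered to /etc/nginx/conf.d/default.conf at startup
    - ./templates:/etc/nginx/templates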

containerized reverse proxy showing default site

No matter what I do, I keep running into the problem where my web server serves the default nginx site. I'm trying to dockerize my web server so that it can point to Home Assistant running in another container. I was able to get this to work when both were hosted on the same Raspberry Pi, not running in containers, but not when both are running in containers.
I've attached the nginx.conf, Dockerfile and default.conf that I was using to start the environment up. I've spent the last two days looking for someone who was trying to do something similar, but I assume I'm making such a stupid mistake that most have been able to figure it out on their own.
nginx.conf:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
default.conf (/etc/nginx/conf.d/hass.conf)
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    # Update this line to be your domain
    server_name nekohouse.ca;

    # These shouldn't need to be changed
    listen [::]:80 default_server ipv6only=off;
    return 301 https://$host$request_uri;
}

server {
    # Update this line to be your domain
    server_name nekohouse.ca;

    # Ensure these lines point to your SSL certificate and key
    ssl_certificate fullchain.pem;
    ssl_certificate_key privkey.pem;
    # Use these lines instead if you created a self-signed certificate
    # ssl_certificate /etc/nginx/ssl/cert.pem;
    # ssl_certificate_key /etc/nginx/ssl/key.pem;

    # Ensure this line points to your dhparams file
    ssl_dhparam /etc/nginx/dhparams.pem;

    # These shouldn't need to be changed
    listen [::]:443 default_server ipv6only=off; # if your nginx version is >= 1.9.5 you can also add the "http2" flag here
    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";
    ssl on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;

    proxy_buffering off;

    location / {
        proxy_pass http://localhost:8123;
        proxy_set_header Host $host;
        proxy_redirect http:// https://;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
default.conf (/etc/nginx/conf.d/default.conf)
server {
    listen 8100;
    server_name localhost;

    #charset koi8-r;
    #access_log /var/log/nginx/host.access.log main;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root html;
    #    fastcgi_pass 127.0.0.1:9000;
    #    fastcgi_index index.php;
    #    fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
    #    include fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}
The problem is this line:
proxy_pass http://localhost:8123;
When running in containers, you should understand that localhost refers to the nginx container itself, not the Docker host.
So you should either change localhost to the hostname of your Docker host, or use docker-compose so that you can change it to the name of the Home Assistant container as defined there.
If you are just running the containers separately, you could also use the container's IP for now, but note that it will change every time the container is restarted.
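For instance, a minimal docker-compose sketch (service names here are illustrative, not taken from the question); with both containers on the same compose network, the proxy line becomes proxy_pass http://homeassistant:8123;

# docker-compose.yml
services:
  nginx:
    image: nginx
    volumes:
      - ./hass.conf:/etc/nginx/conf.d/hass.conf
    ports:
      - "80:80"
      - "443:443"
  homeassistant:
    # official Home Assistant image; the service name "homeassistant"
    # is resolvable from the nginx container on the default network
    image: ghcr.io/home-assistant/home-assistant:stable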

Puma shows all requests coming from 127.0.0.1 when behind nginx

I am having a problem where the only IP address that shows up in my Rails log is 127.0.0.1; it appears that the remote IP is not getting passed through the proxy. I am unsure of what I am missing. Nginx is custom-compiled within an omnibus package, and I have included that build script below as well. If anyone can give me some insight, it would be greatly appreciated.
Nginx Build Recipe:
name "nginx"
default_version "1.9.10"
dependency "pcre"
dependency "openssl"
source url: "http://nginx.org/download/nginx-#{version}.tar.gz",
md5: "64cc970988356a5e0fc4fcd1ab84fe57"
relative_path "nginx-#{version}"
build do
command ["./configure",
"--prefix=#{install_dir}/embedded",
"--with-http_ssl_module",
"--with-http_stub_status_module",
"--with-http_gzip_static_module",
"--with-http_v2_module",
"--with-http_realip_module",
"--with-ipv6",
"--with-debug",
"--with-ld-opt=-L#{install_dir}/embedded/lib",
"--with-cc-opt=\"-L#{install_dir}/embedded/lib -I#{install_dir}/embedded/include\""].join(" ")
command "make -j #{workers}", :env => {"LD_RUN_PATH" => "#{install_dir}/embedded/lib"}
command "make install"
end
Nginx Config:
user smart-mobile smart-mobile;
worker_processes 1;
error_log stderr;
pid nginx.pid;
daemon off;

events {
    worker_connections 10240;
}

http {
    #log_format combined '$remote_addr - $remote_user [$time_local] '
    #                    '"$request" $status $body_bytes_sent '
    #                    '"$http_referer" "$http_user_agent"';
    #
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;

    gzip on;
    gzip_http_version 1.0;
    gzip_comp_level 2;
    gzip_proxied any;
    gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript application/json;

    proxy_cache_path proxy_cache keys_zone=smart-mobile:10m max_size=1g levels=1:2;
    proxy_cache smart-mobile;

    include /opt/smart-mobile/embedded/conf/mime.types;
    include /var/opt/smart-mobile/nginx/conf/smart-mobile.conf;
}
Nginx Site Config:
upstream smart_mobile {
    server unix:/var/opt/smart-mobile/puma/puma.socket;
}

server {
    listen 80;
    server_name 10.10.20.108;

    access_log /var/log/smart-mobile/nginx/smart-mobile-http.access.log;
    error_log /var/log/smart-mobile/nginx/smart-mobile-http.error.log;

    root /opt/smart-mobile/embedded/smart-mobile-rails/public;
    index index.html;

    ## Real IP Module Config
    ## http://nginx.org/en/docs/http/ngx_http_realip_module.html

    location / {
        if (-f /opt/smart-mobile/embedded/smart-mobile-rails/tmp/maintenance.enable) {
            return 503;
        }
        proxy_http_version 1.1;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        try_files $uri $uri/index.html $uri.html @ruby;
    }

    location @ruby {
        proxy_pass http://smart_mobile;
    }

    error_page 404 /404.html;
    error_page 402 /402.html;
    error_page 500 /500.html;
    error_page 502 /502.html;
    error_page 503 @maintenance;

    location @maintenance {
        if ($uri !~ ^/icos/) {
            rewrite ^(.*)$ /503.html break;
        }
    }
}
Puma Config:
directory '/opt/smart-mobile/embedded/smart-mobile-rails'
threads 2,4
bind 'unix:///var/opt/smart-mobile/puma/puma.socket'
pidfile '/var/opt/smart-mobile/puma/puma.pid'
preload_app!

on_worker_boot do
  ActiveSupport.on_load(:active_record) do
    ActiveRecord::Base.establish_connection
  end
end

before_fork do
  ActiveRecord::Base.connection_pool.disconnect!
end
This worked for me (puma 3.4.0):
# Serve static content if a corresponding file exists.
location / {
    try_files $uri @proxy;
    # NOTE: Parameters below apply ONLY for static files that match.
    expires max;
    add_header Cache-Control "public";
    add_header By-Nginx "yes"; # DEBUG
}

# Serve dynamic content from the backend.
location @proxy {
    proxy_pass http://backend_for_www.site.com;
    proxy_pass_request_headers on;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
After some exploration, I've found out that:
Puma looks at the X-Forwarded-For HTTP header specifically.
Once it's passed correctly, Puma should pick it up.
No configuration on the Puma end is necessary.
request.headers["REMOTE_ADDR"] will stay "127.0.0.1"; this doesn't change no matter how hard you try.
Passing the X-Real-IP header does not affect the logging issue in any way.
Basically, you can use set_remote_address header: "X-Real-IP" in the Puma configuration file to set the "remote address of the connection" from this header.
But Puma itself doesn't look in that direction by default, and I don't know any other software that does. Documented here: http://www.rubydoc.info/gems/puma/3.2.0/Puma%2FDSL%3Aset_remote_address.
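For completeness, a minimal sketch of that option in the Puma configuration file:

# config/puma.rb
# Take the peer address from the X-Real-IP header set by nginx
# instead of the socket peer address (which is always 127.0.0.1 here).
set_remote_address header: "X-Real-IP"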
This was my own fault: I had all my proxy_set_header directives before the try_files. I moved the proxy_set_header directives into the @ruby location block and removed the X-Real-IP header. Everything is working now; thank you for all the input.

Redirect www to non-www on Nginx with SSL gives redirect loop

I apologize if this seems like déjà vu. There are plenty of posts about similar issues, and I read them all (and tried them out unsuccessfully).
My setup: Rails 4, Puma, Nginx, SSL cert for both https://www and https://
I am using a combined server block so I get a redirect to SSL. However, I would like to redirect https://www.domain.com to https://domain.com.
Everything works fine with the setup you will see below until I add the redirect rule (return 301 https://$host$request_uri;); then I get a redirect loop.
I added proxy_set_header X-Forwarded-Proto $scheme; to my @app location for force_ssl (which is set to true in the Rails config file), but that did not solve the issue.
I would really appreciate expert advice here, and if you see any points of improvement in my setup beyond just fixing the redirect loop, please let me know.
nginx.conf:
user root;
worker_processes 4;
pid /var/run/nginx.pid;

# setup where nginx will log errors to
# and where the nginx process id resides
error_log /var/log/nginx/error.log error;
#pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    accept_mutex off;
    use epoll;
}

http {
    include /etc/nginx/mime.types;
    types_hash_max_size 2048;
    default_type application/octet-stream;

    #access_log /tmp/nginx.access.log combined;

    # use the kernel sendfile
    sendfile on;
    # prepend http headers before sendfile()
    tcp_nopush on;

    keepalive_timeout 25;
    tcp_nodelay on;

    gzip on;
    gzip_http_version 1.0;
    gzip_proxied any;
    gzip_min_length 500;
    gzip_disable "MSIE [1-6]\.";
    gzip_types text/plain text/html text/xml text/css
               text/comma-separated-values
               text/javascript application/x-javascript
               application/atom+xml;

    # Hide server info
    server_tokens off;

    upstream app_server {
        server unix:/root/sites/mina_deploy/shared/tmp/sockets/puma.sock
               fail_timeout=0;
    }

    # configure the virtual host
    server {
        server_name domain.com www.domain.com 162.555.555.162;
        root /root/sites/mina_deploy/current/public;

        # port to listen for requests on
        listen 80 default deferred;
        listen 443 ssl;

        ####### THIS REDIRECT CAUSES A LOOP ########
        #return 301 https://$host$request_uri;

        ssl_certificate /etc/ssl/ssl-bundle.crt;
        ssl_certificate_key /etc/ssl/myserver.key;

        # enables all versions of TLS, but not SSLv2 or 3 which are weak and now deprecated
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        # disables all weak ciphers
        ssl_ciphers 'AES128+EECDH:AES128+EDH';
        ssl_session_cache shared:SSL:10m;
        ssl_prefer_server_ciphers on;
        ssl_dhparam /etc/ssl/dhparam.pem;

        # maximum accepted body size of client request
        client_max_body_size 4G;
        # the server will close connections after this time
        keepalive_timeout 5;

        add_header Strict-Transport-Security max-age=63072000;
        #add_header X-Frame-Options DENY;
        add_header Access-Control-Allow-Origin '*';
        add_header X-Content-Type-Options nosniff;

        ssl_stapling on;
        ssl_stapling_verify on;
        resolver 8.8.8.8 8.8.4.4 valid=300s;
        resolver_timeout 5s;

        location ~ ^/(system|assets)/ {
            gzip_static on;
            error_page 405 = $uri;
            expires max;
            add_header Cache-Control public;
            break;
        }

        try_files $uri/index.html $uri @app;

        location @app {
            # pass to the upstream unicorn server mentioned above
            proxy_pass http://app_server;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_read_timeout 300;
        }
    }
}
What I did was use multiple server blocks. You mentioned that you want www.domain.com to redirect to domain.com. In this case I would do:

server {
    listen 80;
    server_name www.domain.com;
    return 301 https://domain.com$request_uri;
}

Then remove www.domain.com from the server_name in your original block. I would also break up your redirects from 80 to 443 into separate blocks. So you repeat this process: if a user tries to go to https://www.domain.com, you have a server block that says similar things.

server {
    listen 443;
    server_name www.domain.com;
    return 301 https://domain.com$request_uri;
}

And one to listen for http traffic on the domain you want, redirecting it to https:

server {
    listen 80;
    server_name domain.com;
    return 301 https://domain.com$request_uri;
}

Then you can listen on just port 443 in the server block where you want everyone to end up, with no redirects in that block.
You can view the nginx documentation here, which shows that this is the proper way to do the rewrite.
Replying to your comment: use the three blocks that I have written, and in your original server block you will need to remove

server_name domain.com www.domain.com 162.555.555.162;

and also remove

listen 80 deferred;

and add

server_name domain.com;

Also, just making sure you know that for this to work, you will have to have your domain and www subdomain pointing at your server.
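Putting it together, the main block then listens only on port 443 for the bare domain, roughly like this (a sketch; certificate paths and locations stay as in the original config):

server {
    listen 443 ssl;
    server_name domain.com;

    ssl_certificate /etc/ssl/ssl-bundle.crt;
    ssl_certificate_key /etc/ssl/myserver.key;

    # ... the rest of the original server block (root, locations, @app proxy)
}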
