I am following the steps in https://www.sitepoint.com/deploy-your-rails-app-to-aws/ to deploy my Ruby on Rails app to AWS (Amazon Linux).
I did everything successfully except for the nginx setup.
In the article, it asks me to comment out the existing content and paste the following into /etc/nginx/sites-available/default.
upstream app {
# Path to Puma SOCK file, as defined previously
server unix:/home/deploy/contactbook/shared/tmp/sockets/puma.sock fail_timeout=0;
}
server {
listen 80;
server_name localhost;
root /home/deploy/contactbook/public;
try_files $uri/index.html $uri @app;
location / {
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Connection '';
proxy_pass http://app;
}
location ~ ^/(assets|fonts|system)/|favicon.ico|robots.txt {
gzip_static on;
expires max;
add_header Cache-Control public;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
But after I installed nginx on Amazon Linux, there was no /etc/nginx/sites-available folder,
so I created that folder and the default file.
But I got a 404 error when I tried to access my home page.
Then I found that I have /etc/nginx/nginx.conf, so I updated that file instead. But when I ran sudo service nginx restart, I got this error message:
nginx: [emerg] "upstream" directive is not allowed here in /etc/nginx/nginx.conf:1
nginx: configuration file /etc/nginx/nginx.conf test failed
Does anyone know how I should do this correctly?
FYI, here is the content of /etc/nginx/nginx.conf before I broke it:
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
index index.html index.htm;
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name localhost;
root /usr/share/nginx/html;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
}
# redirect server error pages to the static page /40x.html
#
error_page 404 /404.html;
location = /40x.html {
}
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
# Settings for a TLS enabled server.
#
# server {
# listen 443 ssl;
# listen [::]:443 ssl;
# server_name localhost;
# root /usr/share/nginx/html;
#
# ssl_certificate "/etc/pki/nginx/server.crt";
# ssl_certificate_key "/etc/pki/nginx/private/server.key";
# # It is *strongly* recommended to generate unique DH parameters
# # Generate them with: openssl dhparam -out /etc/pki/nginx/dhparams.pem 2048
# #ssl_dhparam "/etc/pki/nginx/dhparams.pem";
# ssl_session_cache shared:SSL:1m;
# ssl_session_timeout 10m;
# ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
# ssl_ciphers HIGH:SEED:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!RSAPSK:!aDH:!aECDH:!EDH-DSS-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA:!SRP;
# ssl_prefer_server_ciphers on;
#
# # Load configuration files for the default server block.
# include /etc/nginx/default.d/*.conf;
#
# location / {
# }
# error_page 404 /404.html;
# location = /40x.html {
# }
#
# error_page 500 502 503 504 /50x.html;
# location = /50x.html {
# }
# }
}
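For what it's worth, Amazon Linux ships no sites-available/sites-enabled layout, but the stock nginx.conf above already has include /etc/nginx/conf.d/*.conf; inside the http block. One common way to adapt the tutorial (a sketch, reusing the paths from the question) is to leave nginx.conf alone and put the tutorial's upstream and server blocks in a new file under conf.d, so they end up inside http { } rather than at the top level of nginx.conf, which is exactly what the "upstream" directive is not allowed here error complains about:

# /etc/nginx/conf.d/contactbook.conf -- picked up by the existing
# "include /etc/nginx/conf.d/*.conf;" line, so it lands inside http { }

upstream app {
    # Path to the Puma socket, as defined in the tutorial
    server unix:/home/deploy/contactbook/shared/tmp/sockets/puma.sock fail_timeout=0;
}

server {
    listen 80;
    server_name localhost;

    root /home/deploy/contactbook/public;

    # serve static files if they exist, otherwise hand the request to Puma
    try_files $uri/index.html $uri @app;

    location @app {
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Connection '';
        proxy_pass http://app;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}

Note that the stock server block in nginx.conf is marked default_server, so requests addressed to the instance's public IP (which do not match server_name localhost) would still be answered from /usr/share/nginx/html; commenting that block out, or pointing its root at the app, avoids that, and sudo nginx -t confirms the configuration parses before restarting.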
No matter what I do, I keep running into the problem where my website serves the default nginx site. I'm trying to dockerize my webserver so that it can point to Home Assistant running in another container. I've been able to get it to work when both were hosted on the same Raspberry Pi outside of containers, but not when both are running in containers.
I've attached the nginx.conf, Dockerfile and default.conf that I was using to start the environment up. I've spent the last two days looking for someone trying to do something similar, but I assume I'm making such a simple mistake that most have been able to figure it out on their own.
nginx.conf:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
default.conf (/etc/nginx/conf.d/hass.conf)
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
# Update this line to be your domain
server_name nekohouse.ca;
# These shouldn't need to be changed
listen [::]:80 default_server ipv6only=off;
return 301 https://$host$request_uri;
}
server {
# Update this line to be your domain
server_name nekohouse.ca;
# Ensure these lines point to your SSL certificate and key
ssl_certificate fullchain.pem;
ssl_certificate_key privkey.pem;
# Use these lines instead if you created a self-signed certificate
# ssl_certificate /etc/nginx/ssl/cert.pem;
# ssl_certificate_key /etc/nginx/ssl/key.pem;
# Ensure this line points to your dhparams file
ssl_dhparam /etc/nginx/dhparams.pem;
# These shouldn't need to be changed
listen [::]:443 default_server ipv6only=off; # if your nginx version is >= 1.9.5 you can also add the "http2" flag here
add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";
ssl on;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
proxy_buffering off;
location / {
proxy_pass http://localhost:8123;
proxy_set_header Host $host;
proxy_redirect http:// https://;
proxy_http_version 1.1;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
}
default.conf (/etc/nginx/conf.d/default.conf)
server {
listen 8100;
server_name localhost;
#charset koi8-r;
#access_log /var/log/nginx/host.access.log main;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
The problem is caused by this line:
proxy_pass http://localhost:8123;
When running in containers, you should understand that localhost refers to the nginx container itself, not the Docker host.
So you should either change localhost to the hostname of your Docker host, or use docker-compose so that you can refer to the other container by its service name.
If you are just running the containers separately, you could also use the container's IP for now, but note that it will change every time the container is restarted.
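For example, with a docker-compose setup where the Home Assistant container is defined as a service named homeassistant (an assumed name; use whatever the service or container is actually called), the location block in hass.conf would become something like:

location / {
    # "homeassistant" is the compose service name (assumed); Docker's internal
    # DNS resolves it to that container's address on the shared network
    proxy_pass http://homeassistant:8123;
    proxy_set_header Host $host;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

Both containers need to be attached to the same Docker network for that name to resolve.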
I apologize for asking what seems to be such a simple question that is asked again and again.
I built a small app using Rails 4.2.3. Everything works locally so I am trying to deploy to AWS with Elastic Beanstalk and the following setup: 64bit Amazon Linux 2016.03 v2.1.6 running Ruby 2.3 (Puma)
Before I deploy I run:
rake assets:precompile RAILS_ENV=production
I then commit those files to git and use eb deploy to push the files up to the EC2 instance.
Some things work:
When I ssh into that instance, I see all of the precompiled assets in /var/app/current/public/assets
CSS all looks correct
Coffeescripts are running properly
But neither static images nor the ones I upload via Paperclip show up as I would expect.
In production.rb I have this line:
config.serve_static_files = ENV['RAILS_SERVE_STATIC_FILES'].present?
I can confirm that key is not in my ENV variable by going into the console:
irb(main):001:0> ENV['RAILS_SERVE_STATIC_FILES']
=> nil
which leads me to believe that the serving of these files should be handled by nginx. I can confirm that nginx is running, but quite frankly I don't know how it is configured.
[ec2-user@ip-172-31-13-16 assets]$ ps waux | grep nginx
root 2800 0.0 0.4 109364 4192 ? Ss Oct08 0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
nginx 2801 0.0 0.6 109820 6672 ? S Oct08 0:09 nginx: worker process
ec2-user 21321 0.0 0.2 110456 2092 pts/0 S+ 23:02 0:00 grep --color=auto nginx
I "think" I am supposed to edit my .ebextensions file to do a few things automatically when I deploy, but that's about where I got stuck. Any suggestions?
/etc/nginx/nginx.conf
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
index index.html index.htm;
server {
listen 80 ;
listen [::]:80 ;
server_name localhost;
root /usr/share/nginx/html;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
}
# redirect server error pages to the static page /40x.html
#
error_page 404 /404.html;
location = /40x.html {
}
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
# Settings for a TLS enabled server.
#
# server {
# listen 443 ssl;
# listen [::]:443 ssl;
# server_name localhost;
# root /usr/share/nginx/html;
#
# ssl_certificate "/etc/pki/nginx/server.crt";
# ssl_certificate_key "/etc/pki/nginx/private/server.key";
# # It is *strongly* recommended to generate unique DH parameters
# # Generate them with: openssl dhparam -out /etc/pki/nginx/dhparams.pem 2048
# #ssl_dhparam "/etc/pki/nginx/dhparams.pem";
# ssl_session_cache shared:SSL:1m;
# ssl_session_timeout 10m;
# ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
# ssl_ciphers HIGH:SEED:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!RSAPSK:!aDH:!aECDH:!EDH-DSS-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA:!SRP;
# ssl_prefer_server_ciphers on;
#
# # Load configuration files for the default server block.
# include /etc/nginx/default.d/*.conf;
#
# location / {
# }
#
# error_page 404 /404.html;
# location = /40x.html {
# }
#
# error_page 500 502 503 504 /50x.html;
# location = /50x.html {
# }
# }
}
/etc/nginx/conf.d/virtual.conf
#
# A virtual host using mix of IP-, name-, and port-based configuration
#
#server {
# listen 8000;
# listen somename:8080;
# server_name somename alias another.alias;
# location / {
# root html;
# index index.html index.htm;
# }
#}
/etc/nginx/conf.d/webapp_healthd.conf
upstream my_app {
server unix:///var/run/puma/my_app.sock;
}
log_format healthd '$msec"$uri"'
'$status"$request_time"$upstream_response_time"'
'$http_x_forwarded_for';
server {
listen 80;
server_name _ localhost; # need to listen to localhost for worker tier
if ($time_iso8601 ~ "^(\d{4})-(\d{2})-(\d{2})T(\d{2})") {
set $year $1;
set $month $2;
set $day $3;
set $hour $4;
}
access_log /var/log/nginx/access.log main;
access_log /var/log/nginx/healthd/application.log.$year-$month-$day-$hour healthd;
location / {
proxy_pass http://my_app; # match the name of upstream directive which is defined above
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location /assets {
alias /var/app/current/public/assets;
gzip_static on;
gzip on;
expires max;
add_header Cache-Control public;
}
location /public {
alias /var/app/current/public;
gzip_static on;
gzip on;
expires max;
add_header Cache-Control public;
}
}
Fix webapp_healthd.conf so that nginx serves files from the public folder and, if it cannot or they do not exist, proxy_passes to your app:
upstream my_app {
server unix:///var/run/puma/my_app.sock;
}
server {
listen 80;
server_name _; # need to listen to localhost for worker tier
if ($time_iso8601 ~ "^(\d{4})-(\d{2})-(\d{2})T(\d{2})") {
set $year $1;
set $month $2;
set $day $3;
set $hour $4;
}
access_log /var/log/nginx/healthd/application.log.$year-$month-$day-$hour healthd;
index index.html index.htm;
location @app {
log_not_found off;
access_log off;
proxy_pass http://my_app; # proxy passing to upstream
proxy_http_version 1.1;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_cache_bypass $http_upgrade;
}
root /var/app/current/public;
location / {
try_files $uri $uri/ @app; # try static files first, otherwise fall back to @app
}
}
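If you also want the far-future caching that the original webapp_healthd.conf applied to compiled assets, a block along these lines (adapted from the question's config, relying on the same root /var/app/current/public) can sit alongside location /:

location /assets/ {
    # fingerprinted Rails assets are safe to cache aggressively
    gzip_static on;
    expires max;
    add_header Cache-Control public;
}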
I know this is a very common issue, but I've been struggling for days with a strange one this time:
I want to serve two Rails 4 apps on the same VPS (Ubuntu 14.04). I followed this guide for one app with success. My app1 is working fine, but not app2.
The error is this one (/var/log/nginx/error.log):
directory index of "/srv/app1/public/app2/" is forbidden
General nginx.conf
# Run nginx as www-data.
user www-data;
# One worker process per CPU core is a good guideline.
worker_processes 1;
# The pidfile location.
pid /var/run/nginx.pid;
# For a single core server, 1024 is a good starting point. Use `ulimit -n` to
# determine if your server can handle more.
events {
worker_connections 1024;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay off;
keepalive_timeout 65;
types_hash_max_size 2048;
server_tokens off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
gzip_http_version 1.1;
gzip_proxied any;
gzip_min_length 500;
gzip_types text/plain text/xml text/css
text/comma-separated-values text/javascript
application/x-javascript application/atom+xml;
##
# Unicorn Rails
##
# The socket here must match the socket path that you set up in unicorn.rb.
upstream unicorn_app2 {
server unix:/srv/app2/tmp/unicorn.app2.sock fail_timeout=0;
}
upstream unicorn_app1 {
server unix:/srv/app1/tmp/unicorn.app1.sock fail_timeout=0;
}
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
sites-available/app1
server {
listen 80;
server_name _
public.ip.of.vps; # Replace this with your site's domain.
keepalive_timeout 300;
client_max_body_size 4G;
root /srv/app1/public; # Set this to the public folder location of your Rails application.
location /app1 {
try_files $uri @unicorn_app1;
}
location @unicorn_app1 {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded_Proto $scheme;
proxy_redirect off;
# This passes requests to unicorn, as defined in /etc/nginx/nginx.conf
proxy_pass http://unicorn_app1;
proxy_read_timeout 300s;
proxy_send_timeout 300s;
auth_basic "Restricted"; #For Basic Auth
auth_basic_user_file /etc/nginx/.htpasswd; #For Basic Auth
}
location ~ ^/assets/ {
#gzip_static on; # to serve pre-gzipped version
expires max;
add_header Cache-Control public;
}
# You can override error pages by redirecting the requests to a file in your
# application's public folder, if you so desire:
error_page 500 502 503 504 /500.html;
location = /500.html {
root /srv/app1/public;
}
}
sites-available/app2
server {
listen 80;
server_name __
public.ip.of.vps; # Replace this with your site's domain.
keepalive_timeout 300;
client_max_body_size 4G;
root /srv/app2/public; # Set this to the public folder location of your Rails application.
location /app2 {
try_files $uri @unicorn_app2;
}
location @unicorn_app2 {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded_Proto $scheme;
proxy_redirect off;
# This passes requests to unicorn, as defined in /etc/nginx/nginx.conf
proxy_pass http://unicorn_app2;
proxy_read_timeout 300s;
proxy_send_timeout 300s;
auth_basic "Restricted"; #For Basic Auth
auth_basic_user_file /etc/nginx/.htpasswd; #For Basic Auth
}
location ~ ^/assets/ {
#gzip_static on; # to serve pre-gzipped version
expires max;
add_header Cache-Control public;
}
# You can override error pages by redirecting the requests to a file in your
# application's public folder, if you so desire:
error_page 500 502 503 504 /500.html;
location = /500.html {
root /srv/app2/public;
}
}
Why is nginx looking for app2 in the public folder of app1?
The problem is that your two nginx server blocks are listening for the same domain name.
Move the /app2 and @unicorn_app2 location blocks into sites-available/app1,
and delete sites-available/app2.
This answer shows an example.
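For instance, the combined sites-available/app1 could look roughly like this (a sketch reusing the paths from the question; each Rails app also has to be configured to run under its /app1 or /app2 sub-URI, e.g. via relative_url_root):

server {
    listen 80;
    server_name public.ip.of.vps;

    root /srv/app1/public;

    location /app1 {
        try_files $uri @unicorn_app1;
    }
    location @unicorn_app1 {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://unicorn_app1;
    }

    # app2's files live under a different root, so alias its assets explicitly
    # and proxy everything else under /app2 to its own unicorn upstream
    location /app2/assets/ {
        alias /srv/app2/public/assets/;
    }
    location /app2 {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://unicorn_app2;
    }
}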
So I just installed nginx, and everything works well at first: I get the default nginx welcome page on my VPS running Fedora 21. But when I try adding a virtual host for my Rails application, nginx is for some reason unable to load my virtual host file. It throws a permission denied error that I can't trace. I've tried everything I can think of, but nothing seems to change.
nginx.conf
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log;
#error_log /var/log/nginx/error.log notice;
#error_log /var/log/nginx/error.log info;
pid /run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
index index.html index.htm;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
# Load virtual host conf files. (line 44 below)
include /etc/nginx/sites-enabled/*;
server {
listen 80 default_server;
server_name localhost;
root /usr/share/nginx/html;
#charset koi8-r;
#access_log /var/log/nginx/host.access.log main;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
}
# redirect server error pages to the static page /40x.html
#
error_page 404 /404.html;
location = /40x.html {
}
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
}
myblog (/etc/nginx/sites-available/myblog)
upstream thin {
server 127.0.0.1:3000;
server 127.0.0.1:3001;
}
server {
listen 80;
# server_name www.sitename.com;
# rewrite ^/(.*) http://sitename.com/$1 permanent;
}
server {
listen 80;
# server_name sitename.com;
access_log /home/deployer/nginx-access.log;
error_log /home/deployer/nginx-error.log;
root /home/deployer/apps/myblog/current/public/;
index index.html;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
if (-f $request_filename/index.html) {
rewrite (.*) $1/index.html break;
}
if (-f $request_filename.html) {
rewrite (.*) $1.html break;
}
if (!-f $request_filename) {
proxy_pass http://thin;
break;
}
}
}
The sites-enabled folder is symlinked to the sites-available folder. On restarting nginx, I get this error:
nginx: [emerg] open() "/etc/nginx/sites-enabled/myblog" failed (13: Permission denied) in /etc/nginx/nginx.conf:44
Permissions of the myblog virtual host file
-rw-r--r--. 1 root root 1112 Jun 1 10:58 /etc/nginx/sites-available/myblog
Is there something I'm missing? I'm not sure why nginx cannot load my file. Thanks!
I have a client that wanted SSL on their site, so I got the certificate and set up the nginx conf with it (the config is below). If I don't point the root of the HTTPS server to the real server root, it works; but if I set the root to the site's files, HTTPS gets redirected to HTTP. No error messages.
Any ideas?
user www-data;
worker_processes 4;
error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
events {
worker_connections 1024;
}
http {
passenger_root /usr/local/rvm/gems/ruby-1.9.3-p448/gems/passenger-4.0.14;
passenger_ruby /usr/local/rvm/wrappers/ruby-1.9.3-p448/ruby;
include mime.types;
default_type application/octet-stream;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log logs/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
server {
listen 80;
server_name www.nope.se;
passenger_enabled on;
root /var/www/current/public/;
#charset koi8-r;
#access_log logs/host.access.log main;
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
#error_page 500 502 503 504 /50x.html;
#location = /50x.html {
# root html;
#}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
# another virtual host using mix of IP-, name-, and port-based configuration
#
#server {
# listen 8000;
# listen somename:8080;
# server_name somename alias another.alias;
# location / {
# root html;
# index index.html index.htm;
# }
#}
# HTTPS server
#
server {
listen 443;
server_name www.nope.se;
ssl on;
ssl_certificate /opt/nginx/cert/www.nope.se.crt;
ssl_certificate_key /opt/nginx/cert/www.nope.se.key;
ssl_session_timeout 10m;
#ssl_protocols SSLv2 SSLv3 TLSv1;
#ssl_ciphers HIGH:!aNULL:!MD5;
#ssl_prefer_server_ciphers on;
passenger_enabled on;
root /var/www/current/public/;
# location / {
# root html;
# index index.html index.htm;
# }
}
}
I honestly do not understand your question, but here is some general guidance on how a typical nginx HTTPS configuration is done. I hope you find it useful.
SSL is a protocol that works one layer below HTTP. Think of it as a tunnel inside which the HTTP protocol travels. Hence your SSL certificates are loaded, no matter where you specify them, before any HTTP-related configuration. This is also the reason why there should be only one SSL setting per nginx instance.
I recommend that you move your SSL certificate related logic into a separate server block, like this:
server {
listen 443 ssl default_server;
ssl_certificate ssl/website.pem;
ssl_certificate_key ssl/website.key;
ssl_trusted_certificate ssl/ca.all.pem;
ssl_session_cache builtin:1000 shared:SSL:10m;
ssl_session_timeout 5m;
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; # default on newer versions
ssl_prefer_server_ciphers on;
# The following is all one long line. We use an explicit list of ciphers to enable
# forward secrecy without exposing ciphers vulnerable to the BEAST attack
ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-RC4-SHA:ECDHE-RSA-AES128-SHA:RC4-SHA:RC4-MD5:ECDHE-RSA-AES256-SHA:AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:DES-CBC3-SHA:AES128-SHA;
# The following is for reference. It needs to be specified again
# in each virtualhost, in both HTTP and non-HTTP versions.
# All this directive does is tell the browser to use the HTTPS version of the site and remember this for a month
add_header Strict-Transport-Security max-age=2592000;
}
I also recommend that you set a 301 redirect in your non-https server block as shown below.
Change this:
server {
listen 80;
server_name www.nope.se;
...
}
to something like this:
server {
listen 80;
server_name www.nope.se;
add_header Strict-Transport-Security max-age=7200;
return 301 https://$host$request_uri;
}
With this in place, when a user visits http://www.nope.se they will be automatically redirected to https://www.nope.se.