Setting up HTTPS on Nginx/Rails - ruby-on-rails

I need to set up HTTPS for my website (Nginx, Rails 4). I followed the directions from this post.
I did everything up until the part that says "Configure your Nginx server to use the new key and certificate".
The problem is that I don't know exactly what the nginx.conf file should look like. I found something describing how to set it up for Rails and tried that, but nginx failed to restart. This is what I added to my file (and it didn't work):
server {
    listen 443;
    ssl on;

    # path to your certificate
    ssl_certificate /etc/nginx/ssl/mysite.com.unified.crt;
    # path to your ssl key
    ssl_certificate_key /etc/nginx/ssl.key;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # don't use SSLv3 ref: POODLE

    # put the rest of your server configuration here.
    #location / {
        # set X-FORWARDED_PROTO so ssl_requirement plugin works
        proxy_set_header X-FORWARDED_PROTO https;
        # standard rails+mongrel configuration goes here.
    }
}
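One thing that stands out in that snippet: the location / { line is commented out but its closing brace is not, so the braces don't balance and nginx will refuse to start on it. For reference, a working server block for this kind of setup looks roughly like the sketch below; the certificate/key paths, server name, and app upstream are assumptions, and the proxy settings depend on how the Rails app is actually served (Unicorn, Puma, Passenger, etc.).
server {
    listen 443 ssl;                  # newer style; "ssl on;" also works on older nginx
    server_name mysite.com;          # assumed

    ssl_certificate     /etc/nginx/ssl/mysite.com.unified.crt;
    ssl_certificate_key /etc/nginx/ssl/mysite.com.key;   # hypothetical key path
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;                 # no SSLv3, ref: POODLE

    location / {
        # tell Rails the original request was HTTPS
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
        proxy_pass http://app;       # assumed upstream pointing at the Rails server
    }
}
This assumes an upstream app { ... } block (a local port or a unix socket) defined alongside it.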

Related

Nginx + Puma + Sidekiq web interface not showing css styles

I have an AngularJS web app running on an Nginx server that sends requests to a Rails API running on a Puma server. I have integrated Sidekiq 5.2.8 and everything works great except the Sidekiq web interface.
In my Nginx config file, I have a rule to pass requests to the API. Here is the whole nginx.conf document:
events {
    worker_connections 1024;
}

http {
    upstream api.development {
        # Path to Puma SOCK file, as defined previously
        server unix:/tmp/puma.sock fail_timeout=0;
    }

    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    # set client body size to 10M #
    client_max_body_size 10M;
    gzip on;

    server {
        listen 80 default_server;
        listen [::]:80 default_server ipv6only=on;
        #charset koi8-r;
        #access_log logs/host.access.log main;
        root /Users/Rober/Projects/domain/dev/domain/app;
        index index.html index.htm;
        try_files $uri $uri/ /index.html =404;

        # Proxy requests to the backoffice Rails API
        location /api {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            #proxy_set_header X-Forwarded-Proto https;
            proxy_redirect off;
            rewrite ^/api(.*) /$1 break;
            proxy_pass http://api.development;
        }

        # Rule to proxy the sidekiq web UI
        location /sidekiq {
            proxy_pass http://api.development;
        }

        # Expire rules for static content
        # RCM: WPO
        # Images
        location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
            root /Users/Rober/Projects/domain/dev/domain/app;
            expires 1w;
            add_header Cache-Control "public";
        }

        # This rule is the root cause of the problems with the sidekiq css
        # I have commented it for testing purposes
        # CSS and Javascript
        #location ~* \.(?:css|js)$ {
        #    root /Users/Rober/Projects/domain/dev/domain/app;
        #    expires 1w;
        #    add_header Cache-Control "public";
        #}

        # I have replaced the previous location above with this, as suggested by @Beena Shetty.
        location ~* \.(?:css|js)$ {
            add_header X-debug-message "Into the location css" always;
            if ($uri !~* "^/sidekiq/\w*(.*)+$") {
                add_header X-debug-message "Into the location css if" always;
                root /Users/Rober/Projects/domain/dev/domain/app;
                expires 1w;
                add_header Cache-Control "public";
            }
        }

        # cache.appcache, your document html and data
        location ~* \.(?:manifest|appcache|html?|xml|json)$ {
            root /Users/Rober/Projects/domain/dev/domain/app;
            expires -1;
        }
    }

    include servers/*;
}
In Rails:
routes:
require 'sidekiq/web'
mount Sidekiq::Web => '/sidekiq'
I have included the following rule in the Nginx config file, and now when I request http://localhost/sidekiq I can see the web interface and navigate it, but I still cannot see the styles.
location /sidekiq {
    proxy_pass http://api.development;
}
See screenshot.
The dev tools show that when I load Sidekiq it tries to get bootstrap.css and some other CSS and JavaScript from URLs like http://localhost/sidekiq/stylesheets/bootstrap.css
What am I missing?
UPDATE:
I have found the root cause of the problem in my nginx.conf. I have the following rule, which sets a cache expiration time for performance purposes. If I comment this code out, everything works. But how can I have both things living together?
CSS and Javascript:
location ~* \.(?:css|js)$ {
    root /Users/Rober/Projects/domain/dev/domain/app;
    expires 1w;
    add_header Cache-Control "public";
}
UPDATE 2: Just in case the problem comes from somewhere else, I have included my whole nginx.conf above.
Now, with the provided config, the expiration rules in my web app are still working, but the CSS in the Sidekiq web UI is not.
I have included two headers for debugging: one added when the server hits the location rule, and a second added when it enters the if condition. When I request my home page at localhost and check the request for my own CSS, such as app.css, I can see the header X-debug-message: Into the location css if, which is correct.
If I request Sidekiq at localhost/sidekiq I still get a 404 error for the CSS, say http://localhost/sidekiq/stylesheets/bootstrap.css, and I can see the header X-debug-message: Into the location css.
Current conclusions:
As soon as I include the location ~* \.(?:css|js)$ rule, the Sidekiq CSS stops working, even if the rule is empty, like:
location ~* \.(?:css|js)$ {
}
As soon as I delete or comment out the whole rule, the Sidekiq CSS works perfectly, but unfortunately this is not compatible with the expires rules that we need for performance purposes.
Try this:
location ~* \.(?:css|js)$ {
    if ($uri !~* "^/sidekiq/\w*(.*)+$") {
        root /Users/Rober/Projects/domain/dev/yanpy/app;
        expires 1w;
        add_header Cache-Control "public";
    }
}
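A variant that avoids the if block entirely (a sketch, not tested against this exact setup): give the Sidekiq prefix a ^~ location. When the longest matching prefix location is marked ^~, nginx skips the regex locations, so the \.(?:css|js)$ rule never captures Sidekiq's asset URLs, while the expires rules keep applying to everything else. It would replace the existing location /sidekiq block:
location ^~ /sidekiq {
    # ^~ makes this prefix match final, so the css/js regex location is never
    # consulted for /sidekiq/stylesheets/..., /sidekiq/javascripts/..., etc.
    proxy_pass http://api.development;
}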
I wasn't able to find a fix for this, so I hacked around it with the following method: I copied Sidekiq's assets into the public folder and it started working (since they're referenced by the UI).
|- images
|-- favicon.ico
|-- logo.png
|-- status.png
|- javascripts
|-- application.js
|-- dashboard.js
|- stylesheets
|-- application.css
|-- application-rtl.css
|-- application-dark.css
|-- application-rtl.min.css
|-- bootstrap.css
Mainly, these files: https://github.com/mperham/sidekiq/tree/master/web/assets

Securing a docker registry with basic auth for push requests only

I am trying to set up a private Docker registry behind an nginx proxy that is read-only (i.e. allows pulls) for everyone but requires authentication for pushes. I have followed various guides but am still stumped. Below is my current nginx configuration:
events {
    worker_connections 1024;
}

http {
    upstream docker-registry {
        server registry:5000;
    }

    ## Set a variable to help us decide if we need to add the
    ## 'Docker-Distribution-Api-Version' header.
    ## The registry always sets this header.
    ## In the case of nginx performing auth, the header is unset
    ## since nginx is auth-ing before proxying.
    map $upstream_http_docker_distribution_api_version $docker_distribution_api_version {
        'registry/2.0' '';
        default registry/2.0;
    }

    server {
        listen 80;
        server_name docker-host.example.com;
        location / {
            rewrite ^(.*)$ https://docker-host.example.com$1 last;
        }
    }

    server {
        listen 443 ssl;
        server_name docker-host.example.com;

        ssl_certificate /etc/nginx/ssl/example.cert.pem;
        ssl_certificate_key /etc/nginx/ssl/example.key.pem;
        ssl_ciphers 'AES256+EECDH:AES256+EDH::!EECDH+aRSA+RC4:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS';
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

        client_max_body_size 0;

        location / {
            limit_except GET HEAD OPTIONS {
                auth_basic "Restricted";
                auth_basic_user_file /etc/nginx/users.pwd;
            }
            include proxy.conf;
        }
    }
}
It does allow anonymous pulls, but pushing always fails with 'unauthorized: authentication required'. If I remove the conditional limit_except, i.e. require authentication for all access, it works just fine after logging in.
When I remove the authentication configuration from nginx entirely, everything works as well, but obviously without authentication.
Any help or pointers would be greatly appreciated.
We have been using https://github.com/cesanta/docker_auth and it works pretty well; you can set up many authentication methods.
For more info, check
https://github.com/cesanta/docker_auth/blob/master/README.md
The "unauthorized: authentication required" error comes from the registry API. That means you have auth enabled in the registry itself. Either disable auth in the registry and use nginx basic auth only, or proxy-pass the "Authorization" header with the related data (tricky).

nginx return command returns https url without www

I'm trying to set up nginx to return an https URL for all http requests.
The problem is that it returns the https URL without the www, which results in an invalid URL.
Here is my config:
server {
    listen 80;
    server_name my_server;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443;
    ssl on;
    ssl_certificate /etc/ssl/my_pem.pem;
    ssl_certificate_key /etc/ssl/my_key.key;
    server_name my_server;
    access_log /var/log/nginx/my_log.access.log;
    ...
}
I've tried including www in the server_name and also specifying the explicit URL with www for the 301 return.
Everything resulted in an invalid URL.
I've noticed, though, that when I'm logged in to the application and I change https to http and trigger the request, the redirect works. When I'm logged out, the redirect fails and renders the https URL without the www.
Then I tried with only server_name, like so: return 301 https://$server_name, but that didn't work either.
I'd like users not to have to worry about the URL they type. The URL is put together as follows: www.one.two-three.com
<<< EDIT >>>
This works: http://www.one.two-three.com/some_request
and this doesn't: http://www.one.two-three.com
<<< EDIT >>>
<<< EDIT 1 >>>
By typing www.one.two-three.com in the URL line in Chrome/Chromium it redirects to https://www.one.two-three.com.
In Firefox it returns https://one.two-three.com
<<< EDIT 1 >>>
Can someone help with this?
Thank you.
Seba
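For context on why the www gets dropped (an educated guess, since the real names are masked as my_server): $server_name expands to the first name in the matched block's server_name directive, not to the hostname the client actually asked for, so a redirect built from it only keeps the www if the www name happens to be listed first. Using $host, or hard-coding the canonical host as the answer below does, sidesteps that. A sketch with hypothetical names:
server {
    listen 80;
    server_name one.two-three.com www.one.two-three.com;   # hypothetical order
    # $server_name is always "one.two-three.com" here (the first name), so
    # "return 301 https://$server_name$request_uri;" drops the www even when
    # the client asked for www. $host preserves the requested hostname instead:
    return 301 https://$host$request_uri;
    # ...and to force the www even for bare-domain requests, hard-code it,
    # as in the answer below: return 301 https://www.one.two-three.com$request_uri;
}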
The pattern I use to solve this has two parts. First, I set up explicit redirects from HTTP to the correct HTTPS URL, as well as from the bare-domain HTTPS hostnames to the "www" HTTPS URL. Second, this means I cannot rely on $server_name, so I accept a manageable bit of duplication in my config.
server {
    listen 80;
    server_name www.example.com example.com example.biz example.us www.example.biz www.example.us;
    return 301 https://www.example.com$request_uri;
}

server {
    listen 443;
    server_name example.com example.biz example.us www.example.biz www.example.us;
    ssl on;
    ssl_certificate /etc/ssl/com.example.crt;
    ssl_certificate_key /etc/ssl/com.example.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers RC4:HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    return 301 https://www.example.com$request_uri;
}

server {
    listen 443;
    server_name www.example.com;
    ...
}

Streaming mp4 in Chrome with rails, nginx and send_file

I can't for the life of me stream an mp4 to Chrome with an HTML5 <video> tag. If I drop the file in public then everything is gravy and works as expected. But if I try to serve it using send_file, pretty much everything imaginable goes wrong. I am using a Rails app proxied by nginx, with a Video model whose location attribute is an absolute path on disk.
At first I tried:
def show
  send_file Video.find(params[:id]).location
end
And I was sure I would be basking in the glory that is modern web development. Ha. This plays in both Chrome and Firefox, but neither can seek and neither has any idea how long the video is. I poked at the response headers and realized that Content-Type is being sent as application/octet-stream and there is no Content-Length set. Umm... wth?
Okay, I guess I can set those in rails:
def show
  video = Video.find(params[:id])
  response.headers['Content-Length'] = File.stat(video.location).size
  send_file(video.location, type: 'video/mp4')
end
At this point everything works pretty much as expected in Firefox: it knows how long the video is and seeking works. Chrome appears to know how long the video is (it doesn't show timestamps, but the seek bar looks right), but seeking doesn't work.
Apparently Chrome is pickier than Firefox. It requires that the server respond with an Accept-Ranges header whose value is bytes, and that it answer the follow-up requests made when the user seeks with 206 and the appropriate portion of the file.
Okay, so I borrowed some code from here and then I had this:
video = Video.find(params[:id])
file_begin = 0
file_size = File.stat(video.location).size
file_end = file_size - 1

if !request.headers["Range"]
  status_code = :ok
else
  status_code = :partial_content
  match = request.headers['Range'].match(/bytes=(\d+)-(\d*)/)
  if match
    file_begin = match[1]
    file_end = match[2] if match[2] && !match[2].empty?
  end
  response.header["Content-Range"] = "bytes " + file_begin.to_s + "-" + file_end.to_s + "/" + file_size.to_s
end

response.header["Content-Length"] = (file_end.to_i - file_begin.to_i + 1).to_s
response.header["Accept-Ranges"] = "bytes"
response.header["Content-Transfer-Encoding"] = "binary"

send_file(video.location,
          :filename => File.basename(video.location),
          :type => 'video/mp4',
          :disposition => "inline",
          :status => status_code,
          :stream => 'true',
          :buffer_size => 4096)
Now Chrome attempts to seek, but when you do, the video stops playing and never works again until the page reloads. Argh. So I decided to play around with curl to see what was happening, and I discovered this:
$ curl --header "Range: bytes=200-400" http://localhost:8080/videos/1/001.mp4
ftypisomisomiso2avc1mp41 �moovlmvhd��#��trak\tkh��
$ curl --header "Range: bytes=1200-1400" http://localhost:8080/videos/1/001.mp4
ftypisomisomiso2avc1mp41 �moovlmvhd��#��trak\tkh��
No matter the byte range requested, the data always starts at the beginning of the file. The appropriate number of bytes is returned (201 bytes in this case), but always from the start of the file. Apparently nginx respects the Content-Length header but ignores the Content-Range header.
My nginx.conf is untouched default:
user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 768;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    gzip on;
    gzip_disable "msie6";

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
and my app.conf is pretty basic:
upstream unicorn {
    server unix:/tmp/unicorn.app.sock fail_timeout=0;
}

server {
    listen 80 default deferred;
    root /vagrant/public;
    try_files $uri/index.html $uri @unicorn;

    location @unicorn {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header HOST $http_host;
        proxy_redirect off;
        proxy_pass http://unicorn;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 5;
}
First I tried the nginx 1.4.x that comes with Ubuntu 14.04, then tried 1.7.x from a ppa - same results. I even tried apache2 and had exactly the same results.
I would like to reiterate that the video file is not the problem. If I drop it in public then nginx serves it with the appropriate mime types, headers and everything needed for Chrome to work properly.
So my question is a two-parter:
Why doesn't nginx/apache handle all this stuff automagically with send_file (X-Accel-Redirect/X-Sendfile) like it does when the file is served statically from public? Handling this stuff in rails is so backwards.
How the heck can I actually use send_file with nginx (or apache) so that Chrome will be happy and allow seeking?
Update 1
Okay, so I thought I'd take the complication of Rails out of the picture and just see if I could get nginx to proxy the file correctly. So I spun up a dead-simple Node.js server:
var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, {
    'X-Accel-Redirect': '/path/to/file.mp4'
  });
  res.end();
}).listen(3000, '127.0.0.1');

console.log('Server running at http://127.0.0.1:3000/');
And Chrome is happy as a clam. =/ curl -I even shows that Accept-Ranges: bytes and Content-Type: video/mp4 are being inserted by nginx automagically - as they should be. What could Rails be doing that prevents nginx from doing this?
Update 2
I might be getting closer...
If I have:
def show
  video = Video.find(params[:id])
  send_file video.location
end
Then I get:
$ curl -I localhost:8080/videos/1/001.mp4
HTTP/1.1 200 OK
Server: nginx/1.7.9
Date: Sun, 18 Jan 2015 12:06:38 GMT
Content-Type: application/octet-stream
Connection: keep-alive
Status: 200 OK
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Content-Disposition: attachment; filename="001.mp4"
Content-Transfer-Encoding: binary
Cache-Control: private
Set-Cookie: request_method=HEAD; path=/
X-Meta-Request-Version: 0.3.4
X-Request-Id: cd80b6e8-2eaa-4575-8241-d86067527094
X-Runtime: 0.041953
And I have all the problems described above.
But if I have:
def show
  video = Video.find(params[:id])
  response.headers['X-Accel-Redirect'] = video.location
  head :ok
end
Then I get:
$ curl -I localhost:8080/videos/1/001.mp4
HTTP/1.1 200 OK
Server: nginx/1.7.9
Date: Sun, 18 Jan 2015 12:06:02 GMT
Content-Type: text/html
Content-Length: 186884698
Last-Modified: Sun, 18 Jan 2015 03:49:30 GMT
Connection: keep-alive
Cache-Control: max-age=0, private, must-revalidate
Set-Cookie: request_method=HEAD; path=/
ETag: "54bb2d4a-b23a25a"
Accept-Ranges: bytes
And everything works perfectly.
But why? Those should do exactly the same thing. And why doesn't nginx set Content-Type automagically here like it does for the simple Node.js example? I have config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect' set. I have moved it back and forth between application.rb and development.rb with the same results. I guess I never mentioned... this is Rails 4.2.0.
Update 3
Now I've changed my unicorn server to listen on port 3000 (since I had already pointed nginx at port 3000 for the Node.js example). Because I can now make requests directly to unicorn (it's listening on a port rather than a socket), I have found that curl -I against unicorn shows no X-Accel-Redirect header being sent, and curling unicorn directly actually sends the file. It's like send_file isn't doing what it's supposed to.
I finally have the answers to my original questions. I didn't think I'd ever get here. All my research had led to dead ends, hacky non-solutions and "it just works out of the box" (well, not for me).
Why doesn't nginx/apache handle all this stuff automagically with send_file (X-Accel-Redirect/X-Sendfile) like it does when the file is served statically from public? Handling this stuff in rails is so backwards.
They do, but they have to be configured properly to please Rack::Sendfile (see below). Trying to handle this in rails is a hacky non-solution.
How the heck can I actually use send_file with nginx (or apache) so that Chrome will be happy and allow seeking?
I got desperate enough to start poking around the Rack source code, and that's where I found my answer, in the comments of Rack::Sendfile. They are structured as documentation that you can find at rubydoc.
For whatever reason, Rack::Sendfile requires the front-end proxy to send an X-Sendfile-Type header. In the case of nginx it also requires an X-Accel-Mapping header. The documentation has examples for Apache and lighttpd as well.
One would think the Rails documentation would link to the Rack::Sendfile documentation, since send_file does not work out of the box without additional configuration. Perhaps I'll submit a pull request.
In the end I only needed to add a couple lines to my app.conf:
upstream unicorn {
    server unix:/tmp/unicorn.app.sock fail_timeout=0;
}

server {
    listen 80 default deferred;
    root /vagrant/public;
    try_files $uri/index.html $uri @unicorn;

    location @unicorn {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header HOST $http_host;
        proxy_set_header X-Sendfile-Type X-Accel-Redirect; # ADDITION
        proxy_set_header X-Accel-Mapping /=/;              # ADDITION
        proxy_redirect off;
        proxy_pass http://localhost:3000;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 5;
}
Now my original code works as expected:
def show
  send_file(Video.find(params[:id]).location)
end
Edit:
Although this worked initially, it stopped working after I restarted my vagrant box and I had to make further changes:
upstream unicorn {
    server unix:/tmp/unicorn.app.sock fail_timeout=0;
}

server {
    listen 80 default deferred;
    root /vagrant/public;
    try_files $uri/index.html $uri @unicorn;

    location ~ /files(.*) { # NEW
        internal;           # NEW
        alias $1;           # NEW
    }                       # NEW

    location @unicorn {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header HOST $http_host;
        proxy_set_header X-Sendfile-Type X-Accel-Redirect;
        proxy_set_header X-Accel-Mapping /=/files/; # CHANGED
        proxy_redirect off;
        proxy_pass http://localhost:3000;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 5;
}
I find this whole business of mapping one URI to another and then mapping that URI to a location on disk totally unnecessary. It's useless for my use case; I'm just mapping one to the other and back again. Apache and lighttpd don't require it. But at least it works.
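To make the two-step mapping concrete (the storage path below is hypothetical):
# With  proxy_set_header X-Accel-Mapping /=/files/;  Rack::Sendfile rewrites a
# real path such as /vagrant/storage/videos/001.mp4 into the header
#   X-Accel-Redirect: /files/vagrant/storage/videos/001.mp4
# and sends no body. nginx then picks that URI up here:
location ~ /files(.*) {
    internal;     # only reachable via X-Accel-Redirect, never directly by clients
    alias $1;     # $1 is the original filesystem path again: /vagrant/storage/videos/001.mp4
}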
I also added Mime::Type.register('video/mp4', :mp4) to config/initializers/mime_types.rb so the file is served with the correct mime type.

nginx with Passenger doesn't handle static assets

I have a Rails app running with nginx and Passenger, and I want to add a static page (containing a code coverage analysis tool, SimpleCov).
Locally this works fine (without Passenger), but on the server it doesn't work.
My nginx.conf:
#user nobody;
worker_processes 1;

events {
    worker_connections 1024;
    #speed up for linux 2.6+
    use epoll;
}

http {
    passenger_root /home/demo/.rvm/gems/ruby-1.9.3-p0@gm/gems/passenger-3.0.9;
    passenger_ruby /home/demo/.rvm/wrappers/ruby-1.9.3-p0@gm/ruby;

    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name *.dev.mysite.com;
        root /var/www/projects/mysite/qa/current/public;
        passenger_enabled on;
        rails_env qa;
        charset utf-8;
        error_log /var/www/projects/mysite/qa/shared/log/host.error.log;
    }

    #Coverage code tool (SimpleCov gem)
    server {
        listen 4444;
        server_name coverage.mysite.com;
        location / {
            root /var/lib/jenkins/jobs/WebForms/workspace/coverage;
            index index.html index.htm;
        }
    }

    #Yard server
    server {
        listen 5555;
        server_name yard.mysite.com;
        location / {
            proxy_pass http://127.0.0.1:8808;
        }
    }
}
And I receive nothing when I try to hit coverage.mysite.com:4444.
I think I remember coming across something similar to this on one of my Rails apps.
Have you tried commenting and uncommenting the lines below?
# in config/environments/production.rb
# Specifies the header that your server uses for sending files
#config.action_dispatch.x_sendfile_header = "X-Sendfile"
# For nginx:
config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect'
It should be near the top, around lines 12 through 16.
Try that, then redeploy and test in the browser.
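If the underlying goal is simply to have nginx serve the precompiled Rails assets itself rather than hand them to Passenger, a common pattern is an explicit assets location inside the Passenger server block - just a sketch, with the caching policy as an assumption:
location ~ ^/assets/ {
    root /var/www/projects/mysite/qa/current/public;  # same public/ root as the app
    gzip_static on;    # serve precompiled .gz files if present (needs the gzip_static module)
    expires max;
    add_header Cache-Control public;
}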
