Streaming mp4 in Chrome with rails, nginx and send_file - ruby-on-rails

I can't for the life of me stream an mp4 to Chrome with an HTML5 <video> tag. If I drop the file in public then everything is gravy and works as expected. But if I try to serve it using send_file, pretty much everything imaginable goes wrong. I am using a rails app that is proxied by nginx, with a Video model that has a location attribute that is an absolute path on disk.
At first I tried:
def show
  send_file Video.find(params[:id]).location
end
And I was sure I would be basking in the glory that is modern web development. Ha. This plays in both Chrome and Firefox, but neither can seek and neither has any idea how long the video is. I poked at the response headers and realized that Content-Type is being sent as application/octet-stream and there is no Content-Length set. Umm... wth?
Okay, I guess I can set those in rails:
def show
  video = Video.find(params[:id])
  response.headers['Content-Length'] = File.stat(video.location).size
  send_file(video.location, type: 'video/mp4')
end
At this point everything works pretty much as expected in Firefox. It knows how long the video is and seeking works as expected. Chrome appears to know how long the video is (doesn't show timestamps, but seek bar looks appropriate) but seeking doesn't work.
Apparently Chrome is pickier than Firefox. It requires that the server respond with an Accept-Ranges header with the value bytes, and respond to the subsequent requests (which happen when the user seeks) with 206 and the appropriate portion of the file.
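To make that concrete, a successful seek involves an exchange roughly like this (illustrative values, not captured from a real session):

GET /videos/1/001.mp4 HTTP/1.1
Range: bytes=1000000-

HTTP/1.1 206 Partial Content
Accept-Ranges: bytes
Content-Range: bytes 1000000-9999999/10000000
Content-Length: 9000000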
Okay, so I borrowed some code from here and then I had this:
video = Video.find(params[:id])
file_begin = 0
file_size = File.stat(video.location).size
file_end = file_size - 1

if !request.headers["Range"]
  status_code = :ok
else
  # Range header present: respond with 206 and advertise the requested range
  status_code = :partial_content
  match = request.headers['Range'].match(/bytes=(\d+)-(\d*)/)
  if match
    file_begin = match[1]
    file_end = match[2] if match[2] && !match[2].empty?
  end
  response.header["Content-Range"] = "bytes " + file_begin.to_s + "-" + file_end.to_s + "/" + file_size.to_s
end
response.header["Content-Length"] = (file_end.to_i - file_begin.to_i + 1).to_s
response.header["Accept-Ranges"] = "bytes"
response.header["Content-Transfer-Encoding"] = "binary"
send_file(video.location,
          :filename => File.basename(video.location),
          :type => 'video/mp4',
          :disposition => "inline",
          :status => status_code,
          :stream => 'true',
          :buffer_size => 4096)
Now Chrome attempts to seek, but when you do the video stops playing and never works again until the page reloads. Argh. So I decided to play around with curl to see what was happening and I discovered this:
$ curl --header "Range: bytes=200-400" http://localhost:8080/videos/1/001.mp4
ftypisomisomiso2avc1mp41 �moovlmvhd��#��trak\tkh��
$ curl --header "Range: bytes=1200-1400" http://localhost:8080/videos/1/001.mp4
ftypisomisomiso2avc1mp41 �moovlmvhd��#��trak\tkh��
No matter the byte range requested, the data always starts from the beginning of the file. The appropriate number of bytes is returned (201 bytes in this case), but it's always from the beginning of the file. Apparently nginx respects the Content-Length header but ignores the Content-Range header.
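In hindsight, the deeper problem is that send_file always streams the whole file; the headers above describe a range, but nothing actually serves one. If you really wanted Rails itself to honor the Range header (just to illustrate; this is not the fix I ended up with), you'd have to read and send the slice yourself, something like this hypothetical sketch:

# hypothetical sketch only: serve the requested slice from Rails itself,
# with file_begin/file_end parsed from the Range header as above
# (reads the slice into memory; fine for a sketch, not for production)
length = file_end.to_i - file_begin.to_i + 1
data = File.open(video.location, 'rb') do |f|
  f.seek(file_begin.to_i) # jump to the start of the requested range
  f.read(length)          # read just that many bytes
end
send_data(data, type: 'video/mp4', disposition: 'inline', status: :partial_content)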
My nginx.conf is untouched default:
user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 768;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    gzip on;
    gzip_disable "msie6";

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
and my app.conf is pretty basic:
upstream unicorn {
    server unix:/tmp/unicorn.app.sock fail_timeout=0;
}

server {
    listen 80 default deferred;
    root /vagrant/public;
    try_files $uri/index.html $uri @unicorn;

    location @unicorn {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://unicorn;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 5;
}
First I tried the nginx 1.4.x that comes with Ubuntu 14.04, then tried 1.7.x from a ppa - same results. I even tried apache2 and had exactly the same results.
I would like to reiterate that the video file is not the problem. If I drop it in public then nginx serves it with the appropriate mime types, headers and everything needed for Chrome to work properly.
So my question is a two-parter:
Why doesn't nginx/apache handle all this stuff automagically with send_file (X-Accel-Redirect/X-Sendfile) like it does when the file is served statically from public? Handling this stuff in rails is so backwards.
How the heck can I actually use send_file with nginx (or apache) so that Chrome will be happy and allow seeking?
Update 1
Okay, so I thought I'd try to take the complication of rails out of the picture and just see if I could get nginx to proxy the file correctly. So I spun up a dead-simple Node.js server:
var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, {
    'X-Accel-Redirect': '/path/to/file.mp4'
  });
  res.end();
}).listen(3000, '127.0.0.1');

console.log('Server running at http://127.0.0.1:3000/');
And Chrome is happy as a clam. =/ curl -I even shows that Accept-Ranges: bytes and Content-Type: video/mp4 are being inserted by nginx automagically, as they should be. What could Rails be doing that's preventing nginx from doing this?
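For reference, the check looked something like this (illustrative output, trimmed to the relevant headers):

$ curl -I http://localhost:8080/path/to/file.mp4
HTTP/1.1 200 OK
Server: nginx/1.7.9
Content-Type: video/mp4
Content-Length: 186884698
Accept-Ranges: bytes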
Update 2
I might be getting closer...
If I have:
def show
  video = Video.find(params[:id])
  send_file video.location
end
Then I get:
$ curl -I localhost:8080/videos/1/001.mp4
HTTP/1.1 200 OK
Server: nginx/1.7.9
Date: Sun, 18 Jan 2015 12:06:38 GMT
Content-Type: application/octet-stream
Connection: keep-alive
Status: 200 OK
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Content-Disposition: attachment; filename="001.mp4"
Content-Transfer-Encoding: binary
Cache-Control: private
Set-Cookie: request_method=HEAD; path=/
X-Meta-Request-Version: 0.3.4
X-Request-Id: cd80b6e8-2eaa-4575-8241-d86067527094
X-Runtime: 0.041953
And I have all the problems described above.
But if I have:
def show
  video = Video.find(params[:id])
  response.headers['X-Accel-Redirect'] = video.location
  head :ok
end
Then I get:
$ curl -I localhost:8080/videos/1/001.mp4
HTTP/1.1 200 OK
Server: nginx/1.7.9
Date: Sun, 18 Jan 2015 12:06:02 GMT
Content-Type: text/html
Content-Length: 186884698
Last-Modified: Sun, 18 Jan 2015 03:49:30 GMT
Connection: keep-alive
Cache-Control: max-age=0, private, must-revalidate
Set-Cookie: request_method=HEAD; path=/
ETag: "54bb2d4a-b23a25a"
Accept-Ranges: bytes
And everything works perfectly.
But why? Those should do exactly the same thing. And why doesn't nginx set Content-Type automagically here like it does for the simple Node.js example? I have config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect' set. I have moved it back and forth between application.rb and development.rb with the same results. I guess I never mentioned... this is Rails 4.2.0.
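For reference, this is the setting in question; Rails 4.2 generates it commented out in config/environments/production.rb, with one variant per server:

# config/environments/production.rb (or application.rb)
config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect' # for nginx
# config.action_dispatch.x_sendfile_header = 'X-Sendfile'     # for Apache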
Update 3
Now I've changed my unicorn server to listen on port 3000 (since I already pointed nginx at port 3000 for the Node.js example). Now I can make requests directly to unicorn (since it's listening on a port and not a socket), and I have found that curl -I directly against unicorn shows that no X-Accel-Redirect header is sent, and that curling unicorn directly actually sends the file. It's like send_file isn't doing what it's supposed to.

I finally have the answers to my original questions. I didn't think I'd ever get here. All my research had led to dead ends, hacky non-solutions and "it just works out of the box" (well, not for me).
Why doesn't nginx/apache handle all this stuff automagically with send_file (X-Accel-Redirect/X-Sendfile) like it does when the file is served statically from public? Handling this stuff in rails is so backwards.
They do, but they have to be configured properly to please Rack::Sendfile (see below). Trying to handle this in rails is a hacky non-solution.
How the heck can I actually use send_file with nginx (or apache) so that Chrome will be happy and allow seeking?
I got desperate enough to start poking around the Rack source code, and that's where I found my answer, in the comments of Rack::Sendfile. They are structured as documentation that you can find at rubydoc.
For whatever reason, Rack::Sendfile requires the front-end proxy to send an X-Sendfile-Type header. In the case of nginx it also requires an X-Accel-Mapping header. The documentation has examples for Apache and lighttpd as well.
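Paraphrased, the middleware behaves roughly like this (a simplified sketch, not the actual Rack source; map_accel_path stands in for the real mapping logic):

# simplified, paraphrased sketch of Rack::Sendfile's behavior, not the real code
class SendfileSketch
  def initialize(app)
    @app = app
  end

  def call(env)
    status, headers, body = @app.call(env)
    type = env['HTTP_X_SENDFILE_TYPE'] # injected by proxy_set_header upstream
    if type && body.respond_to?(:to_path) # send_file bodies respond to to_path
      path = body.to_path
      if type == 'X-Accel-Redirect'
        # nginx needs the file path translated into an internal URI,
        # using the prefix mapping announced in X-Accel-Mapping
        path = map_accel_path(env['HTTP_X_ACCEL_MAPPING'], path)
      end
      headers[type] = path              # e.g. X-Accel-Redirect: /files/var/data/001.mp4
      headers['Content-Length'] = '0'   # the proxy fills in the real length
      body = []                         # the proxy serves the bytes, not Ruby
    end
    [status, headers, body]
  end
end

Without the X-Sendfile-Type request header, the middleware does nothing, which is exactly the behavior I was seeing in Update 3.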
One would think the rails documentation could link to the Rack::Sendfile documentation since send_file does not work out of the box without additional configuration. Perhaps I'll submit a pull request.
In the end I only needed to add a couple lines to my app.conf:
upstream unicorn {
    server unix:/tmp/unicorn.app.sock fail_timeout=0;
}

server {
    listen 80 default deferred;
    root /vagrant/public;
    try_files $uri/index.html $uri @unicorn;

    location @unicorn {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Sendfile-Type X-Accel-Redirect; # ADDITION
        proxy_set_header X-Accel-Mapping /=/;              # ADDITION
        proxy_redirect off;
        proxy_pass http://localhost:3000;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 5;
}
Now my original code works as expected:
def show
  send_file(Video.find(params[:id]).location)
end
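And a range request through nginx now behaves (output along these lines; trimmed to the interesting headers):

$ curl -s -D - -o /dev/null -H "Range: bytes=200-400" http://localhost:8080/videos/1/001.mp4
HTTP/1.1 206 Partial Content
Content-Type: video/mp4
Content-Range: bytes 200-400/186884698
Content-Length: 201
Accept-Ranges: bytes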
Edit:
Although this worked initially, it stopped working after I restarted my vagrant box and I had to make further changes:
upstream unicorn {
    server unix:/tmp/unicorn.app.sock fail_timeout=0;
}

server {
    listen 80 default deferred;
    root /vagrant/public;
    try_files $uri/index.html $uri @unicorn;

    location ~ /files(.*) { # NEW
        internal;           # NEW
        alias $1;           # NEW
    }                       # NEW

    location @unicorn {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Sendfile-Type X-Accel-Redirect;
        proxy_set_header X-Accel-Mapping /=/files/; # CHANGED
        proxy_redirect off;
        proxy_pass http://localhost:3000;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 5;
}
I find this whole thing of mapping one URI to another and then mapping that URI to a location on disk to be totally unnecessary. It's useless for my use case and I'm just mapping one to another and back again. Apache and lighttpd don't require it. But at least it works.
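To spell out the round trip with a hypothetical path:

# hypothetical walk-through of the /=/files/ mapping above
send_file '/data/videos/001.mp4'
# 1. Rack::Sendfile applies the mapping "/" => "/files/":
#      X-Accel-Redirect: /files/data/videos/001.mp4
# 2. nginx matches  location ~ /files(.*)  with $1 = /data/videos/001.mp4
# 3. alias $1; serves the original file straight from disk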
I also added Mime::Type.register('video/mp4', :mp4) to config/initializers/mime_types.rb so the file is served with the correct mime type.

Related

Carrierwave + NGINX rails production images not displaying

I've deployed a rails app that allows a user to upload a photo and have it display on another page, fairly simple. I tested it in development: the image uploads to the public folder and displays properly. In production, the image uploads to the server but isn't being rendered on the page.
I get 404 errors only for the images I've uploaded; the path looks like this:
http://IP-OF-APP/uploads/blog/name-of-image.jpg
I read another S.O. post that mentioned setting the production config serve_static_files to true. This didn't fix the problem.
I thought it might be a server configuration issue with NGINX not picking up the uploads path. Here is my /sites-default/nginx.conf file.
upstream app {
    server unix:/home/deploy/MYAPPNAME/shared/tmp/sockets/puma.sock fail_timeout=0;
}

server {
    listen 80;
    server_name IP_ADDRESS_OF_SERVER;
    root /home/deploy/MYAPPNAME/public;

    location @app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://app;
        proxy_redirect off;
    }

    location ~ ^/(assets)/ {
        root /home/deploy/MYAPPNAME/shared/public;
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    location ~ ^/uploads/ {
        root /home/deploy/MYAPPNAME/shared/public;
        expires 24h;
        add_header Cache-Control public;
        break;
    }

    location ~ ^/(fonts|system)/|favicon.ico|robots.txt {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}
I also thought it might be that the image isn't uploading to the server correctly, so I SSH'd into the server and found the uploaded image living in:
home/deploy/MYAPPNAME/shared/public/uploads/blog/name-of-image.jpg
Also found what I assume to be the same image in:
home/deploy/MYAPPNAME/current/public/uploads/blog/name-of-image.jpg
My uploader looks like this:
class BlogUploader < CarrierWave::Uploader::Base
  include CarrierWave::MiniMagick

  storage :file

  def store_dir
    'uploads/blog'
  end

  def extension_whitelist
    %w(jpg jpeg gif png)
  end

  process :resize_to_fit => [825, 825]

  # titles are validated to be unique
  def filename
    "#{model.title}.#{file.extension}" if original_filename.present?
  end
end
using
Rails 4.2.6
Ruby 2.3.0
Carrierwave 0.11.0
EDIT
All of the other static images, CSS and JS are displaying and rendering properly.
The NGINX error when rendering the uploaded image is this:
2016/04/29 17:32:34 [error] 4993#0: *23 open() "/home/deploy/MYAPPNAME/shared/public/assets/uploads/blog/name-of-image.jpg" fails (2: no such file or directory), client: *******, server: SERVER_IP, request: "GET /assets/uploads/blog/name-of-image.jpg HTTP/1.1", host: "SERVER_IP"
From your log:
User is requesting:
/assets/uploads/blog/name-of-image.jpg
Nginx is looking for the image in:
/home/deploy/MYAPPNAME/shared/public/assets/uploads/blog/name-of-image.jpg
You confirmed that the image is in:
home/deploy/MYAPPNAME/shared/public/uploads/blog/name-of-image.jpg
Nginx is looking in public/assets/uploads, your file is in public/uploads.
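A common culprit is the view building the image path by hand, which routes the request through /assets. Letting CarrierWave generate the URL avoids the prefix; a sketch, assuming a blog model with the uploader mounted as image (names are guesses, not shown in the question):

<%# hypothetical view code; model/attribute names are assumptions %>
<%= image_tag @blog.image.url %> <%# yields /uploads/blog/name-of-image.jpg %>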

Rails + Puma + Nginx Every couple of days Bad Gateway 502

I have a rails app running on Nginx with Puma and like clockwork, every couple of days the app goes down with a 502 Bad Gateway error.
My nginx log contains lots of errors like this:
2015/07/23 14:43:49 [error] 14044#0: *7036 connect() to unix:///var/www/myapp/myapp_app.sock failed (111: Connection refused) while connecting to upstream, client: 12.123.12.12, server: myapp.com, request: "GET /arrangements HTTP/1.1", upstream: "http://unix:///var/www/myapp/myapp_app.sock:/arrangements", host: "myapp.com", referrer: "http://myapp.com/arrangements"
I have to restart Puma and everything works again... for a couple of days.
Any ideas how I can troubleshoot this? I'm new to nginx and puma.
/etc/nginx/sites-enabled/myapp.com
upstream myapp {
    server unix:///var/www/myapp/myapp_app.sock;
}

server {
    listen 80;
    server_name myapp.com;
    root /var/www/myapp/current/public;
    client_max_body_size 20M;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
        allow all;
        satisfy any;
    }

    location / {
        proxy_pass http://myapp; # match the name of the upstream directive defined above
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location ~* ^/assets/ {
        # Per RFC2616 - 1 year maximum expiry
        expires 1y;
        add_header Cache-Control public;

        # Some browsers still send conditional-GET requests if there's a
        # Last-Modified header or an ETag header even if they haven't
        # reached the expiry date sent in the Expires header.
        add_header Last-Modified "";
        add_header ETag "";
        break;
    }
}
The DigitalOcean network team has identified an issue with firmware running on a number of network switches within NYC3. This issue is causing intermittent loss of connectivity to customer droplets.
While the issue has been confirmed only in a subset of racks, we will be upgrading all switches running the affected firmware in NYC3. This maintenance will result in approximately ten minutes of downtime per rack at some point within the maintenance window as individual switches are upgraded.
Maintenance window:
2015-08-27 22:00 EDT - 2015-08-28 02:00 EDT
2015-08-28 02:00 UTC - 2015-08-28 06:00 UTC
We apologize for the inconvenience and appreciate your patience as we work to improve the reliability of our network.
I would give it a day or two and see if the problem you're having recurs, or simply disappears on its own.
Added/Edited
P.S. I just noticed a detail on the email,
Affected Droplets:
railsbox00
If you're getting the e-mails, then your droplet is affected by the firmware problem. Check your emails and see if they list your VPS; the list is at the bottom of the email.
I don't know if this question is still relevant, but what helped me greatly with this exact problem was to move the actual location of the puma.sock file to another directory. I picked the /tmp directory.
The socket used to be on a drive that was NFS-mounted to another server, and I believe that was the problem: some network hiccups here and there. I'm not sure exactly what it was, but since I moved puma.sock to /tmp, all the problems disappeared. For me.
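If you want to try the same thing, the socket path is set in Puma's config and has to match the nginx upstream (a minimal sketch; the filename is illustrative):

# config/puma.rb -- bind the socket under /tmp instead of an NFS-mounted path
# keep this in sync with the upstream block in the nginx config
bind 'unix:///tmp/myapp_puma.sock'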

Content-Length Header missing from Nginx-backed Rails app

I've a rails app that serves large static files to registered users. I was able to implement it by following the excellent guide here: Protected downloads with nginx, Rails 3.0, and #send_file. Downloads and everything else work great, but there is one problem: the Content-Length header isn't being sent.
It's okay for small files, but it gets really frustrating when large files are downloaded, since download managers and browsers don't show any progress. How can I fix this? Do I have to add something to my nginx configuration or do I have to pass along some other option to the send_file method in my rails controller? I have been searching online for quite some time but have been unsuccessful. Please Help! Thanks!
Here's my nginx.conf:
upstream unicorn {
    server unix:/tmp/unicorn.awesomeapp.sock fail_timeout=0;
}

server {
    listen 80 default_server deferred;
    # server_name example.com;
    root /home/deploy/apps/awesomeapp/current/public;

    location ~ /downloads/(.*) {
        internal;
        alias /home/deploy/uploads/$1;
    }

    location ^~ /assets/ {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    try_files $uri/index.html $uri @unicorn;

    location @unicorn {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Sendfile-Type X-Accel-Redirect;
        proxy_set_header X-Accel-Mapping /downloads/=/home/deploy/uploads/;
        proxy_pass http://unicorn;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 20M;
    keepalive_timeout 10;
}
Okay, here's something. I don't know if it's the right way or not, but I was able to fix the issue by manually sending the Content-Length header from my Rails controller. Here's what I'm doing:
def download
  @file = Attachment.find(params[:id])
  response.headers['Content-Length'] = @file.size.to_s
  send_file(@file.path, x_sendfile: true)
end
nginx should be able to set the header automatically. There must be something I'm missing; but until I find a 'proper' solution, I guess this will have to do.
P.S.: The header needs to be a string to work properly with some web servers, hence the .to_s.

nginx rewrite rules with Passenger

I'm trying to migrate to nginx from Apache, using Passenger in both instances to host a Rails app. The app takes a request for an image: if the image exists at /system/logos/$requestedimage then it should get served; otherwise the request should be allowed to hit the Rails app, which generates the image (and caches it to /system/logos).
In Apache I used the following:
RewriteCond %{DOCUMENT_ROOT}/system/logos/%{REQUEST_FILENAME} -f
RewriteRule ^/(.*)$ http://assets.clg.eve-metrics.com/system/logos/$1
This worked fine. The assets. subdomain is another subdomain with the same root, just with Passenger disabled; it's set up specifically for hosting static files (expires-wise).
In nginx I am using the following:
server {
    listen 80;
    passenger_enabled on;
    server_name clg.eve-metrics.com www.clg.eve-metrics.com;
    root /opt/www/clg/current/public;

    gzip on;
    gzip_min_length 1000;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain application/xml text/css application/javascript;
    gzip_disable "msie6";

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }

    if (-f $document_root/system/logos$request_filename) {
        rewrite ^/(.*)$ http://assets.clg.eve-metrics.com/system/logos/$1 break;
    }
}
This doesn't work so well. At all, in fact. It never redirects to the cached path and it never hits the Rails app. It's like nginx is assuming it's a static asset and not passing it on to Passenger. Is there a way to stop this behaviour so it hits the app?
My rails application is running on nginx and passenger. I have moved my rails cache directory from the default /public to /public/system/cache/. To make it work, I had to insert this into my vhost config file:
if (-f $document_root/system/cache/$uri/index.html) {
    rewrite (.*) /system/cache/$1/index.html break;
}

if (-f $document_root/system/cache/$uri.html) {
    rewrite (.*) /system/cache/$1.html break;
}
I remember that I too tried to make it work with $request_filename, but didn't get it to work. Try with $uri instead and see if it works :-)
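Applied to the logos setup above, that would be something like this (untested sketch, same idea with $uri in place of $request_filename):

if (-f $document_root/system/logos$uri) {
    rewrite ^/(.*)$ http://assets.clg.eve-metrics.com/system/logos/$1 break;
}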
James, please try this configuration file
https://gist.github.com/711913
and pay attention on this location config:
location ~* \.(png|gif|jpg|jpeg|css|js|swf|ico)(\?[0-9]+)?$ {
    access_log off;
    expires max;
    add_header Cache-Control public;
}
Passenger won't let Rails manage your asset files if the permissions are right (the user running nginx should have permission to access the files directly).

Nginx raises 404 when using format => 'js'

I upload images to my app using Ajax and an iframe. In development everything works like a charm, but in production Nginx suddenly raises a 404 error. When I look into the log, the request never hits the Rails app, so I guess it has something to do with my Nginx configuration (maybe gzip compression).
The failing request is sent to "/images.js".
Any ideas on how to solve this? Google couldn't help me...
My Nginx config:
server {
    listen 80;
    server_name www.myapp.de;
    root /var/www/apps/myapp/current/public; # <--- be sure to point to 'public'!
    passenger_enabled on;
    rails_env production;

    # set the rails expires headers: http://craigjolicoeur.com/blog/setting-static-asset-expires-headers-with-nginx-and-passenger
    location ~* \.(ico|css|js|gif|jp?g|png)(\?[0-9]+)?$ {
        expires max;
        break;
    }

    gzip on;
    gzip_http_version 1.0;
    gzip_vary on;
    gzip_comp_level 6;
    gzip_proxied any;
    gzip_types text/plain text/html text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    # make sure gzip does not lose large gzipped js or css files
    # see http://blog.leetsoft.com/2007/7/25/nginx-gzip-ssl
    gzip_buffers 16 8k;

    # this rewrites all the requests to the maintenance.html
    # page if it exists in the doc root. This is for Capistrano's
    # disable web task
    if (-f $document_root/system/maintenance.html) {
        rewrite ^(.*)$ /system/maintenance.html last;
        break;
    }

    # Set the max size for file uploads to 10Mb
    client_max_body_size 10M;

    error_page 500 502 503 504 /500.html;
    location = /500.html {
        root /var/www/apps/myapp/current/public;
    }
}
nginx will serve the request for /images.js from your root /var/www/apps/myapp/current/public since it matches
location ~* \.(ico|css|js|gif|jp?g|png)(\?[0-9]+)?$ {
    expires max;
    break;
}
(the break directive only applies to rewrite rules afaik so it can be removed)
If you want to serve /images.js from rails you need to enable rails for that location.
It's worth noting that any .json requests will get caught if you use the regex above. Add a $ to avoid that.
location ~* \.(ico|css|js$|gif|jp?g|png)(\?[0-9]+)?$ {
    expires max;
    break;
}
For this purpose I use these location directives:
location ~* ^.+\.(jpg|jpeg|gif|png|css|swf|ico|mov|flv|fla|pdf|zip|rar|doc|xls)$ {
    expires 12h;
    add_header Cache-Control private;
}

location ~* ^/javascripts.+\.js$ {
    expires 12h;
    add_header Cache-Control private;
}
I ran into a very similar issue and used format.json instead of format.js. This made a URL with a .json extension, but allowed me to leave my Nginx config unmodified.
