I have a download link that goes to a method in a controller which uses send_file so that I can rename the file (it is an MP3 with a UUID as its filename). After clicking the link I see the request in the NGINX and Rails logs, but it takes up to 90 seconds before the download begins. I have tried various settings for proxy_buffers and the client_*_buffer directives with no effect. I have an HTML5 audio player that uses the real URL for the file, and it streams the file right away with no delay.
My NGINX config:
upstream app {
server unix:/home/archives/app/tmp/unicorn.sock fail_timeout=0;
}
server {
listen 80 default deferred;
server_name archives.example.com;
root /home/archives/app/public/;
client_max_body_size 200M;
client_body_buffer_size 100M;
proxy_buffers 2 100M;
proxy_buffer_size 100M;
proxy_busy_buffers_size 100M;
try_files /maintenance.html $uri/index.html $uri.html $uri @production;
location @production {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-Sendfile-Type X-Accel-Redirect;
proxy_set_header X-Accel-Mapping /home/archives/app/public/uploads/audio/=/uploads/audio/;
proxy_redirect off;
proxy_pass http://app;
}
location ~ "^/assets/*" {
gzip_static on;
expires max;
add_header Cache-Control public;
}
location ~ (?:/\..*|~)$ {
access_log off;
log_not_found off;
deny all;
}
error_page 500 502 503 504 /500.html;
location = /500.html {
root /home/archives/app/public;
}
}
Rails controller:
def download
send_file @audio.path, type: @audio_content_type, filename: "#{@audio.title} - #{@audio.speaker.name}"
end
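(Note: for the X-Accel-Redirect handoff referenced in the nginx config above to actually take effect, Rails also has to be told to emit that header. A minimal sketch of the usual production setting, assuming a standard Rails setup:)

# config/environments/production.rb
# Make send_file reply with an X-Accel-Redirect header instead of streaming
# the file through the app server, so nginx serves the MP3 straight from disk.
config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect'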
Maybe it is slow because you have set an overly large proxy buffer? A 100M proxy buffer means your server may buffer up to 100M of data from the upstream before it starts sending anything to the client. The default is about 32kB in total, and something like 512kB would already be a generous value.
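(For comparison, more conventional sizes would be something in this ballpark; the numbers are purely illustrative, not a recommendation:)

proxy_buffer_size        16k;
proxy_buffers            8 32k;
proxy_busy_buffers_size  64k;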
After testing I found out it was Turbolinks causing the issue. It was doing an XHR request in the background, downloading the file first and only then letting the browser actually download it. After adding data-no-turbolink="true" to my link, files download instantly.
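(For illustration, the opt-out in ERB could look like the line below; download_audio_path and @audio are placeholders for whatever route and object the app really uses:)

<%= link_to "Download", download_audio_path(@audio), data: { no_turbolink: true } %>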
I have a Rails 5 app. I'm using the Carrierwave gem to allow image uploads to public/system/....
In reviewing the production app for performance tweaks, I realized that I had misconfigured nginx: it's only serving static files from /assets instead of from both /assets and /system.
What I have:
location ~ ^/assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
What I (think I) should have:
location ~ ^/(assets|system)/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
However, config.public_file_server.enabled = false is set in production.rb.
So now I'm confused: how is Rails serving these images? I'm assuming I have a (grossly) incomplete understanding of how the asset pipeline actually works.
Update: nginx config
upstream puma {
server unix:///home/deploy/apps/myapp/shared/sockets/mydomain.sock;
}
server {
listen 80 default;
server_name mydomain.com;
root /home/deploy/apps/myapp/current/public;
access_log /home/deploy/apps/myapp/shared/log/nginx.access.log;
error_log /home/deploy/apps/myapp/shared/log/nginx.error.log info;
location ~ ^/(assets|system)/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
location ~ ^/(robots.txt|sitemap.xml.gz)/ {
root /home/deploy/apps/myapp/current/public;
}
try_files $uri/index.html $uri @puma;
location @puma {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://puma;
}
location /cable {
proxy_pass http://puma;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
error_page 500 502 503 504 /500.html;
client_max_body_size 50M;
keepalive_timeout 10;
listen 443 ssl; # managed by Certbot
# ssl certificate info...
}
It would help to post the whole nginx configuration for this application. Rails will respect public_file_server when the files are served by Passenger, Puma, etc. However, that is easily overridden by nginx. The common nginx config line
root /home/rails/testapp/public;
basically tells nginx to serve /public "as it is" and makes public_file_server irrelevant (perhaps).
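(And that is what happens in the config you posted: the root plus try_files pair means nginx checks the disk first and only passes misses on to Puma, so files under public/system/ are served by nginx even though public_file_server is off. Shown again below with comments added for illustration:)

root /home/deploy/apps/myapp/current/public;
# Serve the request straight from disk if a matching file exists;
# only requests with no matching file fall through to the app server.
try_files $uri/index.html $uri @puma;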
I have my assets already precompiled; all of them are loading except the fonts, which are TTF files. The compiled assets also include the .ttf.gz files. I have already checked the names and I'm sure they match. My static images are loading just fine.
My nginx config looks like this
server {
listen 80;
root /home/usr/apps/web/public;
location / {
proxy_pass http://app;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
}
location ~* ^/assets/ {
gzip_static on;
expires 1y;
add_header Cache-Control public;
location ~* \.(eot|otf|ttf|woff|woff2)$ {
add_header Access-Control-Allow-Origin *;
}
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
This is what the console shows:
Failed to load resource: the server responded with a status of 404 (Not Found)
I just fixed it. It looks like the problem was in the precompilation: my fonts Sass was not properly configured to resolve the font paths.
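(For anyone hitting the same thing: the usual pattern is to go through the sass-rails asset helpers so the fingerprinted font paths resolve after precompilation. A minimal sketch; the font name and filename here are placeholders:)

// app/assets/stylesheets/fonts.scss
@font-face {
  font-family: 'MyFont';
  // font-url resolves to the digested path produced by the asset pipeline
  src: font-url('myfont.ttf') format('truetype');
}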
I'm trying to follow this article:
https://www.digitalocean.com/community/tutorials/how-to-deploy-a-rails-app-with-puma-and-nginx-on-ubuntu-14-04
with a fresh Amazon Linux EC2 instance. I'm using the out-of-the-box /etc/nginx/nginx.conf file, and added my config file at /etc/nginx/sites-available/default.
Puma seems to be running fine:
/home/ec2-user/flviewer/shared/log/puma_error.log: [8006] * Listening on unix:///home/ec2user/flviewer/shared/sockets/tmp/puma.sock
But this shows up in /var/log/nginx/error.log:
2016/12/12 05:33:00 [error] 11018#0: *1 open() "/usr/share/nginx/html/flviewer" failed (2: No such file or directory), client: 173.73.119.219, server: localhost, request: "GET /flviewer HTTP/1.1", host: "54.86.222.53"
Why the heck is it looking in '/usr/share/nginx/html/flviewer' when it should be looking at the socket I opened? Here is my config as dumped by nginx -T:
# configuration file /etc/nginx/sites-available/default:
upstream app {
# Path to Puma SOCK file, as defined previously
server unix:/home/ec2-user/flviewer/shared/tmp/sockets/puma.sock fail_timeout=0;
}
server {
listen 80;
server_name localhost;
root /home/ec2-user/flviewer/current/public;
try_files $uri/index.html $uri @app;
location @app {
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_redirect off;
#proxy_http_version 1.1;
proxy_set_header Connection '';
proxy_pass http://app;
#autoindex on;
}
location ~ ^/(assets|fonts|system)/|favicon.ico|robots.txt {
gzip_static on;
expires max;
add_header Cache-Control public;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
Nothing worked. I stripped /etc/nginx/nginx.conf down to just this and am up and running. I had to throw away all of the boilerplate that was in nginx.conf. This works:
config file:
# Run nginx as a normal console program, not as a daemon
daemon off;
user ec2-user;
# Log errors to stdout
error_log /dev/stdout info;
events {} # Boilerplate
http {
# Print the access log to stdout
access_log /dev/stdout;
# Tell nginx that there's an external server called "app" living at our socket
upstream app {
server unix:/home/ec2-user/flv/shared/tmp/sockets/puma.sock fail_timeout=0;
}
server {
# Accept connections on port 80
listen 80;
server_name localhost;
# Application root
root /home/ec2-user/flv/shared/public;
# If a path doesn't exist on disk, forward the request to @app
try_files $uri/index.html $uri @app;
# Set some configuration options on requests forwarded to @app
location @app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://app;
}
location ~ ^/(assets|fonts|system)/|favicon.ico|robots.txt {
gzip_static on;
expires max;
add_header Cache-Control public;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
}
I think it has to do with using the default nginx config file. Try moving /etc/nginx/sites-available/default to /etc/nginx/sites-enabled/flviewer.
$ mv /etc/nginx/sites-available/default /etc/nginx/sites-enabled/flviewer
Then test the config and restart nginx.
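(Something along these lines should do it; nginx -t just sanity-checks the configuration before the restart:)

$ sudo nginx -t
$ sudo service nginx restart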
I have been able to deploy my Rails app to a VPS using nginx, Unicorn and Capistrano with no errors. Now I want to deploy another Rails app, using the same kind of nginx config (both configs are below), on the same VPS. After running cap deploy:setup and cap deploy:cold it sets up correctly and the app is sent to the server. The problem I get is this: when I type service nginx restart I get the following error:
nginx: [emerg] duplicate upstream "unicorn" in /etc/nginx/sites-enabled/cf:1
nginx: configuration file /etc/nginx/nginx.conf test failed
My nginx config for the first app, which is currently running, is:
upstream unicorn {
server unix:/tmp/unicorn.cf.sock fail_timeout=0;
}
server {
listen 80 default deferred;
server_name cfmagazineonline.com;
root /home/deployer/apps/cf/current/public;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @unicorn;
location @unicorn {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://unicorn;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
My nginx config for the second Rails app, which fails to run and instead throws an error that takes the first app down, is:
upstream unicorn {
server unix:/tmp/unicorn.gutrees.sock fail_timeout=0;
}
server {
listen 80 default deferred;
server_name gutrees.com;
root /home/deployer/apps/gutrees/current/public;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @unicorn;
location @unicorn {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://unicorn;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
Any ideas how I can fix this issue and set up the virtual hosts correctly? Thank you.
Name your second app's upstream differently (upstream names must be unique across the whole nginx configuration). So instead of unicorn, use e.g. the name my_shiny_app_server, and then use that same name in the proxy_pass directive: proxy_pass http://my_shiny_app_server. Replace the my_shiny_app_server string with the name of your app, e.g. gutrees or cf.
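(For example, the second vhost might be reworked roughly like this, keeping everything else as you have it; gutrees is just the name suggested above:)

upstream gutrees {
  server unix:/tmp/unicorn.gutrees.sock fail_timeout=0;
}
server {
  # note: only one server block per port can carry "default", so drop it here
  listen 80;
  server_name gutrees.com;
  ...
  try_files $uri/index.html $uri @gutrees;
  location @gutrees {
    ...
    proxy_pass http://gutrees;
  }
}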
I have a Rails 3.1 app running in production using Nginx and Unicorn. And for some reason, my custom 404 and 500 html error pages are not showing. Instead I'm getting the actual error message ("Routing Error", for example).
In my production.rb file, I have config.consider_all_requests_local = false
And on the same server with a nearly identical configuration, I have a 'staging' site that works just fine. The only difference, as far as I can tell, is that the production one has SSL while the staging does not.
Here is the Nginx config for the production app:
upstream unicorn_myapp_prod {
server unix:/tmp/unicorn.myapp_prod.sock fail_timeout=0;
}
server {
listen 80;
server_name myapp.com;
root /home/deployer/apps/myapp_prod/current/public;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @unicorn;
location @unicorn {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://unicorn_myapp_prod;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
server {
listen 443 default;
ssl on;
ssl_certificate /home/deployer/apps/myapp_prod/shared/ssl_certs/myapp_prod.crt;
ssl_certificate_key /home/deployer/apps/myapp_prod/shared/ssl_certs/myapp_prod.key;
server_name myapp.com;
root /home/deployer/apps/myapp_prod/current/public;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @unicorn;
location @unicorn {
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://unicorn_myapp_prod;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
Any ideas? Thanks!
The https listener's location @unicorn block is missing the X-Forwarded-For directive:
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
It's in your http listener, but not the https listener. Without that header, Rails sees every request as coming from the proxy on localhost and treats it as a local request, so it renders the detailed debug pages instead of your custom error pages. Assuming Rails' force_ssl is successfully redirecting all of the http requests, so your only errors are happening over https, that would explain it.
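(Concretely, the https server's named location would become something like this, with the one added line marked:)

location @unicorn {
  proxy_set_header X-Forwarded-Proto $scheme;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;  # the missing line
  proxy_set_header Host $http_host;
  proxy_redirect off;
  proxy_pass http://unicorn_myapp_prod;
}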
Also, to be very clear, there is a well-known problem in Rack/Rails 3 with respect to routing errors, which you specifically mention.
https://rails.lighthouseapp.com/projects/8994/tickets/4444-can-no-longer-rescue_from-actioncontrollerroutingerror
If you're using HAProxy along with nginx and Unicorn (e.g. you're on Engine Yard), this fix won't be enough. You'll need to override Rails with something like this:
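# Make local? always false in production, so the public error pages are rendered
# there; in other environments it stays true and the debug pages still appear.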
class ActionDispatch::Request
def local?
Rails.env != 'production'
end
end
Good luck!
Not sure if this is applicable, but we also have a location block in our nginx config, after the error_page line, which sets where the /500.html page lives:
location = /500.html { root /path/to/rails/app/public; }
Obviously substitute the /path/to/rails/app portion with your own path.