I have a Rails 5 app. I'm using the Carrierwave gem to allow image uploads to public/system/....
While reviewing the production app for performance tweaks, I realized that I had misconfigured nginx and that it's only serving static files from /assets instead of from both /assets and /system.
What I have:
location ~ ^/assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
What I (think I) should have:
location ~ ^/(assets|system)/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
However, config.public_file_server.enabled = false is set in production.rb.
So now I'm confused: how is Rails serving these images? I'm assuming I have a (grossly) incomplete understanding of how the asset pipeline actually works.
Update: nginx config
upstream puma {
server unix:///home/deploy/apps/myapp/shared/sockets/mydomain.sock;
}
server {
listen 80 default;
server_name mydomain.com;
root /home/deploy/apps/myapp/current/public;
access_log /home/deploy/apps/myapp/shared/log/nginx.access.log;
error_log /home/deploy/apps/myapp/shared/log/nginx.error.log info;
location ~ ^/(assets|system)/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
location ~ ^/(robots.txt|sitemap.xml.gz)/ {
root /home/deploy/apps/myapp/current/public;
}
try_files $uri/index.html $uri @puma;
location @puma {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://puma;
}
location /cable {
proxy_pass http://puma;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
error_page 500 502 503 504 /500.html;
client_max_body_size 50M;
keepalive_timeout 10;
listen 443 ssl; # managed by Certbot
# ssl certificate info...
}
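One thing I can do to check which process is actually serving a given file (just a debugging idea on my side, nothing that's already in the config) is to add a marker header in the static-file location and compare responses for /assets/... and /system/... URLs:
location ~ ^/(assets|system)/ {
    gzip_static on;
    expires max;
    add_header Cache-Control public;
    # hypothetical marker: present only when nginx served the file from disk
    # in this location; absent when the request was handled elsewhere
    # (e.g. proxied to @puma and answered by Rails)
    add_header X-Static-Hit nginx;
}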
It would help to post the whole nginx configuration for this application. Rails will respect public_file_server when the files are served by Passenger, Puma, etc. However, that can easily be overridden by nginx.
The common nginx config line
root /home/rails/testapp/public;
basically tells nginx to serve everything under /public as-is, which makes public_file_server irrelevant (perhaps).
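A minimal sketch of that flow (socket and paths are placeholders, not taken from your setup):
upstream app {
    # placeholder socket path
    server unix:/path/to/shared/sockets/puma.sock fail_timeout=0;
}
server {
    listen 80;
    # root points at Rails' public/ directory
    root /path/to/current/public;
    # nginx checks the filesystem first: if the requested URI exists under
    # public/ (compiled assets, Carrierwave uploads under /system, robots.txt),
    # it is served straight from disk and Rails never sees the request, so
    # config.public_file_server.enabled never comes into play for it.
    # Only URIs with no matching file fall through to the app server.
    try_files $uri/index.html $uri @app;
    location @app {
        proxy_set_header Host $http_host;
        proxy_pass http://app;
    }
}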
Related
I'm trying to follow this article:
https://www.digitalocean.com/community/tutorials/how-to-deploy-a-rails-app-with-puma-and-nginx-on-ubuntu-14-04
with a fresh Amazon Linux EC2 instance. I'm using the out-of-the-box /etc/nginx/nginx.conf file, and added my own config file at /etc/nginx/sites-available/default.
Puma seems to be running fine:
/home/ec2-user/flviewer/shared/log/puma_error.log: [8006] * Listening on unix:///home/ec2user/flviewer/shared/sockets/tmp/puma.sock
But this shows up in /var/log/nginx/error.log:
2016/12/12 05:33:00 [error] 11018#0: *1 open() "/usr/share/nginx/html/flviewer" failed (2: No such file or directory), client: 173.73.119.219, server: localhost, request: "GET /flviewer HTTP/1.1", host: "54.86.222.53"
Why the heck is it looking in '/usr/share/nginx/html/flviewer' when it should be looking at the socket I opened?
Here is my config as dumped by nginx -T:
# configuration file /etc/nginx/sites-available/default:
upstream app {
# Path to Puma SOCK file, as defined previously
server unix:/home/ec2-user/flviewer/shared/tmp/sockets/puma.sock fail_timeout=0;
}
server {
listen 80;
server_name localhost;
root /home/ec2-user/flviewer/current/public;
try_files $uri/index.html $uri @app;
location @app {
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_redirect off;
#proxy_http_version 1.1;
proxy_set_header Connection '';
proxy_pass http://app;
#autoindex on;
}
location ~ ^/(assets|fonts|system)/|favicon.ico|robots.txt {
gzip_static on;
expires max;
add_header Cache-Control public;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
Nothing worked. I stripped /etc/nginx/nginx.conf down to just this, and now I'm up and running. I had to throw away all of the boilerplate that was in nginx.conf. This works:
config file:
# Run nginx as a normal console program, not as a daemon
daemon off;
user ec2-user;
# Log errors to stdout
error_log /dev/stdout info;
events {} # Boilerplate
http {
# Print the access log to stdout
access_log /dev/stdout;
# Tell nginx that there's an external server called @app living at our socket
upstream app {
server unix:/home/ec2-user/flv/shared/tmp/sockets/puma.sock fail_timeout=0;
}
server {
# Accept connections on port 80
listen 80;
server_name localhost;
# Application root
root /home/ec2-user/flv/shared/public;
# If a path doesn't exist on disk, forward the request to @app
try_files $uri/index.html $uri @app;
# Set some configuration options on requests forwarded to @app
location @app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://app;
}
location ~ ^/(assets|fonts|system)/|favicon.ico|robots.txt {
gzip_static on;
expires max;
add_header Cache-Control public;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
}
I think it has to do with using the default nginx config file. Try moving /etc/nginx/sites-available/default to /etc/nginx/sites-enabled/flviewer.
$ mv /etc/nginx/sites-available/default /etc/nginx/sites-enabled/flviewer
Then reload and restart nginx.
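For what it's worth (this is my reading of the stock Amazon Linux nginx package, not something verified on your box): the distro nginx.conf ships its own server block, rooted at /usr/share/nginx/html and marked as the default for port 80. A request addressed by bare IP matches no server_name, so it lands in that default server, which is exactly the path in your error log. Making your own server block the default avoids that:
# Sketch: mark the Rails vhost as the catch-all for port 80 (and remove
# default_server from the stock server block, otherwise nginx will complain
# about a duplicate default server).
server {
    listen 80 default_server;
    server_name localhost;
    root /home/ec2-user/flviewer/current/public;
    # ... rest of the vhost exactly as posted above ...
}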
Hi, I am deploying a Rails application to a DigitalOcean VPS. I followed this blog: https://coderwall.com/p/yz8cha. Everything went well, but now the browser shows only a blank white page.
The nginx log file shows:
invalid host in upstream "/tmp/unicorn.testvpsdo.sock" in /etc/nginx/sites-enabled/testvpsdo:2
What causes this error?
This is my nginx.conf file:
upstream unicorn {
server unix:/tmp/unicorn.testvpsdo.sock fail_timeout=0;
}
server {
listen 80 default_server deferred;
# server_name example.com;
root /home/navin/apps/testvpsdo/current/public;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @unicorn;
location @unicorn {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://unicorn;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 20M;
keepalive_timeout 10;
}
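As an aside (the deployed file may of course differ from the paste above): "invalid host in upstream" is typically what nginx reports when the socket path is written without the unix: prefix, because it then tries to parse the path as a hostname. Compare:
upstream unicorn {
    # triggers: invalid host in upstream "/tmp/unicorn.testvpsdo.sock"
    # server /tmp/unicorn.testvpsdo.sock fail_timeout=0;

    # parsed as a Unix-domain socket, as intended
    server unix:/tmp/unicorn.testvpsdo.sock fail_timeout=0;
}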
This is my nginx.conf file
upstream unicorn {
server unix:/tmp/unicorn.blog.sock fail_timeout=0;
}
server {
listen 80 default deferred;
# server_name example.com;
root /home/deployer/apps/blog/current/public;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @unicorn;
location @unicorn {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://unicorn;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
I tried to add SSL: I redeployed and restarted nginx and the server. I undid everything, redeployed, and it was back up. Now I tried the same thing again: I tried to add SSL, it failed, I undid it, and now the server is down. Here is the GitHub with revisions!
I ran /etc/init.d/unicorn_blog start and restarted unicorn. It worked.
I have been able to deploy my Rails app to a VPS using nginx, Unicorn, and Capistrano with no errors. Now I want to deploy another Rails app, using the same nginx config (both scripts are below), on the same VPS. After running cap deploy:setup and cap deploy:cold it sets up correctly and the Rails app is sent to the server. The problem I get is this: when I type service nginx restart I get the following error:
nginx: [emerg] duplicate upstream "unicorn" in /etc/nginx/sites-enabled/cf:1
nginx: configuration file /etc/nginx/nginx.conf test failed
My nginx script for the first app, which is currently running, is:
upstream unicorn {
server unix:/tmp/unicorn.cf.sock fail_timeout=0;
}
server {
listen 80 default deferred;
server_name cfmagazineonline.com;
root /home/deployer/apps/cf/current/public;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @unicorn;
location @unicorn {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://unicorn;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
My nginx config for the second Rails app, which fails to run and instead throws an error that makes the first app crash, is:
upstream unicorn {
server unix:/tmp/unicorn.gutrees.sock fail_timeout=0;
}
server {
listen 80 default deferred;
server_name gutrees.com;
root /home/deployer/apps/gutrees/current/public;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @unicorn;
location @unicorn {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://unicorn;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
Any ideas how I can fix this issue and set up the virtual hosts correctly? Thank you.
Name your second app's upstream differently (upstream names need to be unique). So instead of unicorn, use e.g. the name my_shiny_app_server, and then use that name in the proxy_pass directive: proxy_pass http://my_shiny_app_server;. Replace the my_shiny part with the name of your app, e.g. gutrees or cf.
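A sketch of what the second vhost could look like with the upstream renamed (reusing the gutrees paths from the question; the assets location is unchanged and omitted here):
upstream unicorn_gutrees {
    server unix:/tmp/unicorn.gutrees.sock fail_timeout=0;
}
server {
    # note: only one server block per port may be the default,
    # so "default deferred" should stay on the first vhost only
    listen 80;
    server_name gutrees.com;
    root /home/deployer/apps/gutrees/current/public;
    try_files $uri/index.html $uri @unicorn_gutrees;
    location @unicorn_gutrees {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://unicorn_gutrees;
    }
    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}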
I have a Rails 3.1 app running in production using Nginx and Unicorn. And for some reason, my custom 404 and 500 html error pages are not showing. Instead I'm getting the actual error message ("Routing Error", for example).
In my production.rb file, I have config.consider_all_requests_local = false
And on the same server with a nearly identical configuration, I have a 'staging' site that works just fine. The only difference, as far as I can tell, is that the production one has SSL while the staging does not.
Here is the Nginx config for the production app:
upstream unicorn_myapp_prod {
server unix:/tmp/unicorn.myapp_prod.sock fail_timeout=0;
}
server {
listen 80;
server_name myapp.com;
root /home/deployer/apps/myapp_prod/current/public;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @unicorn;
location @unicorn {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://unicorn_myapp_prod;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
server {
listen 443 default;
ssl on;
ssl_certificate /home/deployer/apps/myapp_prod/shared/ssl_certs/myapp_prod.crt;
ssl_certificate_key /home/deployer/apps/myapp_prod/shared/ssl_certs/myapp_prod.key;
server_name myapp.com;
root /home/deployer/apps/myapp_prod/current/public;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @unicorn;
location @unicorn {
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://unicorn_myapp_prod;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
Any ideas? Thanks!
The https listener's location @unicorn block is missing the X-Forwarded-For directive:
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
It's in your http listener, but not the https listener.
Assuming that Rails' force_ssl is successfully redirecting all of the http requests and your only errors are happening on https requests, it seems that would explain it.
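In other words, the named location in the https server block would carry both forwarding headers, roughly:
# https listener's @unicorn block with both headers
# (the http listener already sends X-Forwarded-For)
location @unicorn {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://unicorn_myapp_prod;
}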
Also, to be very clear, there is a well-known problem in Rack/Rails 3 with respect to routing errors, which you specifically mention:
https://rails.lighthouseapp.com/projects/8994/tickets/4444-can-no-longer-rescue_from-actioncontrollerroutingerror
If you're using haproxy along with nginx and unicorn (e.g. you're on Engineyard), this fix won't be enough. You'll need to override Rails with something like this:
class ActionDispatch::Request
  # Behind haproxy the request can appear to come from 127.0.0.1, which makes
  # Rails treat it as "local" and show detailed errors. Force local? to false
  # in production so the public 404/500 pages are rendered instead.
  def local?
    Rails.env != 'production'
  end
end
Good luck!
Not sure if this is applicable, but we also have a line in our nginx config after the error_page directive which handles the location of the /500.html page:
location = /500.html { root /path/to/rails/app/public; }
Obviously, substitute the /path/to/rails/app portion with your own path.
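In context that looks roughly like this (the internal directive is an optional extra I would add so the error page can't be requested directly; it isn't part of the tip above):
error_page 500 502 503 504 /500.html;
location = /500.html {
    root /path/to/rails/app/public;
    # optional: only reachable via error_page, not as a direct request
    internal;
}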