Nginx, Passenger setup with RVM: Permissions problems & 403 error - ruby-on-rails

I'm trying to deploy a Rails application to an Ubuntu server running Passenger and nginx. The server was operational for about a year on Ruby 1.8, but I recently upgraded to 1.9.3 (this time using RVM) and as a result had to reinstall everything. I am currently running into two problems:
403 forbidden error
I was able to start the nginx server, but when I try to access the rails application, I receive a 403 Forbidden error that reads:
*2177 directory index of "/srv/myapp/public/" is forbidden
I did some looking around in the nginx help docs and made sure that the /srv/myapp/ directory has the proper permissions. It is owned by the deploy user that owns the nginx worker process, and is set to chmod 755.
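Note that 755 on /srv/myapp alone isn't enough: the nginx worker needs the execute (search) bit on every directory in the chain — /, /srv, /srv/myapp, and /srv/myapp/public. A rough sketch of such a check (check_traversable is a hypothetical helper, not an nginx tool):

```shell
# Flag every component of a path that lacks the "others execute" bit,
# which the nginx worker needs in order to traverse into public/.
check_traversable() {
  local acc="" part mode
  IFS='/' read -ra parts <<< "$1"
  for part in "${parts[@]}"; do
    [ -z "$part" ] && continue
    acc="$acc/$part"
    mode=$(stat -c '%a' "$acc") || return 1
    if [ $(( 0$mode & 1 )) -eq 0 ]; then
      echo "not world-traversable: $acc"
    fi
  done
}

# Usage: check_traversable /srv/myapp/public
```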
Nginx and Passenger installation problems
When I restart the nginx server, I also receive another error indicating a problem with my Phusion Passenger installation:
Unable to start the Phusion Passenger watchdog because its executable (/usr/local/rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.13/agents/PassengerWatchdog) does not exist. This probably means that your Phusion Passenger installation is broken or incomplete, or that your 'passenger_root' directive is set to the wrong value. Please reinstall Phusion Passenger or fix your 'passenger_root' directive, whichever is applicable.
I reinstalled the passenger gem from my non-root (but sudo-enabled) user, and reinstalled nginx using rvmsudo passenger-install-nginx-module, but then I get this error repeatedly:
Your RVM wrapper scripts are too old. Please update them first by running 'rvm get head && rvm reload && rvm repair all'.
I performed the RVM reload (both with rvmsudo and without) and the error still appears. I tried performing the nginx install without rvmsudo, but ran into permissions problems because I couldn't edit the /opt/nginx/ directory (where I have nginx installed). Now I don't even get that far, because the installer fails to pass the required software check.
Background information
This is what my nginx process currently looks like:
PID PPID USER %CPU VSZ WCHAN COMMAND
10174 1 root 0.0 18480 ? nginx: master process /opt/nginx/sbin/nginx
29418 10174 deploy 0.3 18496 ? nginx: worker process
29474 12266 1001 0.0 4008 - egrep (nginx|PID)
My installation process
I've been documenting my installation process in a step-by-step guide for future reference. Please take a look to see how I have my new installation set up.

Try reinstalling your whole environment following this guide, put your Rails application in /var/rails/your_app_dir, and set these permissions:
sudo chown deployment_user:www-data -R /var/rails/your_app_dir
Use this nginx.conf example to put all pieces together:
user www-data;
worker_processes 1;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;

events {
    worker_connections 1024;
}

http {
    passenger_root /usr/local/rvm/gems/ruby-2.0.0-p195/gems/passenger-4.0.5;
    passenger_ruby /usr/local/rvm/wrappers/ruby-2.0.0-p195/ruby;

    include mime.types;
    default_type application/octet-stream;

    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';

    log_format gzip '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $bytes_sent '
                    '"$http_referer" "$http_user_agent" "$gzip_ratio"';

    access_log logs/access.log gzip buffer=512k;

    server_names_hash_bucket_size 64;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 65;

    #gzip on;

    ##
    # Virtual Host Configs
    ##
    #include /opt/nginx/conf/*.conf;
    include /opt/nginx/conf/sites-enabled/*;
}
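With the sites-enabled include above, each app then gets its own vhost file; a minimal sketch (the filename and domain below are hypothetical, the app path is the one recommended above):

```nginx
# /opt/nginx/conf/sites-enabled/your_app  (hypothetical path)
server {
    listen 80;
    server_name your-domain.example;

    # root must point at the app's public/ directory, not the app root
    root /var/rails/your_app_dir/public;

    passenger_enabled on;
}
```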

Related

Rails + Passenger + Nginx: "404 Not Found" for second app

I'm trying to deploy a second app to Digital Ocean.
I successfully deployed the first app with this tutorial: https://www.phusionpassenger.com/library/walkthroughs/deploy/ruby/digital_ocean/integration_mode.html
I added a second app to the same place following the same tutorial. When I try to visit the second app, I get the message "404 Not Found" and the log says:
2021/08/09 11:39:43 [error] 43452#43452: *21 "/var/www/philosophische_insel/public/index.html" is not found (2: No such file or directory)
There is a troubleshooting-guide for this exact problem: https://www.phusionpassenger.com/docs/advanced_guides/troubleshooting/nginx/troubleshooting/node/
Here is what I tried so far:
To "Cause and solution #1"
I added "passenger_enabled on;":
# cat /etc/nginx/sites-enabled/philosophische_insel.conf
server {
    listen 80;
    server_name philosophische-insel.ch www.philosophische-insel.ch;

    # Tell Nginx and Passenger where your app's 'public' directory is
    root /var/www/philosopische_insel/public;

    # Turn on Passenger
    passenger_enabled on;
    passenger_ruby /home/sandro/.rvm/gems/ruby-3.0.0/wrappers/ruby;
}
To "Cause and solution #2"
passenger_root is set to: /usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini;
cat /etc/nginx/conf.d/mod-http-passenger.conf
### Begin automatically installed Phusion Passenger config snippet ###
passenger_root /usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini;
passenger_ruby /usr/bin/passenger_free_ruby;
### End automatically installed Phusion Passenger config snippet ###
It is the same as the output of passenger-config --root:
/usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini
To "Cause and solution #3"
I tried to find some errors but I was not successful.
When I reload nginx and check error.log, I get this:
[ N 2021-08-09 12:10:57.2432 44738/T1 age/Wat/WatchdogMain.cpp:1373 ]: Starting Passenger watchdog...
[ N 2021-08-09 12:10:57.2904 44741/T1 age/Cor/CoreMain.cpp:1340 ]: Starting Passenger core...
[ N 2021-08-09 12:10:57.2905 44741/T1 age/Cor/CoreMain.cpp:256 ]: Passenger core running in multi-application mode.
[ N 2021-08-09 12:10:57.3033 44741/T1 age/Cor/CoreMain.cpp:1015 ]: Passenger core online, PID 44741
[ N 2021-08-09 12:10:59.4811 44741/T5 age/Cor/SecurityUpdateChecker.h:519 ]: Security update check: no update found (next check in 24 hours)
2021/08/09 12:11:03 [error] 44756#44756: *1 "/var/www/philosopische_insel/public/index.html" is not found (2: No such file or directory), client: 87.245.104.21, server: philosophische-insel.ch, request: "GET / HTTP/1.1", host: "www.philosophische-insel.ch"
passenger-status only shows the first app
I don't know if it is important, but passenger-status only shows the first app, not the second, under Application groups:
----------- General information -----------
Max pool size : 6
App groups : 1
Processes : 1
Requests in top-level queue : 0
----------- Application groups -----------
/var/www/dialectica (production):
App root: /var/www/dialectica
Requests in queue: 0
* PID: 45528 Sessions: 0 Processed: 1 Uptime: 1m 57s
CPU: 0% Memory : 34M Last used: 1m 57s ago
Further Information
The first app works. However, it uses a different Ruby version. Here is a comparison:
Ruby version:
First app:
ruby 2.6.3p62 (2019-04-16 revision 67580) [x86_64-linux]
Second app:
ruby 3.0.0p0 (2020-12-25 revision 95aff21468) [x86_64-linux]
Nginx configuration:
First app:
server {
    listen 80;
    server_name 159.65.120.231;

    # Tell Nginx and Passenger where your app's 'public' directory is
    root /var/www/dialectica/public;

    # Turn on Passenger
    passenger_enabled on;
    passenger_ruby /home/sandro/.rvm/gems/ruby-2.6.3/wrappers/ruby;
}
Second app:
server {
    listen 80;
    server_name philosophische-insel.ch www.philosophische-insel.ch;

    # Tell Nginx and Passenger where your app's 'public' directory is
    root /var/www/philosopische_insel/public;

    # Turn on Passenger
    passenger_enabled on;
    passenger_ruby /home/sandro/.rvm/gems/ruby-3.0.0/wrappers/ruby;
}
At this point, I don't know how to proceed. Any ideas?
Assuming both apps are working fine, I have three recommendations:
Keep your root statements, and add passenger_app_root pointing at each app's real root path instead of its public path.
In your first app's config, you are effectively saying "send every request for 159.65.120.231:80 to the dialectica app". The problem is that your DNS also resolves philosophische-insel.ch to 159.65.120.231:80, so you will never be able to reach your second app. Try using a different port, or a different domain (or subdomains), in each app's config.
Remember to always check your Nginx config with sudo nginx -t and, if the config is fine, reload Nginx with sudo service nginx reload.
So the following could be one config for your server:
server {
    ## Any of the following should work
    ## Option 1: use a subdomain for this; remember that your DNS must be
    ## redirecting subdomains to this IP
    listen 80;
    server_name dialecta.philosophische-insel.ch www.dialecta.philosophische-insel.ch;

    ## Option 2: use a different domain. Also needs DNS config
    # listen 80;
    # server_name dialecta.ch www.dialecta.ch;

    ## Option 3: use a different port
    # listen 81;
    # server_name 159.65.120.231;

    passenger_enabled on;
    passenger_app_root /var/www/dialectica;
    passenger_ruby /home/sandro/.rvm/gems/ruby-2.6.3/wrappers/ruby;
    root /var/www/dialectica/public;
}

server {
    listen 80;
    server_name philosophische-insel.ch www.philosophische-insel.ch;

    passenger_enabled on;
    passenger_app_root /var/www/philosopische_insel;
    passenger_ruby /home/sandro/.rvm/gems/ruby-3.0.0/wrappers/ruby;
    root /var/www/philosopische_insel/public;
}
If you keep getting errors, please post the output of sudo nginx -t.

403 Forbidden on Rails app w/ Nginx, Passenger

First off, apologies: I know the 403 Forbidden question is a common one for Rails/Nginx installs, but none of the answers I've read so far have solved it for me.
Disclaimer: This is my first time deploying a Rails app somewhere that isn't Heroku. Please be gentle. ;)
Situation: I have a Rails app running on an Ubuntu 12.04 server, running Nginx (installed with Passenger).
I've deployed my app to my server correctly, but when I attempt to access the site, I receive a 403 Forbidden error.
Checking my error logs, I see:
2013/10/23 22:47:01 [error] 27954#0: *105 directory index of "/var/www/colepeters.com/current/public/" is forbidden, client: 50.3…server: colepeters.com, request: "GET / HTTP/1.1", host: "colepeters.com"
2013/10/23 22:47:10 [error] 27954#0: *106 directory index of "/var/www/colepeters.com/current/public/" is forbidden, client: 184…server: colepeters.com, request: "GET / HTTP/1.1", host: "colepeters.com"
2013/10/23 22:47:12 [error] 27954#0: *107 directory index of "/var/www/colepeters.com/current/public/" is forbidden, client: 151…server: colepeters.com, request: "GET / HTTP/1.1", host: "colepeters.com"
However, when checking permissions on this directory, I see that the user I set up to run Nginx has both read and execute permissions on it.
Here's the relevant info from my nginx.conf:
user XXXX;
worker_processes 1;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;

events {
    worker_connections 1024;
}

http {
    passenger_root /home/cole/.rvm/gems/ruby-2.0.0-p247/gems/passenger-4.0.21;
    passenger_ruby /home/cole/.rvm/wrappers/ruby-2.0.0-p247/ruby;

    include mime.types;
    default_type application/octet-stream;

    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log logs/access.log main;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 65;

    #gzip on;

    server {
        listen 80;
        server_name colepeters.com www.colepeters.com;
        passenger_enabled on;
        root /var/www/colepeters.com/current/public/;
        rails_env production;

        #charset koi8-r;
        #access_log logs/host.access.log main;

        location / {
            root /var/www/colepeters.com/current/public;
            index index.html index.htm;
            # autoindex on;
        }
    }
}
I would greatly appreciate any help on resolving this. Thanks!
UPDATE
I have since corrected the erroneous passenger_ruby path, but the 403 Forbidden persists, even after restarting Nginx.
You can check the path of your passenger installation with
passenger-config --root
and the path of your ruby installation with
which ruby
then compare them with the values in your nginx.conf.
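A quick way to compare is to pull the declared values straight out of the config; the helper below is just a sketch (the conf location /opt/nginx/conf/nginx.conf is an assumption — adjust to yours):

```shell
# Print the passenger_root / passenger_ruby values declared in an
# nginx config file, one "name value" pair per line.
extract_passenger_paths() {
  sed -n 's/^[[:space:]]*passenger_\(root\|ruby\)[[:space:]]\+\([^;]*\);.*/\1 \2/p' "$1"
}

# Usage: diff these against `passenger-config --root` and `which ruby`:
# extract_passenger_paths /opt/nginx/conf/nginx.conf
```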
Adding passenger_enabled on; to the server directive worked for me.
I got the same error. In my case, I fixed it by removing the location / {} entry.
Or make sure that your user has permission to access your Rails project:
...
server {
listen 80;
server_name 127.0.0.1;
passenger_enabled on;
rails_env production;
root /www/kalender/public ;
#charset koi8-r;
access_log /var/log/nginx/host.access.log;
#location / {
#root html;
#index index.html index.htm;
#}
I was running a similar setup to yours and having the same problem with my nginx.conf file. Stumbling across the Nginx pitfalls page helped me solve it.
Your file looks similar to mine, so I'll share two things you may want to try that worked for me:
First, you have the root path in both the server {} block and the location {} block. While not necessarily a problem, according to the docs linked above, "If you add a root to every location block then a location block that isn't matched will have no root." I got rid of the roots in the location blocks but kept the one in the server block.
Second, move the index directive (index index.html index.htm;) out of the location block and up into the http {} block; the location blocks will inherit it from there.
Doing those two things and restarting the server worked for me.
The problem lies in the location / {...} section: the passenger_enabled on directive doesn't propagate from the server {...} block into the location / {...} block.
If you either remove location / {...} or add passenger_enabled on to it, it should work.
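Concretely, either variant below should avoid the 403 (server names and paths copied from the question above):

```nginx
server {
    listen 80;
    server_name colepeters.com www.colepeters.com;
    root /var/www/colepeters.com/current/public;
    rails_env production;
    passenger_enabled on;

    # Variant A: no location / block at all -- Passenger serves the app.
    # Variant B: if you need the location block, repeat the directive inside:
    # location / {
    #     passenger_enabled on;
    # }
}
```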
You also have a config file for Passenger, called passenger.conf and located by default at /etc/nginx/conf.d/passenger.conf.
There you have to put the correct roots.
You can check the roots with these two commands:
passenger-config --root
and
which ruby
Once you have these roots, compare them with the ones in your passenger.conf file, which can look, for example, something like this:
# passenger-config --root
passenger_root /usr/share/ruby/vendor_ruby/phusion_passenger/locations.ini;
# which ruby
passenger_ruby /usr/local/rvm/rubies/ruby-2.4.0/bin/ruby;
passenger_instance_registry_dir /var/run/passenger-instreg;
If you go this way, don't forget to add this to the http section of your nginx.conf:
include /etc/nginx/conf.d/passenger.conf;
as well as inserting this in the server section:
passenger_enabled on;
The key things are:
Remove the location block for the / section, assuming that the Rails application is accessible at /.
Ensure that passenger_ruby points to the RVM wrapper script for the selected Ruby version.
Add execute permissions for user, group, and others to every directory on the path down to the /var/www/rails_app/public folder:
/var
/var/www
/var/www/rails_app
/var/www/rails_app/public
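If you'd rather not walk that chain by hand, namei (part of util-linux) prints owner and mode for every component at once:

```shell
# Show the permissions of each path component in one listing; any
# directory missing an execute bit for the nginx user stands out.
# "|| true" only keeps the command from failing where the path is absent.
namei -l /var/www/rails_app/public || true
```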
You are declaring the root twice, once inside the server block and again inside the location block, and you're also directing nginx to use the index directive. Also remove the "/" after the public folder.
Try this:
user XXXX;
worker_processes 1;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;

events {
    worker_connections 1024;
}

http {
    passenger_root /home/cole/.rvm/gems/ruby-2.0.0-p247/gems/passenger-4.0.21;
    passenger_ruby /home/cole/.rvm/wrappers/ruby-2.0.0-p247/ruby;

    include mime.types;
    default_type application/octet-stream;

    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log logs/access.log main;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 65;

    #gzip on;

    server {
        listen 80;
        server_name colepeters.com www.colepeters.com;
        passenger_enabled on;
        root /var/www/colepeters.com/current/public;
        rails_env production;

        #charset koi8-r;
        #access_log logs/host.access.log main;
    }
}

unicorn request queuing

We just migrated from Passenger to Unicorn to host a few Rails apps.
Everything works great, but we notice via New Relic that requests are queuing between 100 and 300ms.
Here's the graph:
I have no idea where this is coming from. Here's our unicorn conf:
current_path = '/data/actor/current'
shared_path = '/data/actor/shared'
shared_bundler_gems_path = "/data/actor/shared/bundled_gems"

working_directory '/data/actor/current/'
worker_processes 6
listen '/var/run/engineyard/unicorn_actor.sock', :backlog => 1024
timeout 60
pid "/var/run/engineyard/unicorn_actor.pid"
logger Logger.new("log/unicorn.log")
stderr_path "log/unicorn.stderr.log"
stdout_path "log/unicorn.stdout.log"

preload_app true

if GC.respond_to?(:copy_on_write_friendly=)
  GC.copy_on_write_friendly = true
end

before_fork do |server, worker|
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.connection.disconnect!
  end

  old_pid = "#{server.config[:pid]}.oldbin"
  if File.exists?(old_pid) && server.pid != old_pid
    begin
      sig = (worker.nr + 1) >= server.worker_processes ? :TERM : :TTOU
      Process.kill(sig, File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
      # someone else did our job for us
    end
  end
  sleep 1
end

if defined?(Bundler.settings)
  before_exec do |server|
    paths = (ENV["PATH"] || "").split(File::PATH_SEPARATOR)
    paths.unshift "#{shared_bundler_gems_path}/bin"
    ENV["PATH"] = paths.uniq.join(File::PATH_SEPARATOR)

    ENV['GEM_HOME'] = ENV['GEM_PATH'] = shared_bundler_gems_path
    ENV['BUNDLE_GEMFILE'] = "#{current_path}/Gemfile"
  end
end

after_fork do |server, worker|
  worker_pid = File.join(File.dirname(server.config[:pid]), "unicorn_worker_actor_#{worker.nr$
  File.open(worker_pid, "w") { |f| f.puts Process.pid }

  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.establish_connection
  end
end
Our nginx.conf:
user deploy deploy;
worker_processes 6;
worker_rlimit_nofile 10240;
pid /var/run/nginx.pid;

events {
    worker_connections 8192;
    use epoll;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    sendfile on;
    tcp_nopush on;

    server_names_hash_bucket_size 128;

    if_modified_since before;

    gzip on;
    gzip_http_version 1.0;
    gzip_comp_level 2;
    gzip_proxied any;
    gzip_buffers 16 8k;
    gzip_types application/json text/plain text/html text/css application/x-javascript t$
    # gzip_disable "MSIE [1-6]\.(?!.*SV1)";

    # Allow custom settings to be added to the http block
    include /etc/nginx/http-custom.conf;
    include /etc/nginx/stack.conf;
    include /etc/nginx/servers/*.conf;
}
And our app-specific nginx conf:
upstream upstream_actor_ssl {
    server unix:/var/run/engineyard/unicorn_actor.sock fail_timeout=0;
}

server {
    listen 443;
    server_name letitcast.com;

    ssl on;
    ssl_certificate /etc/nginx/ssl/letitcast.crt;
    ssl_certificate_key /etc/nginx/ssl/letitcast.key;
    ssl_session_cache shared:SSL:10m;

    client_max_body_size 100M;

    root /data/actor/current/public;

    access_log /var/log/engineyard/nginx/actor.access.log main;
    error_log /var/log/engineyard/nginx/actor.error.log notice;

    location @app_actor {
        include /etc/nginx/common/proxy.conf;
        proxy_pass http://upstream_actor_ssl;
    }

    include /etc/nginx/servers/actor/custom.conf;
    include /etc/nginx/servers/actor/custom.ssl.conf;

    if ($request_filename ~* \.(css|jpg|gif|png)$) {
        break;
    }

    location ~ ^/(images|javascripts|stylesheets)/ {
        expires 10y;
    }

    error_page 404 /404.html;
    error_page 500 502 504 /500.html;
    error_page 503 /system/maintenance.html;
    location = /system/maintenance.html { }

    location / {
        if (-f $document_root/system/maintenance.html) { return 503; }
        try_files $uri $uri/index.html $uri.html @app_actor;
    }

    include /etc/nginx/servers/actor/custom.locations.conf;
}
We are not under heavy load, so I don't understand why requests are stuck in the queue.
As specified in the unicorn conf, we have 6 unicorn workers.
Any idea where this could be coming from?
Cheers
EDIT:
Average requests per minute: about 15 most of the time, more than 300 at peaks, but we haven't experienced a peak since the migration.
CPU Load average: 0.2-0.3
I tried with 8 workers, it didn't change anything.
I've also used raindrops to look at what the unicorn workers were up to.
Here's the ruby script :
#!/usr/bin/ruby
# this is used to show or watch the number of active and queued
# connections on any listener socket from the command line

require 'raindrops'
require 'optparse'
require 'ipaddr'

usage = "Usage: #$0 [-d delay] ADDR..."
ARGV.size > 0 or abort usage
delay = false

# "normal" exits when driven on the command-line
trap(:INT) { exit 130 }
trap(:PIPE) { exit 0 }

opts = OptionParser.new('', 24, '  ') do |opts|
  opts.banner = usage
  opts.on('-d', '--delay=delay') { |nr| delay = nr.to_i }
  opts.parse! ARGV
end

socks = []
ARGV.each do |f|
  if !File.exists?(f)
    puts "#{f} not found"
    next
  end

  if !File.socket?(f)
    puts "#{f} ain't a socket"
    next
  end

  socks << f
end

fmt = "% -50s % 10u % 10u\n"
printf fmt.tr('u','s'), *%w(address active queued)

begin
  stats = Raindrops::Linux.unix_listener_stats(socks)
  stats.each do |addr,stats|
    if stats.queued.to_i > 0
      printf fmt, addr, stats.active, stats.queued
    end
  end
end while delay && sleep(delay)
How I launched it:
./linux-tcp-listener-stats.rb -d 0.1 /var/run/engineyard/unicorn_actor.sock
So it basically checks every 1/10s whether there are requests in the queue, and if there are, it outputs:
the socket | the number of requests being processed | the number of requests in the queue
Here's a gist of the result :
https://gist.github.com/f9c9e5209fbbfc611cb1
EDIT2:
I tried to reduce the number of nginx workers to one last night but it didn't change anything.
For information, we are hosted on Engine Yard and have a High-CPU Medium Instance: 1.7 GB of memory, 5 EC2 Compute Units (2 virtual cores with 2.5 EC2 Compute Units each).
We host 4 Rails applications; this one has 6 workers, and we have one with 4, one with 2, and another with one. They've all been experiencing request queuing since we migrated to Unicorn.
I don't know if Passenger was cheating, but New Relic didn't log any request queuing when we were using it. We also have a Node.js app handling file uploads, a MySQL database, and 2 Redis instances.
EDIT 3:
We're using ruby 1.9.2p290, nginx 1.0.10, unicorn 4.2.1 and newrelic_rpm 3.3.3.
I'll try without New Relic tomorrow and will let you know the results here, but for the record: we were using Passenger with New Relic, the same versions of Ruby and nginx, and didn't have any issue.
EDIT 4:
I tried to increase the client_body_buffer_size and proxy_buffers with
client_body_buffer_size 256k;
proxy_buffers 8 256k;
But it didn't do the trick.
EDIT 5:
We finally figured it out ... drumroll ...
The winner was our SSL cipher. When we changed it to RC4, we saw request queuing drop from 100-300ms to 30-100ms.
I've just diagnosed a similar-looking New Relic graph as being entirely the fault of SSL. Try turning it off. We are seeing 400ms request queuing time, which drops to 20ms without SSL.
Some interesting points on why some SSL providers might be slow: http://blog.cloudflare.com/how-cloudflare-is-making-ssl-fast
What version of ruby, unicorn, nginx (shouldn't matter much but worth mentioning) and newrelic_rpm are you using?
Also, I would try running a baseline perf test without New Relic. New Relic parses the response, and there are cases where this can be slow due to the issue with 'rindex' in Ruby pre-1.9.3. This is usually only noticeable when your response is very large and doesn't contain 'body' tags (e.g. AJAX, JSON, etc.). I saw an example of this where a 1MB AJAX response was taking 30 seconds for New Relic to parse.
Are you sure that you are buffering the requests from the clients in nginx, and then buffering the responses from the unicorns before sending them back to the clients? From your setup it seems that you do (because this is the default), but I suggest you double-check.
The config to look at is:
http://wiki.nginx.org/HttpProxyModule#proxy_buffering
This is for the buffering of the response from the unicorns. You definitely need it because you don't want to keep unicorns busy sending data to a slow client.
For the buffering of the request from the client I think that you need to look at:
http://wiki.nginx.org/HttpCoreModule#client_body_buffer_size
I don't think all this can explain a delay of 100ms, but I am not familiar with your whole system setup, so it is worth having a look in this direction. It seems that your queuing is not caused by CPU contention but by some kind of IO blocking.
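For reference, a sketch of where those buffering knobs live in the http block (the values are illustrative, not tuned recommendations):

```nginx
http {
    # Buffer unicorn's responses so a slow client never ties up a worker.
    # proxy_buffering is on by default for proxied locations.
    proxy_buffering on;
    proxy_buffers 8 64k;

    # Buffer the client's request body before passing it upstream.
    client_body_buffer_size 128k;
}
```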

Unicorn Rails stack + Vagrant not serving some assets

I'm using a Rails stack with nginx + unicorn + rails meant for a production server, but I'm staging it under Vagrant for testing purposes. While doing this, I encountered strange behavior in the Rails application: very often one or another asset isn't served, e.g. application.css isn't served and therefore the whole page is displayed without any styles applied to it.
I've googled the problem and found that Vagrant's FS driver isn't completely implemented, which can cause problems when using Apache (I haven't found any mentions of nginx). The suggested solution was to disable sendfile by adding sendfile off; to the configuration file. And... it didn't work.
Further, I went through the logs (Rails, unicorn and nginx) and found that when a file isn't served, there isn't any mention of it in any of the logs. This led me to suspect the mechanism Vagrant uses to share the Rails app folder with the VM. As mentioned on Vagrant's website, Vagrant uses VirtualBox's Shared Folders, which is quite slow compared to other alternatives (as shown here), and the workaround is to set up NFS shared folders. So, I decided to give NFS a try, and the result was... the same. Unfortunately, some assets are still not served.
Does anyone have any ideas on this? I've searched for quite a while but haven't found any pointers additional to those I described here.
I'm using:
Mac OS X 10.6.8 + rbenv (to develop)
Vagrant + nginx + rbenv + unicorn + bundler (to stage)
unicorn.rb
rails_env = ENV['RAILS_ENV'] || 'production'
app_directory = File.expand_path(File.join(File.dirname(__FILE__), ".."))

worker_processes 4
working_directory app_directory

listen "/tmp/appname.sock", :backlog => 64
#listen "#{app_directory}/tmp/sockets/appname.sock", :backlog => 64

timeout 30
pid "#{app_directory}/tmp/pids/unicorn.pid"
stderr_path "#{app_directory}/log/unicorn.stderr.log"
stdout_path "#{app_directory}/log/unicorn.stdout.log"

preload_app true
GC.respond_to?(:copy_on_write_friendly=) and
  GC.copy_on_write_friendly = true

before_fork do |server, worker|
  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.establish_connection
end
/etc/nginx/sites-enabled/appname
upstream unicorn_server {
    server unix:/tmp/appname.sock fail_timeout=0;
}

server {
    listen 80;
    client_max_body_size 4G;
    server_name _;
    keepalive_timeout 5;

    # Location of our static files
    root /home/appname/www/current/public;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;

        if (!-f $request_filename) {
            proxy_pass http://unicorn_server;
            break;
        }
    }

    error_page 500 502 503 504 /500.html;
    location = /500.html {
        root /home/hemauto/www/current/public;
    }
}

Hide Headers in Passenger/Nginx Server

I am trying to hide these headers on the production server, but without success:
X-Powered-By: Phusion Passenger (mod_rails/mod_rack) 3.0.7
X-Runtime: 0.021429
Server: nginx/1.0.0 + Phusion Passenger 3.0.7 (mod_rails/mod_rack)
Using :
- Rails 3.0.9
- Passenger 3.0.7
- Nginx 1.0.0
Any ideas?
To remove the nginx Server: header you can use the server_tokens off directive.
For the other headers, try the Headers More nginx module:
more_set_headers 'Server: anon'; # replace the default 'nginx + Passenger'
more_set_headers 'X-Powered-By'; # clear the header entirely
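For instance, assuming nginx was built with the headers-more module, the directives could sit in the server block like this (the domain is a placeholder):

```nginx
server {
    listen 80;
    server_name example.com;

    server_tokens off;                  # drop the version from the Server: header
    more_set_headers 'Server: anon';    # replace 'nginx + Phusion Passenger' entirely
    more_set_headers 'X-Powered-By';    # no value given: removes the header
    more_clear_headers 'X-Runtime';     # equivalent way to drop a header
}
```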
It is possible to hide the Passenger headers, but it requires specific configuration. Something like this should work:
External-world-facing part:
upstream x {
    server your-server:8040;
}

server {
    server_name your-domain;
    # ...

    location / {
        # ...
        proxy_hide_header X-Powered-By;
        proxy_hide_header X-Runtime;
        proxy_pass http://x;
    }
}
Passenger-powered site:
server {
    server_name local-site;
    listen 8040 default_server;

    location / {
        passenger_enabled on;
        # regular site configuration
    }
}
local-site can be on the same nginx as the your-domain part, but this will probably slightly slow down request handling.
