unicorn request queuing - ruby-on-rails

We just migrated from Passenger to Unicorn to host a few Rails apps.
Everything works great, but via New Relic we've noticed that requests are queuing for between 100 and 300 ms.
Here's the graph:
I have no idea where this is coming from. Here's our unicorn conf:
current_path = '/data/actor/current'
shared_path = '/data/actor/shared'
shared_bundler_gems_path = "/data/actor/shared/bundled_gems"
working_directory '/data/actor/current/'
worker_processes 6
listen '/var/run/engineyard/unicorn_actor.sock', :backlog => 1024
timeout 60
pid "/var/run/engineyard/unicorn_actor.pid"
logger Logger.new("log/unicorn.log")
stderr_path "log/unicorn.stderr.log"
stdout_path "log/unicorn.stdout.log"
preload_app true
if GC.respond_to?(:copy_on_write_friendly=)
GC.copy_on_write_friendly = true
end
before_fork do |server, worker|
if defined?(ActiveRecord::Base)
ActiveRecord::Base.connection.disconnect!
end
old_pid = "#{server.config[:pid]}.oldbin"
if File.exists?(old_pid) && server.pid != old_pid
begin
sig = (worker.nr + 1) >= server.worker_processes ? :TERM : :TTOU
Process.kill(sig, File.read(old_pid).to_i)
rescue Errno::ENOENT, Errno::ESRCH
# someone else did our job for us
end
end
sleep 1
end
if defined?(Bundler.settings)
before_exec do |server|
paths = (ENV["PATH"] || "").split(File::PATH_SEPARATOR)
paths.unshift "#{shared_bundler_gems_path}/bin"
ENV["PATH"] = paths.uniq.join(File::PATH_SEPARATOR)
ENV['GEM_HOME'] = ENV['GEM_PATH'] = shared_bundler_gems_path
ENV['BUNDLE_GEMFILE'] = "#{current_path}/Gemfile"
end
end
after_fork do |server, worker|
worker_pid = File.join(File.dirname(server.config[:pid]), "unicorn_worker_actor_#{worker.nr}.pid")
File.open(worker_pid, "w") { |f| f.puts Process.pid }
if defined?(ActiveRecord::Base)
ActiveRecord::Base.establish_connection
end
end
Our nginx.conf:
user deploy deploy;
worker_processes 6;
worker_rlimit_nofile 10240;
pid /var/run/nginx.pid;
events {
worker_connections 8192;
use epoll;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
sendfile on;
tcp_nopush on;
server_names_hash_bucket_size 128;
if_modified_since before;
gzip on;
gzip_http_version 1.0;
gzip_comp_level 2;
gzip_proxied any;
gzip_buffers 16 8k;
gzip_types application/json text/plain text/html text/css application/x-javascript t$
# gzip_disable "MSIE [1-6]\.(?!.*SV1)";
# Allow custom settings to be added to the http block
include /etc/nginx/http-custom.conf;
include /etc/nginx/stack.conf;
include /etc/nginx/servers/*.conf;
}
And our app-specific nginx conf:
upstream upstream_actor_ssl {
server unix:/var/run/engineyard/unicorn_actor.sock fail_timeout=0;
}
server {
listen 443;
server_name letitcast.com;
ssl on;
ssl_certificate /etc/nginx/ssl/letitcast.crt;
ssl_certificate_key /etc/nginx/ssl/letitcast.key;
ssl_session_cache shared:SSL:10m;
client_max_body_size 100M;
root /data/actor/current/public;
access_log /var/log/engineyard/nginx/actor.access.log main;
error_log /var/log/engineyard/nginx/actor.error.log notice;
location @app_actor {
include /etc/nginx/common/proxy.conf;
proxy_pass http://upstream_actor_ssl;
}
include /etc/nginx/servers/actor/custom.conf;
include /etc/nginx/servers/actor/custom.ssl.conf;
if ($request_filename ~* \.(css|jpg|gif|png)$) {
break;
}
location ~ ^/(images|javascripts|stylesheets)/ {
expires 10y;
}
error_page 404 /404.html;
error_page 500 502 504 /500.html;
error_page 503 /system/maintenance.html;
location = /system/maintenance.html { }
location / {
if (-f $document_root/system/maintenance.html) { return 503; }
try_files $uri $uri/index.html $uri.html @app_actor;
}
include /etc/nginx/servers/actor/custom.locations.conf;
}
We are not under heavy load, so I don't understand why requests are stuck in the queue.
As specified in the unicorn conf, we have 6 unicorn workers.
Any idea where this could be coming from?
Cheers
EDIT:
Average requests per minute: about 15 most of the time, more than 300 at peaks, but we haven't experienced one since the migration.
CPU Load average: 0.2-0.3
I tried with 8 workers, it didn't change anything.
I've also used raindrops to see what the unicorn workers were up to.
Here's the Ruby script:
#!/usr/bin/ruby
# this is used to show or watch the number of active and queued
# connections on any listener socket from the command line
require 'raindrops'
require 'optparse'
require 'ipaddr'
usage = "Usage: #$0 [-d delay] ADDR..."
ARGV.size > 0 or abort usage
delay = false
# "normal" exits when driven on the command-line
trap(:INT) { exit 130 }
trap(:PIPE) { exit 0 }
opts = OptionParser.new('', 24, ' ') do |opts|
opts.banner = usage
opts.on('-d', '--delay=delay') { |nr| delay = nr.to_i }
opts.parse! ARGV
end
socks = []
ARGV.each do |f|
if !File.exists?(f)
puts "#{f} not found"
next
end
if !File.socket?(f)
puts "#{f} ain't a socket"
next
end
socks << f
end
fmt = "% -50s % 10u % 10u\n"
printf fmt.tr('u','s'), *%w(address active queued)
begin
stats = Raindrops::Linux.unix_listener_stats(socks)
stats.each do |addr,stats|
if stats.queued.to_i > 0
printf fmt, addr, stats.active, stats.queued
end
end
end while delay && sleep(delay)
How I launched it:
./linux-tcp-listener-stats.rb -d 0.1 /var/run/engineyard/unicorn_actor.sock
So it basically checks every 1/10 s whether there are requests in the queue, and if there are, it outputs:
the socket | the number of requests being processed | the number of requests in the queue
Here's a gist of the result:
https://gist.github.com/f9c9e5209fbbfc611cb1
EDIT2:
I tried reducing the number of nginx workers to one last night, but it didn't change anything.
For reference, we are hosted on Engine Yard on a High-CPU Medium Instance: 1.7 GB of memory, 5 EC2 Compute Units (2 virtual cores with 2.5 EC2 Compute Units each).
We host 4 Rails applications; this one has 6 workers, and we have one with 4, one with 2 and another with 1. They've all been experiencing request queuing since we migrated to unicorn.
I don't know if Passenger was cheating, but New Relic didn't log any request queuing when we were using it. We also have a node.js app handling file uploads, a MySQL database and 2 Redis instances.
EDIT 3:
We're using ruby 1.9.2p290, nginx 1.0.10, unicorn 4.2.1 and newrelic_rpm 3.3.3.
I'll try without newrelic tomorrow and will post the results here, but for what it's worth, we were using Passenger with New Relic and the same versions of ruby and nginx and didn't have any issue.
EDIT 4:
I tried increasing client_body_buffer_size and proxy_buffers with
client_body_buffer_size 256k;
proxy_buffers 8 256k;
But it didn't do the trick.
EDIT 5:
We finally figured it out ... drumroll ...
The winner was our SSL cipher. When we changed it to RC4, we saw request queuing drop from 100-300 ms to 30-100 ms.

I've just diagnosed a similar-looking New Relic graph as being entirely the fault of SSL. Try turning it off. We are seeing 400 ms of request queuing time, which drops to 20 ms without SSL.
Some interesting points on why some SSL providers might be slow: http://blog.cloudflare.com/how-cloudflare-is-making-ssl-fast
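For context on the graph itself: the New Relic agent derives "request queuing" from a timestamp the front end attaches to each incoming request (nginx setups are typically configured to add an X-Request-Start header for this). A rough sketch of the arithmetic, with a hypothetical microsecond timestamp; the agent accepts several timestamp resolutions, so this is illustrative only:

```ruby
# Sketch of how a "request queuing" number can be derived from an
# X-Request-Start-style header. Assumes the front end stamps
# "t=<microseconds since epoch>" (an assumption, not the agent's exact code).
def queue_time_ms(header_value, now_us)
  match = header_value.to_s.match(/t=(\d+)/)
  return 0.0 unless match
  elapsed_ms = (now_us - match[1].to_i) / 1000.0
  elapsed_ms < 0 ? 0.0 : elapsed_ms
end

# A request stamped 150 ms before the app saw it:
queue_time_ms("t=1700000000000000", 1_700_000_000_150_000)  # => 150.0
```

The point being: anything that delays a request between that stamp and the first line of app code, including CPU burned on SSL in the front end, shows up as queue time, even when nothing is literally waiting for a free worker.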

What versions of ruby, unicorn, nginx (shouldn't matter much but worth mentioning) and newrelic_rpm are you using?
Also, I would try running a baseline perf test without newrelic. NewRelic parses the response, and there are cases where this can be slow due to an issue with 'rindex' in ruby pre-1.9.3. This is usually only noticeable when your response is very large and doesn't contain 'body' tags (e.g. AJAX, JSON, etc.). I saw an example of this where a 1 MB AJAX response took 30 seconds for NewRelic to parse.

Are you sure that you are buffering the requests from the clients in nginx, and then buffering the responses from the unicorns before sending them back to the clients? From your setup it seems that you are (because this is the default), but I suggest you double-check.
The config to look at is:
http://wiki.nginx.org/HttpProxyModule#proxy_buffering
This is for buffering the response from the unicorns. You definitely need it, because you don't want to keep unicorns busy sending data to a slow client.
For buffering the request from the client, I think you need to look at:
http://wiki.nginx.org/HttpCoreModule#client_body_buffer_size
I don't think all this can explain a 100 ms delay, but I am not familiar with your whole system setup, so it is worth having a look in this direction. It seems that your queuing is caused not by CPU contention but by some kind of IO blocking.
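As a sketch, here's how the two knobs look together inside a proxied location (the @app location name, upstream name and values are illustrative, not tuned recommendations):

```nginx
location @app {
    # Buffer the response from unicorn so a slow client can't
    # hold a worker hostage (proxy_buffering is on by default).
    proxy_buffering on;
    proxy_buffers 8 16k;

    # Buffer the client's request body before handing it to unicorn.
    client_body_buffer_size 128k;

    proxy_pass http://upstream_app;
}
```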

Related

Unicorn+Nginx concurrency and data duplication

I have 4 Nginx workers and 4 unicorn workers. We hit a concurrency issue in some of our models that validate unique names: we get duplicated names when we send multiple requests at the same time for the same resource. For instance, if we send around 10 requests to create Licenses, we get duplicated serial_numbers...
Here's some context:
Model (simplified)
class License < ActiveRecord::Base
validates :serial_number, :uniqueness => true
end
Unicorn.rb
APP_PATH = '.../manager'
worker_processes 4
working_directory APP_PATH # available in 0.94.0+
listen ".../manager/tmp/sockets/manager_rails.sock", backlog: 1024
listen 8080, :tcp_nopush => true # uncomment to listen to TCP port as well
timeout 600
pid "#{APP_PATH}/tmp/pids/unicorn.pid"
stderr_path "#{APP_PATH}/log/unicorn.stderr.log"
stdout_path "#{APP_PATH}/log/unicorn.stdout.log"
preload_app true
GC.copy_on_write_friendly = true if GC.respond_to?(:copy_on_write_friendly=)
check_client_connection false
run_once = true
before_fork do |server, worker|
ActiveRecord::Base.connection.disconnect! if defined?(ActiveRecord::Base)
MESSAGE_QUEUE.close
end
after_fork do |server, worker|
ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
end
Nginx.conf (simplified)
worker_processes 4;
events {
multi_accept off;
worker_connections 1024;
use epoll;
accept_mutex off;
}
upstream app_server {
server unix:/home/blueserver/symphony/manager/tmp/sockets/manager_rails_write.sock fail_timeout=0;
}
try_files $uri @app;
location @app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_connect_timeout 600;
proxy_send_timeout 600;
proxy_read_timeout 600;
proxy_pass http://app_server;
}
Every time I send multiple requests (more than 4) to create Licenses, I get some duplicates. I understand why: each unicorn process doesn't have a resource with that serial_number created yet, so each of them allows it to be created again...
ActiveRecord is validating the uniqueness of the field at the process level rather than at the database level. One workaround could be moving the validation to the database (but it would be very cumbersome and hard to maintain).
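The failure is a classic check-then-act race, independent of Rails. A stripped-down sketch (plain Ruby, no ActiveRecord) where two workers both pass the uniqueness check before either writes, followed by the same scenario with an atomic check-and-insert, which is what a unique database index gives you:

```ruby
require 'set'

# Two "workers" interleave: both check, then both insert -> duplicate,
# exactly like validates_uniqueness_of across separate unicorn processes.
store  = []
serial = "SN-1"
a_free = !store.include?(serial)  # worker A: "SN-1 is free"
b_free = !store.include?(serial)  # worker B: "SN-1 is free"
store << serial if a_free
store << serial if b_free
store.count(serial)               # => 2, a duplicate

# With an atomic check-and-insert, the second write is rejected instead
# (Set#add? returns nil when the element is already present).
unique_store = Set.new
unique_store.add?(serial)         # inserted
unique_store.add?(serial)         # => nil, rejected
unique_store.size                 # => 1
```

At the database level the atomic equivalent is a unique index: the insert either succeeds or raises (ActiveRecord::RecordNotUnique in Rails), so no window exists between the check and the write.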
Another workaround is to limit write requests (POST/PUT/DELETE) to only one unicorn and have multiple unicorns reply to read requests (GET). Something like this in the Nginx location...
# 4 unicorn workers for GET requests
proxy_pass http://app_read_server;
# 1 unicorn worker for POST/PUT/DELETE requests
limit_except GET {
proxy_pass http://app_write_server;
}
We are currently using that. It fixes the concurrency issue, but one write server is not enough to reply at peak times, and it's creating a bottleneck.
Any idea to solve the concurrency and scalability issues with Nginx+Unicorn?
Take a look at transaction isolation. For example, PostgreSQL - http://www.postgresql.org/docs/current/static/transaction-iso.html.
Normally, you can go one of two ways:
use a unique index on the unique key column (via a migration) and catch the appropriate exception;
maintain the database constraints in the way described here and catch the appropriate exception as well;
or use a PostgreSQL transaction with the "serializable" isolation level, which basically transforms parallel transactions into consecutive ones, as described by Andrey Kryachkov earlier.

Streaming mp4 in Chrome with rails, nginx and send_file

I can't for the life of me stream an mp4 to Chrome with an HTML5 <video> tag. If I drop the file in public then everything is gravy and works as expected. But if I try to serve it using send_file, pretty much everything imaginable goes wrong. I am using a rails app that is proxied by nginx, with a Video model that has a location attribute holding an absolute path on disk.
At first I tried:
def show
send_file Video.find(params[:id]).location
end
And I was sure I would be basking in the glory that is modern web development. Ha. This plays in both Chrome and Firefox, but neither can seek and neither has any idea how long the video is. I poked at the response headers and realized that Content-Type is being sent as application/octet-stream and there is no Content-Length set. Umm... wth?
Okay, I guess I can set those in rails:
def show
video = Video.find(params[:id])
response.headers['Content-Length'] = File.stat(video.location).size
send_file(video.location, type: 'video/mp4')
end
At this point everything works pretty much as expected in Firefox. It knows how long the video is and seeking works as expected. Chrome appears to know how long the video is (doesn't show timestamps, but seek bar looks appropriate) but seeking doesn't work.
Apparently Chrome is pickier than Firefox. It requires that the server respond with an Accept-Ranges header with value bytes, and respond to subsequent requests (made when the user seeks) with 206 and the appropriate portion of the file.
Okay, so I borrowed some code from here and then I had this:
video = Video.find(params[:id])
file_begin = 0
file_size = File.stat(video.location).size
file_end = file_size - 1
if !request.headers["Range"]
status_code = :ok
else
status_code = :partial_content
match = request.headers['Range'].match(/bytes=(\d+)-(\d*)/)
if match
file_begin = match[1]
file_end = match[2] if match[2] && !match[2].empty?
end
response.header["Content-Range"] = "bytes " + file_begin.to_s + "-" + file_end.to_s + "/" + file_size.to_s
end
response.header["Content-Length"] = (file_end.to_i - file_begin.to_i + 1).to_s
response.header["Accept-Ranges"]= "bytes"
response.header["Content-Transfer-Encoding"] = "binary"
send_file(video.location,
:filename => File.basename(video.location),
:type => 'video/mp4',
:disposition => "inline",
:status => status_code,
:stream => 'true',
:buffer_size => 4096)
Now Chrome attempts to seek, but when you do, the video stops playing and never works again until the page reloads. Argh. So I decided to play around with curl to see what was happening, and I discovered this:
$ curl --header "Range: bytes=200-400" http://localhost:8080/videos/1/001.mp4
ftypisomisomiso2avc1mp41 �moovlmvhd��#��trak\tkh��
$ curl --header "Range: bytes=1200-1400" http://localhost:8080/videos/1/001.mp4
ftypisomisomiso2avc1mp41 �moovlmvhd��#��trak\tkh��
No matter the byte range requested, the data always starts from the beginning of the file. The appropriate number of bytes is returned (201 bytes in this case), but always from the beginning of the file. Apparently nginx respects the Content-Length header but ignores the Content-Range header.
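For comparison, the slice a compliant server should return is just a seek-and-read at the requested offset. A minimal sketch (plain Ruby, paths and the helper name are illustrative), assuming a simple bytes=start-end header:

```ruby
# Returns [body, content_range] for an HTTP Range request against a file.
# This is what a correct 206 response carries: data starting at the
# requested offset, not at byte 0.
def byte_range_slice(path, range_header)
  size = File.size(path)
  if (m = range_header.to_s.match(/bytes=(\d+)-(\d*)/))
    first = m[1].to_i
    last  = m[2].empty? ? size - 1 : m[2].to_i
  else
    first, last = 0, size - 1
  end
  body = File.open(path, 'rb') do |f|
    f.seek(first)            # the step missing from the broken responses
    f.read(last - first + 1)
  end
  [body, "bytes #{first}-#{last}/#{size}"]
end
```

For Range: bytes=200-400 this returns 201 bytes starting at offset 200, plus the Content-Range value the 206 response should carry.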
My nginx.conf is untouched default:
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
gzip on;
gzip_disable "msie6";
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
and my app.conf is pretty basic:
upstream unicorn {
server unix:/tmp/unicorn.app.sock fail_timeout=0;
}
server {
listen 80 default deferred;
root /vagrant/public;
try_files $uri/index.html $uri @unicorn;
location @unicorn {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header HOST $http_host;
proxy_redirect off;
proxy_pass http://unicorn;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 5;
}
First I tried the nginx 1.4.x that comes with Ubuntu 14.04, then 1.7.x from a PPA - same results. I even tried apache2 and got exactly the same results.
I would like to reiterate that the video file is not the problem. If I drop it in public then nginx serves it with the appropriate mime types, headers and everything needed for Chrome to work properly.
So my question is a two-parter:
Why doesn't nginx/apache handle all this stuff automagically with send_file (X-Accel-Redirect/X-Sendfile) like it does when the file is served statically from public? Handling this stuff in rails is so backwards.
How the heck can I actually use send_file with nginx (or apache) so that Chrome will be happy and allow seeking?
Update 1
Okay, so I thought I'd try to take the complication of rails out of the picture and just see if I could get nginx to proxy the file correctly. So I spun up a dead-simple nodejs server:
var http = require('http');
http.createServer(function (req, res) {
res.writeHead(200, {
'X-Accel-Redirect': '/path/to/file.mp4'
});
res.end();
}).listen(3000, '127.0.0.1');
console.log('Server running at http://127.0.0.1:3000/');
And Chrome is happy as a clam. =/ curl -I even shows that Accept-Ranges: bytes and Content-Type: video/mp4 are being inserted by nginx automagically - as they should be. What could rails be doing that's preventing nginx from doing this?
Update 2
I might be getting closer...
If I have:
def show
video = Video.find(params[:id])
send_file video.location
end
Then I get:
$ curl -I localhost:8080/videos/1/001.mp4
HTTP/1.1 200 OK
Server: nginx/1.7.9
Date: Sun, 18 Jan 2015 12:06:38 GMT
Content-Type: application/octet-stream
Connection: keep-alive
Status: 200 OK
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Content-Disposition: attachment; filename="001.mp4"
Content-Transfer-Encoding: binary
Cache-Control: private
Set-Cookie: request_method=HEAD; path=/
X-Meta-Request-Version: 0.3.4
X-Request-Id: cd80b6e8-2eaa-4575-8241-d86067527094
X-Runtime: 0.041953
And I have all the problems described above.
But if I have:
def show
video = Video.find(params[:id])
response.headers['X-Accel-Redirect'] = video.location
head :ok
end
Then I get:
$ curl -I localhost:8080/videos/1/001.mp4
HTTP/1.1 200 OK
Server: nginx/1.7.9
Date: Sun, 18 Jan 2015 12:06:02 GMT
Content-Type: text/html
Content-Length: 186884698
Last-Modified: Sun, 18 Jan 2015 03:49:30 GMT
Connection: keep-alive
Cache-Control: max-age=0, private, must-revalidate
Set-Cookie: request_method=HEAD; path=/
ETag: "54bb2d4a-b23a25a"
Accept-Ranges: bytes
And everything works perfectly.
But why? Those should do exactly the same thing. And why doesn't nginx set Content-Type automagically here like it does for the simple nodejs example? I have config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect' set. I have moved it back and forth between application.rb and development.rb with the same results. I guess I never mentioned... this is rails 4.2.0.
Update 3
Now I've changed my unicorn server to listen on port 3000 (since I had already changed nginx to listen on 3000 for the nodejs example). Since unicorn is now listening on a port rather than a socket, I can make requests directly to it, and I have found that curl -I against unicorn shows no X-Accel-Redirect header; curling unicorn directly actually sends the file. It's like send_file isn't doing what it's supposed to.
I finally have the answers to my original questions. I didn't think I'd ever get here. All my research had led to dead ends, hacky non-solutions and "it just works out of the box" (well, not for me).
Why doesn't nginx/apache handle all this stuff automagically with send_file (X-Accel-Redirect/X-Sendfile) like it does when the file is served statically from public? Handling this stuff in rails is so backwards.
They do, but they have to be configured properly to please Rack::Sendfile (see below). Trying to handle this in rails is a hacky non-solution.
How the heck can I actually use send_file with nginx (or apache) so that Chrome will be happy and allow seeking?
I got desperate enough to start poking around rack source code and that's where I found my answer, in the comments of Rack::Sendfile. They are structured as documentation that you can find at rubydoc.
For whatever reason, Rack::Sendfile requires the front-end proxy to send an X-Sendfile-Type header. In the case of nginx, it also requires an X-Accel-Mapping header. The documentation has examples for apache and lighttpd as well.
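The X-Accel-Mapping value is a "filesystem-prefix=uri-prefix" pair: Rack::Sendfile swaps the prefix on the file's on-disk path to build the internal URI it puts in X-Accel-Redirect. A simplified sketch of that translation (the helper name and paths are illustrative, not Rack's actual code):

```ruby
# Simplified version of the prefix swap Rack::Sendfile performs when it
# sees an X-Accel-Mapping header of the form "fs-prefix=uri-prefix".
def accel_redirect_path(mapping, file_path)
  fs_prefix, uri_prefix = mapping.split('=', 2)
  file_path.sub(/\A#{Regexp.escape(fs_prefix)}/, uri_prefix)
end

# With a mapping of "/=/files/", a path on disk becomes an internal URI
# that an `internal` nginx location can map back to the real file:
accel_redirect_path('/=/files/', '/data/videos/001.mp4')
# => "/files/data/videos/001.mp4"
```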
One would think the rails documentation could link to the Rack::Sendfile documentation since send_file does not work out of the box without additional configuration. Perhaps I'll submit a pull request.
In the end I only needed to add a couple lines to my app.conf:
upstream unicorn {
server unix:/tmp/unicorn.app.sock fail_timeout=0;
}
server {
listen 80 default deferred;
root /vagrant/public;
try_files $uri/index.html $uri @unicorn;
location @unicorn {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header HOST $http_host;
proxy_set_header X-Sendfile-Type X-Accel-Redirect; # ADDITION
proxy_set_header X-Accel-Mapping /=/; # ADDITION
proxy_redirect off;
proxy_pass http://localhost:3000;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 5;
}
Now my original code works as expected:
def show
send_file(Video.find(params[:id]).location)
end
Edit:
Although this worked initially, it stopped working after I restarted my vagrant box and I had to make further changes:
upstream unicorn {
server unix:/tmp/unicorn.app.sock fail_timeout=0;
}
server {
listen 80 default deferred;
root /vagrant/public;
try_files $uri/index.html $uri @unicorn;
location ~ /files(.*) { # NEW
internal; # NEW
alias $1; # NEW
} # NEW
location @unicorn {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header HOST $http_host;
proxy_set_header X-Sendfile-Type X-Accel-Redirect;
proxy_set_header X-Accel-Mapping /=/files/; # CHANGED
proxy_redirect off;
proxy_pass http://localhost:3000;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 5;
}
I find this whole thing of mapping one URI to another and then mapping that URI to a location on disk to be totally unnecessary. It's useless for my use case and I'm just mapping one to another and back again. Apache and lighttpd don't require it. But at least it works.
I also added Mime::Type.register('video/mp4', :mp4) to config/initializers/mime_types.rb so the file is served with the correct mime type.

Nginx, Passenger setup with RVM: Permissions problems & 403 error

I'm trying to deploy a rails application to a Ubuntu server running passenger and nginx. The server was operational for about a year on ruby 1.8, but I recently upgraded to 1.9.3 (this time using RVM) and as a result had to reinstall everything. I am currently running into two problems:
403 forbidden error
I was able to start the nginx server, but when I try to access the rails application, I receive a 403 Forbidden error that reads:
*2177 directory index of "/srv/myapp/public/" is forbidden
I did some looking around in the nginx help docs and made sure that the /srv/myapp/ directory has the proper permissions. It is owned by a deploy user that owns the nginx worker process, and is set chmod 755.
Nginx and Passenger installation problems
When I restart the nginx server, I also receive another error indicating a problem with my Phusion Passenger installation:
Unable to start the Phusion Passenger watchdog because its executable (/usr/local/rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.13/agents/ PassengerWatchdog) does not exist. This probably means that your Phusion Passenger installation is broken or incomplete, or that your 'passenger_root' directive is set to the wrong value. Please reinstall Phusion Passenger or fix your 'passenger_root' directive, whichever is applicable.
I reinstalled the passenger gem from my non-root (but sudo-enabled) user and reinstalled nginx using rvmsudo passenger-install-nginx-module, but then I get this error repeatedly:
Your RVM wrapper scripts are too old. Please update them first by running 'rvm get head && rvm reload && rvm repair all'.
I performed the RVM reload (both with rvmsudo and without) and the error still appears. I tried performing the nginx install without rvmsudo, but ran into permission problems because I couldn't edit the /opt/nginx/ directory (where I have nginx installed). Now I don't even get that far, because the installer fails the required software check.
Background information
This is what my nginx process currently looks like:
PID PPID USER %CPU VSZ WCHAN COMMAND
10174 1 root 0.0 18480 ? nginx: master process /opt/nginx/sbin/nginx
29418 10174 deploy 0.3 18496 ? nginx: worker process
29474 12266 1001 0.0 4008 - egrep (nginx|PID)
My installation process
I've been documenting my installation process in a step-by-step guide for future reference. Please take a look to see how I have my new installation set up.
Try reinstalling your whole environment following this guide, and put your Rails application in /var/rails/your_app_dir with these permissions:
sudo chown deployment_user:www-data -R /var/rails/your_app_dir
Use this nginx.conf example to put all pieces together:
user www-data;
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
events {
worker_connections 1024;
}
http {
passenger_root /usr/local/rvm/gems/ruby-2.0.0-p195/gems/passenger-4.0.5;
passenger_ruby /usr/local/rvm/wrappers/ruby-2.0.0-p195/ruby;
include mime.types;
default_type application/octet-stream;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
log_format gzip '$remote_addr - $remote_user [$time_local] '
'"$request" $status $bytes_sent '
'"$http_referer" "$http_user_agent" "$gzip_ratio"';
access_log logs/access.log gzip buffer=512k;
server_names_hash_bucket_size 64;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
##
# Virtual Host Configs
##
#include /opt/nginx/conf/*.conf;
include /opt/nginx/conf/sites-enabled/*;
}

Unicorn Rails stack + Vagrant not serving some assets

I'm using a Rails stack with nginx + unicorn + rails meant for a production server, but I'm staging it under Vagrant for testing purposes. While doing this, I encountered strange behavior in the rails application, where quite often one asset or another isn't served, e.g. application.css isn't served and the whole page is therefore displayed without any styles applied to it.
I googled the problem and found that Vagrant's FS driver isn't completely implemented, which can cause problems when using Apache (I haven't found any mention of nginx). The suggested solution was to disable sendfile by adding sendfile off; to the configuration file. And... it didn't work.
Further, I went through the logs (Rails, unicorn and nginx) and found that when a file isn't served, there is no mention of it in any of the logs. This led me to suspect that the problem lies in the mechanism Vagrant uses to share the rails app folder with the VM. As mentioned on Vagrant's website, Vagrant uses VirtualBox's Shared Folders, which is quite slow compared to other alternatives (as shown here), and the workaround is to set up NFS Shared Folders. So I decided to give NFS a try and the result was... the same. Unfortunately, some assets are still not served.
Does anyone have any ideas on this? I've searched for quite a while but haven't found any pointers beyond those I described here.
I'm using:
Mac OS X 10.6.8 + rbenv (to develop)
Vagrant + nginx + rbenv + unicorn + bundler (to stage)
unicorn.rb
rails_env = ENV['RAILS_ENV'] || 'production'
app_directory = File.expand_path(File.join(File.dirname(__FILE__), ".."))
worker_processes 4
working_directory app_directory
listen "/tmp/appname.sock", :backlog => 64
#listen "#{app_directory}/tmp/sockets/appname.sock", :backlog => 64
timeout 30
pid "#{app_directory}/tmp/pids/unicorn.pid"
stderr_path "#{app_directory}/log/unicorn.stderr.log"
stdout_path "#{app_directory}/log/unicorn.stdout.log"
preload_app true
GC.respond_to?(:copy_on_write_friendly=) and
GC.copy_on_write_friendly = true
before_fork do |server, worker|
defined?(ActiveRecord::Base) and
ActiveRecord::Base.connection.disconnect!
end
after_fork do |server, worker|
defined?(ActiveRecord::Base) and
ActiveRecord::Base.establish_connection
end
/etc/nginx/sites-enabled/appname
upstream unicorn_server {
server unix:/tmp/appname.sock fail_timeout=0;
}
server {
listen 80;
client_max_body_size 4G;
server_name _;
keepalive_timeout 5;
# Location of our static files
root /home/appname/www/current/public;
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
if (!-f $request_filename) {
proxy_pass http://unicorn_server;
break;
}
}
error_page 500 502 503 504 /500.html;
location = /500.html {
root /home/hemauto/www/current/public;
}
}

nginx with passenger doesn't handle static assets

I have a rails app running with nginx and passenger, and I want to add a static page (containing output from a code coverage analysis tool, simplecov).
Locally this works fine (without passenger), but on the server it doesn't work.
My nginx.conf:
#user nobody;
worker_processes 1;
events {
worker_connections 1024;
#speed up for linux 2.6+
use epoll;
}
http {
passenger_root /home/demo/.rvm/gems/ruby-1.9.3-p0#gm/gems/passenger-3.0.9;
passenger_ruby /home/demo/.rvm/wrappers/ruby-1.9.3-p0#gm/ruby;
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
server {
listen 80;
server_name *.dev.mysite.com;
root /var/www/projects/mysite/qa/current/public;
passenger_enabled on;
rails_env qa;
charset utf-8;
error_log /var/www/projects/mysite/qa/shared/log/host.error.log;
}
#Coverage code tool (SimpleCov gem)
server {
listen 4444;
server_name coverage.mysite.com;
location / {
root /var/lib/jenkins/jobs/WebForms/workspace/coverage;
index index.html index.htm;
}
}
#Yard server
server {
listen 5555;
server_name yard.mysite.com;
location / {
proxy_pass http://127.0.0.1:8808;
}}}
And nothing is received when I try to hit coverage.mysite.com:4444.
I think I remember coming across something similar to this on one of my rails apps.
Have you tried commenting and uncommenting the lines below?
# in config/environments/production.rb
# Specifies the header that your server uses for sending files
#config.action_dispatch.x_sendfile_header = "X-Sendfile"
# For nginx:
config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect'
It should be near the top, around line 12 through 16.
Try that, then redeploy and test in the browser.
