Nginx timeout on Digital Ocean - ruby-on-rails

I have a Ruby on Rails application running on DigitalOcean. I keep running into a 504 Gateway Timeout error with Nginx. Most recently it happened when running service unicorn restart and service nginx restart. I don't see anything apparent in the production logs about any problems with nginx or unicorn. In fact, the log says that my root view was rendered, despite me getting a timeout error. I'm a bit confused about what might be causing this error, and I'm more than a little unsure what information I can provide in this question to be of any help.

Edit your Unicorn config at /home/unicorn/unicorn.conf and comment out the line user 'rails', then run:
service unicorn restart
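For context, a minimal sketch of what that part of a Unicorn config can look like (the paths and values here are assumptions for illustration, not taken from the original file):
# /home/unicorn/unicorn.conf (sketch; adjust paths to your deploy)
working_directory "/home/rails/app"            # app root
worker_processes 2                             # tune to available CPU/RAM
listen "/var/run/unicorn.sock", :backlog => 64 # socket that nginx proxies to
timeout 30                                     # kill workers stuck longer than this
# user 'rails'                                 # commented out, as suggested above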

Related

Production server using an outdated Capistrano release

I deployed a new version via Capistrano to an Ubuntu 14.04 server, and now the Unicorn + Nginx setup is referring to a nonexistent release. I get an ActionView::MissingTemplate error and also an I18n::InvalidLocaleData error because it failed to load the devise.en.yml file.
I pretty much followed this repo. I have already restarted nginx and unicorn, but it still gives me the same error. It's looking for a release/release_timestamp path that no longer exists.
You can at least confirm that Nginx is not part of the problem by directly connecting to the port or socket that Unicorn is listening on from within the server.
If Unicorn is running on a socket, see Can Curl send requests to sockets?.
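As a rough sketch of that check from the app server itself (the port and socket path are assumptions; note that --unix-socket requires curl 7.40 or newer):
curl -v http://127.0.0.1:8080/                               # if Unicorn listens on a TCP port
curl -v --unix-socket /tmp/unicorn.sock http://localhost/    # if Unicorn listens on a Unix socket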

Do I need to restart Nginx after reconfiguring for Passenger to work?

I just updated my Nginx configuration.
In the config I've added a new server block with a different environment for a Rails application.
I've reloaded the configuration (sbin/nginx -s reload) and deployed the application to the right folder, but nothing seems to happen; Nginx throws a 404 Not Found.
Is there anything more I need to do?
Do I need to restart Nginx or Passenger, for example?
It turns out you don't need to restart.
There is no separate restart for Passenger (as far as I've found out), and Nginx only needs a reload.
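For completeness, a Passenger-served application is usually restarted without touching Nginx at all, by touching a restart file inside the app (a general Passenger convention; the path here is an assumption):
touch /path/to/app/current/tmp/restart.txt   # Passenger reloads the app on the next request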

Why won't it apply changes when using Nginx?

I've just switched from Apache + Passenger to Nginx + Unicorn.
I used to run /etc/init.d/httpd restart after I made a change.
Then all the changes would be applied to the Rails application.
But with Nginx, even if I run service nginx restart, the changes aren't applied :(
Why? And how can I fix this problem?
You need to restart Unicorn itself. See http://unicorn.bogomips.org/SIGNALS.html
With Apache + Passenger, restarting Apache restarts Passenger. Unicorn is its own server, however, and needs to be restarted/reloaded itself.
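For example, assuming the master PID is written to /path/to/shared/pids/unicorn.pid (an assumption; check the pid setting in your Unicorn config), the signals described there can be sent like this:
kill -HUP  $(cat /path/to/shared/pids/unicorn.pid)   # reload config and gracefully restart workers (picks up code changes when preload_app is false)
kill -USR2 $(cat /path/to/shared/pids/unicorn.pid)   # re-exec the master for a zero-downtime restart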

Bitnami Rails Nginx Thin - The service is not available. Please try again later

I'm trying to get a Bitnami Rails stack running with Nginx and 5 Thin app servers.
I have the Thin app servers running OK and I've got Nginx started and it's connected to the 5 Thin servers.
But something is returning "The service is not available. Please try again later." HTML when I access my app from a browser, and I don't know where the code is that's giving me that message.
I have the Nginx server listening on port 80.
This is my nginx.conf file: https://dl.dropbox.com/u/35302780/nginx.conf
Thanks for the help!
Your config seems fine, except for those two if statements: "if" is evil inside a location block, so you may want to fix that.
It also looks like you are using a third-party upstream module. Try uninstalling that module and using the standard upstream module.
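As a sketch of what the standard upstream module looks like with five Thin servers (the ports and upstream name are assumptions, not taken from the linked nginx.conf):
upstream thin_cluster {
    # one entry per Thin instance; match the ports Thin actually listens on
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
    server 127.0.0.1:3004;
}
server {
    listen 80;
    location / {
        proxy_set_header Host $host;
        proxy_pass http://thin_cluster;
    }
}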

Nginx + unicorn (rails) often gives "Connection refused" in nginx error log

At work we're running some high traffic sites in rails. We often get a problem with the following being spammed in the nginx error log:
2011/05/24 11:20:08 [error] 90248#0: *468577825 connect() to unix:/app_path/production/shared/system/unicorn.sock failed (61: Connection refused) while connecting to upstream
Our setup is nginx on the frontend server (load balancing), and unicorn on our 4 app servers. Each unicorn is running with 8 workers. The setup is very similar to the one GitHub uses.
Most of our content is cached, and when the request hits nginx it looks for the page in memcached and serves that if it can find it - otherwise the request goes to Rails.
I can solve the above issue - SOMETIMES - by doing a pkill of the unicorn processes on the servers, followed by:
cap production unicorn:check (removing all the pids)
cap production unicorn:start
Do you guys have any clue to how I can debug this issue? We don't have any significantly high load on our database server when these problems occur.
Something killed your unicorn process on one of the servers, or it timed out. Or you have an old app server in your upstream app_server { } block that is no longer valid. Nginx will retry it from time to time. The default is to retry another upstream if it gets a connection error, so hopefully your clients didn't notice anything.
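A sketch of the kind of upstream block being described, with the default retry behaviour made explicit (the addresses are assumptions; the point is to remove any entry for an app server that no longer exists):
upstream app_server {
    # one entry per live app server; delete any that were decommissioned
    server 10.0.0.11:8080 fail_timeout=0;
    server 10.0.0.12:8080 fail_timeout=0;
    server 10.0.0.13:8080 fail_timeout=0;
    server 10.0.0.14:8080 fail_timeout=0;
}
server {
    location / {
        # on a connection error or timeout, nginx tries the next upstream (this is the default)
        proxy_next_upstream error timeout;
        proxy_pass http://app_server;
    }
}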
I don't think this is an nginx issue for me; restarting nginx didn't help. It seems to be gunicorn. A quick and dirty way to avoid this is to recycle the gunicorn instances when the system is not being used, say 1 AM for example, if that is an acceptable maintenance window. I run gunicorn as a service that will come back up if killed, so a pkill script takes care of the recycle/respawn:
# Upstart job: keep the server script running and respawn it if it dies
start on runlevel [2345]
stop on runlevel [06]
respawn                      # bring the process back up if it is killed
respawn limit 10 5           # stop respawning if it dies 10 times within 5 seconds
exec /var/web/proj/server.sh
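A rough sketch of the 1 AM recycle mentioned above, as a root crontab entry (the process pattern is an assumption; with the respawn stanza above, the service comes straight back up):
0 1 * * * /usr/bin/pkill -f gunicorn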
I am starting to wonder if this is at all related to memory allocation. I have MongoDB running on the same system and it reserves all the memory for itself but it is supposed to yield if other applications require more memory.
Other things worth a try are getting rid of eventlet or other dependent modules when running gunicorn. uWSGI can also be used as an alternative to gunicorn.