Configure unicorn on heroku - ruby-on-rails

I followed these links for configuration:
https://devcenter.heroku.com/articles/rails-unicorn
http://www.neilmiddleton.com/getting-more-from-your-heroku-dynos/
My config/unicorn.rb:
worker_processes 2
timeout 60
With this config, it still gives a timeout error after 30 seconds.

The Heroku router times out all requests at 30 seconds. You cannot reconfigure this.
See https://devcenter.heroku.com/articles/request-timeout
It is a good idea to set the application-level timeout to a value lower than the hard 30-second limit, so you don't leave dynos processing requests the router has already timed out.
If you have requests that regularly take longer than 30 seconds, you may need to push some of that work onto a background worker process.
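For example, a minimal config/unicorn.rb along the lines of the Heroku guide linked above; the 15-second value is only an illustration, the point is simply to stay below the router's 30-second cutoff:

worker_processes 2
preload_app true
timeout 15  # kill stuck workers well before the Heroku router's 30-second cutoff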

Related

Rails long running controller action and scaling to 500-1000 requests per second

I'm currently trying to optimize and scale an API built on Ruby on Rails behind an AWS ALB that sends traffic to NGINX and then into Puma to our Rails application. Our API has a maximum timeout of 30 seconds, at which point we give up on the request. Currently we have a controller action that queues a Sidekiq worker and then polls a Redis key every 100ms for the first second, then every 500ms for the remaining 29 seconds. Many of our requests complete in under 1 second, but some take the full 30 seconds before they succeed or time out, telling the user to retry in a little while.
We're currently trying to load test this API and scale it to 500-1000 RPS, and we're running into problems where the slower requests block up all of our connections. When a slow request is running, shouldn't Puma be able to take in other requests during the sleep period of the slow request?
If this were not an API we could simply respond immediately after queuing the background worker, but in this case we need to wait for the result and hold the connection open for up to 30 seconds.
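A rough sketch of the polling pattern described above, assuming the sidekiq and redis gems; the ExportWorker class, the result:<job_id> key naming, and the 202 fallback response are illustrative, not from the original post:

# app/controllers/exports_controller.rb
class ExportsController < ApplicationController
  def create
    job_id = ExportWorker.perform_async(params[:export_id])  # enqueue the Sidekiq job

    started  = Time.now
    interval = 0.1                                            # poll every 100ms for the first second
    result   = nil

    while Time.now - started < 30
      result = redis.get("result:#{job_id}")
      break if result
      sleep interval
      interval = 0.5 if Time.now - started >= 1               # back off to 500ms afterwards
    end

    if result
      render json: result
    else
      render json: { error: "still processing, retry shortly" }, status: :accepted
    end
  end

  private

  def redis
    @redis ||= Redis.new
  end
end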
My first thought is that you can have multiple redis queues and push specific tasks to certain queues.
If you have a queue for the quick tasks and a queue for the slower tasks, then both can run in parallel without the slow tasks holding everything else up.
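A sketch of what that separation might look like in Sidekiq; the worker names and queue weights are made up for illustration:

class QuickLookupWorker
  include Sidekiq::Worker
  sidekiq_options queue: :fast
  def perform(record_id)
    # sub-second work goes here
  end
end

class SlowExportWorker
  include Sidekiq::Worker
  sidekiq_options queue: :slow
  def perform(record_id)
    # long-running work goes here
  end
end

# config/sidekiq.yml, loaded with: bundle exec sidekiq -C config/sidekiq.yml
# :queues:
#   - [fast, 4]   # checked four times as often as...
#   - [slow, 1]   # ...the slow queue, so quick jobs aren't starved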

How to set rails request timeout longer?

My app is built on rails and the web server is puma.
I need to load data from the database, and it takes more than 60 seconds to load it all. Every time I send a GET request to the server, I have to wait more than 60 seconds.
The request timeout is 60 seconds, so I always get a 504 Gateway Timeout. I can't find where to change the request timeout in the Puma configuration.
How can I set the request timeout to longer than 60 seconds?
Thanks!
UPDATE: Apparently worker_timeout is NOT the answer, as it relates to the whole process hanging, not an individual request. So it seems to be something Puma doesn't support, and the developers expect you to implement it in whatever is fronting Puma, such as Nginx.
ORIGINAL: Rails itself doesn't time out, but if you're running Puma you can use worker_timeout in config/puma.rb. Example:
worker_timeout (24*60*60) if ENV['RAILS_ENV']=='development'
Source
The 504 error here comes from the gateway in front of the Rails server; for example, it could be Cloudflare, nginx, etc.
So the setting would be there. You'd have to increase the timeout there, as well as in Rails/Puma.
Preferably, you should optimize your code and queries to respond faster, so that in production there is no bottleneck when heavy traffic hits your application.
If you really do need requests to run longer, you can use rack-timeout to control the request timeout:
https://github.com/kch/rack-timeout
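A minimal sketch of wiring up rack-timeout in a Rails app; the 90-second value and the initializer file name are illustrative, and the configuration API has changed between rack-timeout versions, so check the gem's README for your version:

# Gemfile
gem 'rack-timeout'

# config/initializers/rack_timeout.rb
Rails.application.config.middleware.insert_before(
  Rack::Runtime, Rack::Timeout, service_timeout: 90
)

Keep in mind that any proxy in front of Puma (nginx, Cloudflare, the Heroku router) enforces its own timeout and will still cut the request off if that limit is lower.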

Override request timeout in pyramid/gunicorn

Pyramid (v1.5) application served by gunicorn (v19.1.1) behind nginx on a heroic BeagleBone Black "server".
One specific request requires significant I/O and processor time on the server (exporting data from the database, formatting it to xls, and serving it),
which results in a gunicorn worker timeout and a 'Bad Gateway' error returned by nginx.
Is there a practical way to handle this per request instead of increasing the global request timeout for all requests?
It is just this one specific request so I'm looking for the quickest and dirtiest solution instead of implementing a correct, asynchronous client notification protocol.
From the docs:
timeout
-t INT, --timeout INT
Default: 30
Workers silent for more than this many seconds are killed and restarted.
Generally set to thirty seconds. Only set this noticeably higher if you're sure of the repercussions for sync workers. For non-sync workers it just means that the worker process is still communicating and is not tied to the length of time required to handle a single request.
graceful_timeout
--graceful-timeout INT
Default: 30
Timeout for graceful worker restart.
Generally set to thirty seconds. This is the maximum time a worker can keep handling a request after receiving a restart signal; once the time is up, the worker is force killed.
keepalive
--keep-alive INT
Default: 2
The number of seconds to wait for requests on a Keep-Alive connection.
Generally set in the 1-5 seconds range.

Heroku app goes down every few days for 10 - 30 minutes. Normal?

It's a simple Rails app. It just uses a PostgreSQL database and doesn't make any HTTP calls except for New Relic monitoring. But once or twice a week I get an alert from New Relic that my app is down, and it lasts anywhere from 10 to 30 minutes. Is this normal?
The logs just show a bunch of H12 Request Timeout errors, but nothing else.
This is not a free account, I have two dynos running. This is not immediately after a deployment.
I've tried Puma and Unicorn, following all the guides out there for configuration. In the case of Puma, the Heroku router eventually starts timing out on requests. In the case of Unicorn, Unicorn itself starts timing out.
If you're running the application on a single dyno, the dyno is idled after a long period of inactivity. It's possible, and likely, that New Relic is reporting this as downtime. If your application takes a long time to start up, it may already have been reported as down by then. Applications that run 2 or more dynos are never idled.

My passenger powered Rails app sometimes needs a long time to load

I use Apache + Passenger to host some Rails applications. Something seems to go into a sleep mode when there are no requests for a while. It then takes 10-20 seconds for the site to load. It feels like something has to wake up after there have been no requests for a longer time.
How can I fix that? I have enough RAM, so it should be no problem if whatever goes to sleep just stays awake. ;)
Take a look at the PassengerPoolIdleTime parameter for Passenger.
It states the maximum number of seconds an application instance can be idle before it shuts down to conserve memory.
The default is 300, but you could try to set a higher number and see if that helps.
Also, if you're on a shared host and can't change that setting, you could always write a cron script to hit your site once every x seconds (where x is slightly less than PassengerPoolIdleTime), and update your analytics package to ignore requests from the IP address of the box that's doing the polling.
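For example, a tiny Ruby ping script along those lines; the URL, path, and schedule are placeholders:

# keepalive.rb -- run from cron, e.g.: */4 * * * * /usr/bin/ruby /path/to/keepalive.rb
require 'net/http'
Net::HTTP.get(URI('https://your-app.example.com/'))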
The passenger documentation recommends setting the PassengerPoolIdleTime to 0 on non-shared hosts that are running only a few Rails apps. That should prevent it from getting unloaded unless it's absolutely necessary.
#x0ne, you can set PassengerPoolIdleTime (passenger_pool_idle_time in nginx) in the global server configuration. In my installation of Nginx that's /opt/nginx/conf/nginx.conf.
Here's the part of passenger's documentation that covers PoolIdleTime: http://www.modrails.com/documentation/Users%20guide.html#PassengerPoolIdleTime
