Preventing Timeout (H12 error) on Heroku - ruby-on-rails

So I'm using Heroku to host a simple script, which runs whenever a specific page is loaded. The script takes longer than 30 seconds to run, so Heroku returns an H12 error - Request Timeout (https://devcenter.heroku.com/articles/limits#router). I can't use a background process for this task, as I'm using its run time as a loading screen for the user. I know the process will still complete, but I want a 200 code to be sent when the script finishes.
Is there a way to send a single byte every, say, 20 seconds, so that the request doesn't time out, and to stop whenever the script finishes? (Once a response has started, each byte sent resets a rolling 55-second window before Heroku times out.) Do I have to run another process simultaneously to check whether the longer process is finished, sending a kind of 'heartbeat' to the requesting page to let it know the process is still running and prevent Heroku from timing out? I'm extremely new to Rails; any and all help is appreciated!
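For reference, one way to hold the connection open is a streamed response. A minimal sketch, assuming Rails 4+'s ActionController::Live and a server that supports streaming (such as Puma); run_long_script is a hypothetical stand-in for the actual task:

    class LongTaskController < ApplicationController
      include ActionController::Live

      def show
        response.headers["Content-Type"] = "text/plain"
        worker = Thread.new { run_long_script }  # stand-in for the real work
        # Thread#join(20) returns nil until the thread finishes, so this
        # writes a heartbeat byte every 20s, resetting Heroku's 55s window.
        until worker.join(20)
          response.stream.write(" ")
        end
        response.stream.write("done")
      ensure
        response.stream.close
      end
    end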

Related

How do I diagnose random long response time issues?

My Heroku app occasionally experiences long run times, on the order of 8 seconds (which is the trigger point to receive email warnings about long response times). I originally assumed the issue was related to dyno sleeping, but our new production environment has redundant dynos and shouldn't sleep.
The issue doesn't occur on any specific route -- even a route as simple as the 'ping' route used by the front-end to keep a session alive can produce it. I don't think it changes anything, but the latest example occurred in an OPTIONS request -- and the follow-up request didn't experience any delay at all.
How can I further diagnose this issue? I've examined the logs around the request in question; there were few log entries in that time period, mostly chatter from the Postgres DB that -- if I'm reading it right -- was saying it was up, running, and had no connections currently running code. For some reason, the request just... randomly... took forever.
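One low-effort way to narrow this down is to log every slow action with its timing breakdown, using Rails' built-in instrumentation. A minimal sketch (the 2-second threshold is an arbitrary choice):

    # config/initializers/slow_request_logger.rb
    # Log any controller action slower than 2 seconds, with its DB and view
    # time, so slow requests can be correlated with routes and payloads.
    ActiveSupport::Notifications.subscribe("process_action.action_controller") do |_name, start, finish, _id, payload|
      elapsed = finish - start
      if elapsed > 2.0
        Rails.logger.warn(
          "SLOW REQUEST #{payload[:controller]}##{payload[:action]}: " \
          "total=#{elapsed.round(2)}s db=#{payload[:db_runtime]}ms view=#{payload[:view_runtime]}ms"
        )
      end
    end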

Why can't one of my Rails processes see what the other has committed to the DB?

I'm developing an app on top of Amazon FPS. When you make a payment with FPS, the payment succeeds asynchronously: you make a request and then wait for a POST (an Instant Payment Notification, or IPN) telling you whether the charge completed.
I need the user to see whether the charge completed by the next page load (if possible), so I'm having the server:
1. Charge the user, then
2. Spin in a loop checking the database for a status update, and
3. Time out if it takes too long
Meanwhile, another server process is:
1. Receiving the IPN and
2. Noting the success in the database for the other process to see.
I'm running Unicorn with 3 workers, all logging to the same terminal window. I see the first process begin to spin, repeatedly reporting that the charge is still pending. Then I see the IPN come in, and the second process pick it up and write to the database that it has succeeded. Then I see the first process continue to report that the charge is pending.
Why does it never see the success value that was written to the database?
It feels to me like a transaction issue, so I ran a separate process which loops and outputs the status of the latest charge. When the second process reported that it marked the charge successful, this third independent process agreed. It's just the first server process that's failing to see the updated value.
As far as I can tell, the loop in that first process is not inside a transaction, and so it shouldn't be reading an old snapshot. But perhaps it is? How would I tell?
My Stack:
Unicorn 4.6.3
Rails 4.0
Ruby 2.0
Postgres 9.2
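One thing worth checking in this exact setup is ActiveRecord's per-request SQL query cache, which memoizes identical SELECTs for the duration of a request and can keep serving the stale 'pending' row. A minimal sketch of the polling loop from step 2 with the cache bypassed (Charge and charge_id are assumed names):

    # Poll for the IPN result without being served a cached SELECT.
    deadline = Time.now + 30           # give up after 30 seconds
    until Time.now > deadline
      charge = Charge.uncached { Charge.find(charge_id) }  # forces a fresh query
      break if charge.status == "succeeded"
      sleep 1                          # don't hammer the database
    end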

Background tasks executing immediately and in parallel in Rails

Our Rails web app has to download and unpack archives of HTML pages from FTP, on request, for users to view in the browser.
The archives can be quite big, so the user has to wait while one downloads and unpacks on the server.
I implemented a progress bar by calling fork/Process.detach in the user's request, so that the request finishes but the downloading/unpacking process keeps running in the background. JavaScript rendered in the browser pings our server for status until everything is ready, then redirects the user to the unpacked HTML pages.
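In outline, the fork/detach part looks like this (a sketch; write_status and download_and_unpack are hypothetical helpers backing the JavaScript status polling):

    def unpack
      pid = fork do
        write_status(:downloading)         # hypothetical helper the JS polls
        download_and_unpack(params[:id])   # hypothetical long-running work
        write_status(:done)
      end
      Process.detach(pid)  # let the child be reaped automatically
      head :accepted       # request returns immediately; browser starts polling
    end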
As long as the user requests one archive, everything goes smoothly, but if he tries to run two or more requests at the same time (so that more forks are started), it seems that only one of them completes and the rest expire/time out/get killed by Passenger(?). I suppose it's an issue with Passenger and forking.
I'm not sure if it's possible to fix somehow, so I guess I need to switch to another solution. The solution needs to permit immediate and parallel processing of downloads, so that if a user requests multiple archives, he sees download/decompression progress for all of them at the same time.
I was thinking about running a background rake job immediately, but it seems very slow to start up (there are also a lot of cron rake tasks running every minute on our server). The reason I liked fork was that it was very fast to start. I know there is Delayed Job; we use it heavily for other tasks too. But can it start multiple processes at the same time, immediately, without queuing?
Solved by keeping the fork and using a single DJ worker. This way I can have as many processes starting at the same time as needed, without trouble with Passenger or modifying our product's gemset (which we are trying to avoid, since it caused bugs in the past).
Not sure if forking inside a DJ worker can cause any trouble, so I asked at:
running fork in delayed job
If I were free to modify the gemset, I'd probably use Resque as wrdevos suggested, or Sidekiq, or girl_friday (but that's less likely, since it depends on the server it runs under).
Use Resque: https://github.com/defunkt/resque
More on background jobs and Resque here: https://github.com/blog/542-introducing-resque
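For reference, a Resque job is just a class with a queue name and a self.perform. A minimal sketch (the class and queue names are assumptions):

    # app/jobs/unpack_archive_job.rb
    class UnpackArchiveJob
      @queue = :archives

      def self.perform(archive_id)
        # download/unpack here; each job runs in a forked child of a worker,
        # so parallelism comes from running several workers at once
      end
    end

    # Enqueue from the controller:
    Resque.enqueue(UnpackArchiveJob, archive.id)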

Heroku app takes too long to load at times

My app is basic (1 dyno, 20 MB slug size) and some of the pages take too long to load at times. Using Firebug, I've observed that most of the time the pages load within 3-4 seconds, but sometimes a page takes more than a minute to load (both data points are with the browser cache cleared). The basic HTML response arrived within 500 ms, and the main component of the time was downloading a PNG image (17 KB), for which the wait time (after sending the request) was more than a minute. I cannot understand why this would be the case.
I am using YSlow to analyze the entire page (it gave a B grade), and I think this has something to do with Heroku taking a long time to serve images at times.
I have referred to the question - Why are my basic Heroku apps taking two seconds to load?
As suggested in the answers, I have put a simple cron task on Heroku that accesses the homepage every hour through a URI GET request.
What could I do to improve the speed?
I am considering the following things:
1. Move images to a CDN
2. Set a far-future Expires header as described at http://upstre.am/blog/tag/heroku/ (see the sketch below)
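For item 2, a far-future Cache-Control header on static assets in a Rails 3/4 app is a one-line config change. A sketch (the CDN host in the comment is a placeholder):

    # config/environments/production.rb
    config.serve_static_assets = true                          # Heroku serves assets from the dyno
    config.static_cache_control = "public, max-age=31536000"   # cache for one year
    # For item 1, asset URLs can be pointed at a CDN:
    # config.action_controller.asset_host = "https://cdn.example.com"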
I have put a simple cron task on Heroku that accesses the homepage every hour through a URI GET request.
From what you are describing, you are using a Heroku cron job to ping your app. Unfortunately this will not work; you have to use an external ping service such as Pingdom.
Update: it seems external ping services like Pingdom no longer work either.
Heroku 'idles' dynos if they aren't used for more than 30 minutes, I believe. This means you'll need more than 1 web dyno if you want to keep the app active and ready to load at any time. Worker dynos are never idled.
Basically just set your app to 2 web dynos.
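With the Heroku CLI that is a single command: heroku ps:scale web=2.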

Ruby mod_passenger process timeout

A few Ruby apps I've worked with hang for a long time on slow calls, causing processes to back up on the machine and eventually requiring a reboot. Is there a quick and easy way in Passenger to limit the execution time for a single Apache call?
In PHP, if a process exceeds the max_execution_time setting in php.ini, the process returns an error to Apache and the server keeps merrily plugging away.
I would take a look at fixing the application. Cutting off requests at the web server level is really more of a band-aid that doesn't address the core problem - which is request failures, one way or another. If the Ruby app depends on another service that is timing out, you can patch the code like this, using the standard library's timeout module:
    require 'timeout'

    begin
      status = Timeout.timeout(5) do
        # Something that should be interrupted if it takes too much time...
      end
    rescue Timeout::Error
      # The call exceeded 5 seconds; give up and fail this request gracefully.
    end
This will let the code "give up" and close out the request gracefully when needed.
