Heroku app takes too long to load at times - ruby-on-rails

My app is basic (1 dyno, 20 MB slug size) and some pages take too long to load at times. Using Firebug, I've observed that most of the time the pages load within 3-4 seconds, but sometimes a page takes more than a minute to load (both measurements taken with the browser cache cleared). The basic HTML response came back within 500 ms; the bulk of the time went to downloading a 17 KB PNG image, for which the wait time (after the request was sent) was over a minute. I cannot understand why this would be the case.
I am using YSlow to analyze the entire page (it gave a B grade), and I think this has something to do with Heroku occasionally taking a long time to serve images.
I have referred to the question - Why are my basic Heroku apps taking two seconds to load?
As suggested in the answers, I have put a simple cron task in heroku that accesses the homepage every hour through a URI GET request.
What could I do to improve the speed?
I am considering the following things:
1. Move images to a CDN
2. Put a far-future Expires header on assets as described in http://upstre.am/blog/tag/heroku/ (a rough sketch of both ideas follows below)
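For what it's worth, both ideas boil down to a few lines of Rails config. A minimal sketch, assuming Rails 3.1+ with the app itself serving /public on Heroku; the CloudFront host name is a hypothetical placeholder:

# config/environments/production.rb -- a minimal sketch, assuming Rails 3.1+ on Heroku
MyApp::Application.configure do
  # Let the dyno serve files under /public and attach a far-future Cache-Control
  # header so browsers (and any CDN in front) keep images, CSS and JS for a year.
  config.serve_static_assets  = true
  config.static_cache_control = "public, max-age=31536000"

  # Option 1 from the list above: serve assets from a CDN instead of the dyno.
  # The host below is a hypothetical CloudFront distribution.
  # config.action_controller.asset_host = "https://d1234example.cloudfront.net"
end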

I have put a simple cron task in heroku that accesses the homepage every hour through a URI GET request.
From what you are describing, you are using a Heroku cron job to ping your app. Unfortunately this will not work; you have to use an external ping service such as Pingdom.
Update: it seems external ping services like Pingdom no longer work either.
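For reference, the hourly "hit the homepage" idea amounts to nothing more than a small rake task like the sketch below (the task name and PING_URL are hypothetical); the point of the answer above is that, triggered from Heroku's own cron, it does not prevent idling.

# lib/tasks/ping.rake -- a sketch of the kind of hourly ping the question describes
require "net/http"

namespace :app do
  desc "Hit the homepage so the single web dyno is less likely to idle out"
  task :ping do
    uri = URI(ENV.fetch("PING_URL", "https://your-app.herokuapp.com/"))
    response = Net::HTTP.get_response(uri)
    puts "Pinged #{uri} -> HTTP #{response.code}"
  end
end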

Heroku 'idles' dynos if they aren't used for more than 30 minutes, I believe. This means you'll need more than 1 web dyno if you want to keep the app active and ready to respond at any time. Worker dynos are never idled.
Basically, just set your app to 2 web dynos.

Related

Rails 4 on Heroku extremely slow with loading

I've got a Heroku deployment of my Rails 4 app and it's proving to be extremely slow. I'm not sure if my location is a factor, as I'm based in Australia.
I've got the New Relic add-on and below is the problem that I'm seeing.
Category             Segment    % Time   Avg calls   Avg Time (ms)
View layouts/users   Template   98.4     1.0         16,800
Based on this breakdown, I see that the layouts/users template is the performance problem (nearly 16.8 seconds!).
Is there a good way to profile this to find out exactly which functions are causing the problem, and what the best ways to fix them are?
Another important thing to note: when I go to the app map report, it shows an End User time of 19.5 seconds, which accounts for a lot of the total.
When an app on Heroku has only one web dyno and that dyno doesn't receive any traffic in 1 hour, the dyno goes to sleep.
When someone accesses the app, the dyno manager automatically wakes up the web dyno to run the web process type. This causes a delay for that first request.
Are you noticing similar behaviour?

High traffic volume in short amount of time

I'm looking for some advice here. My school's student section registration process is online and involves around 6,000 students.
Seating is assigned on a first-come, first-served basis. Every year they open the site at noon, and floods of people get on and try to register as fast as possible to get good seats. Every year, without fail, the server crashes and everyone is mad.
After several years of being frustrated myself, I've offered to redo their registration system.
My plan is to rewrite it in ruby on rails, and use heroku for hosting.
Does a heroku dyno only handle one request at a time?
Heroku scales up to 50 dynos. Will that be enough to handle around 6,000 users with about 5 pageviews per transaction in a short amount of time, say a half hour?
Any helpful strategies or tips you can give me before I dive into this project?
Does a heroku dyno only handle one request at a time?
Yes. Heroku dynos are single threaded.
Heroku scales up to 50 dynos. Will that be enough to handle around 6,000 users with about 5 pageviews per transaction in a short amount of time, say a half hour?
This depends on how fast your page loads. For argument's sake, let's say it takes 2s per page request (per Google Analytics' recommendation) and you need to serve 6,000 users x 5 page views / 30 minutes = 1,000 page views per minute.
At 2s per page load, one single dyno would load 30 page views per minute. At 50 dynos, this would be 1500 page views per minute. This would obviously allow you to exceed your overall goal and leave you some room for error, but if all 6000 users are hitting the page at once then a single Heroku app may not be able to keep up depending on your timeout. You would need to implement a user queue system - explained below.
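The same arithmetic as a quick sanity check (the 2s per request and the 30-minute window are the assumptions from the paragraph above):

# Back-of-the-envelope throughput math for the numbers above
total_page_views    = 6_000 * 5                 # 30,000 page views
needed_per_minute   = total_page_views / 30.0   # 1,000 page views per minute
per_dyno_per_minute = 60 / 2.0                  # 30 page views per dyno at 2s each
dynos_needed        = (needed_per_minute / per_dyno_per_minute).ceil
puts dynos_needed                               # => 34, so 50 dynos leaves headroom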
Any helpful strategies or tips you can give me before I dive into this project?
All that said, a 2s load time may vary depending on the assets your page needs to load, how much it needs to interact with a database, its queries, caching, etc. Your page could also potentially serve much faster.
You also need to worry about the initial hit of all the users. This could be taken care of via a first-come, first-served queue system - similar to the one used by Ticketmaster, if you've ever used their site. This could be accomplished via AWS SQS or your preferred queue system.
With a user queue and caching of your assets and common database queries, you should be able to accomplish this with 50 or fewer dynos.
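A very rough sketch of that hand-off, assuming the aws-sdk-sqs gem; the controller, queue URL and message payload are all hypothetical names, and a Redis-backed list would work just as well:

# app/controllers/registrations_controller.rb -- sketch only, not a complete system
require "aws-sdk-sqs"

class RegistrationsController < ApplicationController
  QUEUE_URL = ENV["REGISTRATION_QUEUE_URL"]   # hypothetical SQS queue

  # Instead of assigning seats inline, enqueue the request and return immediately,
  # so a burst of 6,000 students doesn't tie up the web dynos; a worker drains the
  # queue in arrival order and assigns seats first come, first served.
  def create
    sqs = Aws::SQS::Client.new
    sqs.send_message(
      queue_url:    QUEUE_URL,
      message_body: { student_id: params[:student_id], requested_at: Time.now.to_f }.to_json
    )
    render json: { status: "queued" }, status: :accepted
  end
end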
EDIT: I'm taking your word for it that Heroku will run 50 web dynos. They show 24 as max on their pricing page, but I cannot find any info one way or another.
Does a heroku dyno only handle one request at a time?
It depends on the web server you use (https://devcenter.heroku.com/articles/dynos#dynos-and-requests). If you want more concurrency within a dyno, I'd suggest taking a look at something like Puma.
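A minimal config/puma.rb along those lines (the worker and thread counts here are just example defaults to tune per app):

# config/puma.rb -- sketch of a concurrent setup for a Heroku dyno
workers Integer(ENV["WEB_CONCURRENCY"] || 2)   # forked worker processes per dyno
threads_count = Integer(ENV["MAX_THREADS"] || 5)
threads threads_count, threads_count           # min, max threads per worker

preload_app!                                   # load the app once, then fork workers
port        ENV["PORT"] || 3000
environment ENV["RACK_ENV"] || "production"

on_worker_boot do
  # Each forked worker needs its own database connection (ActiveRecord).
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord)
end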
Heroku scales up to 50 dynos. Will that be enough to handle around 6,000 users with about 5 pageviews per transaction in a short amount of time, say a half hour?
Any helpful strategies or tips you can give me before I dive into this project?
You can have more than 50 dynos. A specific answer for your app is going to be way better than a guess or a generalization. Run a load test against your site (e.g., using Blitz) and find out the real numbers. Costs for add-ons are pro-rated per second, so you only pay for the period you have it installed. So make sure you uninstall or downgrade Blitz once you've finished your test.

Uploading multiple files to heroku serially locks up all dynos

It's my understanding that when I upload a file to my Heroku instance it's a synchronous request, and I will get a 200 back when the request is done, which means my upload has been processed and stored by Paperclip.
I am using plupload, which does a serial upload (one file at a time). On Heroku I have 3 dynos, yet my app becomes unresponsive and I get timeouts trying to use it. My upload should really only tie up at most a single dyno while all the files are being uploaded, since it's done serially and file 2 doesn't start until a response is returned for file 1.
As a test I bumped my dynos to 15 and ran the upload. Again I see the POSTs come into the logs, then I start seeing output of Paperclip commands (can't remember whether it was identify or convert), and I start getting timeouts.
I'm really lost as to why this is happening. I do know I 'can' upload directly to S3, but my current approach should be just fine. It's an admin interface that is only used by a single person, and again it should tie up at most a single dyno since all the uploaded files are sent serially.
Any ideas?
I've been working on the same problem for a couple of days. The problem, as far as I understand it, is that when uploading files through Heroku, your requests are still governed by the 30-second timeout limit. On top of this, it seems that subsequent requests issued to the same dyno (application instance) can cause it to accumulate response time and terminate. For example, if you issue two subsequent requests to your web app that each take 15 seconds to upload, you could receive a timeout, which will force the dyno to terminate the request. This is most likely why you are receiving timeout errors. If this continues on multiple dynos, you could end up with an application crash, or just generally poor performance.
What I ended up doing was using jquery-file-upload. However, if you are uploading large files (multiple MBs), you will still experience errors while Heroku is processing the uploads. In particular, I used this technique to bypass Heroku entirely and upload directly from the client's browser to S3. I upload to a temp directory, then use CarrierWave to 're-download' the file and process medium and thumbnail versions in the background by pushing the job to Qu. Now there are no timeouts, but the user has to wait for the jobs to get processed in the background.
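The 're-download' step in that flow looks roughly like the sketch below; Photo, PhotoUploader and ProcessVersionsJob are hypothetical names, and it assumes CarrierWave with an ActiveJob-style queue rather than Qu specifically:

# app/models/photo.rb -- sketch of the browser-to-S3 flow described above
class Photo < ActiveRecord::Base
  mount_uploader :image, PhotoUploader   # CarrierWave uploader

  # The browser uploads straight to a temp key on S3 and only posts that key to us.
  # remote_image_url= makes CarrierWave fetch the file from S3 itself, so the slow
  # transfer never passes through a web dyno and can't hit the 30-second timeout.
  def self.create_from_s3_key(key)
    photo = new
    photo.remote_image_url = "https://#{ENV['S3_BUCKET']}.s3.amazonaws.com/#{key}"
    photo.save!
    ProcessVersionsJob.perform_later(photo.id)   # thumbnails happen in the background
    photo
  end
end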
Also important to note is that Heroku dynos operate independently of each other, so by increasing the number of web dynos you are creating more instances of your application for other users, but each one is still subject to the 30-second timeout and 512 MB of memory. Regardless of how many dynos you have, you will still have the same issues. More dynos != better performance.
You can use something like DropzoneJS to split your files into a queue and send them separately. That way the requests won't time out.

Heroku. Request taking 100ms, intermittently Times out

After performing load testing against an app hosted on Heroku, I am finding that the most DB-intensive request takes 50-200 ms depending upon load. It never gets slower, no matter the load. However, seemingly at random, the request will outright time out (30 s or more).
On Heroku, why might a relatively high-performing query/request work perfectly 8 times out of 10 and outright time out 2 times out of 10 as load increases?
If this is starting to seem like a question for Heroku itself, I'm looking to first answer the question of whether "bad code" could somehow cause this issue -- or if it is clearly a problem on their end.
A bit more info:
Multiple Dynos
Cedar Stack
Dedicated Heroku DB (16 connections, 1.7 GB RAM, 1 comp. unit)
Rails 3.0.7
Thanks in advance.
Since you have multiple dynos and a dedicated DB instance and are paying hundreds of dollars a month for their service, you should ask Heroku.
Edit: I should have added that when you check your logs, you can look for lines that say "routing". That is the Heroku routing layer that takes HTTP requests and sends them to your app. You can add those up to see how much time is being spent outside your app. Unfortunately I don't know how easy it is to get large volumes of those logs for a load test.

For Ruby and Rails, how to print out the true page rendering time on the webpage?

If I use this in the controller:
def index
  @time_start_in_controller = Time.now
  ...
end
and at the end of the view, use
<%= "took #{Time.now - @time_start_in_controller} seconds" %>
But isn't the time measured in the view not the true end of rendering, because the view still has to be merged with the layout and so forth? What is a more accurate way (as accurate as possible) to print the page generation time right on the webpage?
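One sketch of a way to capture more of the server-side time is to wrap the whole action-plus-render cycle in an around filter (around_filter on Rails 3, around_action on newer versions; the filter name and the HTML comment are assumptions):

# app/controllers/application_controller.rb -- sketch, measures controller + view + layout
class ApplicationController < ActionController::Base
  around_filter :measure_render_time   # use around_action on Rails 4+

  private

  def measure_render_time
    started = Time.now
    yield                              # runs the action and renders view + layout
    elapsed = Time.now - started
    # The body is fully rendered here, so append the timing instead of interpolating it.
    response.body = response.body.sub("</body>", "<!-- rendered in #{'%.3f' % elapsed}s --></body>")
  end
end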
(Update: also, the console log shows the request taking 61 ms, but the page definitely takes 2 to 3 seconds to load, and the network I am using is fast, at home or at work, 18 Mbps or higher with a ping of maybe 30 ms.)
Update: it is a bit strange that if I use the HTTP performance-testing tool ab
ab -n 10 http://www.my-web-site.com:8080
it takes 3 seconds total for 10 requests, but if I use Firefox or Chrome to load the page, each page load is about 3 seconds. This is tunneled from my work computer back to my notebook running Rails 3, but that shouldn't make a difference, because I run the ab command locally and use Firefox locally too.
In a typical production environment, static content (images, CSS, JS) is handled by the web server (e.g. Apache, nginx), not your Rails server, so you should check its logs as well. If you are serving static content from your Rails server, that could be your problem right there.
If your browser time is slow but the time taken in Rails (according to the logs) is fast, that can mean many things, including but not limited to:
network speed is slow
your DNS server is slow and the browser can't resolve your domain quickly (this happens, for instance, if you use GoDaddy as your DNS server; they throttle DNS lookups)
request concurrency exceeds how many threads you have in Rails
One way to debug these types of performance issues is to put something in front of the Rails server (for example HAProxy) and turn its logging to full, as it will show how many requests are waiting, how long the actual request/response transfer took, and how long your Rails thread took to process the request.