Rails 4 on Heroku extremely slow with loading

I've got a Heroku deployment of my Rails 4 app and it's proving to be extremely slow. I'm not sure if my location is a factor, as I'm based in Australia.
I've got the New Relic add-on installed, and below is the problem I'm seeing.
Category             Segment    % Time   Avg calls   Avg Time (ms)
View layouts/users   Template   98.4     1.0         16,800
Based on this breakdown, I can see that the layouts/users template is the performance problem (nearly 16.8 seconds!).
Is there a good way to profile this to find out exactly which functions are causing the problem, and what is the best way to fix them?
Another important thing to note is that when I go to the map report, it shows an End User time of 19.5 seconds, which accounts for a lot of the total.

When an app on Heroku has only one web dyno and that dyno doesn't receive any traffic for an hour, the dyno goes to sleep.
When someone then accesses the app, the dyno manager automatically wakes the web dyno to run the web process type. This causes a delay for that first request.
Are you noticing similar behaviour?
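
If the slowness only shows up on the first request after an idle period, the idling described above is the likely culprit. A common workaround is to ping the app periodically so a lone dyno never idles out. Here is a minimal sketch, assuming you run it from the Heroku Scheduler add-on (say, every 10 minutes); the task name and APP_URL variable are illustrative, not something from your app:

```ruby
# lib/tasks/keep_alive.rake -- hypothetical task, scheduled externally
require "net/http"

namespace :heroku do
  desc "Ping the app so a lone web dyno is not idled out"
  task :keep_alive do
    url = URI(ENV.fetch("APP_URL", "https://your-app.herokuapp.com/"))
    response = Net::HTTP.get_response(url)
    puts "Pinged #{url}: HTTP #{response.code}"
  end
end
```

If warm requests are also slow, a profiler such as rack-mini-profiler (or New Relic's transaction traces) will point at the slow parts of the layouts/users template rather than the dyno lifecycle.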

Related

Heroku: How expensive could running my application be?

I'm about to release an iOS app, and deploy its backend (rails backend that serves the iOS app) to heroku.
I have very little knowledge when it comes to the practical price you will pay based on traffic, etc. This link (http://notes.ericjiang.com/posts/881) states... Nowadays, especially with faster code and faster computers, a standard 512MB dyno can power websites with tens of thousands of hits per hour.
I'm trying to get a rough estimate of how much running my backend on Heroku could cost me. What's the best way to figure this out? The pricing is all very straightforward; it basically just comes down to how many dynos I'm going to need.
If I get 5 beta testers to run my iOS app for a 10 minute window, can I extrapolate some statistics as to how much my backend is being used? Is it the 'hits' that matter, or the 'data' transferred, or the 'time' the backend is actively doing something, like queueing up some resulting data?
Is there a formula to figure it out? Let's say a user averages 10 hits per minute, and I constantly have an average of 5,000 users. That would be 3 million hits per hour. What exactly should I be looking for in trying to determine accurate pricing for my first backend?
While Heroku does have some limits around bandwidth (not requests), for the most part your cost is close to fixed.
Monthly pricing is typically made up of a combination of:
Dynos
Databases
Addons
Premium support
Heroku provides a price calculator on their website. Further, standard (non-hobby) dynos and up include metrics on CPU and memory usage.
My suggestion if you're just starting out? Start with one web dyno and a Postgres database. Beta test your app and check your metrics. A Rails app on a single Standard 1X dyno can handle a reasonable amount of traffic (depending on what else it might be doing), and if you need to add more dynos it's only a single CLI command (heroku ps:scale web=2) away.
Hope that helps.

New Relic's pings not improving cold start

There's a similar question about AppHarbor on Stack Overflow, but the user didn't try using New Relic to overcome the problem.
I deployed my ASP.NET MVC project on AppHarbor. It's very easy to configure and you can even set up automatic deployments from Git. However, as my website is still mainly used only by me, I was getting very long cold starts (over 15 seconds). To avoid them, I installed New Relic. The idea was to monitor the application and, at the same time, create periodic pings that, according to "a lot of people", would drastically reduce the loading time.
It's not working. New Relic is correctly pinging my application every minute, but I still get very long cold starts. For instance, 5 minutes ago I got a cold start of 16 seconds; one minute later, the page loaded in less than a second.
I know I could have used Pingdom or StillAlive to achieve the same result:
How do I improve app performance on AppHarbor?
I wouldn't like to do that because I like New Relic, and I don't want a lot of add-ons on AppHarbor as they will slow down my website. Do you have any idea what might be causing this?
I'm not familiar with AppHarbor's setup, but if it's using IIS, the pinging is just keeping the application pool from reaching its idle timeout. There is also a default IIS setting that recycles the application pool every 29 hours regardless of the number of requests. It's normally best to let it recycle once in a while, so working around that may not be in your best interest.
Your best bet is to reduce the amount of work that happens on application start. Precompiling your views is a good place to start. And heck, Stack Exchange/Stack Overflow precompiles views to avoid the application start-up cost.

High traffic volume in short amount of time

I'm looking for some advice here. My school's student section registration process is online and involves around 6,000 students.
Seating is assigned on a first come, first served basis. Every year they open the site at noon, and floods of people get on and try to register as fast as possible to get good seats. Every year, without fail, the server crashes and everyone is mad.
After several years of being frustrated myself, I've offered to redo their registration system.
My plan is to rewrite it in ruby on rails, and use heroku for hosting.
Does a heroku dyno only handle one request at a time?
Heroku scales up to 50 dynos. Will that be enough to handle around 6,000 users with about 5 pageviews per transaction in a short amount of time, say a half hour?
Any helpful strategies or tips you can give me before I dive into this project?
Does a heroku dyno only handle one request at a time?
Yes. Heroku dynos are single threaded.
Heroku scales up to 50 dynos. Will that be enough to handle around 6,000 users with about 5 pageviews per transaction in a short amount of time, say a half hour?
This depends on how fast your page loads. For argument's sake, let's pretend it takes 2s per page request (as per Google Analytics' recommendation) and you need to serve 6,000 users x 5 page views / 30 minutes = 1,000 page views per minute.
At 2s per page load, a single dyno would serve 30 page views per minute. At 50 dynos, that would be 1,500 page views per minute. This would let you exceed your overall goal and leave some room for error, but if all 6,000 users hit the page at once, a single Heroku app may not be able to keep up, depending on your timeout. You would need to implement a user queue system, explained below.
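As a back-of-the-envelope check of the numbers above (single-threaded dynos and a 2s response time are both assumptions; measure your real figures):

```ruby
users            = 6_000
views_per_user   = 5
window_minutes   = 30
seconds_per_page = 2.0

required_per_minute = users * views_per_user / window_minutes        # => 1000
capacity_per_dyno   = 60 / seconds_per_page                          # => 30.0 pages/min
dynos_needed        = (required_per_minute / capacity_per_dyno).ceil # => 34

puts "#{required_per_minute} pages/min needed; ~#{dynos_needed} dynos at this response time"
```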
Any helpful strategies or tips you can give me before I dive into this project?
All that said, a 2s load time will vary depending on the assets your page needs to load, how much it interacts with the database, its queries, caching, etc. Your page could also potentially serve much faster.
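For the database side of that, even a few seconds of caching goes a long way when a thousand requests a minute want the same data. A small sketch, assuming a cache store (Memcached or Redis) is configured; the Seat model and cache key are made up for illustration:

```ruby
# Cache the shared seat map briefly so concurrent requests don't all hit the DB.
def seat_map
  Rails.cache.fetch("seat_map", expires_in: 10.seconds) do
    Seat.order(:section, :row, :number).to_a
  end
end
```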
You also need to worry about the initial hit from all the users arriving at once. This could be handled by a first come, first served queue system (sketched below), similar to the one Ticketmaster uses if you've ever been on their site. It could be built on AWS SQS or your queue system of preference.
With a user queue and caching of your assets and common database queries, you should be able to accomplish this with 50 or fewer dynos.
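For the queue itself, here is a rough sketch of a waiting room built on Redis rather than SQS (a Heroku Redis add-on and the redis gem are assumed; the class name, key, and CAPACITY value are all hypothetical and should be tuned from load tests):

```ruby
require "redis"

class WaitingRoom
  CAPACITY = 200            # how many visitors may register at once (assumed)
  KEY      = "waiting_room" # Redis list used as a FIFO queue

  def initialize(redis = Redis.new(url: ENV["REDIS_URL"]))
    @redis = redis
  end

  # Add a visitor (e.g. a session id) to the back of the queue if not already queued.
  def join(visitor_id)
    @redis.rpush(KEY, visitor_id) unless position(visitor_id)
  end

  # 0-based position in the queue, or nil if the visitor is not queued.
  def position(visitor_id)
    @redis.lrange(KEY, 0, -1).index(visitor_id.to_s)
  end

  # A visitor may proceed once they are within the first CAPACITY slots.
  def admitted?(visitor_id)
    pos = position(visitor_id)
    !pos.nil? && pos < CAPACITY
  end

  # Remove a visitor when they finish (or abandon) registration so the queue advances.
  def leave(visitor_id)
    @redis.lrem(KEY, 0, visitor_id)
  end
end
```

A before_action in the registration controller could then redirect anyone not yet admitted to a "you are number N in line" page that refreshes every few seconds.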
EDIT: I'm taking your word for it that Heroku will run 50 web dynos. Their pricing page shows 24 as the max, but I cannot find any information one way or the other.
Does a heroku dyno only handle one request at a time?
It depends on the web server you use (https://devcenter.heroku.com/articles/dynos#dynos-and-requests). If you want more concurrency within a dyno, I'd suggest taking a look at something like Puma.
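For example, a config/puma.rb along the lines of Heroku's commonly recommended setup would give each dyno several threads and worker processes (the worker and thread counts below are assumptions to tune for your dyno size):

```ruby
# config/puma.rb -- minimal sketch
workers Integer(ENV.fetch("WEB_CONCURRENCY", 2))
threads_count = Integer(ENV.fetch("RAILS_MAX_THREADS", 5))
threads threads_count, threads_count

preload_app!

port        ENV.fetch("PORT", 3000)
environment ENV.fetch("RACK_ENV", "development")

on_worker_boot do
  # Re-establish Active Record connections in each forked worker.
  ActiveRecord::Base.establish_connection
end
```

The Procfile would then point at it with `web: bundle exec puma -C config/puma.rb`.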
Heroku scales up to 50 dynos. Will that be enough to handle around 6,000 users with about 5 pageviews per transaction in a short amount of time, say a half hour?
Any helpful strategies or tips you can give me before I dive into this project?
You can have more than 50 dynos. A specific answer for your app is going to be far better than a guess or generalization. Run a load test against your site (e.g., using Blitz) and find out the real numbers. Costs for add-ons are pro-rated per second, so you only pay for the period you have them installed; just make sure you uninstall or downgrade Blitz once you've finished your test.

Heroku request taking 100ms, intermittently times out

After performing load testing against an app hosted on Heroku, I am finding that the most DB-intensive request takes 50-200ms depending on load. It never gets slower, no matter the load. However, seemingly at random, the request will outright time out (30s or more).
On Heroku, why might a relatively high performing query/request work perfectly 8 times out of 10 and outright timeout 2 times out of 10 as load increases?
If this is starting to seem like a question for Heroku itself, I'm looking to first answer the question of whether "bad code" could somehow cause this issue -- or if it is clearly a problem on their end.
A bit more info:
Multiple Dynos
Cedar Stack
Dedicated Heroku DB (16 connections, 1.7 GB RAM, 1 comp. unit)
Rails 3.0.7
Thanks in advance.
Since you have multiple dynos and a dedicated DB instance, and are paying hundreds of dollars a month for their service, you should ask Heroku.
Edit: I should have added that when you check your logs, you can look for a line that says "routing". That is the Heroku routing layer that takes HTTP requests and sends them to your app. You can add those up to see how much time is being spent outside your app. Unfortunately, I don't know how easy it is to get large volumes of those logs for a load test.
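If you'd rather measure that from inside the app than from the logs, Heroku's router adds an X-Request-Start header you can compare against the current time (New Relic reports this as "request queuing" when the header is present). A rough sketch follows; the middleware name is made up and the header's exact format has varied over time (assumed here to be milliseconds since the epoch), so treat the arithmetic as approximate:

```ruby
class QueueTimeLogger
  def initialize(app)
    @app = app
  end

  def call(env)
    if (raw = env["HTTP_X_REQUEST_START"])
      started_ms = raw.to_s.sub(/\At=/, "").to_f  # strip the "t=" prefix some stacks use
      queued_ms  = (Time.now.to_f * 1000) - started_ms
      Rails.logger.info("queue time: #{queued_ms.round}ms") if queued_ms > 0
    end
    @app.call(env)
  end
end

# config/application.rb (hypothetical placement):
#   config.middleware.insert_before 0, QueueTimeLogger
```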

Response time increasing (worsening) over time with consistent load

Ok. I know I don't have a lot of information. That is, essentially, the reason for my question. I am building a game using Flash/Flex and Rails on the back-end. Communication between the two is via WebORB.
Here is what is happening. When I start the client, an operation calls the server every 60 seconds (not much, right?), which results in two database SELECTs and an UPDATE, and a response back to the client.
This repeats every 60 seconds. I deployed a test version on Heroku, and New Relic RPM told me that response time degraded over time. One client, with one task every 60 seconds. Over several hours, the response time drifted from 150ms to over 900ms.
I have been able to reproduce this in my development environment (my Macbook Pro) so it isn't a problem on Heroku's side.
I am not doing anything sophisticated (by design) in the server app. An action gets called, gets some data from the database, performs an AR update and then returns a response. No caching, etc.
Any thoughts? Anyone? I'd really appreciate it.
What does the development log say is slow for those requests: the view or the DB? If it's the DB, check how many records there are in the database and see how the queries can be optimized. Maybe you need to index some fields.
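For instance, if the log shows the SELECTs slowing down as a table grows, an index on the looked-up column usually helps; a minimal sketch, with the table and column names (game_states, player_id) made up for illustration:

```ruby
class AddIndexToGameStates < ActiveRecord::Migration
  def change
    add_index :game_states, :player_id
  end
end
```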
Are you running locally in development or production mode? I've seen Rails apps' performance (memory usage) degrade much faster over time in development mode. I'm not sure whether you can run an app on Heroku in development mode, but if I were you I would check into that.
