Can't figure out what is causing the performance bottleneck in my Rails app

My Rails app, according to my Heroku logs, is serving requests in an average of about 1700 to 2500 milliseconds (this is the entire round trip). I used New Relic to profile my app, and it seems that the majority of each request is spent not in my database but in the "Web Transaction" section of New Relic. The "Controller" category tends to be the slowest across requests, followed by the "SQL - SELECT" segment in the "Database" category.
I'm not quite sure what could be causing the performance bottleneck in my controllers, nor do I think I can dive deeper into New Relic without paying for the premium version. I recently added indexes to the foreign keys of my application, although I do not think this made much of a difference in database response times.
I know this is not enough information to figure out what is causing these bottlenecks, but I do not even know where to start or what info to give. If people could tell me what info is needed to diagnose these issues, that would be helpful.

New Relic for Ruby includes a free, standalone developer mode. When running with RAILS_ENV=development, the New Relic gem adds a route that shows you a detailed profile for each request. Go to http://localhost:3000/newrelic after you hit your app a few times.
The profile includes time for each SQL query, as well as for components of your code. You can use custom instrumentation to break down big chunks of code into smaller segments (or individual methods) that get timed separately. This feature is a lot like the transaction traces you get in the paid Pro version; one major difference is that you wouldn't want to run the free dev mode in production.
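As a rough sketch of what custom instrumentation looks like with the newrelic_rpm gem (the model, method, and metric name below are made up for illustration):

    # app/models/report.rb -- hypothetical model
    class Report < ActiveRecord::Base
      include ::NewRelic::Agent::MethodTracer

      def expensive_summary
        # ... slow aggregation you want to see as its own segment ...
      end

      # Shows up as a separately timed segment in the request profile
      add_method_tracer :expensive_summary, 'Custom/Report/expensive_summary'
    end

Controller actions are already timed automatically, so this is most useful for model, service, or library code that otherwise gets lumped into the "Controller" bucket.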
(Full disclosure: I work for NR. Not many people know about the free dev mode, though, so I thought it was worth mentioning.)
You could potentially make Javascript loading appear even faster with something like head.js, which will load your JS files asynchronously and in parallel.

Take a look at this slide show:
http://www.slideshare.net/drhenner/optimize-the-obvious-7636674
Might not be enough but it goes through some common faults.
Digging a little deeper, take a look at this video: http://windycityrails.org/videos2011/#2
It is longer but gives a lot of places to look.
On a different note: do you use a CDN?
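If you do (or decide to), a minimal sketch of pointing the Rails asset helpers at it, with the hostname as a placeholder:

    # config/environments/production.rb
    # Asset helpers (image_tag, javascript_include_tag, stylesheet_link_tag, ...)
    # will generate URLs on this host instead of your app server.
    config.action_controller.asset_host = "https://cdn.example.com"   # placeholder host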

Related

Can you trust the New Relic "Page average load time"?

Using: Rails 3.2.11 and New Relic (free version).
I have had some problems with an app being quite slow. I have examined it and sped it up quite a lot, but according to New Relic the app is still very slow, especially in the rendering phase. See pic.
According to Pingdom however, it seems to be loading in the matter of 2-4 seconds which is my experience when I visit the website as well.
I am using Memcachier, and this speeds up the pages a lot, but maybe New Relic always measures un-cached controller runs?
My big question is: can you trust the New Relic "Average page load time" as an indicator of how slow your website really is? Would you trust the results from Pingdom more?
New Relic measures the browser experience of real-world users from all over the globe, with various connection speeds, browsers, and computers. As Jesse mentioned, comparing RUM with Pingdom isn't an apples-to-apples comparison. It's also unlikely that the real-world experience on your website will match your own experience, and that's why RUM is so useful.
There are many ways to test the performance of a web page including webpagetest.org and YSlow. These tools might give you some more information about why your page is taking longer to load than you expect.
With access to the full suite of New Relic tools, you can see a geographic breakdown of page load time as described here: https://newrelic.com/docs/mobile-apps/geography-dashboard, where you might discover that connections from a certain location are skewing your results unexpectedly. You can also see the browser breakdown as described here: https://newrelic.com/docs/site/browsers, where you might find that one particular browser is exceptionally slow on your page. If it's related to a browser, that's something you can certainly address. If it's just geography, you can rest easy knowing there's not much you can do besides perhaps a CDN, which addresses connectivity issues in that location.
On the web transactions tab, you can see the browser performance by transaction even with a free subscription and that might help you see that one page is much slower than you realized and give you a target for optimization.
I feel like it's pretty accurate. It injects some JavaScript into the footer of your page and measures the time between the initiating event (like clicking a link or submitting a form) and the page-ready event. See here for more information: https://newrelic.com/docs/features/how-does-real-user-monitoring-work
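For what it's worth, the Ruby agent of that era also exposed manual helpers for injecting that JavaScript yourself when automatic injection doesn't pick up your layout. Roughly like this (double-check the helper names against your agent version's docs before relying on them):

    <%# app/views/layouts/application.html.erb %>
    <html>
      <head>
        <%= ::NewRelic::Agent.browser_timing_header.html_safe %>
      </head>
      <body>
        <%= yield %>
        <%= ::NewRelic::Agent.browser_timing_footer.html_safe %>
      </body>
    </html>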

Scaling to support a massive amount of traffic in a short period of time

Until now, our site has had a modest amount of traffic. None of our developers are big ops guys, but we've stayed ahead of it and kept the site up and running pretty quickly. That said, our dev team is stretched, we've accumulated some technical debt, and there's plenty of opportunity to optimize.
Without getting into specifics, we just found out that we'll be expecting a massive amount of traffic in the near future, in a very short period of time. On the order of several million hits in a few hours. Scaling is one thing, but this is several orders of magnitude greater than what we're seeing now.
We're a Rails app hosted on S3 using ELB, and PostgreSQL.
I wanted to field some recommendations for broad starting points for scaling and load testing given this situation.
Update: Sorry, EC2, late night :)
#LastZactionHero
Pretty interesting question; let me answer in detail. I'm assuming you're talking about an e-commerce application, since enterprise and B2B apps don't usually see spikes like this. You mentioned that you host your Rails app on S3, so let me clear up a couple of things.
1) You can't host a Rails app on S3. S3 is the Simple Storage Service, where you can only store files.
2) I'm guessing you've actually hosted your Rails app on AWS EC2, with an Elastic Load Balancer in front of the EC2 instances, which is pretty good.
3) You have a self-managed PostgreSQL database deployed on an EC2 instance.
If you are running on AWS you're halfway there already, and you can easily scale up and scale down.
I can see one problem in your present setup: your database. AWS offers a database-as-a-service called the Relational Database Service (RDS), which supports MySQL, Oracle, and MS SQL Server.
RDS comes with a lot of features like automatic backups of your database, high IOPS, etc.
But it doesn't support PostgreSQL. You'll need to keep running PostgreSQL on a self-managed EC2 instance, but make sure it's fail-safe and that you have a proper backup and restore system in place.
AWS provides an Auto Scaling API and command-line tools, which are pretty easy to use.
You don't have to worry about bandwidth issues, etc., though I agree with Angelo's answer too.
You can use ElastiCache for caching your app, and use a CDN if you need to speed up your app. RDS can manage up to 30,000 IOPS; it's a monster and will do a lot of the work for you.
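As a hedged sketch of wiring Rails up to an ElastiCache (memcached) cluster with the dalli gem, where the cluster endpoint is a placeholder:

    # Gemfile
    gem 'dalli'

    # config/environments/production.rb
    # Rails.cache (and fragment caching in views) will now hit memcached;
    # the endpoint below is a placeholder for your ElastiCache cluster.
    config.cache_store = :dalli_store, 'my-cluster.abc123.cfg.use1.cache.amazonaws.com:11211'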
Feel free to ask me if you need any kind of help.
(Disclaimer: I am a senior DevOps engineer working for an e-commerce company that uses Ruby on Rails.)
Congratulations and I hope your expectation pans out!!
This is such a difficult question to answer comprehensively given the available information. For example, is your site heavy on DB reads, writes, or both (and is your sharding/replication strategy in line with your DB strain)? Is bandwidth an issue, etc.? The obvious points would be making sure you have access to the appropriate hardware, and that the recipes for whatever you use to provision/deploy your hardware are up to date and good to go. You can often throw hardware at a sudden spike in traffic until you can get to the root of whatever bottlenecks you discover (and yes, you will discover them at inconvenient times!).
Regarding scaling your app, you should at least:
1) Cache whatever you can. Pay attention to cache expiration, etc.
2) Be sure your DB has appropriate indexes set up (essentially, you should have an index on any field you're searching on); see the sketch after this list.
3) Watch your logs closely to identify potentially long queries, N+1 queries, long view renders, etc.
4) Do things like what Shopify outlines in this post: http://www.shopify.com/technology/7535298-what-does-your-webserver-do-when-a-user-hits-refresh#axzz2O0gJDotV
5) Set up a good monitoring system (Monit, God, etc) for each layer of your stack - sudden spikes in traffic can quickly bottleneck your application in unexpected places and lead to more issues. The cascade can happen quickly.
6) Set up cron to automate all those little tasks you currently do manually...that you will probably forget about doing once you're dealing with traffic spikes.
7) Google scaling rails and you'll see tons of good info.
8) etc, etc, etc...
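To make points 2 and 3 concrete, a rough sketch (the table, column, and model names are made up):

    # Point 2: a migration adding an index on a column you search or join on
    class AddIndexToOrdersOnUserId < ActiveRecord::Migration
      def change
        add_index :orders, :user_id
      end
    end

    # Point 3: spotting and fixing an N+1 query
    # N+1: one query for the orders, plus one query per order for its user
    Order.limit(50).each { |order| order.user.name }
    # Eager-loaded: two queries total
    Order.includes(:user).limit(50).each { |order| order.user.name }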
You can use some profiling tools (rubyperf, or something like New Relic, etc.), but whatever results you get from them are best treated as a rough baseline. The simple reason is that your profiling depends on your hardware stack, which will certainly behave differently under real traffic patterns. It's pretty easy to profile a site with one page of static content... incredibly difficult to profile a CMS site with a growing DB and growing traffic.
Good luck!!!

Is it possible to simulate page requests in Rails using rake?

I've been working on a Rails project that's unusual for me in the sense that it's not going to use a MySQL database and will instead roll with MongoDB + Redis.
The app is pretty simple - "boot up" data from MongoDB into Redis, after which point Rails will be ready to take requests from users. Those will consist mainly of pulling data from Redis (I was told it'd be pretty darn fast at this), doing a quick calculation, and sending some of the data back out to the user.
This will be happening ~1500-4500 times per second, with any luck.
Before the might of the user army comes down on the server, I was wondering if there was a way to "simulate" the page requests somehow internally - like running a rake task to simply execute that page N times per second or something of the sort?
If not, is there a way to test that load and then time the requests to get a rough idea of the delay most users will be looking at?
Caveat
Performance testing is a very broad topic, and the right tool often depends on the type and quality of results that you need. As just one example of the issues you have to deal with, consider what happens if you write a benchmark spec for a specific controller action, and call that method 1000 times in a row. This might give a good idea of performance of that controller method, but it might be making the same redis or mongo query 1000 times, the results of which the database driver may be caching. This also ignores the time it'll take your web server to respond and serve up the static assets that are part of the request (this may be okay, especially if you have other tests for this).
Basic Tools
ab, or ApacheBench, is a simple commandline tool that you can use to test the throughput and speed of your app. I usually go to this first when I want to send a thousand requests at a web server, or test how many simultaneous requests my app can handle (e.g. when comparing mongrel, unicorn, thin, and goliath). Because all requests originate from the same server, this is good for a small number of requests, but as the number of requests grow, you'll be limited by the resources on your testing machine (network stack, cpu, and maybe memory).
Benchmark is a standard Ruby class, and is great for quickly spitting out some profiling information. It can also be used with Test::Unit and RSpec. If you want a rake task for doing some quick benchmarking, this is probably the place to start.
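For instance, a minimal sketch of a rake task that drives a route through the full Rails stack and times it with Benchmark (the file name, task name, and env vars are all placeholders):

    # lib/tasks/bench.rake -- hypothetical file
    require 'benchmark'

    namespace :bench do
      desc "Request a path N times through the Rails integration stack and time it"
      task :pages => :environment do
        # An integration session behaves like the `app` object in `rails console`
        session = ActionDispatch::Integration::Session.new(Rails.application)
        n    = (ENV['N'] || 100).to_i
        path = ENV['PATH_TO_TEST'] || '/'

        Benchmark.bm(20) do |x|
          x.report("GET #{path} x#{n}") { n.times { session.get(path) } }
        end
      end
    end

You'd run it with something like rake bench:pages N=500 PATH_TO_TEST=/lookups. Note it runs in a single process with no web server in front, so it measures your app code but not concurrency or network effects, which is exactly the caveat above.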
mechanize - I like using mechanize for quickly scripting an interaction with a page. It handles cookies and forms, but won't go and grab assets like images by default. It can be a good tool if you're rolling your own tests, but shouldn't be the first one to go to.
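A bare-bones mechanize interaction might look like this (the URL and form field are placeholders):

    require 'mechanize'

    agent = Mechanize.new
    page  = agent.get('http://localhost:3000/')   # keeps cookies across requests
    form  = page.forms.first                      # assumes the page actually has a form
    form['query'] = 'something' if form           # hypothetical field name
    agent.submit(form) if form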
There are also some tools that will simulate actual users interacting with the site (they'll download assets as a browser would, and can be configured to simulate several different users). Most notable are The Grinder and Tsung. I'm also currently working on tsung-rails (still very much in development) to make it easier to automate Rails load testing with Tsung, and would love some help if you choose to go in this direction :)
Rails Profiling Links
Good overview for writing performance tests
Great slide deck covering most of the latest tools for profiling at various levels

Will I see a speed change by moving off of Heroku?

We're using Heroku. It's great. I love it. We spend a few thousand a month on it, between instances and databases, and generally couldn't be happier. However, we're scoping a new project that would require us to hit pretty aggressive latency targets -- sub 100ms.
Currently, we're a Rails 3 app processing between 80-90% of our requests in under 100ms, as measured by New Relic. Given that we have to hit the latency target from the client's perspective, I'm going to fudge another 10% hit to that success rate (thus down to about 3/4 of requests).
Yes, there are occasional network blips on Heroku that can cause latency, but we're fine tying our success to Heroku's at least in the short term.
If we want to hit the 100ms latency target for 95% of requests, I see 3 options:
a) Optimize our current Rails stack on Heroku
b) Move to straight AWS, or another hosting provider
c) Rebuild the functionality in something I feel more confident could hit the latency target. Probably Java.
a) is obviously the prime candidate, but I've already picked off the low-hanging fruit. c) is almost certainly the most likely to succeed, but it would also take the most work (most of our engineers today came from Java, so we don't have any Ruby->Java penalty). b) is kind of a dark horse: it could be better or it could be worse -- I don't know how to tell without trying it out and load testing, but then that's basically just doing it and seeing if it was successful.
I'm trying to decide between them, or find a way to decide between them. Any advice? experience?
*The majority of our requests do not require us to render html.
Rewriting generally doesn't give you the return you're expecting.
Personally, I would look at optimizing your existing application. 100ms is more than achievable if the app is architected properly, and another host will not really give you anything speed-wise that Heroku won't, while adding a load of headaches on top.
We have numerous Rails apps running on Heroku, some of which are consistently returning less than 50ms response times.

Diagnosing Rails 3 Heroku Slowness

I have a Rails 3 app that I am running on Heroku. The app is usually really fast but sometimes I'll get cases where the app seems to hang for upwards of 2 minutes before finally returning the requested page.
I have the New Relic addon installed and there doesn't seem to be anything sticking out at me. It seems to be kind of sporadic and doesn't seem to be connected to a particular controller/action.
How would you suggest I go about pinpointing the cause of this problem?
http://github.com/kyledecot/skateparks-web
Always check the logs. When it happens, immediately go check your logs. Pretty sure all SQL queries are logged and timed, and you might want to add logging and timing to some of your own service calls.
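For your own service calls, a quick hedged sketch using ActiveSupport's Benchmark.ms (the service class and method are made up):

    # Wrap a suspect call and log how long it took
    elapsed = Benchmark.ms { ExternalApiClient.fetch_data }   # hypothetical service call
    Rails.logger.info "ExternalApiClient.fetch_data took #{elapsed.round(1)}ms"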
If you upgrade to the Pro level of New Relic, you can get detailed traces specifically of your slow transactions. Turn up your Transaction Trace threshold to a large number (1s is pretty big), and wait for traces to show up. You'll see a detailed breakdown of the performance of an individual request, including SQL queries.
(Full disclosure: I work for New Relic.)
