Rails completes quickly, but the page takes much longer to finish loading

I've been running into this problem lately. I can spot the following in the server console:
Completed 200 OK in 2748ms (Views: 0.2ms | ActiveRecord: 42.8ms)
As you can see, ActiveRecord and the views take well under a second, but it still takes around 4 seconds to get a response from the server. I've run multiple ActiveRecord optimizations, but I can't find out where that extra time is going.

Give rack-mini-profiler a try.
It's middleware that displays a speed badge on every HTML page, and it's designed to work in both production and development (a minimal setup sketch follows the feature list below).
Features:
Database profiling - Currently supports Mysql2, Postgres, Oracle (oracle_enhanced ~> 1.5.0) and Mongoid3 (with fallback support to ActiveRecord)
Call-stack profiling - Flame graphs showing time spent by gem
Memory profiling - Per-request memory usage, GC stats, and global allocation metrics
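As a minimal setup sketch (assuming a standard Rails app; in production the badge is hidden until you authorize each request, and current_user.admin? below is an assumed helper from your own app):

# Gemfile
gem 'rack-mini-profiler'

# app/controllers/application_controller.rb
# Authorize profiling in production for admins only
# (use before_filter instead of before_action on Rails 3):
before_action do
  Rack::MiniProfiler.authorize_request if current_user.try(:admin?)
end

After a bundle install and a server restart, the badge appears in the top-left corner of each rendered page.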

Related

Unexplained high response time on Heroku environment

I'm using a paid Heroku plan ($35/mo 2x web dyno, $50/mo silver Postgres DB) for a small internal app with typically 1-2 concurrent users. The database snapshot size is less than 1 MB.
The app is Rails 4.1. In the last two weeks there has been a significant performance drop in the production environment, where Chrome dev tools reports response times of around 8 s.
Typical:
Total               8.13 s
Stalled             3.643 ms
DNS Lookup          2.637 ms
Initial connection  235.532 ms
SSL                 133.738 ms
Request sent        0.546 ms
Waiting (TTFB)      3.43 s
Content Download    4.47 s
I'm using a Nitrous dev environment and get sub-1 s responses on the dev server with non-precompiled assets (and a mirrored db).
I'm a novice programmer and am not clear on how to debug this. Why would I see 800% slower performance than the dev environment on an $85/mo+ Heroku plan? Given my current skill level, my app is probably poorly optimized (there are a few N+1 queries in there...), but how bad can it be when production has 1-2 concurrent users?
Sample from logs if it helps:
sample#current_transaction=16478 sample#db_size=18974904bytes sample#tables=28 sample#active-connections=6 sample#waiting-connections=0 sample#index-cache-hit-rate=0.99929 sample#table-cache-hit-rate=0.99917 sample#load-avg-1m=0.365 sample#load-avg-5m=0.45 sample#load-avg-15m=0.445 sample#read-iops=37.587 sample#write-iops=36.7 sample#memory-total=15405616kB sample#memory-free=1409236kB sample#memory-cached=12980840kB sample#memory-postgres=497784kB
Sample from server logs:
Completed 200 OK in 78ms (Views: 40.9ms | ActiveRecord: 26.6ms)
I'm seeing similar numbers on the dev server, but the actual visual performance is night and day: dev responds as you would expect, with sub-1 s response and render, while production has a 4-5 s delay.
I don't think it's related to the ISP, as was suggested, because I've actually been traveling and have seen identical performance problems from the USA and Europe.
Add the NewRelic add-on to your app on Heroku and explore what's going on; you can do it from Heroku's dashboard. Choose the free plan and you will get a two-week trial of the full functionality. After the trial period there will be some constraints, but it's still enough for measuring the performance of a small app.
You can also add the Logentries add-on to get access to the app's log history via a web interface.
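Both can also be attached from the CLI. A hedged sketch (the plan names and your-app-name below are assumptions and may have changed; check heroku addons:plans newrelic for the current list):

# attach New Relic (the free plan was historically named "wayne")
heroku addons:create newrelic:wayne --app your-app-name
# attach Logentries on its free tier
heroku addons:create logentries:le_tryit --app your-app-name
# open the New Relic dashboard in a browser
heroku addons:open newrelic --app your-app-name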

Ruby on Rails site slow/down

I'm taking over a Ruby on Rails site, and I'm discovering that it has huge performance issues. Sometimes the site doesn't even load, and this is not a new problem for it.
It's on a first-generation Rackspace server with 2 GB of RAM, running Ubuntu Hardy with Apache2 and MySQL. The RoR site runs an older Ruby (1.8.7) with Rails 3.2.1.
According to top (Shift+M to sort by memory), Apache uses about 6 MB to 10 MB of resident memory (the RES column) per process.
It's using the prefork MPM with the settings below:
StartServers 2
MinSpareServers 2
MaxSpareServers 2
MaxClients 50
MaxRequestsPerChild 100
And Passenger is set to:
PassengerMaxPoolSize 10
passenger-memory-stats shows that Rails uses, on average, about 40 MB per process.
free -ml consistently shows 100 MB to 1500 MB of free memory.
passenger-status sometimes shows as many as 250 requests waiting on the global queue, but it generally varies from 0 to 100.
I have tried playing around with MaxClients and the pool size, but eventually the site succumbs to slowness or becomes plain inaccessible, and then loads fine again later, I'm assuming once traffic eases up.
Loading the actual Rails pages can sometimes take forever, while loading static files (images, txt files) works fine. Although sometimes it gets to the point where you can't even load static files.
Any pointers on trying to get this working? For the amount of traffic it gets (about 250k impressions per month) it seems like the server should be fine for this site (rough memory math below).
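As a back-of-envelope check using the averages above (actual per-process numbers will vary):

50 MaxClients   x ~10 MB per Apache process  = ~500 MB
10 pool workers x ~40 MB per Rails process   = ~400 MB
Worst case                                   = ~900 MB of 2048 MB

Even allowing room for MySQL and the OS, memory alone shouldn't be the bottleneck on paper.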
Edit:
I responded to these in the comments, but I'll put the answers here anyway.
The database is about 1 GB in size. There are quite a few spam issues (new accounts that are obviously spam, averaging about 1k per day, plus spam posts/comments, etc.). mysql-slow.log shows nothing so far.
Thanks for all the comments. I had hoped that it was simply me being an idiot with the Apache or Passenger settings. My next step is to start investigating the code and queries.
Without knowing much I can only give general advice. For improving site performance, remember the Performance Golden Rule:
80-90% of the end-user response time is spent on the front-end. Start there.
Below is a non-exhaustive list of areas of improvement for increasing performance in a Rails app:
Diagnosing a Slow Rails App:
YSlow
A useful diagnosis tool for identifying performance issues is YSlow. It's a browser extension that diagnoses and identifies common issues slowing down your app (particularly on the front end).
Back-end Tools
For your Rails back-end I recommend incorporating tools such as Bullet and NewRelic directly into your development process, so that while you're developing you can spot bad queries immediately, while they are still easy to fix (a sample Bullet setup is sketched below).
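A minimal sketch of enabling Bullet in development, following the gem's documented configuration (the alert/logger options are optional):

# Gemfile
gem 'bullet', group: :development

# config/environments/development.rb
config.after_initialize do
  Bullet.enable = true          # turn Bullet on
  Bullet.alert = true           # pop a JS alert in the browser on an N+1 query
  Bullet.bullet_logger = true   # also write to log/bullet.log
  Bullet.rails_logger = true    # and to the Rails log
end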
Check Server Console Logs
Checking your server logs is an effective method for diagnosing which components of your Rails app are taking the longest. E.g. below are sample logs from two unrelated production Rails apps running in my local development environment (with the huge database I/O portion excluded):
# app1: slowish
Rendered shared/_header.html.erb (125.9ms)
Rendered clients/details.html.erb within layouts/application (804.6ms)
Completed 200 OK in 3655ms (Views: 566.9ms | ActiveRecord: 1236.9ms)
# app2: debilitatingly slow
Rendered search/_livesearch_division_content.js.erb (5390.0ms)
Rendered search/livesearch.js.haml (34156.6ms)
Completed 200 OK in 34173ms (Views: 31229.5ms | ActiveRecord: 2933.4ms)
App1 and App2 both suffer from performance issues, but App2's are clearly debilitating. With these server logs, I know that for App1 I should look into clients/details.html.erb, and that for App2 I absolutely need to investigate search/livesearch.js.haml. (I discovered that App2 had large, uncached N+1 queries over lots of data -- I'll touch on this later.)
Improving Front-end Performance
Budget your page size strictly
To maintain fast load times you need to reduce the amount/size of your page assets (JS/CSS/images), so think about your page size like a budget. For example, Hootsuite recently declared that their home page now has a strict page-size budget of 1 MB. No exceptions. Now check out their page. Pretty fast, isn't it?
Easy wins for reducing your page size include stripping out unused JS or CSS files, including them only where needed, and changing static images into much smaller vectors (a sketch of per-page includes follows).
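A hedged sketch of including JS only where needed with the asset pipeline (the charts.js file and the :page_js block name are assumptions for illustration; page-specific files also need adding to config.assets.precompile):

// app/assets/javascripts/application.js
// Avoid require_tree, which bundles everything; require only what every page uses:
//= require jquery
//= require jquery_ujs

<%# in the one view that needs a heavy script: %>
<% content_for :page_js do %>
  <%= javascript_include_tag "charts" %>
<% end %>

<%# in the layout, just before </body>: %>
<%= yield :page_js %>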
Serve smaller image resolutions based on screen width
Image loading is a large cause of slow page load times. A large 5 MB image used in the background of your splash page can easily be brought down to 200-400 KB and still be of high enough quality to be barely distinguishable from the higher-resolution original. The difference in page load times will be dramatic.
You should do the same with uploaded images. E.g. if a user uploads a 5 MB image for his banner or avatar, then it's essential that you serve this uploaded image at lower file sizes/resolutions depending on the size at which it will be displayed. CarrierWave and Fog, combined with RMagick or MiniMagick, are a popular combination used with Amazon S3 to achieve better image uploading/resizing (a sample uploader is sketched below). With them, you can dynamically serve smaller image resolutions to fit the screen size of your user, and you can then use media queries to ensure that mobile devices are served smaller image resolutions and file sizes than desktops with Retina screens.
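A minimal CarrierWave uploader sketch along those lines (the version names and dimensions are assumptions; fog storage credentials are configured separately in an initializer):

# app/uploaders/avatar_uploader.rb
class AvatarUploader < CarrierWave::Uploader::Base
  include CarrierWave::MiniMagick
  storage :fog   # e.g. Amazon S3 via the fog gem

  # full-size display version, capped at 800px
  version :large do
    process resize_to_limit: [800, 800]
  end

  # small version for avatars/thumbnails
  version :thumb do
    process resize_to_fill: [100, 100]
  end
end

In a view, user.avatar.url(:thumb) then serves the small file instead of the original upload.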
Use a Content Delivery Network to speed up asset loading
If your site deals with lots of images or large files, then you should look into using a Content Delivery Network (CDN) such as CloudFront to speed up asset/image loading times. CDNs distribute your asset files across many servers located around the world and then serve each request from the server closest to the user's geographic region. In addition to faster speeds, a CDN also reduces the load on your own server.
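In Rails, pointing the asset helpers at a CDN is a one-line config change (the hostname below is a placeholder for your own distribution):

# config/environments/production.rb
# Serve compiled assets from the CDN instead of your own server;
# the CDN pulls from your app as its origin on the first request.
config.action_controller.asset_host = "https://d1234abcd.cloudfront.net"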
Fingerprint Static Assets
When static assets are fingerprinted, a user's browser caches a copy of each asset on the first visit, meaning the assets no longer need to be downloaded again for the next request.
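With the asset pipeline this is a production config flag, which you can pair with far-future cache headers (settings as of Rails 3.2/4.x; the max-age value is just a common choice):

# config/environments/production.rb
config.assets.digest = true   # appends a content hash to asset filenames,
                              # e.g. application-908e25f4bf64.css
# With fingerprinted names it's safe to cache aggressively (Rails 4):
config.static_cache_control = "public, max-age=31536000"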
Move JavaScript files to the bottom of the page
If JavaScript assets are placed at the top of the page, the page will remain blank while the user's browser loads them. Rails will automatically place JavaScript files at the bottom of your page if you use the asset pipeline or specify them with the javascript_include_tag helper, but if any JavaScript is included some other way, make sure those external files are included at the bottom of the page (see the layout sketch below).
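A sketch of what that looks like in a layout (stylesheets stay in the head so the page can render progressively; scripts go just before the closing body tag):

<%# app/views/layouts/application.html.erb %>
<html>
  <head>
    <%= stylesheet_link_tag "application" %>
  </head>
  <body>
    <%= yield %>
    <%# scripts last, so content renders before JS downloads %>
    <%= javascript_include_tag "application" %>
  </body>
</html>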
Improving Back-end Performance
Cache, Cache, Cache (with Memcache/Dalli)!
Among back-end performance optimizations, no single enhancement comes close to matching the benefits of caching. Caching is an essential component of pushing any Rails app to high scalability. A well-implemented caching regime greatly minimizes your server's exposure to inefficient queries and can reduce the need for painful refactorings of existing non-performant code.
For example, with the App2 example mentioned above, after we implemented a simple page cache our 34-second page load dropped to less than a second in production. We did not refactor a single line of that code.
Page content that is accessed frequently yet changes relatively infrequently is the easiest to cache and benefits most from caching. There are multiple ways to cache on the server side, including page caching and fragment caching. Russian-doll caching is now the favoured fragment-caching technique in Rails; here's a good place to start (and a fragment-caching sketch follows).
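A minimal fragment-caching sketch (this assumes a cache store is configured, e.g. Memcache via the dalli gem with config.cache_store = :dalli_store):

<%# app/views/posts/show.html.erb %>
<%# The fragment is keyed on the post, so it expires automatically
    when post.updated_at changes (touch: true on associations gives
    the Russian-doll nesting behaviour): %>
<% cache @post do %>
  <h1><%= @post.title %></h1>
  <%= render @post.comments %>
<% end %>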
Index (Nearly) Everything
If you are using SQL for your database layer, make sure you add indexes on foreign keys and join tables for faster lookups on large, frequently used associations. You must add them explicitly in migrations, since Rails does not create these indexes by default (a sample migration follows).
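For example, a migration indexing the foreign keys of a comments table (table and column names are illustrative):

# db/migrate/20140101000000_add_indexes_to_comments.rb
class AddIndexesToComments < ActiveRecord::Migration
  def change
    add_index :comments, :post_id   # speeds up post.comments lookups
    add_index :comments, :user_id   # speeds up user.comments lookups
  end
end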
Eliminate N+1 queries with Eager Loading
A major performance killer for Rails apps using relational (SQL) databases is the N+1 query. If you see hundreds of database reads/writes for a single request in your logs, following a repeating pattern, then it's likely you already have a serious N+1 problem. N+1 queries are easy to miss during development but can rapidly cripple your app as your database grows. (I once dealt with an app that had twelve N+1 queries. After accumulating only ~1000 rows of production data, some pages began taking over a minute to load.)
Bullet is a great gem for catching N+1 queries early as you develop your app. A simple method for resolving N+1 queries is to eager-load the associated model where necessary: e.g. Post.all becomes Post.includes(:comments).all if you are loading all the comments of each post on the page (see the sketch below).
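A before/after sketch (assuming Post has_many :comments):

# N+1: 1 query for the posts + 1 more query per post for its comments
Post.all.each do |post|
  post.comments.each { |comment| puts comment.body }
end

# Eager loaded: 2 queries total, regardless of the number of posts
Post.includes(:comments).each do |post|
  post.comments.each { |comment| puts comment.body }
end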
Upgrade to Rails 4 and/or Ruby 2.1.x or higher
Newer versions of Rails contain numerous performance improvements that can speed up your app (such as Turbolinks). If you are still running Rails 3.2.1, you should upgrade to at least Rails 3.2.18 for security reasons alone.
Ruby 2.1.x+ has much better garbage collection than older versions of Ruby, and so far people who have upgraded report notable performance increases.
These are a few performance improvements that I can recommend. Without more information I can't be of much help.
Index only the models/columns that are regularly accessed. That should give you a significant performance boost as well.

How to discover Heroku bottleneck on Rails app

I'm running a Rails 2.3.11 app on Heroku Bamboo stack and I'm getting awful performance issues, as lots of pages take more than 15-30 seconds to load (resulting in frequent timeout errors).
The same app running in my local development environment, with the same database load, runs reasonably well (around 1000 milliseconds).
I've tried add-ons such as NewRelic, but I cannot make sense of them; I find them too difficult to understand.
Basically I need to understand whether my bottleneck is slow DB queries, slow remote URLs (e.g. Google Maps API queries), some misconfiguration, or the infamous Heroku random routing.
What would you suggest I do?
UPDATE
As suggested, I had a look at the logs; for instance, when I request the homepage I get this result:
Completed in 8014ms (View: 437, DB: 698) | 200 OK
What does it mean that the request completed in 8014ms when the sum of the View and DB milliseconds should be only around one second?

ActiveRecord Postgres 8-10x slower on DotCloud (EC2) production vs Macbook air development

I have a Rails 3.2.12 app set up locally and on DotCloud. I'm seeing very slow ActiveRecord (Postgres) performance on DotCloud and can't work out why:
Page Load Macbook Air (rails app in development mode):
Completed 200 OK in 617ms (Views: 361.3ms | ActiveRecord: 39.1ms)
Page Load DotCloud (rails app in production mode, identical DB and page):
Completed 200 OK in 796ms (Views: 315.3ms | ActiveRecord: 329.4ms)
This is not an erratic timing but the standard performance delta on all page loads. My database is only 16 MB, so it's not large. Memory allocation on the Postgres service is sufficient (128 MB), with only 30 MB being utilized. I checked my local postgresql.conf and the settings are the default, untuned Postgres.app settings.
Is this poor performance just what to expect in the cloud? Is it network latency between the web server and the DB server?
Would very much appreciate thoughts on how to debug & fix this!
As a dotCloud employee, I have to tell you that if you want a thorough look at your application to see why it isn't performing to your expectations, you should file a support ticket. I should also mention that if you're in sandbox mode, you can expect lower performance than on one of the paying tiers (such as Live or Enterprise).
As a fellow developer, however, I can quickly point out two key differences between your local dev environment and your dotCloud environment.
Your dev environment runs your DBMS (PostgreSQL) and your Rails app on the same host, eliminating any noticeable latency from round-trips to the database. With dotCloud, these are likely to be on separate hosts and possibly even in separate datacenters (rough round-trip math follows these two points).
Your dev environment probably uses a solid-state drive, depending on the age of your MacBook Air, while you've just moved onto EBS (Amazon Elastic Block Store), a form of network-attached storage backed by spinning disks.
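A hedged back-of-envelope illustration of the first point (the query count here is an assumption, since it isn't given in the question): if the page issues ~60 queries and each round-trip to a remote database adds ~5 ms of network latency, that alone is ~300 ms, which is on the order of the 290 ms jump in ActiveRecord time (39.1 ms -> 329.4 ms). The usual fix is fewer queries (eager loading, caching) rather than a faster database.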
Between these two changes, I'm not totally shocked by the increase in response time, but my curiosity is piqued by your ActiveRecord time increasing by 290ms while your overall response time only increased by 180ms.
I don't want to speculate where your response time is being eaten up, so once again I recommend you file a support ticket at support.dotcloud.com as soon as possible so that we can take a closer look. If and when you do, mention my name or this StackOverflow thread and include the URL for your app.

What's the extra time for when loading web page in rails?

I've got a page which is generated in well under 100 milliseconds:
Completed 200 OK in 83ms (Views: 75.9ms)
But when I load this page with "time wget http://localhost:3000/search", I find that it takes 1.5 seconds:
real 0m1.562s
user 0m0.001s
sys 0m0.003s
There's no JS execution time in it, as the page isn't loaded in a browser. So what is the extra time for? Are there any tools that can break down where the time goes in Rails?
Thanks.
If Rails is running in development mode, application code is not cached (by default) and is reloaded on each request, which can add latency that might not be captured by Rails' own measurement (can't say I've looked). A quick way to test this is sketched below.
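A sketch of that test (a temporary change; revert it afterwards, since you lose code reloading in development):

# config/environments/development.rb
# Temporarily cache classes the way production does, restart the server,
# then re-run the wget timing; if the gap shrinks, reloading was the culprit.
config.cache_classes = true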
Another thing to look at is the Rack server in use. WEBrick works like a champ but is painfully slow; Thin and Unicorn are significantly faster at servicing requests. I would also consider Rack middleware or Rails Metal if you need an action optimized for speed.
At the TCP level, the localhost -> 127.0.0.1 DNS lookup takes some small amount of time, and wget has to establish a new socket to the app. Neither of these is measurable by the app itself, but curl can break them down client-side (see the sketch below).
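A sketch using curl's documented write-out variables to get that client-side breakdown:

curl -o /dev/null -s -w "dns: %{time_namelookup}s  connect: %{time_connect}s  ttfb: %{time_starttransfer}s  total: %{time_total}s\n" http://localhost:3000/search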
For what it's worth, that same experiment against a Rails 2.3.12 app running on Unicorn 4.0.1 hitting the login page:
Completed in 16ms (DB: 4) | 302 Found [http://localhost/]
real 0m0.407s
user 0m0.000s
sys 0m0.010s
Update 9/28/2011:
Digging in at the HTTP layer, curl supports tracing a URL request with timestamped output:
curl --trace-ascii trace.log --trace-time http://localhost:3000
To dig a bit further and see the actual system calls, try using strace with timing enabled:
strace -T curl http://localhost:3000
Example output from both of these is available as a gist.
