Our application is a very large (read: "enterprise") Single Page Application.
Even though our app is 99% client side with a Java RESTful API, we built the web application on top of Rails because the Asset Pipeline seemed like a great fit for us.
And until recently, it was.
Our application has several hundred CoffeeScript and Sass files in it (we've broken our code down into many smaller pieces for reusability).
We deploy to Heroku, which compiles assets for us automatically. It used to be pretty fast. Then it took a minute. Then 2. Then 5. Now we have apparently hit the upper limit on how long Heroku is willing to wait for assets to compile.
Pre-compiling the assets on a developer's machine is possible, but it's error-prone and doesn't solve the underlying problem of our compilation process taking forever.
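(For reference, the local workflow is roughly the sketch below; the initialize_on_precompile flag is the one Rails 3.2 tweak I know of that's supposed to help on Heroku.)

# config/application.rb -- skip booting the full app (and touching the DB)
# during precompilation; a commonly suggested tweak for Rails 3.x on Heroku
config.assets.initialize_on_precompile = false

# Then, on a developer's machine:
#   RAILS_ENV=production bundle exec rake assets:precompile
#   git add public/assets && git commit -m "Precompiled assets"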
I've noticed that even some of our smaller CoffeeScript files (~50 lines) take 250ms to compile, while others take 30 seconds or more on my i7 MBA. This quickly adds up.
We are on the latest Rails 3.2.x.
Is there a better way? Is there any tuning I can do for improving the speed of compilation?
Related
I have an application hosted on Heroku. Part of its job is to store and serve up files of varying sizes, zipped up in bundles. (We're planning to phase out the bundling process at a later date, but that will be a major revamp of the consuming software.)
The 5GB limit on file uploads to S3 (anything larger requires S3's multipart upload) is becoming increasingly untenable for our use case. In fact, it has become an outright pain, and it's unacceptable to the business model.
Rails 6.1 is supposed to fix this, but we can't wait for it to come out, especially since there isn't an ETA yet. I tried the alpha version off master and got hit with an error about not being able to load CoffeeScript (which is weird, since I don't use CoffeeScript).
I'm now trying to find other viable alternatives that will allow our application to store files of 5GB or larger. I'm experimenting with compressing the files, but that isn't a long-term solution either.
I'm taking over a Ruby on Rails site and I'm discovering that the site has huge performance issues. Sometimes the site doesn't even load. And this is not new for the site.
It's on a Rackspace server (First Generation) with 2GB of RAM.
It's running Ubuntu Hardy with Apache2 and MySQL. The Rails site is running an older version of Ruby (1.8.7) and Rails (3.2.1).
According to top (sorted by resident memory with Shift+M), Apache uses about 6MB to 10MB per process (RES column).
It's using the prefork MPM with the settings below:
StartServers 2
MinSpareServers 2
MaxSpareServers 2
MaxClients 50
MaxRequestsPerChild 100
And Passenger is set to have:
PassengerMaxPoolSize 10
passenger-memory-stats shows that Rails uses about 40MB on average.
free -ml consistently shows 100MB to 1500MB of free memory.
passenger-status sometimes shows as many as 250 requests waiting on the global queue, but it generally varies from 0 to 100.
I have tried playing around with MaxClients and the PassengerMaxPoolSize, but eventually the site succumbs to slowness or becomes completely inaccessible, and then, I'm assuming when traffic eases up, loads fine again later.
Loading the actual Rails pages can sometimes take forever, but loading static files (images, txt files) works fine. Although sometimes it gets to the point where you can't even load static files.
Any pointers on trying to get this working? For the amount of traffic it gets (about 250k impressions per month) it seems like the server should be fine for this site.
Edit:
I responded to the comments, but I'll put the information here anyway.
The database is about 1GB in size. There are quite a few spam issues (new accounts that are obviously spam, averaging about 1k per day, plus spam posts/comments, etc.). mysql-slow.log shows nothing so far.
Thanks for all the comments. I had hoped it was simply me being an idiot with the Apache or Passenger settings. My next step is to start investigating the code and queries.
Without knowing much, I can only give general advice. For improving site performance, remember the Performance Golden Rule:
80-90% of the end-user response time is spent on the front-end. Start there.
Below is a non-exhaustive list of areas to look at for increasing performance in a Rails app:
Diagnosing a Slow Rails App:
YSlow
A useful diagnostic tool for identifying performance issues is YSlow. It's a browser extension that diagnoses and identifies common issues slowing down your app (particularly on the front end).
Back-end Tools
For your Rails back-end I recommend incorporating tools such as Bullet & NewRelic directly into your development process, so that you can spot bad queries immediately, while they are still easy to fix.
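For example, Bullet takes only a few lines to wire into development mode (a minimal sketch; see the Bullet README for the full set of notifier options):

# Gemfile
group :development do
  gem 'bullet'
end

# config/environments/development.rb
config.after_initialize do
  Bullet.enable        = true
  Bullet.alert         = true   # pop a JavaScript alert in the browser
  Bullet.bullet_logger = true   # also write findings to log/bullet.log
end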
Check Server Console Logs
Checking your server logs is an effective way to see which components of your Rails app are taking the longest. E.g., below are sample logs from two unrelated production Rails apps running in my local development environment (with the huge database I/O portion excluded):
# app1: slowish
Rendered shared/_header.html.erb (125.9ms)
Rendered clients/details.html.erb within layouts/application (804.6ms)
Completed 200 OK in 3655ms (Views: 566.9ms | ActiveRecord: 1236.9ms)
# app2: debilitatingly slow
Rendered search/_livesearch_division_content.js.erb (5390.0ms)
Rendered search/livesearch.js.haml (34156.6ms)
Completed 200 OK in 34173ms (Views: 31229.5ms | ActiveRecord: 2933.4ms)
App1 and App2 both suffer from performance issues, but App2's are clearly debilitating. With these server logs, I know that for App1 I should look into clients/details.html.erb, and that for App2 I absolutely need to investigate search/livesearch.js.haml. (It turned out App2 had uncached, large N+1 queries over lots of data -- I'll touch on this later.)
Improving Front-end Performance
Budget your page size strictly
To maintain fast load times you need to reduce the number and size of your page assets (JS/CSS/images). So think of your page size like a budget. For example, Hootsuite recently declared that their home page now has a strict page-size budget of 1MB. No exceptions. Now check out their page. Pretty fast, isn't it?
Easy wins for reducing your page size include stripping out unused JS or CSS files, including them only where needed, and changing static images into much smaller vectors.
Serve smaller image resolutions based on screen width
Image loading is a large cause of slow page load times. A 5MB image used in the background of your splash page can easily be brought down to 200KB-400KB in size and still be high enough quality to be nearly indistinguishable from the higher-resolution original. The difference in page load times will be dramatic.
You should do the same with uploaded images as well. E.g., if a user uploads a 5MB image for his banner or avatar, it's essential that you serve this uploaded image at lower file sizes/resolutions depending on the size at which it will be displayed. CarrierWave and Fog, combined with RMagick or MiniMagick, are a popular combination with Amazon S3 for image uploading/resizing. With them, you can dynamically serve smaller image resolutions to fit the screen size of your user. You can then use media queries to ensure that mobile devices are served smaller image resolutions and file sizes than desktops with Retina screens.
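A sketch of what that looks like with CarrierWave and MiniMagick (the uploader class, version names, and dimensions here are illustrative):

# app/uploaders/image_uploader.rb
class ImageUploader < CarrierWave::Uploader::Base
  include CarrierWave::MiniMagick
  storage :fog  # S3 via the fog gem

  # Smaller variants, served depending on where/how the image is displayed
  version :thumb do
    process :resize_to_fill => [64, 64]
  end

  version :banner do
    process :resize_to_limit => [1200, 400]
  end
end

# In a view, pick the variant that fits the context:
#   image_tag user.avatar.url(:thumb)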
Use a Content Delivery Network to speed up asset loading
If your site deals with lots of images or large files, you should look into using a Content Delivery Network (CDN) such as CloudFront to speed up asset/image loading times. CDNs distribute your asset files across many servers located around the world, then serve each user from the server closest to their geographical region. In addition to faster speeds, a CDN reduces the load on your own server.
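In Rails, pointing the asset helpers at a CDN is a one-line change (the hostname below is a placeholder for your own CloudFront distribution):

# config/environments/production.rb
config.action_controller.asset_host = "https://d1234abcd.cloudfront.net"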
Fingerprint Static Assets
When static assets are fingerprinted, a user's browser caches a copy of each asset on the first visit, meaning the assets no longer need to be downloaded again for subsequent requests.
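In Rails 3.x, fingerprinting is controlled by the digest setting, and it pays off most when paired with far-future cache headers (a sketch with typical values):

# config/environments/production.rb
config.assets.digest = true  # produces e.g. application-1fb72358e2d1.css
# Far-future Cache-Control header for assets served by Rails itself:
config.static_cache_control = "public, max-age=31536000"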
Move Javascript files to the bottom of the page
Be aware that if JavaScript assets are placed at the top of the page, the page will remain blank while a user's browser loads them. If you use the asset pipeline, make sure your layout places the javascript_include_tag at the bottom of the page to prevent this problem, and include any external JavaScript files at the bottom of the page as well.
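Concretely, the include tag belongs just before the closing body tag (a typical layout sketch):

<!-- app/views/layouts/application.html.erb -->
<html>
  <head>
    <%= stylesheet_link_tag "application" %>
  </head>
  <body>
    <%= yield %>
    <%= javascript_include_tag "application" %> <!-- last, so rendering isn't blocked -->
  </body>
</html>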
Improving Back-end Performance
Cache, Cache, Cache (with Memcache/Dalli)!
Among back-end performance optimizations, no single enhancement comes even close to matching the benefits of caching. Caching is an essential component of pushing any Rails app to high scalability. A well-implemented caching regime greatly minimizes your server's exposure to inefficient queries, and can reduce the need for painful refactorings of existing non-performant code.
For example, with my App2 example mentioned above, after we implemented a simple page cache, our 34 second page load time dropped down to less than a second in production. We did not refactor a single line of this code.
Page content that is accessed frequently yet changes relatively infrequently is the easiest to cache and benefits most from caching. There are multiple ways to cache on the server side, including page caching and fragment caching. Russian doll caching is now the favoured fragment-caching technique in Rails. Here's a good place to start.
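As a starting point, a nested ("Russian doll") fragment cache in a view looks like this (a sketch; Post/Comment are hypothetical models, and nesting like this assumes cache digests, i.e. Rails 4 or the cache_digests gem on 3.2):

<% cache post do %>
  <h2><%= post.title %></h2>
  <% post.comments.each do |comment| %>
    <% cache comment do %>
      <p><%= comment.body %></p>
    <% end %>
  <% end %>
<% end %>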
Index (Nearly) Everything
If you are using SQL for your database layer, make sure you specify indexes on join tables for faster lookups on large, frequently used associations. You must add them explicitly in migrations, since indexing is not included by default in Rails.
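For example, a posts-to-tags join table would get its indexes in the migration itself (table and column names are illustrative):

class AddIndexesToPostsTags < ActiveRecord::Migration
  def change
    add_index :posts_tags, :post_id
    add_index :posts_tags, :tag_id
    # Composite unique index for lookups on both keys at once
    add_index :posts_tags, [:post_id, :tag_id], :unique => true
  end
end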
Eliminate N+1 queries with Eager Loading
A major performance killer for Rails apps using relational (SQL) databases is N+1 queries. If your logs show hundreds of database reads/writes for a single request in a repeating pattern, it's likely you already have a serious N+1 problem. N+1 queries are easy to miss during development but can rapidly cripple your app as your database grows. (I once dealt with an app that had twelve N+1 queries. After accumulating only ~1000 rows of production data, some pages began taking over a minute to load.)
Bullet is a great gem for catching N+1 queries early as you develop your app. A simple way to resolve N+1 queries in your Rails app is to eager load the associated model where necessary. E.g., Post.all becomes Post.includes(:comments).all if you are loading all the comments of each post on the page.
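In full, the before/after looks like this (hypothetical Post/Comment models):

# Before: 1 query for the posts, plus N more -- one per post -- for comments
@posts = Post.all
@posts.each { |post| post.comments.to_a }

# After: 2 queries total, no matter how many posts there are
@posts = Post.includes(:comments)
@posts.each { |post| post.comments.to_a }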
Upgrade to Rails 4 and/or Ruby 2.1.x or higher
Newer versions of Rails contain numerous performance improvements that can speed up your app (such as Turbolinks). If you are still running Rails 3.2.1, you should upgrade to at least Rails 3.2.18 for security reasons alone.
Ruby 2.1.x+ contains much better garbage collection than older versions of Ruby, and so far people who have upgraded report notable performance increases.
These are a few performance improvements that I can recommend. Without more information I can't be of much help.
Index only the models that are accessed regularly. That should give you a significant performance boost as well.
I am trying to improve the performance of my pre-beta Rails 3.2 app hosted on Heroku.
Aggressive caching has dramatically improved things, but I still notice a large contribution from "Time spent in Ruby" when looking at my app server response time on New Relic (light blue on the graph).
What parts of a Rails app typically contribute to this 'Ruby time'?
I initially thought this was due to complex conditionals in one of my main controllers, but have simplified this. My views are now very aggressively cached using Russian Doll fragment caching and memcache (wow!).
Could serving of static assets be a cause? (Moving to S3 / CloudFront is on the todo list...)
Thanks!
(I already have delayed_job setup and have moved all that I can into the background. I'm also using Unicorn as my web server.)
UPDATED Performance Tuning
After aggressive caching, I started looking for other ways to improve app performance.
First I added garbage collection (GC) monitoring as suggested, and found that GC was not contributing significantly to the Ruby time.
Next I decided to tackle asset serving by adding a CDN (CloudFront via the CDNsumo add-on). Interestingly, this did decrease my Ruby time in NR monitoring. (The CDN was provisioned and then warmed by the last request test on the far right of the graph below.) Most of my pages have a couple hundred KB of CSS and JavaScript, so not tiny but not massive.
Finally, I upgraded from the 'Basic' starter database to the smallest production db, 'Crane'. This had a dramatic effect on performance. After a little caching by PG, the app flies (final 3 request spikes on the graph below).
Take home messages for others trying to tune their Heroku apps:
Simple performance tuning in multiple areas (i.e. caching, CDN, database, Ruby code) has a synergistic effect across the stack.
Conversely, any single performance drain becomes a bottleneck that you cannot overcome even if you tune the other areas (e.g. Heroku's slow starter Basic or Dev databases versus an 'expensive' production database -- the slow Basic db was killing my app's performance).
NewRelic is essential in working out where the most gains can be made.
"Ruby" time is really the "Other" bucket for NewRelic tracing. The other categories are explicit measures (ie: how much time is spent in calls out to memcached). Ruby time is all of the time not spent in one of those buckets.
So what other things happen in "Ruby" time? The number one candidate is Garbage Collection (GC). If you're running Ruby 1.9+, you can enable NewRelic profiling of GC by creating an initializer like config/initializers/newrelic.rb with the following:
# config/initializers/newrelic.rb
GC::Profiler.enable
This will track GC time as a separate NewRelic category for you.
If you're in good shape on GC, the next step is to use the Web Transactions view to see how these average times are distributed. Perhaps one or two actions in your application are horrible performers and responsible for these averages.
Good luck and feel free to reach out directly if you're still having trouble. Performance tuning is a black art.
We are running two Rails applications on a server with 4GB of RAM. Both use Rails 3.2.1, and whether run in development or production mode, they eat RAM at incredible speed, consuming up to 1.07GB each per day. Keeping the server running for just 4 days triggered all our memory alarms in monitoring, and we had just 98MB of RAM free.
We tried ActiveRecord optimizations related to bloating, but still with no effect. Please help us figure out how to trace which controller is at fault.
We're using a MySQL database and the WEBrick server.
Thanks!
This is incredibly hard to answer without looking into the project itself. Though I am quite sure you won't be using WEBrick in your production build (right?), so check whether it behaves the same under Passenger or whatever your choice is.
Also, without knowing the details of the project, I would suggest looking at features like PDF generation, CSV parsing, etc. I've seen a case where generating PDF files ate resources in a similar fashion, leaving around 5MB of un-garbage-collected memory on each run.
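If you want to narrow down which controller is at fault, one low-tech approach is an around filter that logs the process's memory growth per action (a sketch; it shells out to ps, so it's Unix-only, and the log format is arbitrary):

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  around_filter :log_memory_growth

  private

  def log_memory_growth
    before = current_rss_kb
    yield
  ensure
    Rails.logger.info("[MEM] #{params[:controller]}##{params[:action]}: " \
                      "+#{current_rss_kb - before} KB (total #{current_rss_kb} KB)")
  end

  # Resident set size of this process in kilobytes (Unix-only)
  def current_rss_kb
    `ps -o rss= -p #{Process.pid}`.to_i
  end
end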
Good luck.
My Rails 3.1.3 app takes quite a while to start up, and even running rails console seems to take longer than it reasonably should. For example, with my app it's 50 seconds from rails c to the command prompt. In a fresh test app (e.g. from rails new) it's about 5 seconds.
Needless to say, this is really annoying, particularly when trying to run tests, etc.
I've seen the links at https://stackoverflow.com/a/5652640/905282 but they're pretty involved; I was hoping for something at a higher level, like "oh yeah, here's how long each gem takes during startup".
Suggestions, or do I just need to dive into the details?
Ruby 1.9.3 fixes a performance problem that exists in 1.9.2 when a large number of files have been loaded with require.
That post describes how the performance of requiring each new file is O(n), getting progressively slower the more files are already loaded. Since Rails loads a lot of files, this is a serious drag on start-up time.
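If you want a rough per-gem (really, per-file) breakdown before upgrading, one hack is to wrap require itself with a timer early in boot (a sketch; the timings include nested requires, so read them as upper bounds, and the 0.25s threshold is arbitrary):

# config/boot.rb -- temporary instrumentation, remove after measuring
require 'benchmark'

module Kernel
  alias_method :require_without_timing, :require

  def require(path)
    result = nil
    secs = Benchmark.realtime { result = require_without_timing(path) }
    puts format('%8.3fs  %s', secs, path) if secs > 0.25  # only print the slow ones
    result
  end
end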