Rails: slow partial rendering when cache_classes is false - ruby-on-rails

I'm upgrading a Rails 3.2 application to version 5. The first step is to upgrade to version 4 and make sure all tests still pass.
One thing I noticed is that the application is now very slow in development mode. Some requests take 1500ms when they used to take less than 50ms in 3.2.
After some debugging, I found that if I enable the cache_classes setting in version 4, I get performance similar to what I had in version 3.2 with cache_classes disabled.
Rails 4.2.9 ( config.cache_classes = false )
(previous render messages intentionally removed)
Rendered application/_row.html.erb (91.2ms)
Rendered application/_row.html.erb (104.1ms)
Rendered application/_row.html.erb (103.9ms)
Completed 200 OK in 1617ms (Views: 1599.0ms | ActiveRecord: 8.3ms)
Rails 4.2.9 ( config.cache_classes = true )
(previous render messages intentionally removed)
Rendered application/_row.html.erb (2.3ms)
Rendered application/_row.html.erb (2.5ms)
Rendered application/_row.html.erb (2.0ms)
Completed 200 OK in 59ms (Views: 41.8ms | ActiveRecord: 7.7ms)
From what I was able to check, when config.cache_classes = false, Rails no longer caches partials and classes at all, not even within a single request as it did in 3.2.
I also checked config.reload_classes_only_on_change, which seemed pertinent, and it's set to true.
Is there something else I should check or change to get performance similar to what I had in 3.2 with cache_classes disabled? Or is this a regression in Rails that can't be fixed?
Unfortunately, developing with cache_classes enabled isn't viable.
Note 1: I created an empty Rails 4.2.9 app and was able to reproduce this issue by creating a lot of partials and toggling cache_classes.
Note 2: The same issue seems to happen in Rails 5. I'm focusing on Rails 4 because that's the version I'm working with now.
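For reference, a sketch of how the two settings being compared would appear in config/environments/development.rb (reload_classes_only_on_change is only consulted when cache_classes is false):

```ruby
# config/environments/development.rb (illustrative sketch)
Rails.application.configure do
  # false: reload application code between requests (the slow case above).
  # true: cache classes as in production (the fast case above).
  config.cache_classes = false

  # Only takes effect when cache_classes is false: reload only after a
  # file on disk actually changes, rather than unconditionally.
  config.reload_classes_only_on_change = true
end
```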

Related

Log4r - output is unordered

I'm a newbie in Rails, and I have a problem with my Rails project. I use Log4r as my logger, configured as described in "How to configure Log4r with Rails 3.0.x?". But when I send several requests at the same time, the Log4r output comes out unordered. :(
Example of the unordered output in the log file:
Started GET "/task_results.json" for 172.29.73.182 at 2013-06-17 17:36:38 +0700
Processing by TaskResultsController#index as JSON
Started GET "/favicon.ico" for 172.29.73.182 at 2013-06-17 17:36:38 +0700
Processing by ApplicationController#missing_page as
Parameters: {"path"=>"favicon"}
Completed 404 Not Found in 1ms (Views: 0.2ms | ActiveRecord: 0.0ms)
(994.0ms) SELECT task_result_id,task_id,worker_id,product_id,passed,date_time,details FROM task_results ORDER BY task_result_id DESC;
Completed 200 OK in 8656ms (Views: 0.2ms | ActiveRecord: 994.0ms)
How can I configure Log4r to synchronize its output? Or how else can I fix this problem?
MRI Ruby effectively runs one thread at a time, so I assume you are running your Rails app on a multi-process server like Unicorn?
The only solution I can think of is timestamps (with microseconds) and not using the multi-line Rails log output, which isn't suitable for anything but single requests in dev environments. But even this does not guarantee "ordered" logs, because lines can be written when the request is served or at a random time afterwards when the buffer is flushed again.
This is an in-depth tutorial on how to use Log4r that also shows how to get custom timestamps into your logs and keep all the important information on a single line. That prevents log lines from other requests being written inside your main request's output, gives you microsecond timestamp resolution, and lets you re-order lines manually afterwards. However, the order shouldn't matter if you are running concurrent app servers, because order isn't guaranteed there anyway.
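To illustrate the single-line, microsecond-timestamp idea without Log4r specifics (this sketch uses Ruby's stdlib Logger as a stand-in; the same pattern applies to a Log4r formatter):

```ruby
require 'logger'
require 'stringio'

# One event per line, microsecond timestamp first, so interleaved
# lines from concurrent requests can be re-sorted after the fact.
buffer = StringIO.new
logger = Logger.new(buffer)
logger.formatter = proc do |severity, time, _progname, msg|
  # %6N = fractional seconds to microsecond precision (Ruby 1.9+)
  "#{time.strftime('%Y-%m-%d %H:%M:%S.%6N')} #{severity} #{msg}\n"
end

logger.info('Started GET "/task_results.json"')
print buffer.string
```

Sorting the resulting file lexicographically then restores chronological order to microsecond resolution.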

Rails App on Heroku Cannot Write to PostgreSQL Database, Only Read

I have a Ruby on Rails app that works fine locally with an SQLite3 database and can save and retrieve records without issue.
When deployed to Heroku at http://moviedata.herokuapp.com/ using a PostgreSQL database, records are not saved, even though the logs appear to say they are. Records are read from the db fine and data is displayed as expected.
The tailed logs for adding a record are:
2012-08-21T19:51:31+00:00 app[web.1]:
2012-08-21T19:51:31+00:00 app[web.1]:
2012-08-21T19:51:31+00:00 app[web.1]: Started POST "/" for 50.53.6.156 at 2012-08-21 19:51:31 +0000
2012-08-21T19:51:31+00:00 app[web.1]: Parameters: {"utf8"=>"✓", "authenticity_token"=>"+BYQLzhrfDkUVW8UaHikHpmtGHxpeQ/yF4VByHh9m1I=", "movie"=>{"title"=>"The Running Man", "description"=>"A documentary about a public execution game show.", "year"=>"1987", "genre"=>"science fiction"}, "commit"=>"Create Movie"}
2012-08-21T19:51:31+00:00 app[web.1]: Processing by MoviesController#index as HTML
2012-08-21T19:51:31+00:00 app[web.1]: Rendered movies/index.html.erb within layouts/application (5.1ms)
2012-08-21T19:51:31+00:00 app[web.1]: Completed 200 OK in 9ms (Views: 6.7ms | ActiveRecord: 0.9ms)
2012-08-21T19:51:31+00:00 heroku[router]: POST moviedata.herokuapp.com/ dyno=web.1 queue=0 wait=0ms service=17ms status=200 bytes=3479
The 'heroku pg' command shows the same number of rows (11) on the postgres database after a record is added.
This is a simple app I built to learn Rails and the Heroku platform. To reproduce this, just visit http://moviedata.herokuapp.com/ and click "New Movie", enter some junk data in the form, and hit "create movie". The record should be saved and show up in the list on the front page, but it doesn't.
Is there perhaps something I have to turn on, configure, or activate in order to be able to write to the postgres database? Seems very strange to me that it could be read from but not written to. Any better way to troubleshoot than the logs?
Locally I'm using Ruby 1.9.3, Rails 3.2.8, PostgreSQL 9.1.5, SQLite 3.7.9, and Heroku Toolbelt 2.30.3.
Edit/Update: I switched the local version to use psql. It has the same problem: records are not saved. With the user set to log_statement='all', the log at /var/log/postgresql/postgresql-9.1.main.log shows lots of SELECTs, but when the record add is attempted, the log shows the database never being hit.
Foreman shows the data being posted, like so:
22:38:03 web.1 | Started POST "/" for 127.0.0.1 at 2012-08-21 22:38:02 -0700
22:38:03 web.1 | Processing by MoviesController#index as HTML
22:38:03 web.1 | Parameters: {"utf8"=>"✓", "authenticity_token"=>"0AyxRbwl/Kgi05uI1KX8uxVUJjx9ylAA1ltdWgmunm4=", "movie"=>{"title"=>"Army of Darkness", "description"=>"A man fights the living dead using a boomstick.", "year"=>"1997", "genre"=>"horror"}, "commit"=>"Create Movie"}
22:38:03 web.1 | Movie Load (0.8ms) SELECT "movies".* FROM "movies" ORDER BY title
22:38:03 web.1 | Rendered movies/index.html.erb within layouts/application (14.9ms)
A failed commit does sound like a good explanation. I'm not yet sure how to check whether the driver is set to commit, or how to see whether and when a commit might have failed.
This is a very simple application with no load balancing or complex configuration, and most of the code was generated by the 'generate scaffold' command, but it's entirely possible that some constraint is being violated somewhere before the db is ever hit. Perhaps there's a way to crank the Foreman (or Rails) log level up to 11? I also tried using thin instead and scoured the files in the log/ folder, and didn't find anything other than what's logged above.
This sounds a lot like a transaction issue, where you aren't COMMITting your transactions after you do work, so the changes are lost. If your SQLite driver defaults to COMMITting transactions that are closed without an explicit COMMIT or ROLLBACK, and your Pg driver defaults to ROLLBACK, you'd get exactly the behaviour described. The same happens if SQLite autocommits each statement by default while the Pg driver defaults to opening a transaction.
This is one of the many good reasons to use the same local database for testing as you're going to deploy to when you want to go live.
If you were on a normal Pg instance I'd tell you to enable log_statement = 'all' in postgresql.conf, reload Pg, and watch the logs. You can't do that on Heroku, but you do have access to the Pg logs with heroku logs --ps postgres. Try running ALTER USER my_heroku_user SET log_statement = 'all';, re-testing, and examining the logs.
Alternately, install Pg locally.
Other less likely possibilities that come to mind:
You're using long-running SERIALIZABLE transactions for reads, so their snapshot never gets updated. Pretty unlikely.
Permissions on database objects are causing INSERTs, UPDATEs, etc to fail, and your app is ignoring the resulting errors. Again, unlikely.
You have DO INSTEAD rules that don't do what you expect, or BEFORE triggers that return NULL, thus silently turning operations into no-ops. Seems unlikely if you're testing with SQLite.
You're writing to a different DB than you're reading from. Not impossible in setups that're attempting to read from a cluster of hot standbys, etc.
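On the "app is ignoring the resulting errors" point: with scaffold-generated code this is easy to hit, because save signals failure with a return value rather than an exception. A minimal plain-Ruby sketch of the difference (Movie here is a stand-in, not the asker's actual model or ActiveRecord itself):

```ruby
# Stand-in for an ActiveRecord model, to show the save vs save! contract.
class Movie
  class RecordInvalid < StandardError; end

  def initialize(title)
    @title = title
  end

  def valid?
    !(@title.nil? || @title.strip.empty?)
  end

  # Like ActiveRecord#save: returns false on failure, easy to ignore.
  def save
    valid?
  end

  # Like ActiveRecord#save!: raises instead, so failures surface.
  def save!
    raise RecordInvalid, 'Title is required' unless valid?
    true
  end
end

movie = Movie.new('')
puts movie.save.inspect      # false, and nothing blows up

begin
  movie.save!                # surfaces the failure immediately
rescue Movie::RecordInvalid => e
  puts "Save failed: #{e.message}"
end
```

Temporarily switching the controller to save! (or checking save's return value) would distinguish a silently failing write from a write that never reaches the database.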

Rails app takes a long time to generate error page

My Rails app generates error pages very slowly (Rails 3.1/3.2, Ruby 1.9.2/1.9.3). E.g. I added my_bad_variable to some .haml template and got:
Rendered fees/index.html.haml within layouts/application (97752.1ms)
Completed 500 Internal Server Error in 99579ms
ActionView::Template::Error (undefined local variable or method `my_bad_variable' for #<#:0x00000003bbf0c8>):
After deleting this fake variable:
Completed 200 OK in 327ms (Views: 274.7ms | ActiveRecord: 9.8ms)
Any suggestions?
I had this issue when I upgraded to Rails 3.2. I added this initializer to fix it:
module ActionDispatch
  module Routing
    class RouteSet
      alias :inspect :to_s
    end
  end
end
I think it was related to REE (Ruby Enterprise Edition). Are you using REE?

How to use CouchRest with Sunspot?

I have a problem with the integration between CouchRest and Sunspot. When I search the book detail, the result from Sunspot is empty. I've been googling this for a long time with no luck.
Started GET "/books/search?utf8=%E2%9C%93&query=Book of Life&commit=Search%21" for 127.0.0.1 at 2011-09-08 11:27:41 +0700
Processing by BooksController#search as HTML
Parameters: {"utf8"=>"?", "query"=>"Book of Life", "commit"=>"Search!"}
Rendered books/index.html.erb within layouts/application (10.7ms)
Completed 200 OK in 145ms (Views: 20.6ms | ActiveRecord: 0.0ms)
[] <-- I got empty result
My System
Ruby 1.9.2p290
Rails 3.0.10
CouchDB 1.1.0
File structure ( https://gist.github.com/1164637/ )
Model (/app/models/book_detail.rb)
Controller (/app/controllers/books_controller.rb)
Sunspot Adapter for CouchRest (/config/initializers/couchdb.rb)
Sunspot Adapter Module (/config/initializers/sunspot_couch.rb)
NOTE: Sorry about the code links. I kept getting "Please indent all code by 4 spaces using the code toolbar button". I tried removing all tabs and following the SO code formatting guidelines, but it still didn't work.
Forgive me if I'm missing something, but I can't see how Sunspot is mapping "keywords" to the searchable fields on your CouchRest objects.
To debug, first I'd visit CouchDB in the browser admin UI to make sure that end is working. Then I'd double-check whether Sunspot's index contains anything at all. If Sunspot contains your records, the bug is on the search side; if it's empty, something may be wrong with the object-lifecycle-management code it injects into your model class.
It's been ages since I did any serious Ruby, so I wish I could be more helpful. One option is to take advantage of the direct CouchDB full-text offerings, such as CouchDB Lucene: https://github.com/rnewson/couchdb-lucene

How do I figure out why my rails 3 app, using mod_rails, is so slow?

I've developed a small Rails app using Rails 3.0.0 and Ruby 1.9.2. During testing on my personal computer, its performance is fine. I put it on my VPS for production, using Apache and mod_rails, and sometimes the performance is horrible.
Here's an example from the production.log:
Started GET "/tracker" for XX.XX.XX.XX at 2010-11-21 21:49:56 -0500
Processing by FleetsController#index as HTML
Rendered layouts/_stylesheets.html.haml (0.8ms)
Rendered layouts/_header.html.haml (1.0ms)
Rendered layouts/_footer.html.haml (0.0ms)
Rendered pages/about.html.haml within layouts/application (4.5ms)
Completed 200 OK in 15ms (Views: 14.3ms | ActiveRecord: 0.0ms)
Started GET "/tracker/" for XX.XX.XX.XX at 2010-11-21 21:50:02 -0500
Processing by FleetsController#index as HTML
Rendered layouts/_stylesheets.html.haml (0.7ms)
Rendered layouts/_header.html.haml (1.1ms)
Rendered layouts/_footer.html.haml (0.0ms)
Rendered fleets/index.html.haml within layouts/application (7.8ms)
Completed 200 OK in 1901ms (Views: 7.8ms | ActiveRecord: 1.5ms)
Started GET "/tracker/fleets/XXXXXXXXX" for XX.XX.XX.XX at 2010-11-21 21:50:06 -0500
Processing by FleetsController#show as HTML
Parameters: {"id"=>"XXXXXXXXX"}
Rendered fleets/_details_inner.html.haml (1.2ms)
Rendered fleets/_details.html.haml (2.1ms)
Rendered fleets/_summary.html.haml (3.5ms)
Rendered fleets/_scouts_inner.html.haml (1.3ms)
Rendered fleets/_scouts.html.haml (3.5ms)
Rendered reports/_report.html.haml (0.5ms)
Rendered fleets/_reports.html.haml (3.0ms)
Rendered fleets/_recon_form.html.haml (39.9ms)
Rendered fleets/_recon.html.haml (40.8ms)
Rendered users/_user.html.haml (1.2ms)
Rendered fleets/_pilots.html.haml (1.9ms)
Rendered layouts/_stylesheets.html.haml (0.5ms)
Rendered layouts/_header.html.haml (0.9ms)
Rendered layouts/_footer.html.haml (0.0ms)
Rendered fleets/show.html.haml within layouts/application (60.2ms)
Completed 200 OK in 495ms (Views: 59.1ms | ActiveRecord: 2.9ms)
The first hit didn't have any database access. The second did access the database, but the views took only 7.8ms to generate and the database only 1.5ms, yet the entire request took almost 2 seconds! This is a pretty common example, and I've got some log entries with 14+ seconds for a page response. And no, this is not the initial Rails load after a reboot.
What could possibly be taking up that time?
1) Have I misinterpreted the ActiveRecord time reports, and that figure is really just the code time, while the real database time is where the time is going?
2) I'm using SQLite. I know I'll eventually have to switch to MySQL, since I'll have concurrency issues because (almost) every page hit causes a database write. But right now I have barely any traffic: at most perhaps 15 people on the site at the same time. In the log example above, there was only one hit at a time, with 4-6 seconds between hits. I'd think SQLite could handle that...
3) I'm on a shared VPS, so it's possible some other user on the VPS was doing something at the same time that slowed the server down. Most of the time my VPS has very low CPU load, but it's possible I got unlucky and something was going on at that exact moment. I've seen this happen often enough, though, that I don't buy it as an answer.
4) The VPS only has 512+512MB of memory. I'm showing 150MB free, but is it possible I'm hitting memory limits and this is page swapping or something?
5) I've also seen a few BusyExceptions in the log. I upped the database.yml timeout to 15 seconds (from 5) to see if that helps; I haven't done a real test since.
I know I probably haven't provided enough information for you to actually tell me what's going on, so the real question is, how do I even start trying to track this down?
So, two things:
Use New Relic to help diagnose code that's slow
Based on the logging, I would bet that you are doing some array manipulation, or returning a large array of items, in FleetsController#index; it looks like your application code is doing real work there.
http://www.newrelic.com/
If that looks wrong, post the code in FleetsController#index. But New Relic can help you figure out exactly where you are spending your cycles in slow web requests.
SQLite doesn't handle concurrency at all. I think connections are being blocked on the database: the actual queries are fine, but I suspect the SQLite db file is locked while another query is running.
You really need to move to an actual server database like MySQL or PostgreSQL.
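On the BusyException mentioned in the question: with the sqlite3 adapter, the timeout setting is how long a connection waits on a locked database file before raising, and it is given in milliseconds in database.yml, so "15 seconds" means 15000. A sketch of the relevant fragment (database name and pool size are illustrative):

```yaml
# config/database.yml -- timeout is in milliseconds for the sqlite3 adapter
production:
  adapter: sqlite3
  database: db/production.sqlite3
  pool: 5
  timeout: 15000   # wait up to 15s on a locked db file before raising
```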
