I am working on an app where certain pages take longer to load because the views are lists of X to 1000 table rows. Each row has Y columns that take time to construct.
Right now, the first time the user hits the page with the table rows, it takes say N seconds. On refreshes, however, the page loads very quickly because of the cache.
I would like to preload these partials and store them in my cache from a Rails model when the user logs in, so that when the user visits the page with the table rows, it is already cached.
I have attempted to do this by triggering a Sidekiq job when the user logs in and caching the partial via the call below. However, even though I used the same cache key, it doesn't seem to pull from the same cache.
view = ActionView::Base.new(ActionController::Base.view_paths, {})
view.render(...)
Any suggestions? I do know it is because, when I render this way, the fragment-cache logic does not generate the template digest that gets appended to the cache key. I ended up solving the issue by using Rails.cache.fetch calls within a helper.
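A minimal sketch of that helper approach (the helper name, cache key, and partial path are assumptions): unlike the fragment cache, Rails.cache.fetch uses the key verbatim with no template digest, so a background job can warm exactly the same entry.

```ruby
# Hypothetical helper: reads/writes through Rails.cache directly, so no
# template digest is mixed into the key and a Sidekiq job can prime it.
module OrdersHelper
  def cached_order_rows(user)
    # Include the newest updated_at so a stale entry is never served.
    key = "order_rows/#{user.id}/#{user.orders.maximum(:updated_at).to_i}"
    Rails.cache.fetch(key) do
      render(partial: "orders/row", collection: user.orders)
    end
  end
end
```

A Sidekiq worker run at login can compute the same key and call the same fetch (or Rails.cache.write) with the pre-rendered HTML, so the first page view is already a cache hit.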
When saving changes to a model in rails, two popular gems to track changes are paper_trail and audited. These gems create records that contain versioning information every time the model is updated.
How can I reduce the number of records created when saving user input as they type?
For example:
The user starts filling out an input for the title of a model
The user types "App", then a few seconds later "le"
The database now contains two versioning records, one for "App" and one for "Apple"
Note: Putting a delay on the user interface (say 250 ms before saving) will reduce the number of records, but we will still persist a bunch of updates that are really just one edit.
I think the best solution would be to skip versioning based on conditions (column, last_updated_at, etc.) and then wrap the controller action in the following block.
PaperTrail.request(enabled: false) do
# no versions created
end
Solution
Use different endpoints for the keystroke requests and form submission.
Keystroke Endpoint
Used only by keystroke requests; wrapped with the code above to skip versioning.
Form submission endpoint
Used only when the form is submitted.
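The two endpoints can be sketched as follows (controller, route, and action names are assumptions; only the PaperTrail.request(enabled: false) wrapper comes from the snippet above):

```ruby
class PostsController # < ApplicationController in a real app
  # PATCH /posts/:id/autosave -- hit by the (debounced) keystroke requests
  def autosave
    PaperTrail.request(enabled: false) do
      post.update(post_params) # record is saved, but no version row is written
    end
    head :ok
  end

  # PATCH /posts/:id -- hit once, on explicit form submission
  def update
    post.update(post_params)   # one version row for the real save
    redirect_to post
  end

  private

  def post
    @post ||= Post.find(params[:id])
  end

  def post_params
    params.require(:post).permit(:title)
  end
end
```

The client points its autosave requests at /posts/:id/autosave and the form's submit at the regular update route, so versioning is skipped exactly where intended.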
Let's say we have these two outputs in a view:
@post.user.first_name
current_user.posts.size
If the outputs above are called multiple times each in a single view, is Rails "smart enough" not to hit the database every time?
If the answer is yes: are there any general rules about this worth knowing?
If the answer is no: would it then be good practice to store the associated object(s) in their own variable?
ActiveRecord caches queries by default for performance. If you run the same query a few times within a single request, you will see that the later executions are much faster thanks to the SQL query cache, and this applies to queries triggered from the view as well.
You can manually cache objects with Rails' Fragment Caching feature.
Fragment Caching allows a fragment of view logic to be wrapped in a
cache block and served out of the cache store when the next request
comes in.
There are also several Cache Stores for you to use.
Rails provides different stores for the cached data (apart from SQL
and page caching).
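For example, the store is chosen in the environment config; :mem_cache_store requires the dalli gem and a running memcached server (the hostname below is a placeholder):

```ruby
# config/environments/production.rb
config.cache_store = :mem_cache_store, "cache-1.example.com"

# or, for a single-process setup with no external dependency:
config.cache_store = :memory_store, { size: 64.megabytes }
```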
Queries for a view are done in the controller action that renders the view. You will notice that you define @post in your controller action, but you may not see current_user defined. That is typically because you're using the devise gem, and the code defining the current_user method is part of the gem.
Everything you need to render a view should be queried by ActiveRecord in your controller and held in memory in preparation for rendering the view, so multiple calls to @post or current_user shouldn't matter.
Sometimes objects are reached through associations in a view, e.g. @post.user.name is going to have to query for the user. This works, but it is better Model-View-Controller separation to eager load the users along with the posts in the controller. Following MVC and making sure your queries happen in the controller will also help you avoid N + 1 query performance issues. See Rails Active Record Query Eager Load.
An example of querying a post together with its user:
@post = Post.includes(:user).find(params[:id])
I have an action in my Rails controller that accepts some parameters that are used for filtering and pagination. The response returned is only in JSON format and there is no view template.
I understand that caching can be done for the action and parameters using caches_action and cache_path. What I do not understand and have not been able to find is how to invalidate the cache for that action across all parameters. The cached action is similar to an "index" action and if there is a create/update/delete, the pagination is affected so the entire cache for that action must be removed.
This must be a fairly common problem, but I haven't been able to identify how it is solved.
The only approach I could think of, and find by googling, is to create custom cache keys via the cache_path attribute:
If looking up a single item, the custom cache key includes the updated_at date for that item.
If looking up a list of items, the custom cache key includes the count of items and the last updated_at date for the list of items.
The negative side-effect is that there will be database calls even if there is a cache hit.
The positive side-effect is that the cache is self-expiring since I am using memcached and memcached uses LRU replacement.
I get the feeling I am re-inventing the wheel but I cannot find solid documentation anywhere to solve this problem better.
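A sketch of the custom-key idea described above, with assumed names: fold the collection count and the newest updated_at into one tag, so any create/update/delete changes the key for every parameter combination at once, and the abandoned entries simply age out of memcached via LRU.

```ruby
# Two cheap queries per request -- the database cost paid even on a cache
# hit, as noted above -- produce a tag that changes on any write.
def collection_cache_tag(scope)
  "#{scope.count}-#{scope.maximum(:updated_at).to_i}"
end

# Used with action caching (the actionpack-action_caching gem in Rails 4+),
# e.g.:
#
#   caches_action :index, cache_path: proc {
#     params.merge(tag: collection_cache_tag(Item.all))
#   }
```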
I have an E-Commerce Rails application where we need to output Orders placed by Customers within the last year on a single page, for reporting reasons. The data set is quite large, and displaying these Orders takes quite a bit of SQL processing. This task was initially very slow, so I moved all the required order details to a Redis server; fetching the data has become really fast, but we are still not quite there.
Here's what we have:
Rendered **path**/sales_orders.html.haml within layouts/admin (39421.1ms)
Completed 200 OK in 44925ms (Views: 39406.8ms | ActiveRecord: 417.2ms)
The application is hosted on Heroku and if a request takes more than 30s it is killed. As you can see we are well above that limit. Most of the time is lost in rendering the view.
The page contains a date filter where the user gets to choose what Date Range to select the Orders from. So, caching is not the ideal solution since Date Ranges might change every time.
Any ideas how this can be done?
The Redis keys are of the following format (each key holds a Redis hash):
orders:2012-01-01:123
orders:yyyy-mm-dd:$order-id
User simply provides a Date range and I get all the keys within that date range under the orders namespace.
Here's how I would get the Customer Name for instance from the Redis order key:
= REDIS.hget(order_key, "customer_name")
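A sketch of turning the selected date range into those per-day key patterns and collecting the matching keys; REDIS is assumed to be the configured client, and scan_each is used rather than KEYS so Redis is not blocked while matching:

```ruby
require "date"

# One "orders:yyyy-mm-dd:*" pattern per day in the range.
def day_patterns(from, to)
  (from..to).map { |day| "orders:#{day.strftime('%Y-%m-%d')}:*" }
end

# Collect every order key in the range; each hash can then be read
# with REDIS.hget(key, "customer_name") as above.
def order_keys_between(from, to)
  day_patterns(from, to).flat_map { |pattern| REDIS.scan_each(match: pattern).to_a }
end
```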
Consider building the report with a periodic task using the Heroku Scheduler addon.
As long as last-minute orders are not required to be included in the report, you can build your reports nightly and have them available for immediate download to read with your morning coffee, or even have them mailed to you (or whoever needs to read them.)
If you need interactive selection of periods for reports, you will need to queue the requests up and build the reports using background jobs.
Almost all your time is spent in rendering views. That probably means you have a lot of partials or other complex view logic. Some of your options are:
Paginate your output, but offer a PDF or CSV for unpaginated output.
Simplify your view logic...a lot.
Try a helper like cycle instead of rendering complex tables or nested partials.
Move your rendering into the client with JSON and JavaScript.
That's about it, really. If one or more of those don't get you where you need to go, it may be time to revisit your requirements.
I suggest you use fragment caching. Reading a fragment is very fast (~0.5 ms), and in my experience you'll see a huge speedup by not re-rendering your partials again and again. It's also a fairly cheap solution, as Rails takes care of invalidating the fragments (if you use the model as part of the cache key), and it requires minimal changes to your template. That is, the solution could be as simple as:
<% @orders.each do |order| %>
<% cache ["v1", order] do %>
<%= render order %>
<% end %>
<% end %>
I must be missing something obvious regarding caching strategies in Rails.
I have a prices table that logs prices at particular times. So the current price would be the last row added to the prices table. Very frequently, the table is queried to display the entries, so caching that fetching would be great to stop (very) frequent queries hitting the database.
So as far as I can see it would be fine for my entire app to cache that data completely until a new row gets added.
Does Rails caching handle that well? I see examples where, on update of an ActiveRecord object, you expire the cache and force the updated object to be retrieved again. But I want the collection of objects (e.g. Price.find(:all)) to be cached until it contains a new object, so adding a new row to the db would have to expire the cache and force a new retrieval. (The new price might be the latest for a few days, or it might only last a few minutes.)
If not self-evident, this is the first time I've ever looked at caching. I'll be attempting to deploy memcache on heroku.
Thanks a lot!
Edit: It might be useful to point out that the Rails controllers only render JSON responses (it's a rich single-page app), so the main thing to cache is the database query. This is why it confuses me: I see partial caching and page caching, but I'm struggling to identify the type of caching I'm describing above.
Dave
To cache your prices, you can use the following. It would be helpful to place this somewhere it could be reused (such as your model).
cached_prices = Rails.cache.fetch("cached_prices", :expires_in => 300) do
  Price.find(:all)  # Rails 4+: Price.all.to_a, so the array (not a lazy relation) is cached
end
The above code caches the query for 5 minutes (300 seconds). In order to manually delete the cache entry (say when a new Price entry is created), call the following:
Rails.cache.delete("cached_prices")
To make this easier, you can place the cache delete code in an Observer on the Price model.
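Note that Observers were removed from Rails core in 4.0 (extracted to the rails-observers gem); an after_commit callback on the model achieves the same thing. A sketch, with the expiry pulled into a small hypothetical module so it can be shared:

```ruby
# Hypothetical module holding the expiry, invoked from the model, e.g.:
#
#   class Price < ActiveRecord::Base
#     after_commit -> { PriceCacheExpiry.expire! }, on: :create
#   end
module PriceCacheExpiry
  CACHE_KEY = "cached_prices"

  # Drop the cached collection so the next fetch re-queries the table.
  def self.expire!
    Rails.cache.delete(CACHE_KEY)
  end
end
```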
You can find much more information on all of the types of caching you have available to you here: http://guides.rubyonrails.org/caching_with_rails.html