I want to cache query results so that the same results are served across multiple requests until I invalidate the cache. For instance, I want to render a sidebar that lists all the pages of a book, much like a book's index. Since I want to show it on every page of the book, I have to load it on every request. I can cache the rendered sidebar with action caching, but I also want to cache the query results that are used to generate the HTML for the sidebar. Does Rails provide a way to do this? How can I do it?
You can cache the query results using ActiveSupport's cache store, which can be backed by a memory store such as memcached, or by a database store if you provide your own implementation. Note that a plain in-process memory store is not shared across multiple Ruby processes, which will be the case if you're deploying to Mongrel Cluster or Phusion Passenger, so in that situation you'll want memcached or a database-backed store.
This Railscast has the details
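For the sidebar case specifically, here is a minimal sketch using the low-level Rails.cache API (the Book/Page model names and the cache key are assumptions, not from the question):

    # The block runs only on a cache miss; `to_a` forces the query so the
    # actual result set, not a lazy relation, gets stored.
    def sidebar_pages(book)
      Rails.cache.fetch("book-#{book.id}-sidebar-pages") do
        book.pages.order(:position).to_a
      end
    end

    # Invalidate explicitly whenever the book's pages change:
    # Rails.cache.delete("book-#{book.id}-sidebar-pages")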
You could also check for your action cache before querying the database in the controller.
I have a Rails app that searches a static set of documents, and I need to figure out the best way to cache result sets. I can't just use the native ActiveRecord caching.
My situation:
I'm using the will_paginate gem, and at the moment, the query is running every time the user changes pages. Some queries are too complex for this to be responsive, so I need to cache the results, at least during an individual session. Also, the user may have multiple tabs open, running separate queries simultaneously. The document set is static; the size of the set is on the order of tens of thousands of documents.
Why straightforward ActiveRecord caching won't work:
The user can search the contents of the documents, or search based on metadata restrictions (like a date range), or both. The metadata for each document is stored in an ActiveRecord, so those criteria are applied with an ActiveRecord query.
But if they add a search term for the document content, I run that search using a separate FastCGI application, because I'm doing some specialized search logic. So I pass the term and the winnowed-down document list to the FastCGI application, which responds with the final result list. Then I do another ActiveRecord query: where("id IN (?)", returnedIds)
By the way, it's these FastCGI searches that are sometimes complex enough to be unresponsive.
My thoughts:
There's the obvious-to-a-newbie approach: I can use the metadata restrictions plus the search term as a key; they're already stored in a hash. They'd be paired up with the returnedIds array. And this guide at RubyOnRails.org mentions the cache stores that are available. But it's not clear to me which store is best, and I'm also assuming there's a gem that's better for this.
I found the gem memcached, but it's not clear to me whether it would work for caching the results of my FastCGI request.
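A minimal sketch of the approach described in the question, using Rails.cache (which works with a memcached-backed store, among others); every method and variable name below is hypothetical:

    require "digest/md5"

    # Key the cache on the search term plus the metadata restrictions,
    # and store the id array that the FastCGI search returns.
    def cached_result_ids(term, metadata_filters)
      key = "doc-search-" + Digest::MD5.hexdigest([term, metadata_filters.sort].to_json)
      Rails.cache.fetch(key, expires_in: 1.hour) do
        fastcgi_search(term, winnowed_ids(metadata_filters)) # the expensive call
      end
    end

    # Paging can then reuse the cached ids instead of re-running the search:
    # Document.where("id IN (?)", cached_result_ids(term, filters))
    #         .paginate(page: params[:page])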
I have a Ruby on Rails application which gets lots of data from social media sites like Twitter, Facebook etc.
There is an index page that shows records in pages. I am using Kaminari for paging.
My issue is big data, I guess. Let's say I have millions of records and want to show them on my index page with Kaminari. When I try to load the page in the browser, Heroku gives me an H12 error (request timeout).
What can I do to improve my app's performance? My idea is to fetch only the records that will be shown on the index page. Likewise, when the Kaminari second-page link is clicked, fetch only the second page's records from the database. That's the basic idea, but I don't know where to start or how to implement it.
Here is an example piece of code from my controller:
@ca_responses = @ca_responses_for_adaptors.where(:ca_request_id => @conditions)
                                          .order(sort_column + " " + sort_direction)
                                          .page(params[:page]).per(5)
@ca_responses: my records
@ca_responses_for_adaptors: records based on adaptor; think of it as admin, in which case this returns all of the records.
@conditions: picks out the specified adaptor's records, for example only the Twitter-related ones.
You could start by creating a page cache table which is filled in with the data for your search results. That could be one approach.
There could be a few downsides, but if I knew the exact problem I could propose a better solution. I doubt that you will be listing a million users on one page and then accessing them by paginating through the pages, or am I mistaken?
EDIT:
There could be a few problems with pagination. The first is how the paginating gems can end up working: all the data is fetched, and when you click on a page number only the next 5 elements (or however many you have set) from the whole list are shown. The problem here is fetching all the data before paginating; if you have a million records, that could take a while on every page. You could define a new method that runs a SQL query selecting only one page's worth of data from the database, using an OFFSET clause to fetch the data only for that page. In this case the pagination gem is useless, so you would need to remove it. A sketch of this is below.
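A sketch of that manual approach with ActiveRecord's limit and offset (the model and column names are assumptions):

    # Fetch only the records for the requested page; pages are 1-based here.
    def page_of_responses(page, per_page = 5)
      CaResponse.order(:created_at)
                .limit(per_page)
                .offset((page.to_i - 1) * per_page)
    end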
The second option is to use something like a user_cache. By this I mean creating a new table that holds just a few records: the records that will be displayed on the screen. That table will be smaller than the usual user table, so it will be faster to search through.
There could be other, more advanced solutions, but I doubt you could (or would want to) use them in your application.
Kaminari already paginates your records as expected.
Heroku is prone to random timeout errors due to its random router.
Try to reproduce it locally. You may have bottlenecks in your code which indeed make your request take too long to return. You should not have any problem requesting 5 items from the database, so you may have code before or after that query which takes a long time to run.
If everything is OK locally with production data, you can add new_relic to analyze your requests and see whether some problem occurs specifically in production (and why).
If it appears the Heroku router is indeed the problem, you can still try to use unicorn as a webserver, but you have to take special care that your app does not consume too much memory (each unicorn worker consumes the RAM of a whole app instance, and you may hit Heroku's memory limits, which would produce R14 errors in place of those H12s).
I have a Rails app that shows users' posts. Posts can be sorted in many ways, paginated, categorized, etc. I am handling all of these clicks over Ajax.
However, every time I click a category, a sort-by param, or a new page, it runs the ENTIRE request again and then returns the results in the way specified. Is there any way to cache my first results and THEN sort, paginate, and categorize quickly?
To enhance performance, a better way is to use memcached. If you have memcached installed and wired into your Rails app, you can cache that query for a certain period of time without querying the database again; the results are fetched directly from memory instead, which improves performance greatly. You may want to check out https://github.com/nkallen/cache-money.
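For example, with a memcached-backed cache store, a minimal sketch of caching one sorted, filtered page of results (the Post model, parameter names, and key layout are assumptions, not from the question):

    # config/environments/production.rb
    # config.cache_store = :mem_cache_store, "localhost:11211"

    # In the controller: cache the results for ten minutes, keyed on every
    # parameter that changes the result set.
    per_page = 10
    page     = [params[:page].to_i, 1].max  # 1-based page number
    key      = "posts/#{params[:category]}/#{params[:sortby]}/#{page}"
    @posts = Rails.cache.fetch(key, expires_in: 10.minutes) do
      Post.where(category: params[:category])
          .order(params[:sortby] == "oldest" ? "created_at ASC" : "created_at DESC")
          .offset((page - 1) * per_page)
          .limit(per_page)
          .to_a
    end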
This is a question of using Ajax vs. a pure client-side JavaScript implementation.
When using Ajax, the question you should ask yourself is: does this action require more information or logic from the server? If the answer is no and you can do it using only the information already loaded in the client browser, you should try to implement it in JavaScript.
In this case, there are plenty of solutions for javascript sorting out there. For example, if you want a simple table-like sorting (your table could be invisible) you could use something like this plugin: http://yoast.com/articles/sortable-table/
If you're looking for a more custom solution, you could write the JavaScript by hand; the post "Sort <div> elements using jQuery" gives a good starting point.
Hope that helps!
I'm working on a life-streaming type app in Rails and am constantly parsing several different RSS feeds using Feedzirra. To support scaling, I have to implement some kind of caching while also allowing the cached results to be paginated in the view. The cache can expire as infrequently as once a day.
Being a novice to caching in Rails, what types of caching would be recommended for this? Also, I'm doing most of the feed-parsing operations in modules in my lib/ directory, and I'm not sure whether that would affect caching or make it less than ideal. Ideally I'd like to cache the array of results the RSS feed returns and do some post-processing on them before sending them to the view.
Any help would be much appreciated!
I suggest you use a gem to run a scheduled task via cron, collect all the desired results from your RSS feeds, and save them to an XML file or even a table.
On subsequent requests, load the results from that XML file or table and generate static cached pages (HTML files).
And every time you run your scheduled task, erase the previously saved files, preventing old results from being displayed. A sketch of the scheduling piece follows.
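One way to schedule that task, assuming the whenever gem and a hypothetical rake task (FeedParser stands in for the parsing modules in lib/):

    # config/schedule.rb (whenever gem)
    every 1.day, at: "4:00 am" do
      rake "feeds:refresh_cache"
    end

    # lib/tasks/feeds.rake -- sketch of the task body
    namespace :feeds do
      task refresh_cache: :environment do
        entries = FeedParser.collect_all  # re-parse all feeds
        File.write(Rails.root.join("public", "cached_feeds.xml"), entries.to_xml)
      end
    end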
In the process of looking at my logs from Mongrel, I found some SQL statements that I wanted to optimize. While looking into these, I noticed that these entries sometimes have CACHE in front of them, e.g.:
CACHE (0.0ms) SELECT * FROM `customers` WHERE (`customers`.`id` = 35)
Given the execution time, I'm assuming Mongrel really is caching this data. My question is how is this configured? I haven't been able to find much online about caching of model data; most of what I've read has to do with caching static pages or page fragments. I haven't done anything explicitly to enable this caching, so I'm really just looking for a pointer on how it's configured and how it works. Thanks in advance!
It doesn't actually have anything to do with Mongrel. Rails wraps every controller action in ActiveRecord::Base.cache by default. This means that within the scope of that action it will cache the results of queries and serve results from the cache rather than hitting the database again. You should see an identical query higher in the log (within the same action) that is not prefixed with CACHE; that is the original query whose results were stored.
Some more details here.
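The same query cache can be switched on manually outside a controller action, which makes the behavior easy to observe:

    # Wrap any block in ActiveRecord::Base.cache to get the same per-block
    # SQL query caching that controller actions get by default.
    ActiveRecord::Base.cache do
      Customer.find(35)   # runs the SELECT against the database
      Customer.find(35)   # identical query: served from the cache, logged as CACHE
    end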