In the process of looking at my logs from Mongrel, I found some SQL statements that I wanted to optimize. While looking into these, I noticed that these entries sometimes have CACHE in front of them, e.g.:
CACHE (0.0ms) SELECT * FROM `customers` WHERE (`customers`.`id` = 35)
Given the execution time, I'm assuming Mongrel really is caching this data. My question is how is this configured? I haven't been able to find much online about caching of model data; most of what I've read has to do with caching static pages or page fragments. I haven't done anything explicitly to enable this caching, so I'm really just looking for a pointer on how it's configured and how it works. Thanks in advance!
It isn't actually anything to do with Mongrel. Rails wraps every controller action in ActiveRecord::Base.cache by default. This means that within the scope of that action it will cache the results of queries and serve the results from the cache rather than hitting the database again. You should see an identical query higher up in the log (within the same action) without the CACHE prefix; that is the original query whose results were stored.
Some more details here.
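As an illustration of the behaviour described above, here is a plain-Ruby sketch of a per-action query cache. This is not ActiveRecord's actual internals; all names here are made up:

```ruby
# Illustrative sketch only: repeated identical SQL within one "action"
# is served from an in-memory store instead of re-running the query.
class QueryCache
  def initialize(&runner)
    @runner = runner   # stands in for the actual database call
    @store  = {}
  end

  # Returns the stored rows for a repeated SQL string; Rails logs such
  # hits with the CACHE prefix and a near-zero duration.
  def select(sql)
    return @store[sql] if @store.key?(sql)
    @store[sql] = @runner.call(sql)
  end
end
```

Within a single action the second identical `select` never reaches the database, which is why the CACHE log entries report 0.0ms.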
I've run into some very strange behaviour on a really simple show page in a Rails project using ActiveAdmin.
When I first load my show#Projects page in ActiveAdmin, the correct project data is displayed and the logs look fine (no SQL queries are cached).
After 3 manual page reloads, the displayed data is outdated, and the logs show that all SQL queries hit the cache!
How is this possible?
My cache config is as basic as it gets:
no manual caching anywhere in the code
only the default config.action_controller.perform_caching = true in config/environments/production.rb, which uses the default cache store (FileStore)
Even stranger, if I change perform_caching to false, the problem remains exactly the same!
Would someone be able to tell me where this cache is coming from and how I can disable it?
Thanks a lot!
I found my solution!
My problem was that all SQL queries were being cached from the very beginning of an action, when they should not have been.
In theory, with SQL Query caching, query caches are created at the start of an action and destroyed at the end of that action and thus persist only for the duration of the action.
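That lifecycle can be sketched in plain Ruby (the method name is hypothetical; the `ensure` is the cleanup step that resets request state, which is the part that the bug described next effectively skipped):

```ruby
# Sketch of the per-action query-cache lifecycle: the cache exists only
# for the duration of the block, and is cleared even if the action raises.
def with_query_cache(cache)
  yield cache
ensure
  cache.clear   # request state reset before the next request
end
```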
For some reason, this was broken in Rails 6.0.4.4, because:
Under certain circumstances, the middleware isn't informed that the response body has been fully closed, which results in the request state not being fully reset before the next request
This was fixed by 6.0.4.5 with this commit.
It's now working as expected!
I have a search page in which I show results.
I configured Redis and I'm able to do Redis fragment caching for the result set; it works as expected.
In my application, depending on user preferences, some elements in the search results change per user. So what I need now is to cache per individual user.
I'm able to do this part as well with a statement like the one below (SET requires a value argument; results here stands for the serialized result set):
$redis.set("result_#{current_user.id}", results) if current_user
My question is: how can I delete all of this cache for every user? I won't know which users a cache entry was created for, and I don't want to run an SQL query on User to get all those IDs.
Does redis-namespace help here, and how can I delete a namespace and the keys under it?
Also, is it best practice to cache user-specific data like this?
Does Redis clear on each deployment? I mean, when does the Redis cache get cleared?
You can let Redis remove keys with the LRU algorithm. Configure the memory limit with the maxmemory directive, and then set maxmemory-policy to the value allkeys-lru. You can find more information about how to use Redis as a cache at redis.io/topics/lru-cache.
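A minimal redis.conf sketch for that setup (the 256mb limit is an arbitrary example to tune for your instance):

```conf
# Evict least-recently-used keys across the whole keyspace once the
# memory limit is reached, so Redis behaves as a bounded cache.
maxmemory 256mb
maxmemory-policy allkeys-lru
```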
I have a Ruby on Rails application which gets lots of data from social media sites like Twitter, Facebook etc.
There is an index page that shows records as paged. I am using Kaminari for paging.
My issue is big data, I guess. Let's say I have millions of records and want to show them on my index page with Kaminari. When I try to load the page in a browser, Heroku gives me an H12 error (request timeout).
What can I do to improve my app's performance? My idea is to fetch only the records that will be shown on the index page: when the second Kaminari page link is clicked, fetch only the second page of records from the database. That's the basic idea, but I don't know where to start or how to implement it.
Here is an example piece of code from my controller:
@ca_responses = @ca_responses_for_adaptors.where(:ca_request_id => @conditions)
                                          .order(sort_column + " " + sort_direction)
                                          .page(params[:page]).per(5)
@ca_responses: my records
@ca_responses_for_adaptors: records scoped by adaptor; think of an admin, for whom this returns all of the records.
@conditions: selects the records for a specific adaptor, for example only Twitter-related records.
You could start by creating a page cache table that is filled with the data for your search results. That could be one approach.
There could be a few downsides, but if I knew the exact problem I could propose a better solution. I doubt you want to list a million users on one page and access them by paginating through the pages (?), or am I mistaken?
EDIT:
There could be a few problems with pagination. The first is how pagination gems can end up working here: the whole result set is built, and when you click a page number only the next 5 elements (or however many you have set) are taken from that list. The problem is fetching all the data before paginating: with a million records this can take a while on every page. You could instead define a method that runs an SQL query selecting a single page's worth of data from the database, using an OFFSET clause to fetch only the rows for that page. In that case the pagination gem is unnecessary, so you would remove it.
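As a sketch of the arithmetic such a hand-rolled query would use (the method name is made up; in SQL this becomes LIMIT ... OFFSET ...):

```ruby
# Given a 1-based page number and a page size, compute the window of
# rows to request from the database for that page only.
def pagination_window(page, per_page)
  { limit: per_page, offset: (page - 1) * per_page }
end
```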
The second option is to use something like a user_cache. By this I mean creating a new table that holds just a few records, the ones that will be displayed on the screen. That table will be smaller than the usual users table, so it will be faster to search through.
There could be other, more advanced solutions, but I doubt you could (or would want to) use them in your application.
Kaminari already paginates your records as expected.
Heroku is prone to random timeout errors due to its random router.
Try to reproduce locally. You may have bottlenecks in your code that really do make your request take too long to return. Requesting 5 items from the database should not be a problem, so code before or after that query may be taking a long time to run.
If everything is OK locally with production data, you can add new_relic to analyze your requests and see whether some problem occurs specifically in production (and why).
If it turns out the Heroku router really is the problem, you can still try Unicorn as a web server, but take special care that your app does not consume too much memory (each Unicorn worker loads a full copy of the app, and you may hit Heroku's memory limits, which would produce R14 errors in place of those H12s).
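For reference, a minimal config/unicorn.rb sketch under those constraints (the worker count and timeout are assumptions to tune against your dyno's memory):

```ruby
# config/unicorn.rb -- minimal sketch; tune worker_processes to memory
worker_processes 3   # each worker holds a full copy of the app in RAM
timeout 25           # fail fast, just under Heroku's 30s router timeout
preload_app true     # load the app once, share memory via copy-on-write
```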
I have an MVC4 website and I'm using the OutputCache to cache the result of a view that displays multiple pages of ranked results. The cached output varies on the Page parameter. The rankings are a function of time, so the results for any given page can be out of sync depending on when they're cached, which is exacerbated by the fact that I'm using an infinite-scroll mechanism where duplicate results can be shown if a result gets pushed to the next page.
The ideal solution to this problem would be to cache some reasonable number of pages all at once. This would require being able to check if the cached output is expired, re-generate cached results if they are expired and then return the cached response. Is this possible?
I should also note that I'm using OutputCaching with the Azure Output caching provider, and I have a dedicated caching role (Note: not the shared caching service).
Any help would be greatly appreciated.
This would require being able to check if the cached output is
expired, re-generate cached results if they are expired and then
return the cached response. Is this possible?
This is exactly how OutputCaching works - request a page, if it exists in cache and isn't expired, retrieve it from cache, otherwise, render page and update the cache.
If the data really is this dynamic, you are probably creating more work/problems by caching the output without realizing any gains in performance (KISS applies here! Don't create a solution for a problem if you can avoid the problem in the first place).
However, to architect a solution as you describe (if really required), you could use an Azure Queue and a Worker Role. Have your ratings engine put a value on the queue when a rating is added/updated. Then have the Worker Role poll the queue every second (for example) for values. If a value is found, have the Worker Role make a web request against the cached page. This will update the output cache if it has expired. However, you are still limited by the cache expiration, unless you do something like the approach from this SO post:
HttpResponse.RemoveOutputCacheItem() is probably the method you want
to use. If you can figure out what name the actions are cached under,
you can remove just the specific action (try setting a breakpoint or
dumping all of the names of cached items to the screen)
I want to cache query results so that the same results are fetched for more than one request, until I invalidate the cache. For instance, I want to render a sidebar that lists all the pages of a book, much like a book's index. Since I want to show it on every page of the book, I have to load it on every request. I can cache the rendered sidebar with action caching, but I also want to cache the query results used to generate the sidebar's HTML. Does Rails provide a way to do this? How can I do it?
You can cache the query results using ActiveSupport's cache store, which can be backed by a memory store such as memcached, or by a database store if you provide your own implementation. Note that you'll want a store shared between processes if the cache spans multiple Ruby processes, which will be the case if you're deploying to a Mongrel cluster or Phusion Passenger.
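The underlying cache-aside pattern that the store's fetch API exposes can be sketched in plain Ruby (the store here is just a Hash standing in for a real backend, and the method name mirrors the fetch idiom rather than any Rails internal):

```ruby
# Cache-aside "fetch": return the stored value if present, otherwise
# run the expensive block once and remember its result under the key.
def fetch(store, key)
  return store[key] if store.key?(key)
  store[key] = yield
end
```

Until the key is deleted from the store, repeated fetches reuse the first result, which is exactly the cross-request behaviour the question asks for.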
This Railscast has the details.
You could also check for your action cache before querying the database in the controller.