Check size of Rails cache?

Is there a way to check the size of the Rails cache?
Something in the vein of: Rails.cache.size => 390 MB
I assume there's some slight variation between data stores, but right now I'm not sure how to even start to check the disk space a cache is taking up.

That depends entirely on your cache store and the backend you use.
Here is an example from my Heroku instance running MemCachier:
Rails.cache.stats
# => {"xxx.memcachier.com:11211"=>{"curr_items"=>"278", "bytes"=>"3423104", "evictions"=>"0", "total_items"=>"7373", "curr_connections"=>"7", "total_connections"=>"97", "cmd_get"=>"141674", "cmd_set"=>"7373", "cmd_delete"=>"350", "cmd_flush"=>"6", "get_hits"=>"63716", "get_misses"=>"77958", "delete_hits"=>"162", "delete_misses"=>"188", "incr_hits"=>"0", "incr_misses"=>"0", "decr_hits"=>"0", "decr_misses"=>"0"}}
FileStore does not have such a method:
Rails.cache.stats
# => NoMethodError: undefined method `stats' for #<ActiveSupport::Cache::FileStore:0x007ff1cbe905b0>
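FileStore does keep everything on disk, though, so you can approximate its size from disk usage. A minimal sketch, assuming a FileStore-backed Rails.cache (cache_path is tmp/cache by default):
# Sum the sizes of all files under the FileStore cache directory.
path = Rails.cache.cache_path
bytes = Dir.glob("#{path}/**/*")
           .select { |f| File.file?(f) }
           .sum { |f| File.size(f) }
puts "#{(bytes / 1024.0**2).round(1)} MB"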
When running memcached locally, I get a different result set:
Rails.cache.stats
# => {"127.0.0.1:11211"=>{"pid"=>"327", "uptime"=>"517931", "time"=>"1392163858", "version"=>"1.4.16", "libevent"=>"2.0.21-stable", "pointer_size"=>"64", "rusage_user"=>"2.257386", "rusage_system"=>"4.345445", "curr_connections"=>"15", "total_connections"=>"16", "connection_structures"=>"16", "reserved_fds"=>"20", "cmd_get"=>"0", "cmd_set"=>"0", "cmd_flush"=>"0", "cmd_touch"=>"0", "get_hits"=>"0", "get_misses"=>"0", "delete_misses"=>"0", "delete_hits"=>"0", "incr_misses"=>"0", "incr_hits"=>"0", "decr_misses"=>"0", "decr_hits"=>"0", "cas_misses"=>"0", "cas_hits"=>"0", "cas_badval"=>"0", "touch_hits"=>"0", "touch_misses"=>"0", "auth_cmds"=>"0", "auth_errors"=>"0", "bytes_read"=>"48", "bytes_written"=>"30", "limit_maxbytes"=>"67108864", "accepting_conns"=>"1", "listen_disabled_num"=>"0", "threads"=>"4", "conn_yields"=>"0", "hash_power_level"=>"16", "hash_bytes"=>"524288", "hash_is_expanding"=>"0", "malloc_fails"=>"0", "bytes"=>"0", "curr_items"=>"0", "total_items"=>"0", "expired_unfetched"=>"0", "evicted_unfetched"=>"0", "evictions"=>"0", "reclaimed"=>"0"}}

In addition to #phoet's answer, for a Redis cache you can use the following to get a human-readable figure:
Rails.cache.stats["used_memory_human"] #=> 178.32M
Here used_memory_human can be any key returned by running the INFO command on the Redis server.
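For example (a sketch; which fields exist depends on your Redis version):
stats = Rails.cache.stats
stats["used_memory_human"] # => "178.32M"
stats["maxmemory_human"]   # any other INFO field is read the same way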

Related

Rails.cache.delete does not work (Dalli store)

Hi, I'm new to Rails and I'm trying to delete a key from the Rails cache. First I fetch it:
Rails.cache.fetch('datasources_field_options')
I see a big array in the Rails console.
Then I try to delete it like this:
Rails.cache.delete('datasources_field_options')
This returns true. Here is the console output:
irb(main):004:0> Rails.cache.delete('datasources_field_options')
2019-09-26 14:45:56 +0000 1569509156754 (24618) Cache delete: datasources_field_options
=> true
irb(main):005:0>
Then I check again whether it was deleted:
Rails.cache.fetch('datasources_field_options')
No, it wasn't. The array is still there. The cache type is ActiveSupport::Cache::DalliStore.
What am I missing? Why can't I delete this specific key from the cache?
EDIT
I checked the configuration - the store is configured to use Redis on a remote host.
I made a quick video to show what I am doing:
https://youtu.be/q2WyYniacNw
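Given that the console reports DalliStore (memcached) while the configuration points at Redis on a remote host, a diagnostic sketch worth running (an assumption about the cause, not a confirmed fix):
# Confirm which store and configuration this console process actually sees.
Rails.cache.class                    # => ActiveSupport::Cache::DalliStore here
Rails.application.config.cache_store # what config/environments/* set
# If the configured store (Redis, remote host) differs from the live one,
# the delete may be hitting a different cache than the one the app reads from.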

What's the fastest way to delete all errbit errors from mongodb?

I'd like to start over with Errbit: there are millions of records in our MongoDB database and we hadn't been cleaning them up. I want to start fresh, but I don't want to lose my user accounts.
I've tried to run these routines (https://mensfeld.pl/2015/01/making-errbit-work-faster-by-keeping-it-clean-and-tidy/):
bundle exec rake errbit:clear_resolved
desc 'Resolves problems that didnt occur for 2 weeks'
task :cleanup => :environment do
  offset = 2.weeks.ago
  Problem.where(:updated_at.lt => offset).map(&:resolve!)
  Notice.where(:updated_at.lt => offset).destroy_all
end
but the second one (deleting problems and notices over two weeks old) just seems to run forever.
Querying the problems and notices collections via the mongo shell doesn't show any being deleted. We're using Errbit v0.7.0-dev and MongoDB 3.2.22.
The fastest way would be to get a mongo console and drop most of the collections. Stop your Errbit server, open a mongo console, connect to the db you use, and run:
> db.errs.drop()
true
> db.problems.drop()
true
> db.backtraces.drop()
true
> db.notices.drop()
true
> db.comments.drop()
Problem.where(:updated_at.lt => 2.months.ago).destroy_all
runs too long because of an N+1 problem with the recursive deletion of Err, Notice, and Comment records; Mongoid also does not support nested eager loading. So the only way to delete faster is to collect the ids manually and delete directly, without callbacks:
# Collect the ids up front; delete_all issues one bulk delete per
# collection and skips Mongoid callbacks entirely.
problem_ids = Problem.where(:updated_at.lt => 2.months.ago).pluck(:id)
err_ids = Err.where(problem_id: { :$in => problem_ids }).pluck(:id)
Notice.where(err_id: { :$in => err_ids }).delete_all
Err.where(id: { :$in => err_ids }).delete_all
Comment.where(problem_id: { :$in => problem_ids }).delete_all
Problem.where(id: { :$in => problem_ids }).delete_all
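With millions of records, the $in arrays above can get very large; a hedged variant of the same idea works through the ids in slices:
# Same approach, batched to keep each $in array a manageable size.
Problem.where(:updated_at.lt => 2.months.ago).pluck(:id).each_slice(10_000) do |ids|
  err_ids = Err.where(problem_id: { :$in => ids }).pluck(:id)
  Notice.where(err_id: { :$in => err_ids }).delete_all
  Err.where(id: { :$in => err_ids }).delete_all
  Comment.where(problem_id: { :$in => ids }).delete_all
  Problem.where(id: { :$in => ids }).delete_all
end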

Slow cache read on first cache fetch in Rails

I am seeing some very slow cache reads in my Rails app. Both Redis (redis-rails) and memcached (dalli) produce the same results.
It looks like it is only the first call to Rails.cache that causes the slowness (averaging 500ms).
I am using Skylight to instrument my app, and the request traces there show the same pattern.
I have a Rails.cache.fetch call in this code, but when I benchmark it I see it average around 8ms, which matches what memcache-top shows for my average call time.
I thought this might be Dalli connections opening slowly, but benchmarking that didn't show anything slow either. I'm at a loss for what else to look into.
Does anyone have any good techniques for tracking this sort of thing down in a rails app?
Edit #1
The memcache servers are stored in ENV['MEMCACHE_SERVERS'], and all of them are in the us-east-1 datacenter.
Cache config looks like:
config.cache_store = :dalli_store, nil, { expires_in: 1.day, compress: true }
I ran something like:
100000.times { Rails.cache.fetch('something') }
and calculated the average timings, which came out to roughly 8ms when running on one of my web servers.
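The measurement was along these lines (a sketch of the averaging, not the exact script):
require 'benchmark'

n = 100_000
total = Benchmark.realtime { n.times { Rails.cache.fetch('something') } }
puts "avg: #{(total * 1000 / n).round(3)} ms"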
To test my theory that the first request is slow, I opened a console on my web server and ran the following as the first command:
irb(main):002:0> Benchmark.ms { Rails.cache.fetch('someth') { 1 } }
Dalli::Server#connect my-cache.begfpc.0001.use1.cache.amazonaws.com:11211
=> 12.043342
Edit #2
OK, I split the fetch into a read and a write and tracked them independently with statsd. The averages sit around what I would expect, but the max times on the read are very spiky and reach the 500ms range.
http://s16.postimg.org/5xlmihs79/Screen_Shot_2014_12_19_at_6_51_16_PM.png
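The split instrumentation was roughly as follows (a sketch assuming the statsd-ruby gem; compute_value stands in for the real cache-miss work):
require 'statsd' # from the statsd-ruby gem (an assumption; any statsd client is similar)

statsd = Statsd.new('localhost', 8125) # hypothetical statsd host and port

# Time reads and writes separately so read spikes get their own metric.
value = statsd.time('cache.read') { Rails.cache.read('something') }
if value.nil?
  value = compute_value # hypothetical cache-miss computation
  statsd.time('cache.write') { Rails.cache.write('something', value) }
end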

Heroku/Memcache/Rack::Cache Stats

I am trying to wrap my brain around Rack::Cache, Rails 3.2, Memcache, and Heroku. I think I've got it all working together, as outlined here: http://myownpirateradio.com/2012/01/01/getting-heroku-cedar-and-rails-3-1-asset-pipeline-to-play-nicely-together/
All that said, I am unsure whether memcached is actually doing what it should. Is there any way to get stats from memcached, or to see whether a request was cached by it? I put the current time on a page and can see that it is getting cached (the headers look good too), but how do I know it is all working through memcached, as opposed to the file store?
Thanks.
You can get stats on memcached by doing:
$ heroku run console
Running console attached to terminal... up, run.1
Loading production environment (Rails 3.1.3)
irb(main):001:0> Rails.cache.stats
Dalli/SASL authenticating as app590983%40heroku.com
Dalli/SASL: Authenticated
=> {"mc5.ec2.northscale.net:11211"=>{"evictions"=>"0", "curr_items"=>"627",
"total_items"=>"1257", "bytes"=>"2294318", "reclaimed"=>"0",
"engine_maxbytes"=>"5242880", "bucket_conns"=>"2", "pid"=>"319",
"uptime"=>"6710022", "time"=>"1330731177", "version"=>"1.4.4_207_g19c6b9e",
"libevent"=>"1.4.11-stable", "pointer_size"=>"64",
"rusage_user"=>"34354.590000", "rusage_system"=>"31381.520000",
"daemon_connections"=>"10", "curr_connections"=>"1211",
"total_connections"=>"14127919", "connection_structures"=>"1764",
"cmd_get"=>"9476", "cmd_set"=>"1257", "cmd_flush"=>"0", "auth_cmds"=>"24",
"auth_errors"=>"0", "get_hits"=>"8093", "get_misses"=>"1383",
"delete_misses"=>"0", "delete_hits"=>"0", "incr_misses"=>"0",
"incr_hits"=>"0", "decr_misses"=>"0", "decr_hits"=>"0", "cas_misses"=>"0",
"cas_hits"=>"0", "cas_badval"=>"0", "bytes_read"=>"21983909",
"bytes_written"=>"85267718", "limit_maxbytes"=>"67108864",
"rejected_conns"=>"0", "threads"=>"4", "conn_yields"=>"0"}}
PS: I think you might need to be using the Dalli gem for this to work, but that is the recommended client anyway.
You can also run Rails.cache.class to see which backend Rails is using.
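To see traffic actually reaching memcached rather than a file store, you can also diff its counters around a request. A rough sketch using the cmd_get field from the stats above:
before = Rails.cache.stats.values.first["cmd_get"].to_i
# ...load the cached page in a browser...
after = Rails.cache.stats.values.first["cmd_get"].to_i
puts "memcached handled #{after - before} gets"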

Weird caching issues with Heroku/memcached and Dalli

Consider the following, from my Heroku console:
>> Rails.cache.stats
=> {"server_id"=>{"evictions"=>"0", "curr_items"=>"2064", "total_items"=>"18793", "bytes"=>"7674501", ...
>> Rails.cache.clear
=> [true]
>> Rails.cache.stats
=> {"server_id"=>{"evictions"=>"0", "curr_items"=>"2064", "total_items"=>"18793", "bytes"=>"7674501",
Super weird -- how can I clear my cache!!
Similar issue: https://stackoverflow.com/q/7122513/192791
If you connect directly to the Dalli/memcached client through the console and run flush_all, the cache clears.
i.e.
dc = Dalli::Client.new('localhost:11211')
dc.flush_all
NOTE: the stats take a while to update, but the cache will definitely clear.
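On Heroku the cache is not on localhost, so build the client from the add-on's environment variables instead. A sketch using the MemCachier variable names (an assumption, since names vary by add-on):
dc = Dalli::Client.new(ENV['MEMCACHIER_SERVERS'],
                       username: ENV['MEMCACHIER_USERNAME'],
                       password: ENV['MEMCACHIER_PASSWORD'])
dc.flush_all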
The Expiring Cache section at http://devcenter.heroku.com/articles/building-a-rails-3-application-with-the-memcache-addon suggests using filters
after_save :expire_contact_all_cache
after_destroy :expire_contact_all_cache

def expire_contact_all_cache
  Rails.cache.delete('Contact.all')
end
