Hi, I'm new to Rails, and I'm trying to delete a key from the Rails cache. First I fetch it like this:
Rails.cache.fetch('datasources_field_options')
I see a big array in the Rails console.
Then I try to delete it like this:
Rails.cache.delete('datasources_field_options')
This returns true.
This is from the console:
irb(main):004:0> Rails.cache.delete('datasources_field_options')
2019-09-26 14:45:56 +0000 1569509156754 (24618) Cache delete: datasources_field_options
=> true
irb(main):005:0>
And then I check again if it was deleted:
Rails.cache.fetch('datasources_field_options')
But no, it wasn't deleted: the array is still there.
The cache store is ActiveSupport::Cache::DalliStore.
What am I missing? Why can't I delete this specific key from the cache?
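For reference, this is the minimal sequence I would expect to behave, with Rails.cache.exist? as an extra sanity check (the array of values here is just a stand-in):

# write, delete, then verify the key is gone
Rails.cache.write('datasources_field_options', ['some', 'values'])
Rails.cache.delete('datasources_field_options')  # => true
Rails.cache.exist?('datasources_field_options')  # => false is what I expect
Rails.cache.fetch('datasources_field_options')   # => nil is what I expect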
EDIT
I checked the configuration: the store is configured to use Redis on a remote host.
I made a quick video to show what I am doing:
https://youtu.be/q2WyYniacNw
I'd like to start over with Errbit: there are millions of records in our MongoDB database and we hadn't been cleaning them up. I want to wipe the error data, but I don't want to lose my user accounts.
I've tried to run these routines (https://mensfeld.pl/2015/01/making-errbit-work-faster-by-keeping-it-clean-and-tidy/):
bundle exec rake errbit:clear_resolved
desc "Resolves problems that didn't occur for 2 weeks"
task :cleanup => :environment do
  offset = 2.weeks.ago
  Problem.where(:updated_at.lt => offset).map(&:resolve!)
  Notice.where(:updated_at.lt => offset).destroy_all
end
but the second one (cleaning up problems and notices over two weeks old) just seems to run forever.
Querying the problems and notices collections via the mongo shell doesn't show any being deleted... We're using Errbit v0.7.0-dev and MongoDB 3.2.22.
The fastest way would be to get a mongo console and drop most of the collections. I'd say: stop your Errbit server, open a mongo console, connect to the database you use, and run:
> db.errs.drop()
true
> db.problems.drop()
true
> db.backtraces.drop()
true
> db.notices.drop()
true
> db.comments.drop()
Problem.where(:updated_at.lt => 2.months.ago).destroy_all
runs for too long because of an N+1 problem with the recursive deletion of the associated Err, Notice, and Comment records. Mongoid also does not support nested eager loading, so the only way to delete faster is to collect the IDs manually and delete directly, without callbacks:
problem_ids = Problem.where(:updated_at.lt => 2.months.ago).pluck(:id)
err_ids = Err.where(problem_id: { :$in => problem_ids }).pluck(:id)

# delete_all removes the documents directly, skipping Mongoid callbacks
Notice.where(err_id: { :$in => err_ids }).delete_all
Err.where(id: { :$in => err_ids }).delete_all
Comment.where(problem_id: { :$in => problem_ids }).delete_all
Problem.where(id: { :$in => problem_ids }).delete_all
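If the ID lists are huge, it can also help to delete in batches so each query stays reasonably small. A sketch using plain Ruby's each_slice (the batch size of 10,000 is arbitrary):

problem_ids.each_slice(10_000) do |batch|
  Problem.where(id: { :$in => batch }).delete_all
end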
I am trying to check whether a particular PDF file exists on AWS S3, using the aws-sdk gem (version 2) inside a Ruby on Rails application.
I have the AWS connection established and am currently using the exists? method:
puts @bucket.objects(prefix: "path/sample_100.pdf").exists?
On running the above statement, I get the NoMethodError below:
undefined method 'exists?' for Aws::Resources::Collection
I checked a few documents, but they were not of much help. Is there any other way to achieve this?
Thanks in advance.
I'm not a Ruby developer myself, but I might be able to suggest something.
The usual way to check whether an object exists in Amazon S3 is using the HEAD Object operation. Basically, it returns the metadata (but no content) of an object if it exists, or a 404 error if it doesn't. It's like GET Object, but without the contents of the object.
I just looked it up in the AWS SDK for Ruby API Reference and found this method:
http://docs.aws.amazon.com/sdkforruby/api/Aws/S3/Client.html#head_object-instance_method
Take a look at that, it's probably what you are looking for.
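Something along these lines should work, judging from that reference. Treat it as an untested sketch; the region, bucket, and key are placeholders:

require 'aws-sdk' # aws-sdk v2

s3 = Aws::S3::Client.new(region: 'us-east-1')

begin
  # HEAD Object returns metadata only, never the object body
  s3.head_object(bucket: 'my-bucket', key: 'path/sample_100.pdf')
  puts 'object exists'
rescue Aws::S3::Errors::NotFound
  puts 'object does not exist'
end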
I'd recommend using the much simpler s3 gem (https://github.com/qoobaa/s3) if you only need to deal with S3. You'll be able to do it this way:
object = bucket.objects.find("example.pdf")
As mentioned by Bruno, you can use head_object to get info on the file without actually fetching its contents. If it is not found (or there is some other problem, such as permissions), an exception is raised. So if head_object returns, the file exists.
Here's a file that exists:
> head = s3.head_object(bucket: bucket, key: path)
=> #<struct Aws::S3::Types::HeadObjectOutput last_modified=2020-06-05 16:18:05 +0000, content_length=553, etc...>
And here's one that does not exist, and the exception it raises:
> path << '/not-really'
=> "example/file/not-really"
> head = s3.head_object(bucket: bucket, key: path)
Aws::S3::Errors::NotFound
Traceback (most recent call last):
1: from (irb):18
Aws::S3::Errors::NotFound ()
And here's how you can roll your own s3_exists? method:
# assumes an Aws::S3::Client is available via a method or variable named `s3`
def s3_exists?(bucket, path)
  s3.head_object(bucket: bucket, key: path)
  true
rescue Aws::S3::Errors::ServiceError
  # NotFound, Forbidden, and other S3 errors all mean "not readable here"
  false
end
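Usage would then look something like this (a sketch; the region and bucket are placeholders, and the client is exposed as a method named s3 so the helper can see it):

def s3
  @s3 ||= Aws::S3::Client.new(region: 'us-east-1')
end

s3_exists?('my-bucket', 'example/file')            # => true
s3_exists?('my-bucket', 'example/file/not-really') # => false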
Is there a way to check the size of the Rails cache?
Something in the vein of: Rails.cache.size => 390 MB
I assume there's some slight variation between data stores, but right now I'm not sure how to even start to check the disk space a cache is taking up.
That depends entirely on your cache store and the backend you use.
This is an example from my Heroku instance running MemCachier:
Rails.cache.stats
# => {"xxx.memcachier.com:11211"=>{"curr_items"=>"278", "bytes"=>"3423104", "evictions"=>"0", "total_items"=>"7373", "curr_connections"=>"7", "total_connections"=>"97", "cmd_get"=>"141674", "cmd_set"=>"7373", "cmd_delete"=>"350", "cmd_flush"=>"6", "get_hits"=>"63716", "get_misses"=>"77958", "delete_hits"=>"162", "delete_misses"=>"188", "incr_hits"=>"0", "incr_misses"=>"0", "decr_hits"=>"0", "decr_misses"=>"0"}}
FileStore does not have such a method:
Rails.cache.stats
# => NoMethodError: undefined method `stats' for #<ActiveSupport::Cache::FileStore:0x007ff1cbe905b0>
And when running memcached locally, I get a different result set:
Rails.cache.stats
# => {"127.0.0.1:11211"=>{"pid"=>"327", "uptime"=>"517931", "time"=>"1392163858", "version"=>"1.4.16", "libevent"=>"2.0.21-stable", "pointer_size"=>"64", "rusage_user"=>"2.257386", "rusage_system"=>"4.345445", "curr_connections"=>"15", "total_connections"=>"16", "connection_structures"=>"16", "reserved_fds"=>"20", "cmd_get"=>"0", "cmd_set"=>"0", "cmd_flush"=>"0", "cmd_touch"=>"0", "get_hits"=>"0", "get_misses"=>"0", "delete_misses"=>"0", "delete_hits"=>"0", "incr_misses"=>"0", "incr_hits"=>"0", "decr_misses"=>"0", "decr_hits"=>"0", "cas_misses"=>"0", "cas_hits"=>"0", "cas_badval"=>"0", "touch_hits"=>"0", "touch_misses"=>"0", "auth_cmds"=>"0", "auth_errors"=>"0", "bytes_read"=>"48", "bytes_written"=>"30", "limit_maxbytes"=>"67108864", "accepting_conns"=>"1", "listen_disabled_num"=>"0", "threads"=>"4", "conn_yields"=>"0", "hash_power_level"=>"16", "hash_bytes"=>"524288", "hash_is_expanding"=>"0", "malloc_fails"=>"0", "bytes"=>"0", "curr_items"=>"0", "total_items"=>"0", "expired_unfetched"=>"0", "evicted_unfetched"=>"0", "evictions"=>"0", "reclaimed"=>"0"}}
In addition to @phoet's answer, for a Redis cache you can use the following to get a human-readable figure:
Rails.cache.stats["used_memory_human"] #=> 178.32M
Here used_memory_human can actually be any key that's returned when running an INFO command on the Redis server.
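If your store doesn't expose stats, with the Rails 5.2+ redis_cache_store you can reach the same data through the underlying client. A sketch, bearing in mind that Rails.cache.redis may hand back a connection pool rather than a bare connection:

redis = Rails.cache.redis
info  = redis.respond_to?(:with) ? redis.with { |c| c.info } : redis.info
info["used_memory_human"] # e.g. "178.32M"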
I am trying to wrap my brain around Rack::Cache, Rails 3.2, Memcache, and Heroku. I think I've got it all working together, as outlined here: http://myownpirateradio.com/2012/01/01/getting-heroku-cedar-and-rails-3-1-asset-pipeline-to-play-nicely-together/
All that said, I am unsure whether Memcached is actually doing what it should. Is there any way to get stats on Memcached, or to see whether a request was cached by it? I put the current time on a page and can see that it is getting cached (the headers look good too), but how do I know it is all working through Memcached, as opposed to the file store?
Thanks.
You can get stats on memcached by doing:
$ heroku run console
Running console attached to terminal... up, run.1
Loading production environment (Rails 3.1.3)
irb(main):001:0> Rails.cache.stats
Dalli/SASL authenticating as app590983%40heroku.com
Dalli/SASL: Authenticated
=> {"mc5.ec2.northscale.net:11211"=>{"evictions"=>"0", "curr_items"=>"627",
"total_items"=>"1257", "bytes"=>"2294318", "reclaimed"=>"0",
"engine_maxbytes"=>"5242880", "bucket_conns"=>"2", "pid"=>"319",
"uptime"=>"6710022", "time"=>"1330731177", "version"=>"1.4.4_207_g19c6b9e",
"libevent"=>"1.4.11-stable", "pointer_size"=>"64",
"rusage_user"=>"34354.590000", "rusage_system"=>"31381.520000",
"daemon_connections"=>"10", "curr_connections"=>"1211",
"total_connections"=>"14127919", "connection_structures"=>"1764",
"cmd_get"=>"9476", "cmd_set"=>"1257", "cmd_flush"=>"0", "auth_cmds"=>"24",
"auth_errors"=>"0", "get_hits"=>"8093", "get_misses"=>"1383",
"delete_misses"=>"0", "delete_hits"=>"0", "incr_misses"=>"0",
"incr_hits"=>"0", "decr_misses"=>"0", "decr_hits"=>"0", "cas_misses"=>"0",
"cas_hits"=>"0", "cas_badval"=>"0", "bytes_read"=>"21983909",
"bytes_written"=>"85267718", "limit_maxbytes"=>"67108864",
"rejected_conns"=>"0", "threads"=>"4", "conn_yields"=>"0"}}
PS: I think you might need to be using the Dalli gem for this to work, but that is the recommended client anyway.
You can also run Rails.cache.class to see which backend Rails is using.
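For example, on a memcached-backed app (the exact class depends on your configuration):

Rails.cache.class
# => ActiveSupport::Cache::MemCacheStore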
I'm developing an app that uses the event_calendar plugin to build a calendar with some events. Everything works fine in the development and test environments, but when I switch to the production environment and try to open the calendar page, the server returns a 500 error page.
What is really weird is that event_calendar worked well until today, and I didn't touch a line of code related to this plugin.
So, this is what I did to debug it, with no success:
Migrated the database and precompiled assets in the production environment.
Opened log/production.log looking for some error, but nothing was there; no error was logged.
Uncommented this line in production.rb: config.log_level = :debug. Still no error logged.
Replaced the index action of the calendar controller with a dummy empty index action, to make sure that was not the problem.
Opened a Rails console in the production environment and tried to make a GET request to /calendar:
1.9.3-p0 :001 > app.get '/calendar'
=> 500
1.9.3-p0 :002 > app.response
=> #<ActionDispatch::TestResponse:0x007ff53607d298 @body=["<!DOCTYPE html>\n<html>\n<head>\n ... HTML CODE OF 500 ERROR PAGE ... </html>\n"]
, @header={"Content-Type"=>"text/html; charset=utf-8", "Content-Length"=>"643", "X-Request-Id"=>"d47e6d869dd8e215a0741430ee2eacae", "X-Runtime"=>"0.036510"}, @status=500, @sending_file=false, @blank=false, @content_type=text/html, @charset="utf-8", @cache_control={}, @etag=nil>
Apart from this, everything works fine, even in the production environment. So I guess the issue comes from something wrong in event_calendar, but I think the more general problem is an error that leaves no trace in the logs.
I really hope that error is recorded somewhere!
Hope someone can help me! Thanks,
Marco