Rails how to check size of current session (cookies) - ruby-on-rails

I was happily throwing things into the session instead of saving them directly into the database (as in multistep forms) and thought those 4 KB would be more than enough. Since I use Devise I assumed it used some session storage, but I felt safe until I tried to p session and OH!!! My terminal couldn't even print all the data in it. The data is hard to understand - there are some items I put there myself, but also some local routes and other weird things.
So now I wonder: how can I check its size at various stages? I found a similar question, but following it I get @encryptor as undefined / nil.
Also tried:
@encryptor = ActiveSupport::MessageEncryptor.new(secret, cipher: encrypted_cookie_cipher, serializer: SERIALIZER)
data = session.to_hash.delete_if { |k, v| v.nil? }
data = @encryptor.encrypt_and_sign(serialize(name, data))
p data.bytesize
But then secret is undefined:
undefined local variable or method `secret'

I also tried to find a way to display the cookie size from within Rails, without success.
Another way to check the size of your current session cookie is the developer tools in the browser.
There you can see all cookies along with some information, including their sizes.
It's maybe not the best way, but it's better than nothing.
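As a rough server-side alternative, here is a minimal sketch (the session_options lookup and the logging call are assumptions, not code from the question; you can also hard-code your app's session cookie name) that measures the raw session cookie the browser sent, which is the value that counts against the roughly 4 KB limit:
# Somewhere inside a controller action (sketch only):
cookie_name = Rails.application.config.session_options[:key] # e.g. "_myapp_session"
raw_cookie = request.cookies[cookie_name].to_s
Rails.logger.info "Session cookie is #{raw_cookie.bytesize} bytes (cookies max out around 4096)"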

Related

Can I use Rails.cache to store short-lived session data?

We are already using cookie-based sessions, and switching off of them to file-store sessions is not an option. However, I need a way to store larger amounts of session data (up to 10 MB or so) -- beyond the limit of a cookie session, and even if it weren't, round-tripping that much data on multiple requests would be slow.
I am currently attempting to solve this by using (abusing?) Rails.cache. The basic setup is like this:
I post to a route, which has the following code:
# calculate some results...
Rails.cache.write('search_results' + session.id, search_results)
redirect_to '/results'
Inside GET /results, I read the cached data and send it to the client:
@results = Rails.cache.read('search_results' + session.id)
This works fine. However, if I subsequently make a request to another route like GET /results2 that also calls Rails.cache.read('search_results' + session.id), it returns nil, even if all the calls happen within a 5-10 second span.
So my questions are:
Why does this happen? What determines when Rails.cache cleans itself up?
Is there a way to make this work?
Is there a better approach altogether that doesn't involve using a DB or redis?
Answer to your questions:
The problem with the file cache store is that it stores files locally. Thus, if you have multiple servers, the cache can be written on one server while it is read on another, which will return nil. The solution is to use a cache store that can be shared among multiple servers.
Using redis-store may be a solution: https://github.com/redis-store/redis-rails
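For illustration, a minimal configuration sketch (it assumes the redis-rails gem from the link above is in the Gemfile and that a Redis server is reachable at the given URL; the URL and expiry are placeholders):
# config/environments/production.rb
config.cache_store = :redis_store, ENV.fetch("REDIS_URL", "redis://localhost:6379/0/cache"), { expires_in: 30.minutes }
With a shared store like this, the value written during the POST is visible to whichever app server handles the following GET requests.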

Getting data out of devise

I'm migrating away from Rails. I will be using the same domain, so I'll get the _session_id cookie that Rails uses, and I can bring over the old sessions table. I would like to use this to extract data (the user_id) from the old session. I cannot tell how to do this outside of Rails.
Within a controller there's current_user of course, or session["warden.user.user.key"], but how can I take the id, decrypt the data in the table, and pull things out on my own (besides running the old Rails application, creating a route on it that returns the info I need, and hitting it from my new application)?
I'm not entirely sure this is the best way, but I was intrigued so I went down the rabbit hole. This works for my Rails 4.1.10 app, where sessions are stored in the cookie. You'll want to look at Action Pack's EncryptedCookieJar class and Active Support's CachingKeyGenerator and MessageEncryptor classes for details.
Obviously you'll need to replace the two strings that start "THE VALUE…".
# Derive the same keys that Rails derives from secret_key_base
key_generator = ActiveSupport::KeyGenerator.new('THE VALUE OF SECRET_KEY_BASE FROM config/secrets.yml', iterations: 1000)
caching_key_generator = ActiveSupport::CachingKeyGenerator.new(key_generator)
secret = caching_key_generator.generate_key('encrypted cookie')
sign_secret = caching_key_generator.generate_key('signed encrypted cookie')
# Build the same encryptor the cookie jar uses, then decrypt the raw cookie value
encryptor = ActiveSupport::MessageEncryptor.new(secret, sign_secret, serializer: ActionDispatch::Cookies::NullSerializer)
session_value = CGI::unescape('THE VALUE OF THE SESSION COOKIE')
serialized_result = encryptor.decrypt_and_verify(session_value)
result = Marshal.load(serialized_result)
The result, for me, is a hash that looks exactly like the session hash in Rails.
If it doesn't work for you, you may be using a different serializer, so you'll need to replace Marshal.load with whatever matches. Just take a look at serialized_result and see.
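As a follow-up sketch, once result is the plain session hash, the Devise user id usually sits under the Warden key mentioned in the question. The "user" scope name below is an assumption; adjust it if your Devise model uses a different scope:
# session["warden.user.user.key"] is typically [[user_id], "partial password salt"]
warden_entry = result["warden.user.user.key"]
user_id = warden_entry && warden_entry.first.first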

Mongoid identity_map and memory usage, memory leaks

When I execute the query
Mymodel.all.each do |model|
  # ..do something
end
it uses a lot of memory, the amount of memory used keeps increasing the whole time, and in the end it crashes. I found out that to fix it I need to disable the identity map, but when I add identity_map_enabled: false to my mongoid.yml file I get the error
Invalid configuration option: identity_map_enabled.
Summary:
A invalid configuration option was provided in your mongoid.yml, or a typo is potentially present. The valid configuration options are: :include_root_in_json, :include_type_for_serialization, :preload_models, :raise_not_found_error, :scope_overwrite_exception, :duplicate_fields_exception, :use_activesupport_time_zone, :use_utc.
Resolution:
Remove the invalid option or fix the typo. If you were expecting the option to be there, please consult the following page with respect to Mongoid's configuration:
I am using Rails 4 and Mongoid 4; Mymodel.all.count => 3202400.
How can I fix it, or does someone know another way to reduce the amount of memory used while executing an .all.each query?
Thank you very much for the help!!!!
I started with something just like yours, looping through millions of records, and the memory just kept increasing.
Original code:
@portal.listings.each do |listing|
  listing.do_something
end
I went through many forum answers and tried them out.
1st attempt: I tried the combination of WeakRef and GC.start, but no luck; it failed.
2nd attempt: I added listing = nil to the first attempt, and it still failed.
Success Attempt:
@start_date = 10.years.ago
@end_date = 1.day.ago
while @start_date < @end_date
  @portal.listings.where(created_at: @start_date..@start_date.next_month).each do |listing|
    listing.do_something
  end
  @start_date = @start_date.next_month
end
Conclusion
None of the memory allocated for the records is released while a single query is being iterated. Therefore, fetching a small number of records per query does the job, and memory stays in good shape because it can be released after each query.
Your problem isn't the identity map; I don't think Mongoid 4 even has an identity map built in, hence the configuration error when you try to turn it off. Your problem is that you're using all. When you do this:
Mymodel.all.each
Mongoid will attempt to instantiate every single document in the db.mymodels collection as a Mymodel instance before it starts iterating. You say that you have about 3.2 million documents in the collection, which means Mongoid will try to create 3.2 million model instances before it even starts iterating. Presumably you don't have enough memory to handle that many objects.
Your Mymodel.all.count works fine because that just sends a simple count call to the database and returns a number; it won't instantiate any models at all.
The solution is to not use all (and preferably forget that it exists). Depending on what "do something" does, you could:
Page through all the models so that you're only working with a reasonable number of them at a time.
Push the logic into the database using mapReduce or the aggregation framework.
Whenever you're working with real data (i.e. something other than a trivially small database), you should push as much work as possible into the database because databases are built to manage and manipulate big piles of data.
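For the paging option, here is a minimal sketch (the batch size is arbitrary and do_something stands in for whatever "do something" is; it walks the collection in _id order so that only one batch of model instances is alive at a time):
batch_size = 1_000
last_id = nil
loop do
  scope = Mymodel.asc(:_id).limit(batch_size)
  scope = scope.gt(_id: last_id) if last_id
  batch = scope.to_a
  break if batch.empty?
  batch.each { |model| model.do_something }
  last_id = batch.last.id
end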

Rails - how to cache data for server use, serving multiple users

I have a class method (placed in /app/lib/) which performs some heavy calculations and sub-HTTP requests until a result is received.
The result isn't too dynamic, and it is requested by multiple users accessing a specific view in the app.
So, I want to schedule a periodic run of the method (using cron and the Whenever gem), store the results somewhere on the server in JSON format and, on demand, read the results into the view.
How can this be achieved? What would be the correct way of doing that?
What I currently have:
def heavyMethod
  response = {}
  # some calculations, eventually building the response
  File.open(File.expand_path('../../../tmp/cache/tests_queue.json', __FILE__), "w") do |f|
    f.write(response.to_json)
  end
end
and also a corresponding method to read this file.
I searched but couldn't find an example of achieving this using the Rails cache conventions (rather than private code that I wrote) for data which isn't related to ActiveRecord.
Thanks!
Your solution should work fine, but using Rails.cache should be cleaner and a bit faster. The Rails guides provide enough information about Rails.cache and how to get it to work with memcached; let me summarize how I would use it in your case.
Heavy method
def heavyMethod
  response = {}
  # some calculations, eventually building the response
  Rails.cache.write("heavy_method_response", response)
end
Request
response = Rails.cache.fetch("heavy_method_response")
The only problem here is that when your server starts for the first time, the cache will be empty. The same applies if/when memcached restarts.
One advantage is that somewhere along the flow, the data you pass in is marshalled into storage and then unmarshalled on the way out, meaning you can pass in complex data structures and don't need to serialize to JSON manually.
Edit: memcached will clear your item if it runs out of memory. That will be very rare, since it uses an LRU (I think) algorithm to expire things, and I presume you will read this key often.
To prevent this, you can:
Set expires_in to something larger than your cron period.
Change your fetch code to call the heavy method when the fetch fails (like Rails.cache.fetch("heavy_method_response") { heavy_method }), and change the heavy method to simply return the object.
Use something like Redis, which will not delete items.
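Combining the first two suggestions, a minimal sketch (the 25.hours value assumes a daily cron run and is only illustrative; heavy_method here is the question's heavyMethod, assumed to return the response instead of writing it):
def cached_heavy_response
  Rails.cache.fetch("heavy_method_response", expires_in: 25.hours) do
    heavy_method
  end
end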

Rails 2 session clobbered some time AFTER variable is stored

I am working on an existing Rails 2 app. I have converted a few of the hash data structures into objects, and if I put one in the session store, it seems to clobber the session, clearing out the user_id, among other things, and forcing another login. I'm using dalli_store for the session.
The following code causes the session to be wiped out:
bug = MyClass.get_an_object()
session[:debug] = bug
It's not clear WHERE it is getting wiped out. I can step through in the debugger to the end of rendering the view and the session is fine, but when I hit another link in the UI, the session is empty (Hash[0]) and I am redirected to the login page.
If I change the code slightly, to this, the session is OK:
bug = MyClass.get_an_object()
session[:debug] = Marshal::dump(bug)
However, if I store the actual object (even a deep copy), the session is lost. I.e. even this does not work:
session[:debug] = Marshal::load( Marshal::dump(bug) )
bug.size is about 140K when marshalled, so it should not be overrunning memcached. At any rate, I would assume that the session is serialized by Marshal::dump(), so the size should be identical. It doesn't matter whether I access the object in the session after storing it; just putting it into the session is enough to cause the problem. But as I said, the session is fine after storing the object and all the way through the view rendering. It isn't until the start of the next request that I find out that it has been clobbered.
I'm stumped.
Do you have any recommendations on how to debug this? For the moment, I guess I can explicitly call Marshal to save the object, but I would really like to understand why the session is getting clobbered.
I know it's a Bad Thing to put big objects into the session, but fixing that part of the problem is out of scope for the current project ... maybe later. Plus, I am very curious about what is going on here.
