I have a rails console session in a tmux session, and it's taking up a lot of memory. It has a lot of data, pretty deeply nested, in a few variables, and it took a long time to query for that data, so right now my plan is to serialize the data and save it to a file. That way, I can reload it later and not take up too much memory on the machine while I'm not using it. I'm wondering, though, if there's a better way. Is it possible for me to save my whole rails console session and load it up again later?
No, you can't save a whole Rails console session (the same is true for a simple irb session) for later use.
Create or edit your ~/.irbrc file to include:
require 'irb/ext/save-history'
IRB.conf[:SAVE_HISTORY] = 200
IRB.conf[:HISTORY_FILE] = "#{ENV['HOME']}/.irb-history"
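As for the heavy data itself, your plan of serializing it to a file is sound. A minimal sketch, assuming the variables hold objects that Marshal can serialize (the file path and variable names here are hypothetical):

# In the current console: dump the expensive variables to a file.
File.open('/tmp/console_data.dump', 'wb') do |f|
  Marshal.dump({ results: results, stats: stats }, f)
end

# Later, in a fresh console: reload them without re-running the queries.
data    = File.open('/tmp/console_data.dump', 'rb') { |f| Marshal.load(f) }
results = data[:results]
stats   = data[:stats]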
We are already using cookie-based sessions, and switching from them to file store sessions is not an option. However, I need a way to store larger amounts of session data (up to 10MB or so) -- beyond the limit of the cookie session and, even if it weren't, round-tripping that much data on multiple requests would be slow.
I am currently attempting to solve this by using (abusing?) Rails.cache. The basic setup is like this:
I post to a route, which has the following code:
# calculate some results...
Rails.cache.write('search_results' + session.id, search_results)
redirect_to '/results'
Inside GET /results, I read the cached data and send it to the client:
@results = Rails.cache.read('search_results' + session.id)
This works fine. However, if I subsequently make a request to another route like GET /results2 that also calls Rails.cache.read('search_results' + session.id), it returns nil, even if all the calls happen within a 5-10 second span.
So my questions are:
Why does this happen? What determines when Rails.cache clears itself?
Is there a way to make this work?
Is there a better approach altogether that doesn't involve using a DB or redis?
Answer to your questions:
The problem with the file cache store is that it stores files locally. Thus, if you have multiple servers, the cache can be written on one server while it is read on another, which will return nil. The solution is to use a cache store that can be shared among multiple servers.
Using redis-store may be a solution: https://github.com/redis-store/redis-rails
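For example, with the redis-rails gem the cache store can point at a Redis instance shared by all the servers. A sketch only; the URL and expiry below are assumptions, not your configuration:

# config/environments/production.rb
config.cache_store = :redis_store, 'redis://localhost:6379/0/cache', { expires_in: 30.minutes }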
I have a class method (placed in /app/lib/) which performs some heavy calculations and sub-http requests until a result is received.
The result isn't too dynamic, and is requested by multiple users accessing a specific view in the app.
So, I want to schedule a periodic run of the method (using cron and the Whenever gem), store the results somewhere on the server in JSON format and, on demand, read only the stored results into the view.
How can this be achieved? What would be the correct way of doing that?
What I currently have:
def heavyMethod
  response = {}
  # some calculations, eventually building the response
  File.open(File.expand_path('../../../tmp/cache/tests_queue.json', __FILE__), "w") do |f|
    f.write(response.to_json)
  end
end
and also a corresponding method to read this file.
I searched but couldn't find an example of achieving this using the Rails cache convention (rather than private code that I wrote), for data which isn't related to ActiveRecord.
Thanks!
Your solution should work fine, but using Rails.cache would be cleaner and a bit faster. The Rails guides provide enough information about Rails.cache and how to get it to work with memcached; let me summarize how I would use it in your case.
Heavy method
def heavyMethod
  response = {}
  # some calculations, eventually building the response
  Rails.cache.write("heavy_method_response", response)
end
Request
response = Rails.cache.fetch("heavy_method_response")
The only problem here is that when your server starts for the first time, the cache will be empty. The same goes for if/when memcached restarts.
One advantage is that somewhere along the way, the data you pass in is marshalled into storage and then unmarshalled on the way out. That means you can pass in complex data structures and don't need to serialize to JSON manually.
Edit: memcached will clear your item if it runs out of memory. This will be very rare since it uses an LRU (I think) algorithm to expire things, and I presume you will use this often.
To prevent this:
set expires_in to something larger than your cron period,
change your fetch code to call heavy_method if the fetch fails (like Rails.cache.fetch("heavy_method_response") { heavy_method }), and change heavy_method to just return the object (see the sketch below), or
use something like Redis, which will not delete items.
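Putting those suggestions together, a minimal sketch (the key name and expiry are just examples, assuming the cron job runs hourly):

# In the cron/Whenever task: refresh the cache, expiring later than the cron period.
Rails.cache.write("heavy_method_response", heavy_method, expires_in: 2.hours)

# In the controller: fall back to computing the value if the cache entry is missing.
response = Rails.cache.fetch("heavy_method_response", expires_in: 2.hours) { heavy_method }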
I am working on an existing Rails 2 app. I have converted a few of the hash data structures into objects, and if I put one in the session store, it seems to clobber the session, clearing out the user_id, among other things, and forcing another login. I'm using dalli_store for the session.
The following code causes the session to be wiped out:
bug = MyClass.get_an_object()
session[:debug] = bug
It's not clear WHERE it is getting wiped out. I can step through in the debugger to the end of rendering the view, and the session is fine, but when I hit another link in the UI, session is empty (Hash[0]), and I am redirected to the login page.
If I change the code slightly, to this, the session is OK:
bug = MyClass.get_an_object()
session[:debug] = Marshal::dump(bug)
However, if I store the actual object (even a deep copy), the session is lost. I.e. even this does not work:
session[:debug] = Marshal::load( Marshal::dump(bug) )
bug.size is about 140K when marshalled, so it should not be overrunning memcached. At any rate, I would assume that session is serialized by Marshal::dump(), so the size should be identical. It doesn't matter if I access the object in the session after storing it. Just putting it into the session is enough to cause the problem, but as I said, the session is fine after storing the object, and all the way through the view rendering. It isn't until the start of the next request that I find out that it has been clobbered.
I'm stumped.
Do you have any recommendations on how to debug this? For the moment, I guess I can explicitly call Marshal to save the object, but I would really like to understand why the session is getting clobbered.
I know it's a Bad Thing to put big objects into the session, but fixing that part of the problem is out of scope for the current project ... maybe later. Plus, I am very curious about what is going on here.
Observers and Sweepers are removed from Rails 4. Cool.
But then what is the way to cache, and to clear the cache?
I read about Russian doll caching. It's nice and all, but it only concerns the view rendering cache. It doesn't prevent the database from being hit.
For instance:
<% cache @product do %>
Some HTML code here
<% end %>
You still need to get @product from the db to get its cache_key. So page or action caching can still be useful to prevent unnecessary load.
I could use a timeout to clear the cache periodically, but what for if the records didn't change?
At least with sweepers you have control over that aspect. What is/will be the right way to do caching and to clear it?
Thanks! :)
Welcome to one of the two hard problems in computer science, cache invalidation :)
You have to handle that manually: unlike a cached view, which can simply be derived from the objects it displays, the logic for when a cached object should be invalidated is application- and situation-dependent.
Your go-to method for this is Rails.cache.fetch. Rails.cache.fetch takes 3 arguments: the cache key, an options hash, and a block. It first tries to read a valid cache record based on the key; if that key exists and hasn't expired, it returns the value from the cache. If it can't find a valid record, it instead stores the return value of the block in the cache under your specified key.
For example:
@models = Rails.cache.fetch my_cache_key do
  Model.where(condition: true).all
end
This will cache the block and reuse the result until something (tm) invalidates the key, forcing the block to be reevaluated. Also note the .all at the end of the method chain. Normally Rails would return an ActiveRecord relation object, which would be cached and then evaluated when you tried to use @models for the first time, neatly sidestepping the cache. The .all call forces Rails to eager load the records and ensures that it's the result that we cache, not the query.
So now that you get all your cache on and never talk to the database again, we have to make sure we cover the other end: invalidating the cache. This is done with the Rails.cache.delete method, which simply takes a cache key and removes it, causing a miss the next time you try to fetch it. You can also use the force: true option with fetch to force a re-evaluation of the block. Whichever suits you.
The science of it all is where to call Rails.cache.delete; in the naïve case this would be on update and delete for a single instance, and on update, delete, and create of any member for a collection. There will always be corner cases, and they are always application specific, so I can't help you much there.
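As an illustration only (the model and key names are assumptions, not your code), the single-instance case could hang off a model callback:

# app/models/product.rb -- drop the cached entry whenever the record changes.
class Product < ActiveRecord::Base
  after_commit :flush_cache

  private

  def flush_cache
    Rails.cache.delete("product/#{id}")
  end
end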
I assume in this answer that you will set up some sane cache store, like memcached or Redis.
Also remember to add this to config/environments/development.rb:
config.cache_store = :null_store
or your development environment will cache and you will end up hairless from frustration.
For further reference read: Everyone should be using low level caching in Rails and the Rails API docs.
It is also worth noting that the functionality is not removed from Rails 4, merely extracted into a gem. If you need or would like the full features of the sweepers, simply add them back to your app with a gem 'rails-observers' line in your Gemfile. That gem contains both the sweepers and observers that were removed from Rails 4 core.
I hope that helps you get started.
We recently lost a database and I want to recover the data from the production.log.
Every request is logged like this:
Processing ChamadosController#create (for XXX.XXX.XXX.40 at 2008-07-30 11:07:30) [POST]
Session ID: 74c865cefa0fdd96b4e4422497b828f9
Parameters: {"commit"=>"Gravar", "action"=>"create", "funcionario"=>"6" ... (all other parameters go here).
But some of the data to post to the database was in the session. In the request I have the Session ID, and I also have all the session files from the server.
Is there any way I can, from this Session ID, open the session file and get its contents?
It's probably best to load the session files into a hash -- using the session id as the key -- and then go through all the log files in chronological order, parse out the relevant info for each session, and modify your database with it.
I guess you're starting out with an old database backup? Make sure to do this in a separate Rails environment -- e.g. don't do this in production; create and use a separate "recovery" environment / DB.
think about some sanity checks you can run on the database afterwards, to make sure that the state of the records makes sense
Going forward:
make sure that you do regular backups going forward (e.g. with mysqldump if you use MySQL).
make sure to set up your database for master/slave replication
hope this helps -- good luck!
Have you tried using Marshal.load? I'm not sure how you're generating those session files, but it's quite possible Rails just uses Marshal.
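If the files do turn out to be raw Marshal dumps, a sketch along these lines might recover the contents (the directory and file naming below are assumptions about your setup):

# Try loading one session file; if the format matches, this returns the session hash.
session_id   = "74c865cefa0fdd96b4e4422497b828f9"
path         = "/path/to/your/session/files/ruby_sess.#{session_id}"
session_data = File.open(path, "rb") { |f| Marshal.load(f) }
puts session_data.inspect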
A client had exactly the same problem a few weeks ago. I came up with the following solution:
play back the latest backup you have (in our case it was one year old)
write a small parser that moves all the requests from production.log into a temporary database (I chose MongoDB for that): I used a rake task and eval to create the hash.
play back the data in the following order:
play back the first create of an object, if it does not already exist.
find the last update (by date) and play it back.
Here is the regex for scanning the production.log:
file = File.open("location_of_your_production.log", "rb")
contents = file.read
contents.scan(/(Started POST \"(.*?)\" for (.*?) at (.*?)\n.*?Parameters: \{(.*?)\}\n.*?Completed (.*?) in (.*?)ms)/m).each do |x|
  # now you can collect all the important data.
  # do the same for GET requests as well, if you need it.
end
In my case, the temporary database sped up the parsing of the logfile, so the steps noted above could be taken. Of course, everything that was not sent over production.log will be lost. Also, updates of the objects sent the whole set of attributes in our app; it might be different in your case. I could also recreate the image uploads, since the images were sent base64-encoded in the production.log.
good luck!