Is there a way to reload a Ruby model at runtime?
For example, I have a model:
class Model
  def self.all_models
    @@all_models ||= Model.all
  end
end
Records in this model change very rarely, but when they do, I don't want to reload the whole application, just this one class.
On a development server, this is not a problem. On a production server, it is a big one.
In reality it's not feasible without restarting the server. The best you could do is add a before filter in ApplicationController that updates the class variables in each worker thread, but it would have to run on every request, and you can't easily turn this behaviour on and off.
If that's a resource-intensive operation, you can settle for a cheaper test, such as comparing a value in the database, or the last-modified time of a file, against a value captured at boot, to determine whether the full reload should occur. But you would still have to do this as part of every request.
However, to the best of my knowledge, modifying routes once the server has loaded is impossible. Modifying other site-wide variables may require a little more effort, such as reading from a file or database and updating in a before filter.
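A minimal sketch of that before-filter approach, assuming a hypothetical Setting table whose newest updated_at acts as the last-modified marker, and a hypothetical Model.reset_all_models that clears the memoized @@all_models:

class ApplicationController < ActionController::Base
  @@models_loaded_at = nil

  before_filter :refresh_cached_models

  private

  # Cheap test on every request: drop the memoized records only
  # when the last-modified marker in the database has changed.
  def refresh_cached_models
    stamp = Setting.maximum(:updated_at)  # hypothetical marker table
    return if stamp.nil? || stamp == @@models_loaded_at
    Model.reset_all_models                # hypothetical: sets @@all_models = nil
    @@models_loaded_at = stamp
  end
end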
There may be another way, but I haven't tried it at all, so there's no guarantee. If you're using a Ruby-based server such as Mongrel, in theory you could use hijack to update the models/routes/variables in the control thread from which the worker threads are spawned.
Related
I've got a Rails app using MRI Ruby and Unicorn as the app server. The app provides a 10-week fitness program, and I've built a feature for people logged in as admins to be able to "time travel" to different weeks in the program. Basically, it just sets a session variable with the date they've "time travelled" to, and, on each request, it time-travels to that point at the beginning and returns at the end. The code is below.
I'd limited the feature to non-production environments, out of fear that one person's time-travelling might affect other users (since Timecop monkey-patches core classes). But, given that MRI isn't really multi-threaded, I'm now thinking that's an irrational fear, and that there should be no danger in using the "time travel" feature in production.
For the duration in which a single request is processed (and therefore the time for which the core classes are monkey-patched by Timecop if the user is using "time travel"), there should be no possibility that any other request runs on the same Ruby process.
Is this correct? Or can other users' requests be affected by Timecop's changes to the core classes in a way I'm not aware of?
The code I'm using is as follows:
module TimeTravelFilters
  extend ActiveSupport::Concern

  included do
    if Rails.env.development? || Rails.env.staging?
      around_action :time_travel_for_request
    end
  end

  def time_travel_for_request
    time_travel
    yield
    time_travel_return
  end

  def time_travel
    if session[:timetravel_to_date]
      Timecop.travel(session[:timetravel_to_date])
    end
  end

  def time_travel_return
    Timecop.return
  end
end
MRI's global interpreter lock does mean that two threads won't execute concurrently, but the granularity of that is much, much smaller than the processing of one request.
As it happens, Unicorn doesn't use threads for concurrency, so you'd be OK, but the day you switched to another server (e.g. Puma) you'd be in for a nasty surprise.
This would also affect things like data in logs, created_at/updated_at timestamps for anything updated, and so on. It might also affect monitoring data gathered by services like New Relic or Airbrake, if you use those. Another example of something that might seem completely unrelated is API requests to AWS: the signature that verifies these requests includes a timestamp, and they will fail if you're out of sync by more than a few minutes. There is just too much code (much of which you don't control) that assumes Time.now is accurate.
You'd be much better off identifying those bits of code that implicitly use the current Time and changing them to allow the desired time to be passed as an argument.
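For instance, a minimal sketch of that refactor; the names (current_week, program_start_date) are made up for illustration:

# Accept the reference time as a parameter, defaulting to the real clock,
# instead of relying on a monkey-patched Time.now.
def current_week(now = Time.current)
  ((now.to_date - program_start_date) / 7).to_i + 1
end

current_week                                        # normal users, real time
current_week(session[:timetravel_to_date].to_time)  # admins "time travelling"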
As an aside, I think your code would leave the altered time in place if the controller raised an exception.
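If so, a minimal fix is to restore the clock in an ensure block, so it runs whether the action returns normally or raises:

def time_travel_for_request
  time_travel
  yield
ensure
  # runs on both normal return and exception, so the real clock always comes back
  time_travel_return
end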
I am trying to spawn a thread in Rails. I am usually not comfortable using threads, as it requires in-depth knowledge of Rails' request/response cycle, yet I cannot avoid using one, as my request times out.
To avoid the timeout, I am using a thread within the request. My question is simple: the thread accesses a params[] variable, and things seem to work OK so far. I want to know whether this is right. I'd be happy if someone could shed some light on using threads in Rails during the request/response cycle.
The short answer is yes, but only to a degree; the binding in which the thread was created will continue to persist. The params will still exist only if no one (including Rails) goes out of their way to modify or delete the params hash; instead, they rely on the garbage collector to clean up such objects. Since the thread has access to the context (called the "binding" in Ruby) in which it was created, all the variables reachable from that scope, effectively the entire state at the moment the thread was created, cannot be deleted by the garbage collector. However, as execution continues in the main thread, the values of the variables in that context can be changed by the main thread, or even by the thread you created, if it can access them. This is the benefit, and the downfall, of threads: they share memory with everything else.
You can emulate an environment very similar to Rails to test your problem, using a function like this: http://gist.github.com/637719. As you can see, the code still prints 5.
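In the same spirit, here is a small self-contained sketch of the effect; the thread closes over the method's binding, so the local survives after the "request" has returned:

def simulate_request
  params = { :foo => 5 }
  Thread.new do
    sleep 1             # the "request" (the method) has long since returned
    puts params[:foo]   # => 5; the captured binding keeps params alive
  end
end

simulate_request.join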
However, this is not the correct way to do this. The better way to pass data to a thread is to pass it to Thread.new, like so:
# always dup objects when passing them into a thread; otherwise you
# haven't done yourself any good, as it would still be the same memory
Thread.new(params.dup) do |params|
  puts params[:foo]
end
This way, you can be sure that any modifications to params will not affect your thread. The best practice is to use only data you passed to your thread in this way, or things the thread itself created. Relying on the state of the program outside the thread is dangerous.
So as you can see, there are good reasons that this is not recommended. Multithreaded programming is hard, even in Ruby, and especially when you're dealing with as many libraries and dependencies as are used in Rails. In fact, Ruby seems to make it deceptively easy, but it's basically a trap. The bugs you will encounter are very subtle; they will happen "randomly" and be very hard to debug. This is why things like Resque, Delayed Job, and other background processing libraries are used so widely and recommended for Rails apps, and I would recommend the same.
The question is more whether Rails keeps the request open while the thread is running than whether it persists the value.
It won't persist the value once the request ends, and I also wouldn't recommend holding the request open unless there is a real need. As other users have said, some work is just better done in a delayed job.
Having said that, we have used threading a couple of times to query multiple sources concurrently and actually reduce the response time of an app (one that was only for admins, so it didn't need fast response times), and if memory serves correctly, the thread can keep the request open if you call join at the end and wait for each thread to finish before continuing.
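A minimal sketch of that pattern; the slow_report class methods are hypothetical:

def dashboard
  results = {}
  threads = [
    Thread.new { results[:orders] = Order.slow_report },
    Thread.new { results[:users]  = User.slow_report }
  ]
  # join holds the request open until every thread has finished
  threads.each(&:join)
  render :json => results
end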
I have always been taught that storing objects in a session is a bad idea; instead, IDs should be stored and used to retrieve the records when needed.
However, I have an application that I wonder is an exception to this rule. I'm building a flashcard application, and the words being quizzed are in a database table whose schema doesn't change. I want to store the words currently being quizzed in the session, so a user can pick up where they left off if they move on to a separate page.
In this case, can I get away with storing these words as objects in the session? If so, why? The reason I ask is that the quiz is designed to move quickly, and I'd hate to waste a database call retrieving a record that never changes in the first place. However, perhaps there are other downsides to a large session that I'm not aware of.
*For the record, I have tried caching with the built-in memcached methods in Rails 2.3, but apparently there is a maximum size of 1MB per item.
The main reason not to store objects in the session is that if the object structure changes, you will get an exception. Consider the following:
class Foo
  attr_accessor :bar
end

class Bar
end

foo = Foo.new
foo.bar = Bar.new
put_in_session(foo)
Then, in a subsequent release of the project, you change Bar's name. You reboot the server, and try to grab foo out of the session. When it tries to deserialize, it fails to find Bar and explodes.
It might seem like it would be easy to avoid this pitfall, but in practice I've seen it bite a number of people, simply because serializing an object can sometimes take more along with it than is immediately apparent (this sort of thing is supposed to be transparent), and unless you have rigorous rules about it, things will tend to get mucked up.
The reason it's normally frowned upon is that it's extremely common for this to bite people in ActiveRecord, since it's quite common for the structure of your app to shift over time, and sessions can be deserialized a week or longer after they were originally created.
If you understand all that and are willing to put in the energy to be sure that your model does not change and is not serializing anything extra, you're probably fine. But be careful :)
Rails tends to encourage RESTful design, and using sessions isn't very RESTful. I'd probably make a Quiz resource that has a bunch of words, as well as a current_word. This way, when they come back, you'll know where they were.
Now, REST isn't everything (depending on who you talk to), but there's a pretty good case against large sessions. Remember that sessions write things to and from disk, and the more data that you're writing, the longer it takes to read back...
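A sketch of that resource idea; the model names are made up:

class Quiz < ActiveRecord::Base
  has_many :words
  belongs_to :current_word, :class_name => 'Word'
end

# Then keep only the id in the session and look the rest up per request:
session[:quiz_id] = quiz.id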
Since your app is a Rails app, I would suggest either:
- using your clients' ability to cache, by caching the cards in JavaScript (you'd need a fairly Ajax-heavy app to do this; see the latest Railscast for some interesting points on JavaScript page caching), or
- using one of the many Rails-supported server-side caching options (e.g. memcached) to cache this data.
A much more insidious issue you'll encounter when storing objects directly in the session comes with CookieStore (the default in Rails 2+, I believe): it's very easy to get CookieOverflow errors, which are very hard to recover from.
I have a fairly vanilla Rails app with low traffic at present, and everything seems to work OK.
However, I don't know much about the Rails internals, and I'm wondering what happens on a busy site if two requests come in at the same time and try to update the same model from (I assume) two separate Mongrel processes. Could this result in a failed-transaction exception or similar, or does Rails do any magic to serialize controller methods?
If an update can fail, what is the best practice for detecting and handling this kind of situation?
For more background, my controller methods often update multiple models. I currently don't do anything special to create transactions and just rely on the default behaviour. Ideally, I'd like the update to be retried rather than return an error (the updates are generally idempotent, i.e. doing them twice would be OK). My database is MySQL.
AFAIK, MySQL will wait until the first transaction is processed and then process the second one. #create, #update and #save each get their work wrapped in an SQL transaction, and I'd guess MySQL can handle those well.
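If you want explicit retries for the multi-model updates, a rough sketch; the method and models are hypothetical, and it leans on the updates being idempotent as described in the question:

# Wrap the related updates in one transaction and retry on deadlock.
def update_both(order, invoice, order_attrs, invoice_attrs)
  attempts = 0
  begin
    ActiveRecord::Base.transaction do
      order.update_attributes!(order_attrs)
      invoice.update_attributes!(invoice_attrs)
    end
  rescue ActiveRecord::StatementInvalid => e
    attempts += 1
    retry if attempts < 3 && e.message =~ /deadlock/i
    raise
  end
end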
Rails comes with a handy session hash into which we can cram stuff to our heart's content. I would, however, like something like ASP's application context, which, instead of sharing data only within a single session, shares it with all sessions in the same application. I'm writing a simple dashboard app, and would like to pull data every 5 minutes, rather than every 5 minutes for each session.
I could, of course, store the cache update times in a database, but so far haven't needed to set up a database for this app, and would love to avoid that dependency if possible.
So, is there any way to get (or simulate) this sort of thing? If there's no way to do it without a database, is there any kind of "fake" database engine that comes with Rails, runs in memory, but doesn't bother persisting data between restarts?
Right answer: memcached. Fast, clean, supports multiple processes, and integrates very cleanly with Rails these days. It's not even that bad to set up, but it is one more thing to keep running.
90% answer: there are probably multiple Rails processes running around, one for each Mongrel you have, for example. Depending on the specifics of your caching needs, it's quite possible that having one cache per Mongrel isn't the worst thing in the world. For example, suppose you were caching the results of a long-running query which:
- gets fresh data every 8 hours
- is used on every page load, 20,000 times a day
- needs to be accessed in 4 processes (Mongrels)
Then you can drop those 20,000 queries a day down to 12 with about a single line of code:
@@arbitrary_name ||= Model.find_by_stupidly_long_query(param)
The double at-sign, Ruby syntax you might not be familiar with, marks a class variable. ||= is the commonly used Ruby idiom to execute the assignment if and only if the variable is currently nil or otherwise evaluates to false. The value will stay good until you explicitly empty it or until the process stops for any reason: server restart, explicit kill, what have you.
And after you go from 20,000 calculations a day down to 12 in about 15 seconds (OK, two minutes: you need to wrap it in a trivial if block that stores the cache update time in another class variable), you might find there is no need to spend additional engineering effort on getting it down to 4 a day.
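That trivial if block might look something like the following sketch (the names are made up; param is the same placeholder as in the snippet above):

def self.cached_query_results
  # re-run the long query only when the cache is empty or older than 8 hours
  unless defined?(@@results) && @@fetched_at > 8.hours.ago
    @@results    = Model.find_by_stupidly_long_query(param)
    @@fetched_at = Time.now
  end
  @@results
end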
I actually use this on one of my production sites, for caching a few expensive queries that literally only need to be evaluated once in the life of the process (i.e. they change only at deployment time; I suppose I could precalculate the results and write them to disk or the DB, but why do that when SQL can do the work for me?).
You don't get any magic expiry syntax, reliability is pretty slim, and it can't be shared across processes, but it's 90% of what you need in a line of code.
You should have a look at memcached: http://wiki.rubyonrails.org/rails/pages/MemCached
There is a helpful Railscast on Rails 2.1 caching. It is very useful if you plan on using memcached with Rails.
Using the stock Rails cache is roughly equivalent to this.
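For example, with the stock cache the memoization above becomes something like this (the key and expiry are illustrative):

Rails.cache.fetch('stupidly_long_query', :expires_in => 8.hours) do
  Model.find_by_stupidly_long_query(param)
end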
@p3t0r is right, memcached is probably the best option, but you could also use the SQLite database that comes with Rails. That won't work across multiple machines, though, where memcached will. Also, SQLite will persist to disk, though I think you can set it up not to if you want. Rails itself has no application-scoped storage, since it runs as one process per request handler, so it has no shared memory space the way ASP.NET or a Java server would.
So what you are asking for is pretty much impossible in Rails because of the way it is designed. What you ask for is a shared object, and Rails processes are single-threaded and share no memory. Memcached or a similar tool for sharing data between distributed processes is the only way to go.
Rails.cache freezes the objects it stores. That kind of makes sense for a cache, but not for an application context. Instead of doing a round trip to the moon to accomplish this simple task, all you have to do is create a constant in config/environment.rb:
APP_CONTEXT = Hash.new
Pretty simple, eh?
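And a sketch of using it for the 5-minute refresh; fetch_dashboard_data is hypothetical, and note the hash is per process and not thread-safe without a mutex:

if APP_CONTEXT[:fetched_at].nil? || APP_CONTEXT[:fetched_at] < 5.minutes.ago
  APP_CONTEXT[:data]       = fetch_dashboard_data  # hypothetical expensive pull
  APP_CONTEXT[:fetched_at] = Time.now
end
APP_CONTEXT[:data]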