Good idea to access session in observer or not? - ruby-on-rails

I want to log user's actions in my Ruby on Rails application.
So far, I have a model observer that inserts logs to the database after updates and creates. In order to store which user performed the action that was logged, I require access to the session but that is problematic.
Firstly, it breaks the MVC model. Secondly, the available techniques range from the hackish to the outlandish, perhaps even tying the implementation to the Mongrel server.
What is the right approach to take?

Hrm, this is a sticky situation. You pretty much HAVE to violate MVC to get it working nicely.
I'd do something like this:
class MyObserverClass < ActiveRecord::Observer
  cattr_accessor :current_user # GLOBAL VARIABLE. Relies on Rails being single-threaded.

  # other logging code goes here
end

class ApplicationController < ActionController::Base
  before_filter :set_current_user_for_observer

  def set_current_user_for_observer
    MyObserverClass.current_user = session[:user]
  end
end
It is a bit hacky, but it's no more hacky than many other core rails things I've seen.
All you'd need to do to make it thread-safe (this only matters if you run on JRuby anyway) is to change the cattr_accessor to a proper pair of class methods that store their data in thread-local storage.
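A minimal sketch of that thread-safe variant might look like this (the thread-local key name is arbitrary):

class MyObserverClass < ActiveRecord::Observer
  # Store the user per thread instead of in a class-level variable
  def self.current_user
    Thread.current[:my_observer_current_user]
  end

  def self.current_user=(user)
    Thread.current[:my_observer_current_user] = user
  end
end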

I find this to be a very interesting question. I'm going to think out loud here a moment...
Ultimately, what we are faced with is a decision to violate a design-pattern acceptable practice in order to achieve a specific set of functionality. So, we must ask ourselves
1) What are the possible solutions that would not violate MVC pattern
2) What are the possible solutions that would violate the MVC pattern
3) Which option is best? I consider design patterns and standard practices very important, but at the same time if holding to them makes your code more complex, then the right solution may very well be to violate the practice. Some people might disagree with me on that.
Let's consider #1 first.
Off the top of my head, I would think of the following possible solutions
A) If you are really interested in who is performing these actions, should this data be stored in the model anyway (see the sketch after this list)? It would make this information available to your Observer, and it also means that any other front-end caller of your ActiveRecord class gets the same functionality.
B) If you are not really interested in understanding who created an entry, but more interested in logging the web actions themselves, then you might consider "observing" the controller actions. It's been some time since I've poked around the Rails source, so I'm not sure how their ActiveRecord::Observer "observes" the model, but you might be able to adapt it into a controller observer. In this sense, you aren't observing the model anymore, and it makes sense to make session and other controller-level data available to that observer.
C) The simplest solution, with the least "structure", is to simply drop your logging code at the end of your action methods that you're watching.
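As a rough sketch of option 1A (Article, AuditLog and the created_by association are invented names; the controller still sets the attribute, but the observer only ever touches the model):

class Article < ActiveRecord::Base
  belongs_to :created_by, :class_name => 'User'
end

class ArticlesController < ApplicationController
  def create
    @article = Article.new(params[:article])
    @article.created_by = current_user   # the acting user lives on the record itself
    @article.save
    # ...
  end
end

class ArticleObserver < ActiveRecord::Observer
  def after_create(article)
    AuditLog.create(:action => 'create', :user => article.created_by)
  end
end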
Consider option #2 now, breaking MVC practices.
A) As you propose, you could find a means of getting your model Observer access to the session data. You've coupled your model to your business logic.
B) Can't think of any others here :)
My personal inclination, without knowing any more details about your project, is either 1A, if I want to attach people to records, or 1C, if there are only a few places where I'm interested in doing this. If you really want a robust logging solution for all your controllers and actions, you might consider 1B.
Having your model observer find session data is a bit "stinky", and would likely break if you tried to use your model in any other project/situation/context.

You're right about it breaking MVC. I would suggest using callbacks in your controllers, mostly because there are situations (like a model whose save is called but fails validation) where you wouldn't want an observer logging anything.

I found a clean way to do what is suggested by the answer I picked.
http://pjkh.com/articles/2009/02/02/creating-an-audit-log-in-rails
This solution uses an AuditLog model as well as a TrackChanges module to add tracking functionality to any model. It still requires you to add a line to the controller when you update or create though.
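The linked article has the details; a rough sketch of the idea described above, with invented names, might look like this:

class AuditLog < ActiveRecord::Base
  belongs_to :user
  belongs_to :auditable, :polymorphic => true
end

module TrackChanges
  # Call from the controller after a successful create or update
  def track_change(user, action)
    AuditLog.create(:user => user, :auditable => self, :action => action.to_s)
  end
end

class Order < ActiveRecord::Base
  include TrackChanges
end

# In the controller: @order.track_change(current_user, :update)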

In the past, when doing something like this, I have tended towards extending the User model class to include the idea of the 'current user'.
Looking at the previous answers, I see suggestions to store the actual active record user in the session. This has several disadvantages.
It stores a possibly large object in the session database
It means that the copy of the user is 'cached' for all time (or until logout is forced). This means that any change in this user's status will not be recognised until the user logs out and back in. Attempting to disable the user, for instance, will not take effect until he logs off and back on. This is probably not the behaviour you want.
It is better to store only the user's id in the session, so that at the beginning of a request (in a filter) you take the user_id from the session, read the user, and set User.current_user.
Something like this...
class User < ActiveRecord::Base
  cattr_accessor :current_user
end

class ApplicationController < ActionController::Base
  before_filter :retrieve_user

  def retrieve_user
    if session[:user_id].nil?
      User.current_user = nil
    else
      User.current_user = User.find(session[:user_id])
    end
  end
end
From then on it should be trivial.
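For example, a model callback (or an observer) can then stamp records without ever touching the session; Widget and its updated_by attribute are just placeholders:

class Widget < ActiveRecord::Base
  before_save :stamp_user

  def stamp_user
    self.updated_by = User.current_user if User.current_user
  end
end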

http://www.zorched.net/2007/05/29/making-session-data-available-to-models-in-ruby-on-rails

Related

current_user available in an observer

OK, so I know this has been brought up before, and I realise that this breaks the MVC model in a purists view. However, I really think that things such as current_user or current_tenant should be available in an observer.
My specific case is that after actions have been done on a subset of my models (about half a dozen) I want something to be written to an audit log that includes the user that made the change and the tenant that made that change as well.
The first way to do this is to add a line to each controller method that carries out such an action. To make this DRYer, the actual work is carried out in the application controller or the AuditLog model, and a simple one-line statement is called from the controller. However, this still means adding that line everywhere, which isn't great and would be a whole lot more elegant if it were done in an observer.
However, since an observer has no way of knowing what current_user is, this isn't possible. I've seen some workarounds using Thread, but these do not look that safe to me.
Now if anyone does have a more elegant solution, I'd love to hear it. Otherwise this is my case that we should have access to some controller methods in an observer. I'd like to get a sense of the feeling out there before putting this to the rails core dev team.

What is the most elegant way to access current_user from the models? Or why is it a bad idea?

So, I've implemented some permissions between my users and the objects the users modify, and I would like to lessen the coupling between the views/controllers (which call said permissions) and the models. To do that, I had an idea: implement some of the permission functionality in the before_save / before_create / before_destroy callbacks. But since the permissions are tied to users (current_user.can_do_whatever?), I didn't know what to do.
This idea may even increase coupling, as current_user is specifically controller-level.
The reason why I initially wanted to do this is:
All over my controllers, I'm having to check whether a user has the ability to save / create / destroy. So, why not just have save / create / destroy return false, adding an error to the model object, just like Rails validations do?
I don't know: is this good or bad? Is there a better way to do this?
Let the controller check the user's privileges. Having the model handle authorization logic would lead to more coupling (just in different places, and there'd still be coupling to the controller to get the current user). Checking privileges isn't really internal logic of a model.
Simile: Imagine if it was a file's responsibility to check whether you can read/write to it, instead of having the OS (which is the mother of all controllers, really) handle the access.
If you want cleaner controllers you can (for instance) make some generalized before_filters that restrict access to CRUD actions based on the current user.
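A sketch of what such filters could look like; can_create? and can_destroy? stand in for whatever permission methods your User model actually exposes:

class ProjectsController < ApplicationController
  before_filter :authorize_create!,  :only => [:new, :create]
  before_filter :authorize_destroy!, :only => [:destroy]

  private

  def authorize_create!
    head :forbidden unless current_user && current_user.can_create?(Project)
  end

  def authorize_destroy!
    head :forbidden unless current_user && current_user.can_destroy?(Project)
  end
end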
Flambino gave you the party line :) but let me add my 2 cents.
It is certainly possible to consider access control logic as part of models.
Model validations are one way of checking the manipulating user's rights on CRUD actions. A downside of this is exactly the fact that access logic usually generalizes across models, so we like to abstract it out, as in a neat CanCan ability file.
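For example, a typical CanCan ability file keeps all of that logic in one place (Article is just an example model):

class Ability
  include CanCan::Ability

  def initialize(user)
    user ||= User.new                        # guest user with no id
    can :read, :all
    can :manage, Article, :user_id => user.id
  end
end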
You can actually have your cake and eat it too by making the manipulation itself a polymorphic association on your models. This is how most audit/version-control systems are implemented.
https://github.com/collectiveidea/audited
You could extend it and put your access control logic in audit class validations, so it is in one place.
The audited gem also offers a cunning way to pass the current user down to the model level, by using an observer as an around filter on the controller.
https://github.com/collectiveidea/audited/blob/master/lib/audited/sweeper.rb
but there are other methods too, e.g. https://github.com/bokmann/sentient_user, which are often considered hacky.
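The linked sweeper is the real implementation; a simplified sketch of the general idea, using an invented thread-local holder rather than the gem's code, looks roughly like this:

module CurrentAuditUser
  def self.user=(user)
    Thread.current[:audit_user] = user
  end

  def self.user
    Thread.current[:audit_user]
  end
end

class ApplicationController < ActionController::Base
  around_filter :set_audit_user

  private

  def set_audit_user
    CurrentAuditUser.user = current_user
    yield
  ensure
    CurrentAuditUser.user = nil   # don't leak the user into the next request handled by this thread
  end
end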
Caveats
1) In either case you will have to guard against situations where models are manipulated outside the request cycle (in a background routine, cron job, or the console), when current_user is ill-defined.
2) You usually do not want to report access violations with validation errors, but with standard HTTP status code responses.
3) In a web app setting, you usually manipulate objects via forms, and typically you can already control access to, say, an update form; in fact it is usually coupled with the same access rights as the update itself (CanCan even has a general alias for new/create and edit/update). So if the latter is handled with validations, this generalized access logic is lost. Not to mention that read-access logic is impossible to handle with model validations.
hope this helps

Advice on Purchase model design (with Gateway interaction)

I'm making a Purchase model that handles site purchases, which will interact with the Payment Gateway. My question is about how to design the interface, whether I should use separate class methods to do the work, or patch into the AR lifecycle with callbacks.
At first I was doing something like Purchase.make_purchase(product, ...), as a class method. But this didn't seem great.
What I'm about to implement is a solution that uses the model lifecycle and callbacks to make the purchase and gateway transaction. Something like this:
@purchase = Purchase.new
@purchase.product = product
@purchase.user = current_user
if @purchase.save
  # handle success
else
  # handle failure
end
I would then have a before_save callback that talks to the gateway:
before_save :transfer_funds
that can halt the save if unsuccessful, setting @purchase.errors[:gateway_error].
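A minimal sketch of that callback, where Gateway.charge stands in for whatever your payment gateway's API actually looks like:

class Purchase < ActiveRecord::Base
  belongs_to :product
  belongs_to :user

  before_save :transfer_funds

  private

  def transfer_funds
    response = Gateway.charge(user, product.price)   # hypothetical gateway call
    return true if response.success?
    errors.add(:gateway_error, response.message)
    false                                            # returning false halts the save
  end
end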
I'm not sure this is the best way to go about this. Any advice?
I haven't used it yet so I can't give much insight but I'd take a look at ActiveMerchant if you haven't already. I'm not sure what your payment gateway is right now, but if you're not using it; you may get a few ideas.
EDIT: I realize this didn't answer the question; I was just thinking it might give you some ideas if you weren't already using it.
I don't have direct experience with payment processing so you can take my opinions with a big grain of salt.
My big consideration with using the lifecycle methods is that you may end up having to put extra logic in your transfer_funds method that you may not otherwise need. For example, if a Purchase can be updated at a later time, then you're going to be calling your transfer_funds method every time it's updated.
I'm not sure if a Purchase has the concept of a pre-authorization followed by an actual charge, but I'd imagine the transfer_funds should only be called once? You may be able to move this to a before_create instead but that may only fix this one scenario.
I've also found that moving these into the lifecycle methods can often add a lot more logic to your model than may be desirable. In the past, I've found that being more explicit in a controller action can sometimes save me headaches down the road, even if it adds a step for me to do every place I need to transfer_funds, for example.
I now try to keep the lifecycle methods in my model classes related only to updating the ActiveRecord model itself, not doing much extra work. If keeping it in your controller is not viable, I'd consider using an ActiveRecord::Observer to abstract out the logic associated with transfer_funds.
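Roughly, such an observer might look like this (again, the Gateway call is a stand-in, and whether returning false halts the save depends on your Rails version):

class PurchaseObserver < ActiveRecord::Observer
  def before_save(purchase)
    response = Gateway.charge(purchase.user, purchase.product.price)
    unless response.success?
      purchase.errors.add(:gateway_error, response.message)
      return false
    end
  end
end

# config/application.rb:
#   config.active_record.observers = :purchase_observer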
Hope this gives you some ideas.

Pitfalls of Thread.local[:current_user]

This question is related to: Access current_user in model.
Specifically, I want to enable access to current_user in one Model.rb. #moif left a comment stating that the solution is not thread-safe, and I have read that there are additional caveats to using this process.
My question is this - if I were to add:
def self.current_user
  Thread.current[:current_user]   # Thread.current is Ruby's thread-local storage
end

def self.current_user=(usr)
  Thread.current[:current_user] = usr
end
to one Model.rb (used only lightly and infrequently), what are the real-world implications for my app and is there anything additional I have to do to ensure its health?
Set up: Ruby 1.9, Rails 3.0, Heroku, Authlogic.
I'm not sure I agree with the path you are taking. I agree with the other post that passing the current_user to the model is not appropriate, but I wouldn't be using thread-locals for that. Here's why:
Developers love to get technical with solutions, and there's not much "closer to the system" you can get than a thread-local. Having used thread-locals before, I can say they are very tricky; if you don't get them right, you spend countless hours trying to figure out the problem, let alone the solution. It is also difficult to find testers who understand the complexities of thread-locals well enough to test the code thoroughly. In fact, I wonder how many developers put together solid RSpec tests (or equivalent) for something like this. The "cost" of this solution may not be worth it.
I think I would take a look at what you are trying to do and see if there is an easier solution. For example, I can think of two off-hand (which may or may not work in your case):
a) connect your history table to your user table with a foreign key ("belongs_to / has_many", sketched below); or
b) pass the username with an attr_accessor on History and set it when creating the object.
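Option a) sketched out; the History model and its columns are assumptions:

class User < ActiveRecord::Base
  has_many :histories
end

class History < ActiveRecord::Base
  belongs_to :user   # the histories table carries a user_id foreign key
end

# In the controller, where current_user is in scope:
# current_user.histories.create(:action => 'updated profile')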

ActionMailer best practices: Call method in the model or the controller?

Sending an email is usually called after an action on a model, but the email itself is a view operation. I'm looking for how you think about what question(s) to ask yourself to determine where to put the action mailer method call.
I've seen/used them:
In a model method - bad coupling of related but separate concerns?
In a callback in the model (such as after_save) - best separation as far as I can tell with my current level of knowledge.
In the controller action - just feels wrong, but are there situations where this would be the smartest way to structure the code?
If I want to know how to program I need to think like a programmer, so learning how you go about thinking through particular programming solutions is worth months of coding on my own in isolation. Thank you!
Late answer, but I want to rationalize on the subject:
Usually, in a web app, you want to send emails either as a direct reaction to a client action, or as a background task in the case of newsletter/notification-type mail.
The model is basically a data-storage mapper. Its logic should encapsulate data handling and communication with the data store. Therefore, inserting logic which does not relate to that is a bit tricky, and in most cases wrong.
Let us take an example: a user registers an account and should receive a confirmation email. In this case one could say the confirmation email is a direct effect of the creation of a new account. Now, instead of doing it in the web app, try to create a user in the console. It sounds wrong to trigger a callback in that case, right? So, the callback option is scratched.
Should we still write the method in the model? Well, if it's a direct effect of a user action/input, then it should stay in that workflow. I would write it in the controller, directly after the user was successfully created. Replicating this logic in the model just to call it from the controller anyway adds unnecessary modularity, and makes an ActiveRecord model depend on Action Mailer. Consider sharing the model across many apps, some of which don't want Action Mailer at all.
For the stated reasons, I'm of the opinion that mailer calls should go where they make sense, and usually the model is not that place. Try to give me examples where it does make sense.
Well, depends.
I've used all of those options and your point about 'why should I put this where?' is good.
If it's something I want to happen every time a model is updated in a certain way, then I put it in the model. Maybe even in a callback in the model.
Sometimes you're just firing off a report; there's no updating of anything. In that case, I've normally got a resource with an index action that sends the report.
If the mailer isn't really related to the model that's being changed, I could see putting it in a callback. I don't do that very often. I'd be more likely to still encapsulate it in the model. I've done it, just not very often.
I'm aware it's been a while but best practices never die, right? :)
Email is by definition asynchronous communication (except for confirmation emails, but even there it is best practice to allow a delay before confirmation is required).
Hence, in my opinion, the most logical ways to send it are:
in a background job (using Sidekiq or delayed_job; see the sketch after this list)
in a callback method: "hey, this action completed successfully, maybe we can tell the world now?"
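For instance, with delayed_job's mailer integration the controller only enqueues the delivery; UserMailer and welcome_email are placeholder names:

class UserMailer < ActionMailer::Base
  def welcome_email(user)
    mail(:to => user.email, :subject => "Welcome!")
  end
end

# In the controller, after the action has succeeded:
UserMailer.delay.welcome_email(user)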
The problem in Rails is that there are not that many callbacks (as in JS, for instance); I personally find it dirty to have code like:
after_save :callback

def callback
  if test_that_is_true_once_in_the_objects_life
    Mailer.send_email
  end
end
So, if you really want to think like a programmer, the idea would be to set up some custom callback system in your app.
Eg.
def run_with_callback(action, callback_name)
  if send(action)
    delay.send(callback_name)
  end
end
Or even creating an event system in your app would be a decent solution.
But in the end those solutions are pretty expensive in time, so people end up writing it inline after the action:
def activate
  [...]
  user.save
  Mailer.send_mail
  respond_to
  [...]
end
which is the closest thing to a callback in synchronous programming, and results in Mailer calls everywhere (in models and in controllers).
There are several reasons why controllers are a good place for the mailers:
Emails that have nothing to do with a model.
If your emails depend on several models that don't know about each other.
Extracting models to an API should not mean reimplementing mailers.
Mailer content determined by request variables that you don't want to pass to the model.
If your business model requires a lot of different emails, model callbacks can stack up.
If the email does not depend on the result of model computations.
