Advice on Purchase model design (with Gateway interaction) - ruby-on-rails

I'm making a Purchase model that handles site purchases, which will interact with the Payment Gateway. My question is about how to design the interface, whether I should use separate class methods to do the work, or patch into the AR lifecycle with callbacks.
At first I was doing something like Purchase.make_purchase(product, ...), as a class method. But this didn't seem great.
What I'm about to implement is a solution that uses the model lifecycle and callbacks to make the purchase and gateway transaction. Something like this:
@purchase = Purchase.new
@purchase.product = product
@purchase.user = current_user
if @purchase.save
  # success: show receipt, redirect, etc.
else
  # failure: the gateway error is available in @purchase.errors
end
I would then have a before_save callback that talks to the gateway:
before_save :transfer_funds
that can halt the save if the transaction is unsuccessful, setting @purchase.errors[:gateway_error]
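As a rough sketch of that callback (GATEWAY, price_in_cents, and credit_card are placeholders for whatever your gateway integration actually looks like):

class Purchase < ActiveRecord::Base
  belongs_to :product
  belongs_to :user

  before_save :transfer_funds

  private

  def transfer_funds
    # GATEWAY is a placeholder for your configured gateway client;
    # ActiveMerchant gateways respond to #purchase and return a response object
    response = GATEWAY.purchase(product.price_in_cents, user.credit_card)
    unless response.success?
      errors.add(:gateway_error, response.message)
      false # returning false halts the save in Rails < 5; Rails 5+ expects `throw :abort`
    end
  end
end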
I'm not sure this is the best way to go about this. Any advice?

I haven't used it yet so I can't give much insight, but I'd take a look at ActiveMerchant if you haven't already. I'm not sure what your payment gateway is right now, but even if you don't end up using it, you may get a few ideas.
EDIT: I realize this doesn't answer the question directly; I was just thinking it might give you some ideas if you weren't already using it.
I don't have direct experience with payment processing so you can take my opinions with a big grain of salt.
My big concern with using the lifecycle methods is that you may end up having to put extra logic into your transfer_funds method that you wouldn't otherwise need. For example, if a Purchase can be updated at a later time, then you're going to be calling transfer_funds every time it's updated.
I'm not sure if a Purchase has the concept of a pre-authorization followed by an actual charge, but I'd imagine transfer_funds should only be called once? You may be able to move this to a before_create instead, but that may only fix this one scenario.
I've also found that moving things into the lifecycle methods can often add more logic to your model than may be desirable. In the past, I've found that being more explicit in a controller action can sometimes save me headaches down the road, even if it adds a step every place I need to transfer_funds, for example.
I now try to keep the lifecycle methods in my model classes related only to updating the ActiveRecord model itself, not doing much extra work. If keeping it in your controller is not viable, I'd consider using an ActiveRecord::Observer to abstract out the logic associated with transfer_funds.
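For example, a sketch of what that observer might look like (PurchaseObserver and GatewayService are illustrative names only; in Rails 3 the observer also has to be registered via config.active_record.observers):

class PurchaseObserver < ActiveRecord::Observer
  # Gateway interaction stays outside the Purchase model itself;
  # GatewayService here is an illustrative name, not a real library
  def after_create(purchase)
    GatewayService.transfer_funds(purchase)
  end
end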
Hope this gives you some ideas.

Related

current_user available in an observer

OK, so I know this has been brought up before, and I realise that this breaks the MVC model in a purist's view. However, I really think that things such as current_user or current_tenant should be available in an observer.
My specific case is that after actions have been done on a subset of my models (about half a dozen) I want something to be written to an audit log that includes the user that made the change and the tenant that made that change as well.
The first way to do this is to add a line to each controller method that carries out that function. To make this DRYer, the actual activity is carried out in the application controller or the audit log model, and a simple one-line statement is called from the controller. However, this still means adding in the line, which isn't great and would be a whole lot more elegant if it were done in an observer.
However, since an observer has no way of knowing what current_user is, this isn't possible. I've seen some workarounds using Thread, but these don't look that safe to me.
Now, if anyone does have a more elegant solution, I'd love to hear it. Otherwise, this is my case for having access to some controller state in an observer. I'd like to get a sense of the feeling out there before putting this to the Rails core dev team.

Best way to send an email upon creation of a new model instance in Rails?

I have an app with the following models: User, Task, and Assignment. Each Assignment belongs_to a User and a Task (or in other words, a Task is assigned to a User via an Assignment).
Once a User completes a Task, the Assignment is marked as complete, and the app immediately creates a new Assignment (or in other words, assigns the task to someone else).
Immediately after creating this new Assignment, I want to send an email to the new assignee. I know I can do this one of three ways:
Explicitly send the email in my controller.
Send the email in a callback on the Assignment model.
Create an observer on the Assignment model and send the email in after_create.
Which of these options do people think is best, and why? #1 seems bad to me, because I don't want to have to remember to send it in every action that might complete an Assignment. I've heard a couple people say that Rails observers are bad and should be avoided, but I'm not sure if they're people I should trust or not. Any other opinions?
You're right, the first way isn't a good approach. Observers are my preferred way to go, for a couple of reasons.
First, if you use TDD (test-driven development), you can shut observers off to test the model more purely, without every creation firing off a mailer. Then you can unit-test the mailer and observer separately.
Second, separating out the callbacks makes for cleaner code. Callbacks aren't really part of your model; they are events. Your model contains the functions and attributes necessary to run itself, and the callbacks (implemented with observers) are separate event handlers.
That said, I don't think your second option is "bad" or less professional. Either way works, as long as it's at the model level instead of in controllers or (even worse) views.
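A sketch of the observer approach (the observer and mailer names are assumptions; this uses the Rails 3 mailer API):

class AssignmentObserver < ActiveRecord::Observer
  def after_create(assignment)
    # In Rails 2 this would be AssignmentMailer.deliver_new_assignment(assignment)
    AssignmentMailer.new_assignment(assignment).deliver
  end
end

# In tests (Rails 3.1+), observers can be switched off so model specs
# don't send mail on every create:
#   ActiveRecord::Base.observers.disable :all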
I would go for observers, as they reduce clutter in your model/controller code, and I can think of no downside to using them.
IIRC, sending an email after save is even one of the examples in the ActiveRecord observer documentation.
You can also do a combination of things. You could use observers for one action, and if there is just a single email for one other action you could use option #1 for it.
Have you heard of acts_as_state_machine, or any other similar solutions?
http://github.com/rubyist/aasm
They allow you to define a state for each object and the different things that can happen on state changes.
This lets you have as much logic as you need about when things are sent. It can be overkill, but it can also be really handy. I suggest it because you want an email sent when a task is 'completed', which sounds like it may correspond to a state or status column in your Task model.
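A small sketch using the newer aasm DSL (the linked acts_as_state_machine plugin had an older syntax; the column, state, and callback names here are assumptions):

class Task < ActiveRecord::Base
  include AASM

  aasm :column => :status do
    state :pending, :initial => true
    state :completed

    event :complete, :after => :notify_next_assignee do
      transitions :from => :pending, :to => :completed
    end
  end

  private

  def notify_next_assignee
    # create the next Assignment and email its user here
  end
end

# task.complete! transitions the state, persists it, and runs the callback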
In the end, I like this implementation http://www.scottw.com/resque-mail-queue-gem

Is there a good cached memoization plugin for rails?

I have a model along the lines of:
class Account < ActiveRecord::Base
  has_many :payments
  has_many :purchases

  def balance
    payments.sum(:dollar_amount) - purchases.map { |p| p.dollar_amount }.sum
  end
end
I want to memoize the balance method and store it in memcached. The problem, of course, is that the cached value needs to get expired any time a payment or purchase is created. I could insert code in the after_save callbacks of payments and purchases to expire the cached balances of their accounts but it seems to me that it would be easier to understand/maintain if I could say something like:
cached_memoize :balance, :depends_on => [:payments, :purchases]
Is there an existing gem/plugin that does this? And before I go off and write my own, is it a good idea? The downside that I see is that it might make it less obvious to somebody who is modifying the dollar_amount method of Purchase that they need to take into account a caching issue (if they unwittingly introduced a dependency on another model, like SubPurchase or something, it would screw things up.) But since this isn't super obvious anyway, I think that having a neat declarative syntax is worth it - at least that way when it breaks, it's clear how to fix it.
Thoughts?
Edit: in response to semanticart's answer, I'll be more explicit about my problem with the "just put the expires in the relevant callback" approach: you end up with expires all over the codebase. It starts with the after_save callback on Payment, but maybe it's in a separate observer for purchases, and then you have polymorphic associations, inheritance trees, etc. The syntax I'm proposing forces developers to keep all these cases in a neat list in one place. That way, when you get a bug report like "users' balances sometimes get out of sync and they aren't quite sure how to replicate the issue", it's a lot easier to figure out what is going on.
I'd consider another approach: have a balance field on the account. Use callbacks (after_save, etc.) on the Purchase and SubPurchase models to update the balance field on the parent Account. Your balance only changes when the other models are modified and you never have to worry about it being stale.
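That could look roughly like this (assuming a balance column on accounts; the recalculation is deliberately naive and the method name is illustrative):

class Payment < ActiveRecord::Base
  belongs_to :account

  after_save    :refresh_account_balance
  after_destroy :refresh_account_balance

  private

  def refresh_account_balance
    account.update_attribute(:balance,
      account.payments.sum(:dollar_amount) - account.purchases.sum(:dollar_amount))
  end
end

# Purchase would get the same pair of callbacks.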
Sounds to me like you want to fork cache_fu and add an option that magically sprinkles the after_saves across the dependent records. I dig having your dependencies in one place.
Not sure if this is what you are looking for, but it may help:
http://railscasts.com/episodes/137-memoization

ActionMailer best practices: Call method in the model or the controller?

Sending an email is usually triggered by an action on a model, but the email itself is a view operation. I'm looking for how you think about what question(s) to ask yourself to determine where to put the ActionMailer method call.
I've seen/used them:
In a model method - bad coupling of related but separate concerns?
In a callback in the model (such as after_save) - best separation as far as I can tell with my current level of knowledge.
In the controller action - just feels wrong, but are there situations where this would be the smartest way to structure the code?
If I want to know how to program I need to think like a programmer, so learning how you go about thinking through particular programming solutions is worth months of coding on my own in isolation. Thank you!
Late answer, but I want to reason through the subject:
Usually, in a web app, you want to send emails either as a direct reaction to a client action, or as a background task in the case of newsletter/notification-style mail.
The model is basically a data-storage mapper. Its logic should encapsulate data handling and communication with the data store. Inserting logic which does not relate to that is therefore a bit tricky, and in most cases wrong.
Let's take an example: a user registers an account and should receive a confirmation email. One could say the confirmation email is a direct effect of the creation of a new account. Now, instead of doing it in the web app, try to create a user in the console. Sounds wrong to trigger a callback in that case, right? So the callback option is scratched.
Should we still write the method in the model? Well, if it's a direct effect of a user action/input, then it should stay in that workflow. I would write it in the controller, directly after the user was successfully created. Replicating this logic in the model just to call it from the controller anyway adds unnecessary modularity and makes an ActiveRecord model depend on ActionMailer. Try to imagine sharing the model across many apps, some of which don't want ActionMailer at all.
For the stated reasons, I'm of the opinion that mailer calls should go where they make sense, and usually the model is not that place. Try to give me examples where it does make sense.
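In the registration example, that means something like this in the controller (UserMailer and its confirmation action are assumptions, using the Rails 3 mailer API):

class UsersController < ApplicationController
  def create
    @user = User.new(params[:user])
    if @user.save
      # The confirmation mail is a direct effect of this request,
      # so it lives here rather than in a model callback
      UserMailer.confirmation(@user).deliver
      redirect_to root_path, :notice => "Please check your email to confirm your account."
    else
      render :new
    end
  end
end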
Well, depends.
I've used all of those options and your point about 'why should I put this where?' is good.
If it's something I want to happen every time a model is updated in a certain way, then I put it in the model. Maybe even in a callback in the model.
Sometimes you're just firing off a report; there's no updating of anything. In that case, I've normally got a resource with an index action that sends the report.
If the mailer isn't really related to the model that's being changed, I could see putting it in a callback. I don't do that very often. I'd be more likely to still encapsulate it in the model. I've done it, just not very often.
I'm aware it's been a while but best practices never die, right? :)
Email is by definition asynchronous communication (except perhaps for confirmation emails, but even there it's good practice to allow a delay before confirmation is required).
Hence, in my opinion, the most logical way to send it is:
in a background job (using Sidekiq or delayed_job) - see the sketch just after this list
in a callback method: "hey, this action completed successfully, maybe we can tell the world now?"
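For the background option, delayed_job's delay extension lets the mail be rendered and delivered from a worker instead of inside the request (Notifier and new_signup are placeholder names; Sidekiq's delayed extensions have the same shape):

# Instead of delivering inline in the request:
#   Notifier.new_signup(user).deliver
# queue it and let a worker render and send it:
Notifier.delay.new_signup(user)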
The problem in Rails is that it doesn't have that many callbacks (unlike JS, for instance); I personally find it dirty to have code like:
after_save :callback

def callback
  if test_that_is_true_once_in_the_objects_life
    Mailer.send_email
  end
end
So, if you really want to think like a programmer, the idea would be to set up some custom callback system in your app.
Eg.
def run_with_callback(action, callback_name)
  if send(action)
    delay.send(callback_name)
  end
end
Or even creating an event system in your app would be a decent solution.
But in the end those solutions are pretty expensive in time, so people end up writing it inline after the action:
def activate
  [...]
  user.save
  Mailer.send_mail
  respond_to
  [...]
end
which is the closest thing to a callback in synchronous programming, and results in Mailer calls everywhere (in models and in controllers).
There are several reasons why controllers are a good place for the mailers:
Emails that have nothing to do with a model.
If your emails depend on several models that don't know about each other.
Extracting models to an API should not mean reimplementing mailers.
Mailer content determined by request variables that you don't want to pass to the model.
If your business model requires a lot of different emails, model callbacks can stack up.
If the email does not depend on the result of model computations.

Good idea to access session in observer or not?

I want to log user's actions in my Ruby on Rails application.
So far, I have a model observer that inserts logs to the database after updates and creates. In order to store which user performed the action that was logged, I require access to the session but that is problematic.
Firstly, it breaks the MVC model. Secondly, the techniques for doing it range from the hackish to the outlandish, perhaps even tying the implementation to the Mongrel server.
What is the right approach to take?
Hrm, this is a sticky situation. You pretty much HAVE to violate MVC to get it working nicely.
I'd do something like this:
class MyObserverClass < ActiveRecord::Observer
  cattr_accessor :current_user # GLOBAL VARIABLE. RELIES ON RAILS BEING SINGLE THREADED
  # other logging code goes here
end

class ApplicationController
  before_filter :set_current_user_for_observer

  def set_current_user_for_observer
    MyObserverClass.current_user = session[:user]
  end
end
It is a bit hacky, but it's no more hacky than many other core rails things I've seen.
All you'd need to do to make it threadsafe (this only matters if you run on JRuby anyway) is change the cattr_accessor to a proper pair of class methods and have them store the data in thread-local storage.
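Something like this (a sketch; Thread.current keeps each request's user separate when the server is multi-threaded, and the key name is arbitrary):

class MyObserverClass < ActiveRecord::Observer
  def self.current_user
    Thread.current[:my_observer_current_user]
  end

  def self.current_user=(user)
    Thread.current[:my_observer_current_user] = user
  end
end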
I find this to be a very interesting question. I'm going to think out loud here a moment...
Ultimately, what we are faced with is a decision to violate a design-pattern acceptable practice in order to achieve a specific set of functionality. So, we must ask ourselves
1) What are the possible solutions that would not violate MVC pattern
2) What are the possible solutions that would violate the MVC pattern
3) Which option is best? I consider design patterns and standard practices very important, but at the same time if holding to them makes your code more complex, then the right solution may very well be to violate the practice. Some people might disagree with me on that.
Let's consider #1 first.
Off the top of my head, I would think of the following possible solutions
A) If you are really interested in who is performing these actions, should this data be stored in the model anyway? That would make the information available to your observer, and it also means that any other front-end caller of your ActiveRecord class gets the same functionality.
B) If you are not really interested in knowing who created an entry, but more interested in logging the web actions themselves, then you might consider "observing" the controller actions. It's been some time since I've poked around the Rails source, so I'm not sure how ActiveRecord::Observer "observes" the model, but you might be able to adapt it into a controller observer. In that sense you aren't observing the model anymore, and it makes sense to make the session and other controller-type data available to that observer.
C) The simplest solution, with the least "structure", is to simply drop your logging code at the end of the action methods you're watching (a rough sketch follows below).
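Option C might look roughly like this (AuditLog and its attributes are placeholders):

def update
  @account = Account.find(params[:id])
  if @account.update_attributes(params[:account])
    AuditLog.create(:user => current_user, :action => "update", :auditable => @account)
    redirect_to @account
  else
    render :edit
  end
end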
Consider option #2 now, breaking MVC practices.
A) As you propose, you could find a means of giving your model observer access to the session data, coupling your model layer to your controller logic.
B) Can't think of any others here :)
My personal inclination, without knowing any more details about your project, is either 1A, if I want to attach people to records, or 1C if there are only a few places where I'm interested in doing this. If you really want a robust logging solution for all your controllers and actions, you might consider 1B.
Having your model observer find session data is a bit "stinky", and would likely break if you tried to use your model in any other project/situation/context.
You're right about it breaking MVC. I would suggest using callbacks in your controllers, mostly because there are situations (for instance, when a model's save is called but fails validation) where you wouldn't want an observer logging anything.
I found a clean way to do what is suggested by the answer I picked.
http://pjkh.com/articles/2009/02/02/creating-an-audit-log-in-rails
This solution uses an AuditLog model as well as a TrackChanges module to add tracking functionality to any model. It still requires you to add a line to the controller when you update or create though.
In the past, when doing something like this, I have tended towards extending the User model class to include the idea of the 'current user'
Looking at the previous answers, I see suggestions to store the actual ActiveRecord user object in the session. This has several disadvantages.
It stores a possibly large object in the session database
It means that the copy of the user is 'cached' for all time (or until logout is forced), so any change in that user's status will not be recognised until the user logs out and back in. Attempting to disable the user, for instance, will not take effect until they log off and back on. This is probably not the behaviour you want.
Instead, at the beginning of a request (in a filter), take the user_id from the session, read the user, and set User.current_user.
Something like this...
class User
  cattr_accessor :current_user
end

class ApplicationController < ActionController::Base
  before_filter :retrieve_user

  def retrieve_user
    if session[:user_id].nil?
      User.current_user = nil
    else
      User.current_user = User.find(session[:user_id])
    end
  end
end
From then on it should be trivial.
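For example, an observer can then read it directly (AuditLog and the observed model names are placeholders):

class AuditObserver < ActiveRecord::Observer
  observe :account, :purchase  # whichever models you want audited

  def after_create(record)
    AuditLog.create(:user => User.current_user, :action => "create", :target => record)
  end

  def after_update(record)
    AuditLog.create(:user => User.current_user, :action => "update", :target => record)
  end
end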
http://www.zorched.net/2007/05/29/making-session-data-available-to-models-in-ruby-on-rails
