Rails: rollback from after_create without throwing Exception

I'm developing a plugin for Redmine (RoR 4.2) that should send data to a different system once an Issue of a certain type is created/updated.
I've created a patch for Issue containing two callbacks, before_update and after_create, both of which call the same method. The reason I use after_create is that I need to send the ID of the newly created Issue to the second system.
My problem is that while returning false from before_update cancels the transaction, doing so from after_create has no effect. To handle this I need to throw an Exception, which in turn breaks the Issue controller, making it return an Error 500 page instead of a nice error popup.
So what is the best way to handle this situation taking into account that I'm not willing to override the controller (if possible)?

This sounds like a fool's errand, since exceptions are generally handled at the controller layer. Of course you can rescue the exception in your callback method and, for example, log a message.
But you can't really affect the controller's outcome from a model callback without resorting to some really nasty hacks. The model should only be concerned with its own state - not the application flow.
And ActiveRecord does not really care about the return value from after_* callbacks.
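That said, if all you need is the rollback itself without a 500, note that ActiveRecord::Rollback is special-cased: raised inside the transaction that wraps save, it rolls everything back and is not re-raised, so save simply returns a falsy value. A minimal sketch, where RemoteSystem is a hypothetical API client and not part of Redmine:

class Issue < ActiveRecord::Base
  after_create :push_to_other_system

  private

  def push_to_other_system
    return if RemoteSystem.push(id) # hypothetical call; truthy on success
    errors.add(:base, "could not sync with the remote system")
    # swallowed by the save transaction: the INSERT is undone and
    # save returns a falsy value instead of raising in the controller
    raise ActiveRecord::Rollback
  end
end

Even so, that only handles the rollback mechanics; the design concern remains.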
Fat models and skinny controllers are good. But letting your models do stuff like talk across the wire or send emails is usually a horrible idea. ActiveRecord models are already doing a million things too many just maintaining your data and business logic. They should not be concerned with stuff like what happens when your request to API x fails.
You might want to consider using a Service Object to wrap creating and updating the model instead.
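For example, a minimal sketch of such a service object; RemoteSystem is again a stand-in for the second system's client:

class IssueCreation
  attr_reader :issue

  def initialize(params)
    @params = params
  end

  # Returns true on success. On a failed remote push the INSERT is
  # rolled back, and Rails restores the record's new_record state,
  # so persisted? comes back false.
  def call
    @issue = Issue.new(@params)
    return false unless @issue.valid?
    Issue.transaction do
      @issue.save!
      raise ActiveRecord::Rollback unless RemoteSystem.push(@issue.id)
    end
    @issue.persisted?
  end
end

The controller then calls the service and renders the usual error path when call returns false; no exception handling is needed there.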

Related

Rails 4 - Creating associated model during update of another model

I am using the Wicked gem to build an instance of a model in steps (step 1, step 2, etc). On the third step, however, I need to make an API call to collect some data and store it in another model instance (it would have a :belongs_to relationship with the first model). What I am wondering is: how do I interact with this API and store the information, all while I am still in the creation process of the first model? Is this a good design pattern? Or should I be dealing with API information in a different way?
My thoughts are that I could redirect to the form for making the API call and redirect back to the fourth step after dealing with the API.
Does Rails have a specific design it uses for dealing with 3rd party APIs?
No, this is not a good design pattern, but sometimes there is no way around it. Two things are important:
First, everything must be covered by a single database transaction, and as I understand from your question, that is the case. Your objects are connected by a "belongs_to" relationship, so they can be saved in one go (when the "parent" object is saved, the "children" will be saved at once). There is also no second, unconnected object involved, so there is no need to create a separate transaction just for this action.
Second, you must cover everything with enough error handling. This is your own responsibility: make sure that when the 3rd-party call goes bananas, you're ready to catch the error and, worst case, roll back the entire transaction yourself, as in the sketch below.
So, to summarize: no, it's not good practice, but Rails gives you the tools to "keep it clean".
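Here is a minimal sketch of that "catch the error, roll back yourself" pattern. ThirdPartyClient and the model/attribute names are hypothetical, not from the question:

def create_with_api_data(parent_params)
  Parent.transaction do
    parent = Parent.create!(parent_params)
    data = ThirdPartyClient.fetch(parent.external_key) # step-3 API call; may raise
    parent.create_child!(payload: data)
    parent
  end
rescue ThirdPartyClient::Error, ActiveRecord::RecordInvalid => e
  Rails.logger.error("third-party step failed: #{e.message}")
  nil # the transaction block has already rolled back both records
end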
Although your question was rather verbose, I would recommend looking at the before_create ActiveRecord callback in your model:
# app/models/parent.rb
class Parent < ActiveRecord::Base
  before_create :build_child
end
This builds the child object before you create the parent, meaning that when you save the parent, the child is created at the same time. This will allow you to create the child object while interacting with the parent. To ensure the child's data is populated correctly, you'll need to use an instance method with the callback, as in the sketch below.
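A sketch of that instance-method variant; ApiClient, external_id and the :payload attribute are hypothetical names:

# app/models/parent.rb
class Parent < ActiveRecord::Base
  has_one :child
  before_create :build_child_from_api

  private

  def build_child_from_api
    data = ApiClient.fetch(external_id) # third-party call
    build_child(payload: data)          # built now, saved along with the parent
  end
end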

Should i send a mail from a model callback or from a controller?

I have a model that, when it is in a particular state, needs some additional data and then sends a mail.
To do so I have a before-validation callback that says:
"I'm in this state where I need data; do I need more data? No? Then I'll change my state and send a mail."
This state is triggered every night by a cron rake task, and sometimes no data is needed, so the mail should be sent ASAP. Later, when data is collected, the callback will be triggered and the mail will go its way.
I have read and been told that mail should be sent from controllers only. But here it means I would need to send mail both in my controller and in my rake task.
Why is it "bad" to send mail from the callback?
Isn't it a bad idea to go from one place to trigger the mail to two different places?
Why would it be bad?
Use ActiveRecord::Observer for that. It's a perfect use case, because mail logic probably belongs neither in your model nor in your controller.
class WelcomeEmailObserver < ActiveRecord::Observer
  observe :user

  def after_create(user)
    if user.purchased_membership?
      GreetingMailer.welcome_and_thanks_email(user).deliver
    else
      GreetingMailer.welcome_email(user).deliver
    end
  end
end
# app/models/user.rb
class User < ActiveRecord::Base
end

# config/application.rb
class Application < Rails::Application
  config.active_record.observers = :welcome_email_observer
end
Example stolen from here.
I would also suggest that you keep your observers stateless, because they're hard to debug otherwise. Use them only for callbacks to third-party APIs, emails, or other stuff that's not closely related to your application.
I would suggest using no-peeping-toms for testing them. Don't isolate observer tests from model tests - that doesn't make much sense - but be sure to test models both with and without observers. With this helper gem, you can target the observer you like and switch it on or off.
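A sketch of what that targeting looks like, assuming no-peeping-toms' ActiveRecord::Observer.with_observers helper (the spec names are made up):

describe User do
  it "delivers a welcome email when the observer is enabled" do
    # no-peeping-toms disables observers in specs by default
    ActiveRecord::Observer.with_observers(:welcome_email_observer) do
      User.create!(name: "Jane")
      expect(ActionMailer::Base.deliveries.size).to eq(1)
    end
  end
end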
Enforce the single responsibility principle on observers and don't couple them to models. Keep persistence logic out of observers; use model callbacks for persistence-related concerns.
Also, I would like to warn you to be careful with observers when using something like Sidekiq: a background job can sometimes be picked up before the creating transaction has committed, so you should use the after_commit callback to avoid such conflicts.
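For example (a sketch; NotificationJob is a made-up worker):

class Assignment < ActiveRecord::Base
  # after_commit fires only once the surrounding transaction has been
  # committed, so a worker enqueued here can always find the row by id
  after_commit :enqueue_notification, on: :create

  private

  def enqueue_notification
    NotificationJob.perform_async(id)
  end
end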
Alternatives
ActiveRecord::Observers will be deprecated in Rails 4.0 because they've been extracted into a gem. You can still use them, but it seems the Rails core team favours extracting everything into classes with a single responsibility. Choosing what to use depends on your taste.
Observers make everything less explicit and are certainly more complicated to test. They can still be good OO citizens if you know what you're doing.
Using plain old Ruby objects seems more DCI-like to me, and that's a big plus because it expresses your intentions more clearly.

Best way to send an email upon creation of a new model instance in Rails?

I have an app with the following models: User, Task, and Assignment. Each Assignment belongs_to a User and a Task (or in other words, a Task is assigned to a User via an Assignment).
Once a User completes a Task, the Assignment is marked as complete, and the app immediately creates a new Assignment (or in other words, assigns the task to someone else).
Immediately after creating this new Assignment, I want to send an email to the new assignee. I know I can do this one of three ways:
Explicitly send the email in my controller.
Send the email in a callback on the Assignment model.
Create an observer on the Assignment model and send the email in after_create.
Which of these options do people think is best, and why? #1 seems bad to me, because I don't want to have to remember to send it in every action that might complete an Assignment. I've heard a couple of people say that Rails observers are bad and should be avoided, but I'm not sure whether they're people I should trust. Any other opinions?
You're right, the first way isn't a good approach. Observers are my preferred way to go, for a couple of reasons.
First, if you use TDD (test-driven development) you can turn observers off to test the model in isolation, without every creation firing off a mailer. Then you can unit-test the mailer and the observer separately.
Second, separating out callbacks makes for cleaner code. Callbacks aren't really part of your model; they are events. Your model contains the functions and attributes necessary to run itself, and the callbacks (implemented with observers) are separate event handlers.
That said, I don't think your second option is "bad" or less professional. Either way works as long as it's at the model level, instead of controllers or (even worse) views.
I would go for observers, as they reduce clutter in your model/controller code, and I can think of no downside to using them.
IIRC, sending an email after save is even an example in the ActiveRecord observers documentation.
You can also do a combination of things. You could use observers for one action, and if there is just a single email for one other action you could use option #1 for it.
Have you heard of acts_as_state_machine, or any other similar solutions?
http://github.com/rubyist/aasm
They allow you to define a state for each object and the different things that can happen on state changes.
This gives you as much logic as you need about when things are sent, if you need that much. It can be overkill, but it can also be really handy. I suggest it because you want an email sent when a task is 'completed', which sounds like it may be a kind of state or status column in your Task model.
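A sketch using the aasm gem's newer DSL (the project linked above evolved into aasm); the mailer, the next_assignee helper and the state column are hypothetical:

class Assignment < ActiveRecord::Base
  include AASM

  aasm column: :state do
    state :pending, initial: true
    state :completed

    event :complete do
      transitions from: :pending, to: :completed
      after do
        AssignmentMailer.new_assignment(next_assignee).deliver
      end
    end
  end
end

# assignment.complete! persists the transition and fires the mail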
In the end, I like this implementation http://www.scottw.com/resque-mail-queue-gem

ActionMailer best practices: Call method in the model or the controller?

Sending an email is usually triggered by an action on a model, but the email itself is a view operation. I'm looking for how you think about which question(s) to ask yourself to determine where to put the Action Mailer method call.
I've seen/used them:
In a model method - bad coupling of related but separate concerns?
In a callback in the model (such as after_save) - best separation as far as I can tell with my current level of knowledge.
In the controller action - just feels wrong, but are there situations were this would be the smartest way to structure the code?
If I want to know how to program I need to think like a programmer, so learning how you go about thinking through particular programming solutions is worth months of coding on my own in isolation. Thank you!
Late answer, but I want to reason through the subject:
Usually, in a web app, you want to send emails either as a direct reaction to a client action, or as a background task, in case we're talking about a newsletter/notification sort of mail.

The model is basically a data-storage mapper. Its logic should encapsulate data handling and communication with the data store. Inserting logic that does not relate to that is tricky, and in most cases wrong.

Let us take an example: a user registers an account and should receive a confirmation email. One could say the confirmation email is a direct effect of the creation of a new account. Now, instead of doing it in the web app, try to create a user in the console. It sounds wrong to trigger the callback in that case, right? So the callback option is scratched.

Should we still write the method in the model? Well, if the mail is a direct effect of a user action/input, then it should stay in that workflow. I would write it in the controller, directly after the user was successfully created. Replicating this logic in the model just to call it from the controller anyway adds unnecessary modularity, and makes the Active Record model depend on Action Mailer. Consider sharing the model over many apps, some of which don't want Action Mailer at all.

For the stated reasons, I'm of the opinion that mailer calls should go where they make sense, and usually the model is not that place. Try to give me examples where it does make sense.
Well, depends.
I've used all of those options and your point about 'why should I put this where?' is good.
If it's something I want to happen every time a model is updated in a certain way, then I put it in the model. Maybe even in a callback in the model.
Sometimes you're just firing off a report; there's no updating of anything. In that case, I've normally got a resource with an index action that sends the report.
If the mailer isn't really related to the model that's being changed, I could see putting it in a callback. I don't do that very often. I'd be more likely to still encapsulate it in the model. I've done it, just not very often.
I'm aware it's been a while but best practices never die, right? :)
Email is by definition asynchronous communication (except for confirmation emails, though even there it should be best practice to leave a delay before requiring confirmation).
Hence, in my opinion, the most logical ways to send it are:
in a background job (using Sidekiq or delayed_job), as in the sketch after this list
in a callback method: "hey, this action is successfully done, maybe we can tell the world now?"
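A minimal sketch of the background-job option, assuming Sidekiq; the job and mailer names are made up:

class WelcomeEmailJob
  include Sidekiq::Worker

  # pass the id, not the object: the job may run in another process
  def perform(user_id)
    user = User.find(user_id)
    Mailer.welcome_email(user).deliver
  end
end

# enqueue wherever the event happens:
WelcomeEmailJob.perform_async(user.id)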
The problem in Rails is that there are not that many callbacks (unlike in JS, for instance); I personally find it dirty to have code like:
after_save :callback

def callback
  if test_that_is_true_once_in_the_objects_life
    Mailer.send_email
  end
end
So, if you really want to think like a programmer, the idea would be to set up some custom callback system in your app.
E.g.:

def run_with_callback(action, callback_name)
  if send(action)
    delay.send(callback_name) # delayed_job-style deferred dispatch
  end
end
Or even creating an event system in your app would be a decent solution.
But in the end those solutions are pretty expensive in time, so people end up writing it inline after the action:
def activate
  [...]
  user.save
  Mailer.send_mail
  respond_to
  [...]
end
which is the closest thing to a callback in synchronous programming, and it results in mailer calls everywhere (in models and in controllers).
There are several reasons why controllers are a good place for the mailers (a sketch follows the list):
Emails that have nothing to do with a model.
If your emails depend on several models that don't know about each other.
Extracting models to an API should not mean reimplementing mailers.
Mailer content determined by request variables that you don't want to pass to the model.
If your business model requires a lot of different emails, model callbacks can stack up.
If the email does not depend on the result of model computations.
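A minimal sketch of the controller-side call; AssignmentsController and AssignmentMailer are hypothetical names:

class AssignmentsController < ApplicationController
  def create
    @assignment = Assignment.new(assignment_params)
    if @assignment.save
      # the controller knows the request context and the outcome
      AssignmentMailer.new_assignment(@assignment).deliver
      redirect_to @assignment
    else
      render :new
    end
  end
end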

Good idea to access session in observer or not?

I want to log users' actions in my Ruby on Rails application.
So far, I have a model observer that inserts logs into the database after updates and creates. In order to store which user performed the logged action, I need access to the session, but that is problematic.
Firstly, it breaks the MVC pattern. Secondly, the techniques range from the hackish to the outlandish, perhaps even tying the implementation to the Mongrel server.
What is the right approach to take?
Hrm, this is a sticky situation. You pretty much HAVE to violate MVC to get it working nicely.
I'd do something like this:
class MyObserverClass < ActiveRecord::Observer
  cattr_accessor :current_user # GLOBAL VARIABLE. RELIES ON RAILS BEING SINGLE THREADED

  # other logging code goes here
end

class ApplicationController
  before_filter :set_current_user_for_observer

  def set_current_user_for_observer
    MyObserverClass.current_user = session[:user]
  end
end
It is a bit hacky, but it's no more hacky than many other core rails things I've seen.
All you'd need to do to make it threadsafe (this only matters if you run on JRuby anyway) is to change the cattr_accessor into a proper pair of methods that store their data in thread-local storage, like this:
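A sketch of that thread-local variant:

class MyObserverClass < ActiveRecord::Observer
  # each request-handling thread sees only the user it set itself
  def self.current_user
    Thread.current[:observer_current_user]
  end

  def self.current_user=(user)
    Thread.current[:observer_current_user] = user
  end
end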
I find this to be a very interesting question. I'm going to think out loud here a moment...
Ultimately, what we are faced with is a decision to violate a design pattern's acceptable practice in order to achieve a specific set of functionality. So, we must ask ourselves:
1) What are the possible solutions that would not violate the MVC pattern?
2) What are the possible solutions that would violate the MVC pattern?
3) Which option is best?
I consider design patterns and standard practices very important, but at the same time, if holding to them makes your code more complex, then the right solution may very well be to violate the practice. Some people might disagree with me on that.
Let's consider #1 first.
Off the top of my head, I would think of the following possible solutions:
A) If you are really interested in who is performing these actions, should this data be stored in the model anyway? That would make the information available to your Observer, and it would also mean that any other front-end caller of your ActiveRecord class gets the same functionality.
B) If you are not really interested in knowing who created an entry, but more interested in logging the web actions themselves, then you might consider "observing" the controller actions. It's been some time since I've poked around the Rails source, so I'm not sure how ActiveRecord::Observer "observes" the model, but you might be able to adapt it into a controller observer. In that sense, you aren't observing the model anymore, and it makes sense to make session and other controller-type data available to that observer.
C) The simplest solution, with the least "structure", is to simply drop your logging code at the end of the action methods you're watching.
Now consider option #2, breaking MVC practices.
A) As you propose, you could find a means of giving your model Observer access to the session data. You've coupled your model to your business logic.
B) Can't think of any others here :)
My personal inclination, without knowing any more details about your project, is either 1A, if I want to attach people to records, or 1C if there are only a few places where I'm interested in doing this. If you really want a robust logging solution for all your controllers and actions, you might consider 1B.
Having your model observer find session data is a bit "stinky", and it would likely break if you tried to use your model in any other project/situation/context.
You're right about it breaking MVC. I would suggest using callbacks in your controllers, mostly because there are situations (such as when save is called on a model but validation fails) where you wouldn't want an observer logging anything.
I found a clean way to do what is suggested by the answer I picked.
http://pjkh.com/articles/2009/02/02/creating-an-audit-log-in-rails
This solution uses an AuditLog model as well as a TrackChanges module to add tracking functionality to any model. It still requires you to add a line to the controller when you update or create, though.
In the past, when doing something like this, I have tended towards extending the User model class to include the idea of a 'current user'.
Looking at the previous answers, I see suggestions to store the actual active record user in the session. This has several disadvantages.
It stores a possibly large object in the session database.
It means that the copy of the user is 'cached' for all time (or until logout is forced). So any change in the user's status will not be recognised until the user logs out and logs back in; attempting to disable the user, for instance, will have to wait until he logs off and back on. This is probably not the behaviour you want.
So instead, at the beginning of a request (in a filter), you take the user_id from the session, read the user, and set User.current_user.
Something like this...
class User
  cattr_accessor :current_user
end

class ApplicationController
  before_filter :retrieve_user

  def retrieve_user
    if session[:user_id].nil?
      User.current_user = nil
    else
      User.current_user = User.find(session[:user_id])
    end
  end
end
From then on it should be trivial.
http://www.zorched.net/2007/05/29/making-session-data-available-to-models-in-ruby-on-rails
