I am using the Wicked gem to build an instance of a model in steps (step 1, step 2, etc.). On the third step, however, I need to make an API call to collect some data and store it in another model instance (which would have a belongs_to relationship with the first model). What I am wondering is: how do I interact with this API and store the information while I am still in the creation process of the first model? Is this a good design pattern, or should I be dealing with API information in a different way?
My thoughts are that I could redirect to the form for making the API call and redirect back to the fourth step after dealing with the API.
Does Rails have a specific design it uses for dealing with 3rd party APIs?
No, this is not a good design pattern, but sometimes there is no way around it. What is important, first, is that
everything is covered by a single database transaction, and as I understand from your question, that is the case. Your objects are connected by a belongs_to relationship, so they can be saved in one go (when the "parent" object is saved, the "children" are saved at the same time). There is also no second, unconnected object involved, so there is no need to create a separate transaction just for this action.
Second, cover everything with enough error handling. This is your own responsibility: make sure that when the 3rd-party call goes bananas, you are ready to catch the error and, in the worst case, roll back the entire transaction yourself.
So, to summarize: no, it's not good practice, but Rails gives you the tools to "keep it clean".
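The "catch the error yourself" part can be sketched in plain Ruby. Everything here is hypothetical naming (StepThreeFetcher, get_profile_data), not from the question; in a real app the eventual save would also sit inside an ActiveRecord transaction so parent and child roll back together:

```ruby
# Sketch of wrapping a third-party call in error handling so a failed
# request never leaves the wizard in a half-saved state.
# The client object is a stand-in for whatever HTTP client or gem you use.
class StepThreeFetcher
  def initialize(client)
    @client = client
  end

  # Returns the fetched data, or nil when the call goes bananas,
  # so the caller can re-render the step instead of blowing up.
  def fetch
    @client.get_profile_data # may raise on timeout, bad response, etc.
  rescue StandardError
    nil
  end
end
```

The controller for step three can then decide: build the child from the returned data, or show an error and stay on the step.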
Although your question was rather verbose, I would recommend looking at the before_create ActiveRecord callback in your model:
#app/models/parent.rb
class Parent < ActiveRecord::Base
  has_one :child
  before_create :build_child # has_one defines build_child for us
end
This builds the child object before you create the parent, meaning that when you save the parent, the child is created at the same time. This lets you create the child object while interacting with the parent. To ensure the child's data is populated correctly (for example, from your API call), point the callback at an instance method that builds the child with that data.
Related
I'm developing a plugin for Redmine (RoR 4.2) that should send data to a different system once an Issue of a certain type is created/updated.
I've created a patch for Issue containing two callbacks: before_update and after_create. Both call the same method. The reason I use after_create is that I need to send the ID of a newly created Issue to the second system.
My problem is that while returning false from before_update cancels the transaction, doing so from after_create has no effect. To handle this, I need to throw an exception, which in turn breaks the Issue controller, making it return an Error 500 page instead of a nice error popup.
So what is the best way to handle this situation, taking into account that I'm not willing to override the controller (if possible)?
This sounds like a fool's errand, since exceptions are generally handled at the controller layer. Of course you can rescue the exception in your callback method and, for example, log a message.
But you can't really affect the controller's outcome from a model callback without resorting to some really nasty hacks. The model should only be concerned with its own state, not the application flow.
And ActiveRecord does not care about the return value of after_* callbacks.
Fat models and skinny controllers are good. But letting your models do stuff like talk across the wire or send emails is usually a horrible idea. ActiveRecord models are already doing a million things too many just maintaining your data and business logic. They should not be concerned with stuff like what happens when your request to API x fails.
You might want to consider using a Service Object to wrap creating and updating the model instead.
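A minimal, framework-free sketch of such a Service Object (SyncIssue, the notifier, and its push method are hypothetical names, not Redmine APIs):

```ruby
# The service wraps "tell the other system about this issue",
# keeping the across-the-wire call out of the model's callbacks.
class SyncIssue
  Result = Struct.new(:ok, :error)

  def initialize(notifier)
    @notifier = notifier
  end

  def call(issue)
    @notifier.push(issue)        # talk to the remote system here
    Result.new(true, nil)
  rescue StandardError => e
    Result.new(false, e.message) # surface a nice error, not a 500 page
  end
end
```

The caller (controller or plugin hook) checks the result and renders the error popup itself, so no exception ever escapes into Redmine's controller.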
I have one model, Schedule, in my application, and I want a user to be able to add multiple schedules with a single form. How can I save multiple records with a single form without using nested_form_for and fields_for?
Please help. Thank you.
You really would need to provide some code to show us what you're doing. However...
Remember that what actually happens when the user submits the form is handled entirely by the schedule controller, specifically the create action. For example, if you were creating a repeating event that occurred every week or month, you could certainly include logic in your controller to cause multiple events to be scheduled from that single submission.
Whether that logic should live in the controller or in the model is a design decision you'd have to make. Personally, I would recommend pushing it into the model, since it really is a function of the model: you may instantiate or handle a Schedule object somewhere else, possibly in some other controller. Rather than duplicating code, moving all scheduling activity into the model allows you to keep it DRY.
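A sketch of pushing that logic into the model (the weekly-repeat rule and method names here are just an illustration, not from the question; in a real app Schedule would be an ActiveRecord model and build_weekly would create records):

```ruby
require 'date'

# Minimal stand-in for a Schedule model: one submitted start date
# fans out into several occurrences inside the model, so every
# controller that handles schedules gets the same behaviour.
class Schedule
  attr_reader :starts_on

  def initialize(starts_on)
    @starts_on = starts_on
  end

  # Build one occurrence per week from a single form submission.
  def self.build_weekly(starts_on, count)
    (0...count).map { |week| new(starts_on + 7 * week) }
  end
end
```

The create action then just calls Schedule.build_weekly with the submitted params and saves the results.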
This is basically a theoretical question.
What do you think is better for a typical Rails app with users:
1) Create a Profile model to hold the resume, images, links, etc.
2) Put all the data in the User model.
The first choice is maybe cleaner, but you have to load two models from the db, so it may be slower.
Thanks in advance.
I normally have a single User model. If I have unrelated resources that deserve an associated model, then I create one for them.
For example, for me the Resume (assuming it is not a single field) may deserve a dedicated Resume model, with a one-to-one association to User.
On the view side, I normally create an /account resource that internally displays the account and provides the show, edit and update actions to view or update it.
The more models you have, the more your architecture will become complicated. So unless you have the need to split the fields out of the User model, I would keep them inside the model.
When you start to have several fields that may require a prefix, such as resume_title, resume_body, resume_created_at inside the User model, that's a good indication that you probably need a separate Resume model associated to the User.
I have an app with the following models: User, Task, and Assignment. Each Assignment belongs_to a User and a Task (or in other words, a Task is assigned to a User via an Assignment).
Once a User completes a Task, the Assignment is marked as complete, and the app immediately creates a new Assignment (or in other words, assigns the task to someone else).
Immediately after creating this new Assignment, I want to send an email to the new assignee. I know I can do this one of three ways:
Explicitly send the email in my controller.
Send the email in a callback on the Assignment model.
Create an observer on the Assignment model and send the email in after_create.
Which of these options do people think is best, and why? #1 seems bad to me, because I don't want to have to remember to send the email in every action that might complete an Assignment. I've heard a couple of people say that Rails observers are bad and should be avoided, but I'm not sure whether they're people I should trust. Any other opinions?
You're right, the first way isn't a good approach. Observers are my preferred way to go, for a couple reasons.
First, if you use TDD (test-driven development) you can shut off observers to more purely test the model without every creation firing off a mailer creation. Then you can unit test the mailer and observer separately.
Second, the idea of separating callbacks creates cleaner code. Callbacks aren't really part of your model, they are events. Your model contains the functions and attributes necessary to run itself, and the callbacks (implemented with observers) are separate event handlers.
That said, I don't think your second option is "bad" or less professional. Either way works as long as it's at the model level, instead of controllers or (even worse) views.
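The decoupling being described can be sketched without Rails at all (ActiveRecord::Observer wires this up for you; Assignment's save and MailObserver here are simplified, hypothetical stand-ins):

```ruby
# The model only announces the event; observers decide what happens.
# "Shutting off observers" in tests then just means registering none.
class Assignment
  def self.observers
    @observers ||= []
  end

  def save
    # ... persist the record here ...
    self.class.observers.each { |o| o.after_create(self) }
    true
  end
end

class MailObserver
  attr_reader :sent

  def initialize
    @sent = []
  end

  def after_create(assignment)
    @sent << assignment # in a real app: deliver the assignee email
  end
end
```

The model never knows a mailer exists, which is exactly the separation the answer argues for.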
I would go for observers, as they reduce clutter in your model/controller code, and I can think of no downside to using them.
IIRC, sending an email after save is even an example in the ActiveRecord observers documentation.
You can also do a combination of things. You could use observers for one action, and if there is just a single email for one other action you could use option #1 for it.
Have you heard of acts_as_state_machine, or any other similar solutions?
http://github.com/rubyist/aasm
They allow you to define a state of each object and different things that can happen with state changes.
This allows you to have as much logic as you need about when things are sent, if you need that much. It can be overkill, but it can also be really handy. I suggest it because you want an email sent when a task is 'completed', which sounds like it may be a kind of state or status column in your Task model.
In the end, I like this implementation http://www.scottw.com/resque-mail-queue-gem
I want to be able to "deep clone" 10 instances of an ActiveRecord model and all its associations into memory, work on them, update the in-memory objects and then, when I've finished, pick one to write back over the original in the database.
How do I deep clone (i.e. .clone, but also cloning all associations right down to the bottom of the association tree)? So far I've assumed I'm going to have to write my own method in the model.
How can ensure that none of the cloned instances will write back to the database until I'm ready to do so?
If possible I'd like to:
retain all current IDs as one of my main associations is a has_many :through matching the IDs of one model to another
still be able to treat each of the clones as if it were in the database (i.e. .find_by_id etc. will work)
Moon on a stick perhaps? ;)
Not 100% sure of what you are trying to do ...
Models are only stored in the database when you call the save method. Calling save on an existing model will update the database with any data that has changed. Associations may be saved as well, but that really depends on the type of association, and in most cases you will probably need to call save on those models too.
Doh! Sometimes it takes asking the stupid question before you see the obvious answer.
My issue was that I was having to make changes to the associated objects, and they weren't showing up when I used those in-memory objects later, so I thought I had to save. However, you are right: all that was actually happening was that the variables referencing them had gone out of scope, and I was therefore accessing the in-database ones instead.
I'll go back through my code and check that this is the case.
Having said that, it doesn't answer my question on the deep cloning, though...
I've solved our deep cloning issues using DefV's deep cloning plugin : http://github.com/DefV/deep_cloning
It's done everything I've required so far, though as you've found you need to be very watchful of your relationships. My tests have luckily shown this up as an issue and I'm working through it at the moment. I found this post as I was trying to solve it :)
Check out the plugin though, it's been quite handy.
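For anyone rolling it by hand instead, the core of such deep cloning is just "copy the object, then recurse into each association". A plain-Ruby sketch (Node is a made-up stand-in for a model with a has_many-like tree; a real ActiveRecord version would use dup and reflect on the associations):

```ruby
# Deep clone = copy the object itself, then deep clone every child,
# so mutating the copy never touches the original tree.
class Node
  attr_accessor :name, :children

  def initialize(name, children = [])
    @name = name
    @children = children
  end

  def deep_clone
    Node.new(name.dup, children.map(&:deep_clone))
  end
end
```

This is exactly where the "watch your relationships" warning bites: any association you forget to recurse into stays shared between the original and the copy.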