Using Symbol#to_proc with multiple procs - ruby-on-rails

I have some ruby code that operates on an ActiveRecord object using a couple of methods, and the end game is to return the object itself. I want to use two methods that have return values other than the object itself (boolean values). I like using the shorthand Symbol#to_proc syntax, i.e.
Object.tap(&:do_work)
Is it possible to pass multiple procs? i.e.
Object.tap(&:do_work, &:do_more_work)
The above syntax does not work. Is this possible or do I have to do something like:
Object.tap(&:do_work).tap(&:do_more_work)

If you are looking for a good architectural solution, then you would need to implement a FIFO queue of operations for that.
Or you can do it like you mentioned above:
Object.tap(&:do_work).tap(&:do_more_work)
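If you really want a single call, another option is a small helper along these lines (tap_each is a made-up name, not a Ruby or Rails method); a minimal sketch:
class Object
  # Hypothetical helper: call each named method on the receiver and
  # return the receiver, ignoring the methods' return values.
  def tap_each(*method_names)
    method_names.each { |name| public_send(name) }
    self
  end
end
record.tap_each(:do_work, :do_more_work) # => record
Whether that is clearer than simply chaining .tap twice is a matter of taste; the chained version has the advantage of needing no extra code.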

Related

Should I override Activerecord's update() method?

I'm refactoring my Rails 5.2 app in such a way that one model (OrigM) is being split into two (OrigM, NewM). Out in the codebase, there is code that does update on OrigM objects, and I want this to now result in specialized behavior in which associated OrigM and NewM objects are updated appropriately.
It seems like a straightforward way to do this is to override update in the OrigM model, but trying to do it a simpleminded way results in infinite recursion, because apparently even the class method OrigM.update(id, new_attrs) under the hood is just finding the object with id id and calling update as an instance method on it.
I have figured out a couple other ways I could perhaps make this work, although I haven't fully tested them yet: (1) Update OrigM attributes by calling write_attribute on them individually (seems like this will not let me check validations), or (2)
self.assign_attributes(new_attrs) # results in a "dirty" object, then
self.save!
I'm leaning toward this second solution, as it seems like it ought to be pretty close to functionally identical to what the standard update is doing.
EDIT:
I've realized that no one can really answer this unless I am asking a question, so I will now recast it as:
QUESTION: Should I, or should I not, override AR's update method (in OrigM class)?
Commenters have said: "do not do this or you will be reviled" (which honestly I was sure someone would say lol). But... is it really such a crime? Is there a "killer" reason not to do it (something you couldn't argue with)? Or is it just a vaguely encouraged best practice?
Also: "why not just define a new method if it does something different?" Well... I guess I need to say more about the models... I don't want to try to give actual code, because it's too complex. Suffice to say: most of the code still interacts with OrigM objects, and will not know about NewM. NewM was only needed architecturally because I needed to allow multiple OrigMs to share (belong_to) a single NewM to avoid duplication of data. So I would like to allow most of the rest of the code to maintain the illusion that there is still only OrigM, and be able to update it as usual. This feels like the most elegant solution to me. If I create a new OrigM#myupdate, future developers (including myself) might curse me because they tried to do a simple update and it broke things. Iow, isn't having to know that there is a special updating function just as confusing as having to know that AR's update has been overridden?

Ruby Methods and Params Validation

I have a general question about how to treat params in a method. Let's say we have a method that receives a number and multiplies it by 2.
def multiplier(num)
  num * 2
end
What happens when num is nil? Who is responsible for handling the error: the method itself, or the caller? What is considered best OOP practice?
This is not related to OOP in any way, because it applies to other paradigms as well. That said, there are different schools of thought about the problem. Which one is the "best" depends on who you're talking to.
One is "defensive programming everywhere". This means each method is responsible for rigorous checks of its input.
Another is "caller is responsible for providing correct data".
Another is what I call a "security perimeter": rigorous checks only when we deal with external input, but once data is "inside of the system", it can be trusted.
And anything in between.
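For illustration, here is what the "defensive" style could look like for your multiplier method (the error message is just an example):
def multiplier(num)
  # defensive style: the method refuses anything that is not numeric
  raise ArgumentError, "expected a number, got #{num.inspect}" unless num.is_a?(Numeric)
  num * 2
end
multiplier(3)   # => 6
multiplier(nil) # raises ArgumentError
In the "caller is responsible" style you would leave the method as it is and make sure nil never reaches it; in the "security perimeter" style only the code that parses external input would perform such checks.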
I agree with the other answers (#sergio-tulentsev, #Зелёный). And there is another thing to consider.
In a lot of cases it is good practice not to expect an object of a particular type, but an object that acts like a particular type (duck typing). In your case, the method could accept not only a number, but any object that can be treated as a number. That would not only make your method more flexible, it would also solve the nil problem. Of course, it depends on your needs.
The method version I am talking about might look like this:
def multiplier(num)
  # trying to convert whatever is passed to a number
  num.to_f * 2
end
In the case of nil it will return 0.0, because nil.to_f is 0.0.
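A few example calls:
multiplier(nil) # => 0.0
multiplier("3") # => 6.0 (String#to_f)
multiplier(2)   # => 4.0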

What is the difference between find_by() from Core and the one from FinderMethods?

Currently I'm working on a gem that overrides ActiveRecord's where. While working on that, I stumbled on two different find_by implementations. One is in Core and uses some kind of cache, whereas the one from the FinderMethods module calls where directly. What is the difference between these two implementations? When is which one used?
I think it works like this: when you use something like this:
User.find_by(...)
ActiveRecord::Core's find_by is called, because Core is included into ActiveRecord::Base, from which your model inherits.
But if you do something like:
User.first.products.find_by(...)
the call goes through an ActiveRecord::Relation (which includes FinderMethods), so FinderMethods#find_by is used.
I don't know exactly why it's implemented this way, but since the Core version uses a cache, it's presumably a performance optimisation for the common Model.find_by(attr: value) case.
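If you want to check which implementation a particular call will hit, you can ask Ruby for the method's owner (the exact module names can vary between Rails versions):
User.method(:find_by).owner     # the class-level version defined by ActiveRecord::Core
User.all.method(:find_by).owner # => ActiveRecord::FinderMethods, via the relation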

What is the difference between these two ways of defining a function in Grails?

Method 1:
def function1() {
    // code here
}
Method 2:
def function2 = {
    // code here
}
What is actually the difference between these two ways of defining a method, and which one is better?
Controller Actions as Methods
It is now possible to define controller actions as methods instead of using closures as in previous versions of Grails.
In fact this is now the preferred way of expressing an action.
So, if you use Grails 2.x or later, define actions as methods instead of closures.
Similar questions:
https://stackoverflow.com/a/1827035/1815058
https://stackoverflow.com/a/9205312/1815058
Well, the first one is a method and the second is a closure.
A Groovy Closure is like a "code block" or a method pointer. It is a piece of code that is defined and then executed at a later point. It has some special properties like implicit variables, support for currying and support for free variables.
I think that traditional methods are what you need. You should probably use closures only in some special cases, but that's really a big topic on its own.
So you'd better read about closures here and maybe here.

Is it bad design to base control flow/conditionals around an object's class?

I'm currently working on a Rails project, and have found times where it's easiest to do
if object.class == Foo
  ...
elsif object.class == Bar
  ...
else
  ...
end
I started doing this in views where I needed to display different objects in different ways, but have found myself using it in other places now, such as in functions that take objects as arguments. I'm not precisely sure why, but I feel like this is not good practice.
If it's not good practice, why so?
If it's totally fine, when are times that one might want to use this specifically?
Thanks!
Comparing classes with == does work, but it's not how you'd normally do it. When you need to test whether object is an instance of class Foo, you should use
object.is_a? Foo
But it's not good practice in Ruby anyway. It'd be much better to use polymorphism whenever possible. For example, if somewhere in the code you can have objects of two different classes and you need to display them differently, you can define a display method in both classes. After that you can call object.display, and the object will be displayed using the method defined in its own class.
The advantage of that approach is that when you need to add support for a third class, or a whole bunch of new classes, all you'll need to do is define display in each of them. Nothing will change in the places where you actually use this method.
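For example (Foo and Bar taken from your question; the display method is just an illustration):
class Foo
  def display
    "rendering a Foo"
  end
end
class Bar
  def display
    "rendering a Bar"
  end
end
# No conditional on the object's class is needed; each class supplies its own behaviour.
[Foo.new, Bar.new].each { |object| puts object.display }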
It's better to express type-specific behavior using subtyping.
Let the objects know how they are displayed. Create a display method and pass everything it needs from outside as parameters. Let Foo know how to display a foo and Bar know how to display a bar.
There are many articles on replacing conditionals with polymorphism.
It’s not a good idea for several reasons. One of them is duck typing – once you start explicitly checking for object class in the code, you can no longer simply pass an instance of a different class that conforms to a similar interface as the original object. This makes proxying, mocking and other common design tricks harder. (The point can be also generalized as breaking encapsulation. It can be argued that the object’s class is an implementation detail that you as a consumer should not be interested in. Broken encapsulation ≈ tight coupling ≈ pain.)
Another reason is extensibility. When you have a giant switch over the object type and want to add one more case, you have to alter the switch code. If this code is embedded in a library, for example, the library users can’t simply extend the library’s behaviour without altering the library code. Ideally all behaviour of an object should be a part of the object itself, so that you can add new behaviour just by adding more object types.
If you need to display different objects in a different way, can’t you simply make the drawing code a part of the object?
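To make the duck-typing point concrete, a small sketch (render_panel and FakeFoo are made-up names): anything that responds to display can be passed in, including a test double, as long as the code never inspects its class.
def render_panel(object)
  puts object.display
end
class FakeFoo
  def display
    "stubbed output used in a test"
  end
end
render_panel(FakeFoo.new) # works, because render_panel never checks object.class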
