ABCService.new.do_foo - why not a class method? - ruby-on-rails

I just inherited an RoR codebase, and in many of the controllers I see the following style of code:
ABCService.new.do_foo
I have been working on RoR codebases for quite a long time, but I fail to understand why the .new. style is used. The service classes in question do not hold any state (not even class-level variables), and the same behavior could be achieved with class-level (self.) methods. So, is there any explanation of why this style is better? To me, it looks like some Java developers coded this app and "ported over" some coding paradigms from that language.

Making service objects stateful as a convention has its benefits:
Minimal refactoring when one of them requires state
Easy to mock in tests without messing around with constants
Save brain juice on this decision when implementing a new service object
That being said, whether this is beneficial for your codebase is something you / your team need to assess as part of defining your own architectural / code style conventions.
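To make the mocking benefit concrete, here is a minimal plain-Ruby sketch (all names hypothetical): because the collaborator is an instance passed in at construction time, a test can hand over a fake without stubbing any constants.

```ruby
# Hypothetical services: EmailService is the real collaborator,
# SignupService receives it (or a default) at construction time.
class EmailService
  def deliver(to)
    "delivered to #{to}" # imagine a real SMTP call here
  end
end

class SignupService
  def initialize(mailer: EmailService.new)
    @mailer = mailer
  end

  def call(email)
    @mailer.deliver(email)
  end
end

# In a test, inject a fake instance -- no constant stubbing required:
fake_mailer = Object.new
def fake_mailer.deliver(to)
  "fake: #{to}"
end

result = SignupService.new(mailer: fake_mailer).call("a@example.com")
# result == "fake: a@example.com"
```

Had `SignupService` instead exposed a class method that referenced `EmailService` directly, the test would have to redefine or stub the constant itself.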
It can be quite irritating to always have to call Klass.new.do_something. You can wrap it in a class method, e.g.:
class Service
  class << self
    def do_something
      new.do_something
    end
  end

  def do_something
  end
end

"Right tool for the job"
Having only class methods explicitly tells other developers/readers of your code that this service doesn't hold state.
Even better, use a module instead of a class; then your intentions will be clear to others, and to your future self.
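A minimal sketch of the module approach (names hypothetical): module_function exposes the methods directly on the module, and there is no instance around to accumulate accidental state.

```ruby
module PriceFormatter
  module_function

  # stateless: the output depends only on the argument
  def format(cents)
    "$%.2f" % (cents / 100.0)
  end
end

PriceFormatter.format(1999) # => "$19.99"
```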
When you need state, use instance methods.
For example, you can have a service which accepts two arguments: the first is used for all calls of this service, but the second can change on every call.
class AddTax
  def initialize(tax_rate)
    @tax_rate = tax_rate
  end

  def to(amount)
    amount * (1.0 + @tax_rate)
  end
end

# Usage
prices = [20, 100, 50, 49, 50]
add_tax = AddTax.new(0.24)
with_taxes = prices.map { |price| add_tax.to(price) }

Related

RSpec: How can I not use `allow_any_instance_of` for objects that get instantiated in the functions I call?

I have a class A with a method M for which I want to write a test T. The problem is that method M creates a new object O. I want to mock a method F of that new object O.
class A
  def M(p1, p2)
    @o = O.new(p1, p2)
  end
end

class O
  def F(q)
    ...
  end
end
I can very easily do so with the allow_any_instance_of feature of RSpec, but I really don't see a way of doing so with just allow or expect. I understand that I can mock a method of an existing instance and of a class but from my tests I couldn't make it work against methods of objects that get created in a method I'm testing.
T :process do
  it "works" do
    # This works
    allow_any_instance_of(O).to receive(:F).and_return(123)
    ...
  end

  it "does not work" do
    # This fails
    allow(O).to receive(:F).and_return(123)
    ...
  end
end
How do I know that it fails?
I changed my F method with a puts() and I can see that output on the screen when I use the allow(O). It does not appear at all when I use the allow_any_instance_of(). So I know that it's working as expected only in the latter.
def F(q)
  puts("If I see this, then F() was not mocked properly.")
  ...
end
I would think that allow(O)... should connect to the class so whenever a new instance is created the mocked functions follow, but apparently not.
Do you have RSpec tests handling such mocking cases in a different way that would not involve the use of the allow_any_instance_of() function?
The reason I ask is that it is marked as obsolete (#allow-old-syntax) since RSpec 3.3, so it sounds like we should not be using this feature anymore; once RSpec 4.x comes out, it will probably be gone.
The reason this
allow(O).to receive(:F).and_return(123)
doesn't work is that :F is not a method of the class object O itself (it is an instance method), so O never receives this message (method invocation).
The best solution for you would be to refactor your code to use dependency injection. (Please note that your example is abstract to the extreme, if you provided a real life example - closer to the ground - some better refactoring might be possible)
class A
  attr_accessor :o_implementation

  def initialize(o_implementation)
    @o_implementation = o_implementation
  end

  def M(p1, p2)
    @o = o_implementation.new(p1, p2)
  end
end

RSpec.describe A do
  subject { described_class.new(klass) }
  let(:klass) { O }
  let(:a_double) { instance_double(klass) }

  it do
    allow(klass).to receive(:new).and_return(a_double)
    allow(a_double).to receive(:F).and_return(123)
  end
end
With the Dependency injection you move outside the decision which class to instantiate. This decouples your code (A stops being coupled to O, now it depends only on the O interface that it's using), and makes it easier* to test.
(*) One could argue that allow_any_instance is easier (less involved, less typing), but it has some issues, and should be avoided if possible.
(as a small aside: I can understand the probable need for very thorough anonymization of your code, but you could still follow the Ruby style guide: methods start with lower-case; only classes start with upper-case)
So first off: allow(O) works, but will only capture class methods. If you need to capture instance methods, you need to call allow for a specific instance.
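That distinction can be demonstrated without RSpec (a plain-Ruby sketch with hypothetical names): defining a method on the class object intercepts class-level calls only, while defining one on a single instance affects only that instance.

```ruby
class Order
  def self.build
    new
  end

  def total
    100
  end
end

# Redefining the method on the class object intercepts class-level calls:
def Order.build
  :stubbed_class_method
end

# Redefining it on one instance affects only that instance:
order = Order.new
def order.total
  :stubbed_instance_method
end

other = Order.new
# order.total => :stubbed_instance_method
# other.total => 100 (untouched)
# Order.build => :stubbed_class_method
```

This is essentially what RSpec's allow does under the hood for the specific object you hand it, which is why a class-level stub never reaches instance methods.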
Since your example is pretty sparse, I see no reason why we could not split the creation of the object from the method under test. If that is possible, a very simple approach would be to write something like:
describe :process do
  before do
    @o = A.o_maker(p1, p2)
    allow(@o).to receive(:some_function) { 123 }
  end

  it "works" do
    # do something with `@o` that should call the function
  end
end
I personally prefer this approach over creating the mock class, as suggested before.
This is probably well known, but for clarity: the problem with a mock class, imho, is that you are no longer testing class A but the mock. This could in some cases be useful, but from the original question it is unclear whether that applies here, or whether it is needlessly complicated.
Secondly: if your code is this complicated (e.g. some method that creates a new object and then calls F), I would rather 1) refactor my code to make it testable, and/or 2) test side effects (e.g. F adds an audit-log line, sets a state, ...). I do not need to test my implementation (is the correct method called?) but whether the work is actually performed. (And of course, as always, there are exceptions, e.g. when calling external services -- but again, all that is impossible to deduce from the original question.)

Clean implementation for multiple decisions in ruby

background
I'm writing an API that processes data from an external application. My application processes JSON responses from the external application and offers relevant information to consumers of my service. (These consumers are internal to my organisation)
The external application has an API that allows me to check for updates. Updates are triggered by events. The API offers 12 different types of events. The event types are offered in a string format. (ex. 'MoveEvent', 'DeleteEvent', 'CreateEvent')
I need to write a specific processing algorithm for each event.
Problem
I'm looking for a clean, DRY and SOLID way to implement the event processing system. The focus for this application is on code quality and a solid architecture.
My solution and thoughts
There are a number of ways to tackle this issue, but my best guess so far has been:
Create a hash that holds the string name of the event types and map them to a processing class.
Use the Strategy pattern to define a common interface for all the processing classes to adhere to, so that any mediating class only needs to know the message to which the processing classes can respond.
Use some sort of factory (method) to instantiate a concrete implementation.
I'm explicitly looking to avoid a long if-elsif-else solution, unless someone can convince me to do otherwise.
Suggestions and criticism is always welcome, thanks!
In this type of situation, I like to use this pattern:
class Processor
  class << self
    def for(name, data)
      processors[name].new(data)
    end

    def processors
      {
        'MoveEvent' => MoveEventProcessor,
        'DeleteEvent' => DeleteEventProcessor,
        'CreateEvent' => CreateEventProcessor
      }
    end
  end

  attr_reader :data

  def initialize(data)
    @data = data
  end

  class MoveEventProcessor < Processor
    # ... code to handle this event
  end

  class DeleteEventProcessor < Processor
    # ... code to handle this event
  end

  class CreateEventProcessor < Processor
    # ... code to handle this event
  end
end
p Processor.for 'MoveEvent', {some: :data}
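One refinement worth considering on top of this pattern (a sketch, not part of the answer above; MoveEventProcessor stands in for the full set): Hash#fetch with a block makes an unknown event name fail loudly, instead of raising a confusing NoMethodError on nil.

```ruby
class Processor
  attr_reader :data

  def initialize(data)
    @data = data
  end

  def self.for(name, data)
    # fetch's block runs only when the key is missing
    registry.fetch(name) { raise ArgumentError, "unknown event: #{name}" }.new(data)
  end

  def self.registry
    { 'MoveEvent' => MoveEventProcessor }
  end
end

class MoveEventProcessor < Processor; end

Processor.for('MoveEvent', { some: :data }) # => a MoveEventProcessor
```

With twelve event types coming in as free-form strings from an external API, failing fast on a typo or a newly added event type is usually worth the extra line.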
So my suggestion would be to not over-engineer this: chances are that you will have to iteratively refactor or reimplement your solution as you become aware of things. I think the first idea you came up with is probably the most straightforward way to go about it. Even if you want to house that hash inside a class that then handles any required processing of the selected event class, that would still seem reasonable (so basically number 2). ReggieB's answer is about what I would expect.

Does this method that changes an instance variable belong in my Rails controller or model?

I have a basic "best practice" question about controllers and instance variables.
Say you have an instance variable in a new or update action in a controller; is it OK to modify that instance variable via a private method in the controller? Or should the method exist in the model?
e.g. in the example below, I need to loop through the attributes of an instance variable and add or remove something. For example, if I am using nested attributes three layers deep and have to remove certain attributes, change them, and then add them back in. I know this may seem strange, but assume it is necessary.
def new
  @some_thing = SomeThing.new(:some_params)
  do_something_to_inst_var # method call
  @some_thing.save
end

private

def do_something_to_inst_var
  @some_thing.addresses.each do |address|
    # modify it in some way
  end
end
Or is this bad practice? Should this be a method in the model and should be called like:
@some_thing.do_something_to_inst_var
OR
should we explicitly pass the instance variable to the method like:
def new
  @some_thing = SomeThing.new(:some_params)
  do_something_to_inst_var(@some_thing) # method call
  @some_thing.save
end

private

def do_something_to_inst_var(some_thing)
  some_thing.addresses.each do |address|
    # modify it in some way
  end
end
I'm looking for some clarity here, with an example if possible. I'm still learning and trying to improve and I didn't find an answer by searching.
Rails applications should have "thin controllers" and "fat models" for a couple of reasons:
Each object should handle only its own responsibilities. A controller should just be about connecting the web, the model, and the view, which, thanks to Rails, doesn't take much code. If a controller method refers repeatedly to methods of the same model, it's incorrectly taking on model responsibilities; we say that it's not cohesive or that it has "Feature Envy". It is then more likely that if the model changes, the controller will have to change in parallel.
It's easier to test models than to test controllers.
Fix it by writing a method in the model that does the model-specific work and call it in the controller (your second option). (Eventually your model will get too fat and you'll have to break it up too, but that's another story.) For example:
class SomeThingsController
  def new
    @some_thing = SomeThing.new(:some_params)
    @some_thing.do_something # method call
    @some_thing.save
  end
end

class SomeThing
  def do_something
    addresses.each do |address|
      # modify it in some way
    end
  end
end
Regarding instance variables.
Define them only if necessary. Presumably the one in your example is needed for the view.
Assuming an instance variable is justified at all, there's no reason not to refer to it in private methods of the class that contains it. That's what they're for. So your first option (referring directly to the instance variable) is a bit better than your third option (passing it in). But, as discussed above, extracting a model method is better than both of the other two options.
In my opinion, modifying @instance_vars from a private method is okay if your controller is only about 100 lines long.
Imagine a scenario where there are 500 LOC in your controller and, after a struggle of a couple of hours, you find out that the @instance_var is being modified by some private method.
Helpful tips:
Create small private methods with a single responsibility.
Put ! at the end of the method_name!, indicating that it modifies something. This is especially helpful when you see my_private_method!: the ! makes you realize that it modifies something.
Let's not put code in the controller that does not belong there.
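The bang-naming tip can be sketched in runnable form (all names, and the Struct-based stand-ins for real models, are hypothetical):

```ruby
Address = Struct.new(:city)
Thing = Struct.new(:addresses)

class ThingsController
  def create
    @some_thing = Thing.new([Address.new("  Oslo ")])
    normalize_addresses! # the ! signals that @some_thing is mutated
    @some_thing
  end

  private

  # small, single-responsibility, and named with ! because it mutates state
  def normalize_addresses!
    @some_thing.addresses.each { |a| a.city = a.city.strip }
  end
end
```

Reading the create action alone, the ! on normalize_addresses! already tells you that @some_thing will not be the same object state afterwards.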
There is one more option:
In Controller:
def new
  @some_thing = SomeThing.new(:some_params)
  @some_thing_modified = @some_thing.modify_somehow(params)
  @some_thing_modified.save
end
In SomeThing Model:
def modify_somehow(params)
  result = self.clone
  # ... modify result ...
  return result
end
Because modify_somehow is now a pure function (assuming you don't do anything in the ... modify result ... part that makes it impure), what you gain here is referential transparency. The main benefit of referential transparency is that you can determine what a function/method invocation will do just by looking at its arguments, and get the result of its work only via the return value, not via side effects. This makes your code more predictable, which in turn makes it easier to understand and debug.
There are of course disadvantages: because you create a new object, this option can be less performant; it's also more verbose than its alternatives.
Functional programming concepts, like referential transparency, are not very popular in Rails community (probably because of how OO-centric Ruby is). But referential transparency is there if you want it, with its pros and cons.
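A runnable sketch of the clone-based approach (names hypothetical), showing that the receiver is left untouched:

```ruby
class SomeThing
  attr_accessor :name

  def initialize(name)
    @name = name
  end

  # pure with respect to the receiver: returns a modified copy
  def modify_somehow(suffix)
    result = clone
    result.name = name + suffix
    result
  end
end

original = SomeThing.new("thing")
modified = original.modify_somehow("-v2")
# original.name is still "thing"; modified.name is "thing-v2"
```

Note that Object#clone is a shallow copy: nested mutable objects (arrays, associations) are still shared between original and result, so truly pure behavior may require deeper copying.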

Where to place common logic with RoR

I have a model Project that appears in multiple controllers in an application I'm building as it appears on multiple pages. The where clause for this isn't complicated, per se, but I feel like it is too large to be repeated on every method requiring projects with these constraints.
My question is, where, if possible, does this common call for Projects go? In .NET, I'd have a ProjectService class with a method that would return all projects, and another that returned all projects that satisfied my conditions. I'm new to Rails so I'm struggling to see where this fits in?
You can either use a class method or scopes.
class Project < ActiveRecord::Base
  # example of a scope (the body must be a lambda in Rails 4+)
  scope :awkward_projects, -> { where(awkward: true) }

  # example of a class method
  def self.awkward_projects
    where(awkward: true)
  end
end
It is safe to follow what was once given in an SO answer. Read below and choose carefully.
Quoting an answer
"Generally, I use scope entries for simple one-liners to filter down my result set. However, if I'm doing anything complicated in a "scope" which may require detailed logic, lambdas, multiple lines, etc., I prefer to use a class method. And as you caught, if I need to return counts or anything like that, I use a class method."

Proper technique when passing data between methods

If you have two methods in a model or controller and you want to pass a variable between methods e.g.
def foo
  @param = 2
  @test = 1
  callee
  # do something with @test
end

def callee
  @test += @param
end
Is it better to use instance variables to do this or regular variables like so
def foo
  param = 2
  test = 1
  test = callee(param, test)
  # do something with test
end

def callee(param, test)
  test += param
  test
end
Thanks in advance!
There isn't a definite answer to this question, it depends a lot on the context - the thing you need to ask is "which approach best demonstrates the intent of the code". You should definitely have tests for the model/controller class you are talking about.
As a very rough guideline:
The first approach is commonly seen when the method is part of the class's public API and it alters the internal state of instances of the class (although it may be the sign of a code smell if you have public methods chained as in your example.) This is probably going to be seen more often in a model object.
The second approach is usually seen when the method you are calling is a private convenience method that factors out some code duplication, or a method which does very specialised operations on the parameters and returns some result (in which case it should probably be factored out into a utility class.) This may be seen in model or controller objects.
If you are aiming for maintainable OO code, then the principles of SOLID design are very good guidelines - have a look at Uncle Bob's article about them here:
http://blog.objectmentor.com/articles/2009/02/12/getting-a-solid-start
It depends on your needs. The signature of the method you are passing variables to is also important. If you want the method not to change any of the parameters without your permission, you should use your second implementation. But if you trust the method, you can use the first one. This is a big topic known as "call by reference" versus "call by value". You can examine the following link:
http://www.exforsys.com/tutorials/c-language/call-by-value-and-call-by-reference.html
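For reference, Ruby's own semantics can be demonstrated directly: Ruby passes object references by value, so a callee can mutate the object a parameter refers to, but rebinding the parameter has no effect on the caller.

```ruby
def mutate(list)
  list << 4      # mutates the shared object: visible to the caller
end

def reassign(list)
  list = [9, 9]  # rebinds the local parameter: invisible to the caller
  list
end

nums = [1, 2, 3]
mutate(nums)
reassign(nums)
# nums is now [1, 2, 3, 4]
```

So in the instance-variable version above, callee sees changes because both methods share the object's state, not because Ruby passes locals by reference.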
