How do I partition a Rails app into logical parts?

So I am a fairly seasoned Django developer and I've been using Ruby on Rails for about a year. I'm working on a project with, let's say, 100 models. In my actual scenario, I'm writing a "component" for the app which requires two models, one of which I wanted to call Event; however, another part of the software already uses an Event model in a totally different context. In Django, this is resolved by splitting the "components" into "apps", and models can have the same name because they are in different apps.
Is there a similar pattern in Rails for separating some models into their own package to avoid name conflicts, but also so that each separate component can have its own README describing how that component behaves? I don't require that it have its own controllers/views, since right now we have a single frontend interface, but I would be interested to hear if there is a pattern for that too. I've heard the term "engines" in Rails, but that seems to imply total decoupling and a unidirectional dependency scheme, as for a generic 3rd-party app, whereas in my case there is still some coupling between this "component" and the rest of the app; it still seems reasonable to have some way of grouping models logically.
Also, I'm wondering how this case is handled in general. When a web application grows organically to have tons and tons of models, is there a standard pattern for managing this complexity?

You can namespace your models using modules.
For example, Foo::Bar or Baz::Bar, like this:
module Foo
  class Bar < ApplicationRecord
  end
end
To associate this model with the foo_bars table (by default Foo::Bar would look for a table named bars), define table_name_prefix on the module:
module Foo
  class << self
    def table_name_prefix
      'foo_'
    end
  end
end
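With that prefix in place, the namespaced class and the prefixed table stay in sync. A minimal illustration of the effect, assuming a foo_bars table exists (the name column is made up):

# Foo::Bar now maps to the prefixed table:
Foo::Bar.table_name                  # => "foo_bars"
Foo::Bar.create!(name: "example")    # hypothetical `name` column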

Related

What do folks use app/services/ for in Rails applications?

Every now and then I come across this in the Ruby on Rails ecosystem:
class LocalizeUrlService
class Services::UpdateUserRegistrationForOrder
class ProductionOrderEmailService
UserCart::PromotionsService.new(
Shipping::BulkTrackingService.new(bulk_update, current_spree_user)
You can also see an example here
However, I've never seen this in official sources such as the Ruby on Rails Guides, which leads me to believe it is a concept coming from another language or paradigm outside Rails/OOP.
Where is this paradigm/trend coming from? Is there a tutorial/book
that these folks got influenced by? Are these folks holdouts from the SOA trend of a few years ago?
Is it a good idea to put code in app/services/blah_service.rb?
If yes, what logic/code can be considered "service" material?
Is there any kind of code that would or wouldn't belong in a service?
What gem/plugin creates the app/services folder? A vanilla Rails app doesn't ship with it.
Sidenote: personally, I have an issue with instantiating a service. I feel classes and instantiation are misused by amateur programmers. A class and its instances are "for a thing", while a service is something that "does", so mixins/defs/include should be the way to go, I feel.
Service objects are for things that don't fit well in the normal MVC paradigm. They're typically for business logic that would otherwise make your models or controllers too fat. Typically they have no state (that's held in a model) and do things like speak to APIs or other business logic. Service objects let you keep your models thin and focused, and each service object is also thin and focused on doing one thing.
Rails Service Objects: A Comprehensive Guide has examples of using service objects to manage talking to Twitter, or encapsulating complex database transactions which might cross multiple models.
Service Objects in Ruby on Rails…and you shows creating a service object to manage the new user registration process.
The EngineYard blog posted Using Services to Keep Your Rails Controllers Clean and DRY with an example of a service object which does credit card processing.
If you're looking for the origins, Service objects in Rails will help you design clean and maintainable code. Here's how. is from 2014 when they were coming on the scene.
Services have the benefit of concentrating the core logic of the application in a separate object, instead of scattering it around controllers and models.
The common characteristic among all services is their lifecycle:
accept input
perform work
return result
If this sounds an awful lot like what a function does, you're right! They even go so far as to recommend using call as the public method name on the service, just like a Proc. You can think of service objects as a way to name and organize what would otherwise be a big subroutine.
Anatomy of a Rails Service Object addresses the difference between a service object and a concern. It covers the advantages a service object has over modules. It goes into some detail about what makes a good service object including...
Do not store state
Use instance methods, not class methods
There should be very few public methods
Method parameters should be value objects, either to be operated on or needed as input
Methods should return rich result objects and not booleans
Dependent service objects should be accessible via private methods, and created either in the constructor or lazily
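As a small illustration of those points, a service object following that anatomy might look roughly like this (the class, the ReportMailer collaborator, and the result shape are invented for the example, not taken from the article):

# app/services/report_generator.rb -- hypothetical example
class ReportGenerator
  # Rich result object instead of a bare boolean
  Result = Struct.new(:success, :report, :errors, keyword_init: true) do
    def success?
      success
    end
  end

  def initialize(mailer: ReportMailer)  # ReportMailer is a hypothetical ActionMailer
    @mailer = mailer                    # dependent collaborator injected via the constructor
  end

  # Single public instance method, named `call` like a Proc
  def call(user)
    report = build_report(user)
    @mailer.report_ready(user, report).deliver_later
    Result.new(success: true, report: report, errors: [])
  rescue StandardError => e
    Result.new(success: false, report: nil, errors: [e.message])
  end

  private

  def build_report(user)
    # ...the actual domain work on the input...
  end
end

# Usage: result = ReportGenerator.new.call(current_user)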
For example, if you have an application which subscribes users to lists, that might be three models: User, List, and Subscription.
class List < ApplicationRecord
  has_many :subscriptions
  has_many :users, through: :subscriptions
end

class User < ApplicationRecord
  has_many :subscriptions
  has_many :lists, through: :subscriptions
end

class Subscription < ApplicationRecord
  belongs_to :user
  belongs_to :list
end
The process of adding and removing users to and from lists is easy enough with the basic create and destroy methods and associations and maybe a few callbacks.
Now your boss wants an elaborate subscription process that does extensive logging, tracks statistics, sends notifications to Slack and Twitter, sends emails, does extensive validations... now what was a simple create and destroy becomes a complex workflow contacting APIs and updating multiple models.
You could write those all as concerns or modules, include all that stuff into these three formerly simple models, and write big Subscription.register and Subscription.remove class methods. Now your subscriptions can Tweet and post to Slack and verify email addresses and perform background checks? Weird. Your models are now bloated with code unrelated to their core functionality.
Instead you can write SubscriptionRegistration and SubscriptionRemove service objects. These can include the ability to Tweet and store statistics and perform background checks and so on (or more likely put that into more service objects). They each have one public method: SubscriptionRegistration.perform(user, list) and SubscriptionRemove.perform(subscription). User, List, and Subscription don't need to know anything about it. Your models stay slim and do one thing. And each of your service objects do one thing.
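On the calling side, a controller only needs to hand the input to the service and react to the outcome. A rough sketch, assuming the SubscriptionRegistration.perform interface above returns some kind of result object (the result methods and current_user helper are assumptions for illustration):

# app/controllers/subscriptions_controller.rb -- illustrative only
class SubscriptionsController < ApplicationController
  def create
    list = List.find(params[:list_id])
    result = SubscriptionRegistration.perform(current_user, list)  # current_user e.g. from Devise

    if result.success?  # hypothetical result object
      redirect_to list, notice: "You are subscribed."
    else
      redirect_to list, alert: result.errors.to_sentence
    end
  end
end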
As to your specific questions...
Where is this paradigm/trend coming from?
As near as I can tell, it's a consequence of the "fat model / skinny controller" trend; that's how I came to it. While that's a good idea, often your models get TOO fat. Even with modules and concerns, it gets to be too much to cram into a single class. That other business logic which would normally bloat a model or controller goes into service objects.
What gem/plugin creates the app/services folder?
You do. Everything in app/ is autoloaded in Rails 5.
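For example, simply creating the directory and following the usual file/constant naming convention is enough for the autoloader to pick the class up (the names below echo one of the examples quoted in the question and are otherwise arbitrary):

# app/services/user_cart/promotions_service.rb
module UserCart
  class PromotionsService
    def call(cart)
      # ...apply promotions to the cart...
    end
  end
end

# No require needed; call it from anywhere:
#   UserCart::PromotionsService.new.call(cart)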

Rails Engines scalability issue

I need to find a way to scale my Rails monolith application with the help of Rails engines.
Goal: I have a database connection timeout issue, and the monolith has more than 200 models. What we want to do is divide our models into a tree-like structure of engines, so that we can use a separate database for each engine.
Use case: let's say we have engine A as the base engine, which is included in engines B and C. Both B and C live on the same level of the tree.
So I have models segregated into different engines.
Engine A: has all data related to users.
class User
end
Engine B: has all data related to products.
class Product
end
Engine C: has all data related to reports.
class Report
end
Now the main issue comes when defining the associations. Earlier we had associations and several other methods which access those associations. For example:
class User
  has_many :products
  def get_title_product
    products.pluck(:title)
  end
end
Now I cannot define this in engine A, as the products table doesn't live there.
Option:
What I know is that I would have to reopen the User model inside engine B and define all the associations and the get_title_product logic related to that domain in engine B itself.
I can't include engine B in engine A because it would result in a circular dependency.
I don't want to follow the above approach because it will get messy; my application is significantly large, and I don't think it follows Rails best practices.
Thanks in advance.
Your post contains many questions. You're asking about database sharding, architecture with Rails engines, and performance/scalability with the timeout issue.
Performance / timeouts
First, your timeout issue is not related to the number of models as you suggest. The number of models has no impact at all on performance. To find the performance problem or the bottleneck, you should use a monitoring tool. This one is my favorite: https://www.rorvswild.com (DISCLAIMER: I'm the author ^_^). There are other competitors; use the one you like most.
For the timeouts, you should probably check your database config. There isn't enough information here to go deeper.
Database sharding
That is not trivial at all, since you cannot JOIN or reference foreign keys across tables that are not in the same database. That is why you have to choose carefully where to shard your database; ideally it's where you have the fewest joins. It's a long job with an important impact on your code base, and a Stack Overflow post is not enough to cover sharding. Fortunately there are a lot of articles and gems to help you with it.
Just be sure you understand that you can split the load across many databases, but it comes at an extra cost to your code base.
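If you do go down this road, one common way to give each engine its own database is a per-engine abstract base class that establishes its own connection. A rough sketch, assuming a products_db entry exists in config/database.yml (all names here are invented):

# Inside engine B
module EngineB
  class ApplicationRecord < ActiveRecord::Base
    self.abstract_class = true
    establish_connection :products_db   # named configuration from database.yml
  end
end

class Product < EngineB::ApplicationRecord
end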
For relationships across databases you cannot use the Rails built-in has_many and so on. You have to define the relationships yourself, or use a gem which will help with that. To give you an idea:
class User
  def products
    Product.where(user_id: id)
  end
end
Rails engines
They are great for building reusable pieces of functionality across applications (http://guides.rubyonrails.org/engines.html). It looks like reuse is not your goal, so I'm afraid you're going in the wrong direction.
If you don't want to reopen the class, you can use a module:
module HasProducts
  def self.included(model)
    model.has_many(:products)
  end

  def get_title_product
    products.pluck(:title)
  end
end
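The module would live in engine B alongside Product, and engine B would mix it into User at boot time, for example from the engine's to_prepare hook (the file path and module names are illustrative):

# In engine B, e.g. lib/engine_b/engine.rb
module EngineB
  class Engine < ::Rails::Engine
    config.to_prepare do
      # Layer the product-related behaviour onto engine A's User model
      User.include(HasProducts)
    end
  end
end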

RDBMS and Graph Database on the same Rails application

I'm developing a web app that has several "subapps" inside it. For some of them an RDBMS is clearly the weapon of choice. The issue is that lately I came up with an idea for a nice little subapp whose logic and performance would benefit greatly from using a graph-based database.
My problem is: This subapp is important and graph is the way to make it happen. On the other hand, the others are just fine on a RDBMS and in some cases migrating them to graph would add unnecessary complexity.
So, is it possible to have two heterogeneous database systems running on the same Rails app, perhaps using each controller to specify where to connect?
This is absolutely possible, but it's not something you'd handle at a controller level: it is the responsibility of each model class to define how its data is stored, for example by subclassing from ActiveRecord::Base or including Mongoid::Document or Neo4j::ActiveNode.
There's nothing in particular you need to do. As long as the objects all conform to the Active Model interface (the above all do), then things like link_to 'Person', @person will still work.
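As a rough illustration, two models in the same app could each pick their own persistence layer like this (assuming the neo4j gem is installed; class and attribute names are made up):

# app/models/invoice.rb -- stored in the relational database
class Invoice < ApplicationRecord
  belongs_to :customer
end

# app/models/person.rb -- stored in the graph database via the neo4j gem
class Person
  include Neo4j::ActiveNode
  property :name, type: String
end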

Sharing AR models between Rails applications

I have a problem that I have been trying to solve for a while now. I have 4 different Rails applications using the same database, meaning they need to use the same models and have the same migrations. I initially solved the problem by creating a Rails engine packaged into a gem, which then carries all the models and migrations with it. Now I realize that there are pieces of functionality that only one application needs but the others do not. For example, the admin application needs methods for providing sortable tables for all models, while the other applications do not need this functionality at all.
So my idea was to find a way where I can provide the "base" models from the gem, while augmenting these base models in my specific applications to add additional functionality when needed. What I tried first was inheritance:
class User < Base::User
end
This does not work, though, because now you have 2 User models in your load path (User and Base::User), and when querying associations it always picks the "closest" class for the associated record class, meaning that when you have a Model::Account which belongs_to :user, it will pick Model::User as the association class, not User. I tried reversing the AR type compute method, but this only resulted in more problems.
I can technically provide all of my models from the base engine (gem), but the issue here is: how do I extend these models in my application? .class_eval feels really dirty, inheritance does not work, and providing base functionality as mixins means the "base" models do not look and feel like models at all. My goal is to cause as little friction as possible for the other developers: I want them to be able to define their models in the gem like they normally do, and then have an easy way to extend that functionality in other applications.
Has anyone solved this problem before or have any suggestions? Or how do you guys solve this problem in your larger applications? Any help would be appreciated.
This is mentioned in the Rails guides. It describes class modification with the Decorator pattern.
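In practice that means the shared model stays in the gem/engine and each application layers its extra behaviour on with a decorator that is reopened at boot, roughly like this (the path and method names are illustrative, not a prescribed layout):

# In the admin application:
# app/decorators/models/base/user_decorator.rb
Base::User.class_eval do
  # admin-only behaviour layered onto the shared model
  def self.sortable_columns
    %w[name email created_at]
  end
end

The guide loads such decorator files from a config.to_prepare block so they are re-applied on every code reload.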

Namespacing models in a Rails application

I had a discussion recently with a friend of mine who is also a RoR developer. We argued about how Rails models should be managed. Personally I like to leave in the default namespace only the root models (e.g. User, Article, Bill etc.), and the dependent models go to a module (e.g. User::Profile, User::Activity) with the name of the root model they are associated with.
On the other hand, I've seen many projects which had like 100 models in the default namespace, called things like user_profile, user_activity and so on. Judging by Java (Spring) development, the Java community tends to organize classes in packages and have them grouped logically, which I find very appealing.
So the question is: are there any drawbacks to grouping models in modules (other than the extra :class_name in association definitions), and are there any specific reasons why people usually don't do it?
Although namespacing has its advantages, it does require adding exceptions throughout your models. Foo::Bar presumes a table name of bars and likewise bar_id for associations, whereas you might prefer foo_bars and foo_bar_id to be used instead.
If you really feel strongly about this, you might want to see if there's an add-on that fixes this for you, or implement your own extension that does.
The only case where I've used namespaces is for add-ons that are to be used in third-party applications, where I don't want to claim root-level model names as that would be annoying. The extra effort in this case is worthwhile.
If you're bothered by seeing 100+ model files without any grouping, you'll probably be equally annoyed by seeing 100+ tables with no grouping, and that's generally something you can't fix.
Controllers lend themselves to grouping quite naturally, but models aren't as easily accommodated, at least not with stock ActiveRecord.
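For reference, the kind of per-association exceptions being described look like this: without a prefix, Foo::Bar is presumed to live in bars and be referenced by a bar_id column, so using foo_-prefixed names means spelling things out (the column name below is illustrative):

class Order < ApplicationRecord
  # Foo::Bar would default to a bar_id column; be explicit to use foo_bar_id instead
  belongs_to :bar, class_name: "Foo::Bar", foreign_key: :foo_bar_id
end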
