Do I need to use multiple databases in Ruby on Rails application? - ruby-on-rails

I am designing a CRM using Ruby on Rails. What do you think: do I need a separate database for every client company, or should I use the same database for everyone?

If they are separate or competing companies (for, say, a white-label CRM), you'll most definitely want to run separate instances, because then you can credibly claim total sandboxing. Otherwise, if you ever inadvertently wrote code that somehow allowed the data from one to display for the other, the game is over. Your customer will run for the hills and tell everyone about their terrible experience with your product.
I would even suggest you run separate instances of your app for each customer. Heroku provides a super simple way to deploy RoR apps, so spinning up a new one whenever you add a customer is a reasonable approach. Of course, if you want a more turnkey solution that allows people to just sign up for an account, you will have to have a single instance that enforces customer data sandboxing in code. Obviously it can be done, but then the separation isn't at the infrastructure level, which is ultimately the safest way.
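In spirit, that code-level sandboxing boils down to never letting a lookup escape the current tenant's scope. A framework-free sketch of the invariant (all class and method names here are illustrative, not from any particular library):

```ruby
# Every lookup is forced through the current tenant's own collection,
# so a record belonging to another company simply cannot be found,
# even with a valid id.
Record = Struct.new(:id, :company_id)

class Company
  attr_reader :id

  def initialize(id, all_records)
    @id = id
    # Scope once, at the boundary: keep only this company's rows.
    @records = all_records.select { |r| r.company_id == id }
  end

  # The only way to fetch a record is through the tenant scope --
  # never a global Record.find, or one missed line leaks data.
  def find_record(record_id)
    @records.find { |r| r.id == record_id }
  end
end
```

In a real Rails app the same effect comes from always querying through the association (`current_company.records.find(...)`) rather than the model class directly.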
Best regards.

I do it with a single database, like this:
class Company < ActiveRecord::Base
  has_many :records

  # Most recently created records first
  def recent_records
    records.order(created_at: :desc)
  end
end

class Record < ActiveRecord::Base
  belongs_to :company
end
Then, in the controller, I can write:
@records = @company.recent_records
And pass that down to the views.
Hope this helps.

Related

What do folks use app/services/ for in Rails applications?

Every now and then I come across this in the Ruby on Rails ecosystem:
class LocalizeUrlService
class Services::UpdateUserRegistrationForOrder
class ProductionOrderEmailService
UserCart::PromotionsService.new(
Shipping::BulkTrackingService.new(bulk_update, current_spree_user)
You can also see an example here
However, I've never seen this in official examples such as the Ruby on Rails Guides, which leads me to believe it is a concept coming from another language/paradigm outside of Rails/OOP.
Where is this paradigm/trend coming from? Is there a tutorial/book that these folks were influenced by? Are they holdouts from the SOA trend of a few years ago?
Is it a good idea to put code in app/service/blah_service.rb ?
If yes, what logic/code can be considered "Service" material.
Is there any kind of code that would/wouldn't belong in a service?
What gem/plugin creates the app/services folder? A vanilla Rails app doesn't ship with it.
Sidenote: personally I take issue with instantiating a service. I feel classes and instantiation are misused by amateur programmers. A class and its instances are "for a thing", while a service is something that "does", so mixins/defs/include should be the way to go, I feel.
Service objects are for things that don't fit well in the normal MVC paradigm. They're typically for business logic that would otherwise make your models or controllers too fat. Typically they have no state (that's held in a model) and do things like speak to APIs or other business logic. Service objects let you keep your models thin and focused, and each service object is also thin and focused on doing one thing.
Rails Service Objects: A Comprehensive Guide has examples of using service objects to manage talking to Twitter, or encapsulating complex database transactions which might cross multiple models.
Service Objects in Ruby on Rails…and you shows creating a service object to manage the new user registration process.
The EngineYard blog posted Using Services to Keep Your Rails Controllers Clean and DRY with an example of a service object which does credit card processing.
If you're looking for the origins, Service objects in Rails will help you design clean and maintainable code. Here's how. is from 2014 when they were coming on the scene.
Service objects have the benefit of concentrating the core logic of the application in a separate object, instead of scattering it across controllers and models.
The common characteristic among all services is their lifecycle:
accept input
perform work
return result
If this sounds an awful lot like what a function does, you're right! They even go so far as to recommend using call as the public method name on the service, just like a Proc. You can think of service objects as a way to name and organize what would otherwise be a big subroutine.
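A minimal, framework-free sketch of that lifecycle (the class names and the result-object shape are illustrative, not from any of the linked articles):

```ruby
# A simple result object: richer than a bare boolean, as recommended.
class ServiceResult
  attr_reader :value, :error

  def initialize(value: nil, error: nil)
    @value = value
    @error = error
  end

  def success?
    @error.nil?
  end
end

# A service object whose single public method is `call`, mirroring
# Proc: accept input, perform work, return a result.
class GreetUserService
  # Class-level convenience so callers can write GreetUserService.call("Ada")
  def self.call(name)
    new(name).call
  end

  def initialize(name)
    @name = name
  end

  def call
    return ServiceResult.new(error: "name is blank") if @name.to_s.strip.empty?

    ServiceResult.new(value: "Hello, #{@name}!")
  end
end
```

Because the public interface matches `Proc#call`, callers can even swap a lambda in for the service during testing.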
Anatomy of a Rails Service Object addresses the difference between a service object and a concern. It covers the advantages a service object has over modules. It goes into some detail about what makes a good service object including...
Do not store state
Use instance methods, not class methods
There should be very few public methods
Method parameters should be value objects, either to be operated on or needed as input
Methods should return rich result objects and not booleans
Dependent service objects should be accessible via private methods, and created either in the constructor or lazily
For example, if you have an application which subscribes users to lists that might be three models: User, List, Subscription.
class List
  has_many :subscriptions
  has_many :users, through: :subscriptions
end

class User
  has_many :subscriptions
  has_many :lists, through: :subscriptions
end

class Subscription
  belongs_to :user
  belongs_to :list
end
The process of adding and removing users to and from lists is easy enough with the basic create and destroy methods and associations and maybe a few callbacks.
Now your boss wants an elaborate subscription process that does extensive logging, tracks statistics, sends notifications to Slack and Twitter, sends emails, does extensive validations... now what was a simple create and destroy becomes a complex workflow contacting APIs and updating multiple models.
You could write those all as concerns or modules, include all that stuff into these three formerly simple models, and write big Subscription.register and Subscription.remove class methods. Now your subscriptions can Tweet and post to Slack and verify email addresses and perform background checks? Weird. Your models are now bloated with code unrelated to their core functionality.
Instead you can write SubscriptionRegistration and SubscriptionRemove service objects. These can include the ability to Tweet and store statistics and perform background checks and so on (or more likely put that into more service objects). They each have one public method: SubscriptionRegistration.perform(user, list) and SubscriptionRemove.perform(subscription). User, List, and Subscription don't need to know anything about it. Your models stay slim and do one thing. And each of your service objects do one thing.
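A stripped-down sketch of what SubscriptionRegistration.perform might look like, with the Slack/Twitter/logging collaborators stubbed out as injectable callables (all names here are illustrative):

```ruby
# One public entry point; dependent collaborators are injected and
# called from private steps, so the models never hear about Slack,
# Twitter, or statistics.
class SubscriptionRegistration
  def self.perform(user, list, notifier: nil, logger: nil)
    new(user, list, notifier: notifier, logger: logger).perform
  end

  def initialize(user, list, notifier:, logger:)
    @user = user
    @list = list
    @notifier = notifier
    @logger = logger
  end

  def perform
    subscription = create_subscription
    log_registration
    notify
    subscription
  end

  private

  def create_subscription
    # Stands in for Subscription.create!(user: @user, list: @list)
    { user: @user, list: @list }
  end

  def log_registration
    @logger&.call("subscribed #{@user} to #{@list}")
  end

  def notify
    @notifier&.call("#{@user} joined #{@list}")
  end
end
```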
As to your specific questions...
Where is this paradigm/trend coming from?
As near as I can tell, it's a consequence of the "fat model / skinny controller" trend; that's how I came to it. While that's a good idea, often your models get TOO fat. Even with modules and concerns, it gets to be too much to cram into a single class. That other business logic which would normally bloat a model or controller goes into service objects.
What gem/plugin creates the app/services folder?
You do. Everything in app/ is autoloaded in Rails 5.
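Concretely, that's all it takes (the service file name below is just an example):

```shell
# Create the folder yourself; Rails will pick it up automatically.
mkdir -p app/services

# Drop service classes into it, e.g.:
cat > app/services/subscription_registration.rb <<'RUBY'
class SubscriptionRegistration
end
RUBY
```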

Is maintaining multiple "sites" using the same "web app" worth the added architecture and code complexity?

This may not at first seem like a code-related question, but it truly is, in the end, a very high-level architecture question with implications for database design and code architecture. So please give it a good moment of thought before judging it as off-topic.
I imagine my situation is not unique in the industry and I would like to learn from others' experience.
I have run a topic based video education website for 8 years. Recently I've been inspired to make one or two other websites that are essentially the same thing, but with a different topic. Everything I'll need for these new sites I already have, including searching, indexing, external content hosting, backend jobs, mailers, etc.
I'm faced with a decision: do I fork the current website for each additional website, make the 5% of alterations required, and set up all of the other services for each? Or do I try to cram a number of "websites" into the same app, keying off the domain name to present a different face and content (registration, shopping cart, menus, content, etc.)?
An example of this issue is Stack Overflow itself. They have many sites "branded" slightly differently. Do they maintain separate apps for each, or do they all run off of one app?
In the first case, upgrades and code development will get out of sync and become a nightmare; in the second, I'll add a significant degree of complexity to, well, mostly everything.
Both seem pretty bad. Which is least bad?
ps, it's a ruby on rails app, in case by some magic, some gem exists for this kind of thing that I don't know about.
If the functionality is the same, then you only need one app. However, if you realize at some point that the apps' functionality is diverging, you could add another app to handle the diverging code but keep the common app for all the shared functionality, and convert to a service-oriented architecture.
Forking and maintaining two sets of identical code is a nightmare that should be avoided.
You're looking at multi-tenancy -- the ability for multiple "tenants" to use a single application. Modern terminology would label this "cloud" software, although since HTTP is stateless, the comparison only goes so far.
CMS
In terms of your app, you'd benefit from reading up on CMS systems.
Specifically, the likes of Tumblr etc. work with the exact pattern you want -- a single application with...
admin area
front-end
customization (background image, styling etc)
users
--
For Rails, you'd be able to create a single instance of an application, with the following structure:
#config/routes.rb
scope constraints: AccountCheck do
  resources :posts, path: "" #-> url.com/
end
#lib/account_check.rb
# Routing constraint: match only requests whose subdomain
# belongs to an existing Account.
module AccountCheck
  def self.matches?(request)
    request.subdomain.present? && Account.exists?(name: request.subdomain)
  end
end
--
#app/models/account.rb
class Account < ActiveRecord::Base
  has_and_belongs_to_many :users
  has_many :posts
end

#app/models/user.rb
class User < ActiveRecord::Base
  has_and_belongs_to_many :accounts
  has_many :posts
end

#app/models/post.rb
class Post < ActiveRecord::Base
  belongs_to :account
  belongs_to :user
end
The above will give you the ability to create a subdomain-centered system, which will display the posts for the current account:
#app/controllers/posts_controller.rb
class PostsController < ApplicationController
  def index
    @account = Account.find_by name: request.subdomain
    @posts = @account.posts
  end
end
Very simple example, but hopefully it shows you how you'd achieve the multi-tenancy aspect.
Gem
If you wanted to take this even further, you'd want to look at data-scoping around the Account model. At the db level this can only be done in PostgreSQL; for that, there's a gem called apartment which manages PostgreSQL schemas.
This is the best I have so far. There's a good book about this pattern by Ryan Bigg: https://leanpub.com/multi-tenancy-rails
If you have total control over the "creative direction" for all the sites, maintaining all of them under a single site would be easier.
If you plan to retain a main site with less frequent updates to the other sites, keeping them separate has the advantage of reduced coupling: you can safely customize a single site/app without impacting the others, and avoid (further) raising the overall code complexity needed to accommodate competing requirements for multiple apps under the same site.

Ruby/Rails Model Relationships Across Disparate Services/APIs (SOA)

I'm working on building a suite of (micro)services using Ruby/Rails (Grape, Rails-API etc.) which feed user-facing web/mobile applications. These services are self-contained/isolated, however there is a need to have some cross-service relationships between models/entities.
In the case of a has_one/belongs_to relationship, I can simply store the ID of the foreign entity within the local model and vice versa. The problem I'm facing is how to handle a has_many/belongs_to or has_many/has_many relationship(s).
For example, if I had an Order and Product model, in a monolithic Rails app I would do the following:
class Order < ActiveRecord::Base
  has_many :order_products
  has_many :products, through: :order_products
end

class Product < ActiveRecord::Base
  has_many :order_products
  has_many :orders, through: :order_products
end

class OrderProduct < ActiveRecord::Base
  belongs_to :order
  belongs_to :product
end
How can this type of a relationship be handled when dealing with disparate services? Is there a 'rails way' to do this?
So far the best option I've come up with is to store the foreign IDs as a hash within the model using something like PostgreSQL's hstore column, but this feels wrong from both a scalability and a data-integrity standpoint.
Any help would be greatly appreciated!
These services are self-contained/isolated, however there is a need to have some cross-service relationships between models/entities.
The Rails Way (TM) is to have a monolithic application for this kind of relationship.
If you have these kinds of constraints and you are building "microservices", then, in my opinion, you are not doing a good job of architecting your platform.
I do not see how you could do it the way you would in a monolithic app. But you could easily validate the presence of the foreign ID and manage the relationship's creation and deletion yourself. Perhaps an example from LivingSocial could help you (quite generous of them to share their best Rails SOA practices):
https://techblog.livingsocial.com/blog/2014/05/06/soa-the-what-the-why-and-the-rules-of-engagement/
Cheers!
The best way to do it is to create a microservice called OrderProduct with one table that correlates the order_id and the product_id.
Alternatively, you can put Order and OrderProduct in one service and Product in the other service... then when you want an order, just search by product_id and you'll get the orders.
I don't think I need to say it but you can put Product and OrderProduct in one service and Order in the other service.
I am assuming that Orders and Products are in separate services.
The most difficult part of developing microservices is defining the boundaries. That is, what functionality goes in which microservice. This is surprisingly difficult to do before building the services, and the reason that some people suggest building a monolithic app first and then cutting out the microservices.
One big question to answer is why are Orders and Products in separate services? How did you decide to separate them? Does it make more sense for them to be in the same service?
I think the answer to your question depends on which service is going to access Orders and Products. Does one service need access both?
You could store an array of foreign key ids on one service. This would work if one service only needs to pull data from the other. This is the same thing as "simply storing the ID of the foreign entity within the local model". The only difference is that you would store an array of ID's.
However, there may be use cases that would require storing an array on both services, so that either one would need to know about the has_many relationship on the other. If this is the case then I think that the design is not good. You would have to duplicate the data on both services. This would make it easy for the data to get out of sync.
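Sketched without a framework, the array-of-ids approach on the Orders side might look like this (FakeProductClient stands in for an HTTP client against the Products service; all names are illustrative):

```ruby
# An Order stores only the foreign product ids; the actual Product
# data lives in (and is fetched from) the other service.
class Order
  attr_reader :id, :product_ids

  def initialize(id, product_ids)
    @id = id
    @product_ids = product_ids
  end

  # Resolve the has_many by asking the Products service for each id.
  def products(client)
    @product_ids.map { |pid| client.fetch(pid) }
  end
end

# Stand-in for the Products service's API client.
class FakeProductClient
  def initialize(catalog)
    @catalog = catalog
  end

  def fetch(product_id)
    @catalog.fetch(product_id)
  end
end
```

In a Rails app the `product_ids` column could be a PostgreSQL integer array; the key point is that the resolving service, not the database, joins the two sides.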
One possible solution would be to share the database between two services.

ActiveRecord relations between models on different engines

I'm building a Rails app that is probably going to have a large number of models. For now, let's say that I need Users, and each User will have a personal Blog.
I've been developing the User-related part in a separate engine (let's call it AuthEngine) that encapsulates the User model, authentication, etc. At the time this seemed like the best approach.
In parallel, my colleague was developing another engine, to deal with the blog features (let's call it BlogEngine).
Now he needs to access user data which is stored in the other engine. He achieved this by defining:
BlogEngine.user_class = "AuthEngine::User"
This way, he can easily ask for user data, even though it isn't stored in the same engine. The problem comes when we want to define relationships between models.
He can easily say that each blog post belongs to a user:
belongs_to :user, class_name: BlogEngine.user_class
But as far as I know he can't specify that each User has many posts, since the User model lives in the other engine.
The consequence of this is that he can't do things like @user.posts, but instead has to do Post.find_all_by_user(@user)
Is there a more elegant way to deal with this?
I also considered the possibility that each engine could simply generate its models inside the app, removing the encapsulation, but since the number of models will grow quickly, I think this would make the app more of a mess and not as maintainable.
I think you should reopen the User class inside BlogEngine in order to define the has_many :posts relation; that should be the appropriate way.
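A framework-free sketch of what that "reopening" looks like (in a real engine this would typically live in an initializer, or in a to_prepare block so it survives code reloading in development; all names are illustrative):

```ruby
# AuthEngine owns the User class...
module AuthEngine
  class User
    attr_reader :id, :name

    def initialize(id, name)
      @id = id
      @name = name
    end
  end
end

# ...and BlogEngine reopens it to add the inverse association.
AuthEngine::User.class_eval do
  def posts
    # In Rails this would be the has_many lookup,
    # e.g. Post.where(user_id: id)
    ["first post by #{name}"]
  end
end
```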
What about having a common models gem/engine, containing only the relationships, as a dependency for your engines? That way you would have access to all relevant relationships in every engine.

Using join table in a one to many association for legacy tables

I'm trying to marry a rails app to a related (and seriously crufty and wet) PHP app but having both apps share a database. So far it's working happily. I've encountered a case where I need a one to many relationship between a new model and a legacy one. Normally I'd simply use:
class LegacyModel < ActiveRecord::Base
  belongs_to :new_model
end

class NewModel < ActiveRecord::Base
  has_many :legacy_models
end
But in order to do this I'd end up having to add a column to the legacy table, and I'd rather not do that; the legacy app is fragile enough that I really don't want to risk altering its tables; I just want to use them in a read-only sense. So I was considering using a join table where the legacy_model_id is unique and the new_model_id is not. This feels like I'm going off the Rails, but I want that association fairly badly, so I'm wondering:
1) Is this an acceptable approach?
2) Are there any more better solutions?
Please note that my sample code is pseudo-code and not representative. I just want to get the idea across.
Thanks in advance for any help you can give me.
That's a perfectly fine approach as far as I can see. Even if the model you were trying to connect to were part of your current Rails app, having an explicit join table has many advantages (like being able, either right away or later on, to add more columns to it in order to capture more data).
I myself am working on something similar, except the other database I am touching is not legacy but quite new; still, I too only want to connect to it in a read-only fashion, so I am making a similar join table. Nothing wrong with that!
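The invariant that makes the join table a one-to-many rather than a many-to-many is that each legacy id appears at most once. In Rails terms that's a join model with belongs_to on both sides plus a uniqueness validation (and index) on legacy_model_id; a framework-free sketch of the idea (names are illustrative):

```ruby
# The legacy table is never altered; the link table maps each legacy
# id to at most one new id, giving NewModel a has_many over read-only
# legacy rows.
class LinkTable
  def initialize
    @links = {}   # legacy_model_id => new_model_id (legacy id unique)
  end

  def link(legacy_id, new_id)
    raise "legacy id already linked" if @links.key?(legacy_id)
    @links[legacy_id] = new_id
  end

  def legacy_ids_for(new_id)
    @links.select { |_, nid| nid == new_id }.keys
  end
end
```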
