Consider a web application (MVC, for example Rails) offered to multiple clients as a service.
How should this be designed?
One application instance per client (+ one database per client)?
One instance for all clients (+ one database for all clients)?
The former is simple, but... "inefficient". How about the latter (best practices, design patterns)? How do you separate client data? For example: worker "A" of client "1" has two documents, worker "B" of client "2" has three documents. How do you build model associations so that other users' (and other clients') data is protected? I don't think joining every query with the Client model is a good solution.
I would suggest having a look at an earlier response on multi-tenant apps in Ruby on Rails.
It really depends on your use case, but the simplest way to handle this is a single database with data scoped to each client. You can go from there depending on your requirements/budget.
I am a big fan of the PostgreSQL schema system detailed in that link :P
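To make the scoped, single-database approach concrete for the worker/document example in the question, here is a minimal sketch; the model names and the current_client helper are assumptions, not code from any of the linked answers:

```ruby
# app/models/client.rb
class Client < ActiveRecord::Base
  has_many :workers
  has_many :documents, :through => :workers
end

# app/models/worker.rb
class Worker < ActiveRecord::Base
  belongs_to :client
  has_many :documents
end

# app/models/document.rb
class Document < ActiveRecord::Base
  belongs_to :worker
end

# In controllers, always go through the authenticated client instead of
# querying Document directly, so records from other clients can never leak:
#   current_client.documents
#   current_client.workers.find(params[:worker_id]).documents
```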
This MSDN article explains multi-tenant data architectures well.
Probably obvious, but I'll note it anyway: Rails' default configuration of storing session information client-side in cookies means that all application instances are equally able to service requests.
Another article in the series is also informative in terms of identifying shared services such as monitoring that you will need.
Related
I originally wrote my Ruby on Rails application for one client. Now I am changing it so that it can be used for different clients. My end goal is that some user (not me) can click a button and create a new project, and all the necessary changes (new schema, new tables, handling of code) are generated without anyone needing me to edit a database.yml file or add new schema definitions. I am currently using scoped access: I have a Project model, and the other associated models have a project_id column.
I have looked at other posts regarding multi-tenant applications in Rails. A lot of people seem to suggest creating a different schema for each new client in Postgres. For me, however, it is not very useful for a new client to have a different schema in terms of the data model: each client will have the same tables, rows, columns, etc.
My vision for each client is that my production database first has a table of the different projects/clients, and each entry in that table links to a set of tables that are pretty much the same but hold different data. In other words, a table of tables: the first table maps to a different set of data, with the same structure, for each client.
Is the vision I've described at all similar to the way Postgres implements different "schemas"? Does it look like nested tables, or does Postgres have to query all the information in the database anyway? I do not currently use Postgres, but I would be willing to learn it if it fits the design. If you know of database software that works with Rails and fits my needs, please do let me know.
Right now I am using scopes to accomplish multi-tenancy, but it does not feel scalable or clean. It does, however, make it very easy for a non-technical user to create a new project, provided I give them the information to fill in. Do you know if it is possible, with the multi-schema Postgres approach, to have it work automatically after a user clicks a button? I would prefer that this be handled by Rails and not by an external script, if possible (please do advise either way).
Most importantly, do you recommend any plugins, or should I adopt a different framework for this task? I have found Rails to be limited in some cases of abstraction like the above, and this is the first time I have run into a Rails scaling issue.
Any advice related to multi-tenant applications or my situation is welcome. Any questions for clarification or additional advice are welcome as well.
Thanks,
--Dave
MSDN has a good introduction to multi-tenant data architecture.
At one end of the spectrum, you have one database per tenant ("shared nothing"). "Shared nothing" makes disaster recovery pretty simple, and has the highest degree of isolation between tenants. But it also has the highest average cost per tenant, and it supports the fewest tenants per server.
At the other end of the spectrum, you store a tenant id number in every row of every shared table ("shared everything"). "Shared everything" makes disaster recovery hard--for a single tenant, you'd have to restore just some rows in every shared table--and it has the lowest degree of isolation. (Badly formed queries can expose private data.) But it has the lowest cost per tenant, and it supports the highest number of tenants per server.
My vision for each client is that my production database first has a table of the different projects/clients, and each entry in that table links to a set of tables that are pretty much the same but hold different data. In other words, a table of tables: the first table maps to a different set of data, with the same structure, for each client.
This sounds like you're talking about one schema per tenant. Pay close attention to permissions (SQL GRANT and REVOKE statements, and ALTER DEFAULT PRIVILEGES).
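As a rough illustration of the permission work involved with one schema per tenant (the schema and role names here are invented, and the exact migration style depends on your Rails version), a migration might lock a tenant schema down like this:

```ruby
class LockDownTenantASchema < ActiveRecord::Migration
  def self.up
    # Only the tenant's own database role may use the tenant's schema.
    execute "REVOKE ALL ON SCHEMA tenant_a FROM PUBLIC"
    execute "GRANT USAGE ON SCHEMA tenant_a TO tenant_a_role"

    # Tables created later in this schema get the same grants automatically.
    execute <<-SQL
      ALTER DEFAULT PRIVILEGES IN SCHEMA tenant_a
      GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO tenant_a_role
    SQL
  end

  def self.down
    execute "REVOKE ALL ON SCHEMA tenant_a FROM tenant_a_role"
  end
end
```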
There are two railscasts on multitenancy: one using scopes and subdomains, and another to help with handling multiple schemas.
There is also the multitenant gem, which could help with your scopes, and the apartment gem for handling multiple schemas.
Here is also a good presentation on multitenancy-with-rails.
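For reference, this is roughly how the apartment gem is used for schema-per-tenant setups (the tenant names are placeholders, and the exact API differs between gem versions - older releases used Apartment::Database instead of Apartment::Tenant):

```ruby
# Gemfile
gem 'apartment'

# config/initializers/apartment.rb
Apartment.configure do |config|
  # Models that live in the shared (public) schema rather than in each tenant schema.
  config.excluded_models = ["Client"]
end

# Creating a schema for a new client, e.g. from the "create project" button:
Apartment::Tenant.create("client_1")

# Switching the connection to that client's schema for the current request:
Apartment::Tenant.switch!("client_1")
```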
Don't forget about using default scopes. While creating named scopes the way you are now works, it does feel like it could be done better. I came across this guide by Samuel Kadolph regarding this issue a few months ago; it looks like it could work well for your situation and has the benefit of keeping your application free of some PostgreSQL-only features.
Basically, the way he describes setting the application up involves adding the concept of tenants to your application and then using it to scope the data at query time in the database.
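A bare-bones sketch of that idea - a per-request tenant plus a default_scope - might look like the following; the Tenant model, the thread-local storage, and the way the tenant is derived from current_user are assumptions rather than Kadolph's actual code, and the block form of default_scope needs Rails 3.1 or later:

```ruby
# app/models/document.rb
class Document < ActiveRecord::Base
  belongs_to :tenant
  # Every query on Document is automatically restricted to the current tenant.
  default_scope { where(:tenant_id => Tenant.current_id) }
end

# app/models/tenant.rb
class Tenant < ActiveRecord::Base
  # Stored per request; Thread.current keeps it from leaking between threads.
  def self.current_id
    Thread.current[:tenant_id]
  end

  def self.current_id=(id)
    Thread.current[:tenant_id] = id
  end
end

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  around_filter :set_current_tenant

  private

  def set_current_tenant
    Tenant.current_id = current_user.tenant_id
    yield
  ensure
    Tenant.current_id = nil
  end
end
```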
I'm about to finish building a simple subscription-based support-ticket web app, and I'm setting up authorization. But since it's going to be my very own web app that I'm going to deploy, I'm wondering about this.
Do you create a separate database per account opened?
Let's say you have this support ticket web app. You have ONE and ONLY ONE account owner. The account owner can set up agents that can respond to support tickets. Also, there are customer roles that open support tickets.
So, as you can see, the database will contain users, support tickets and more.
What is the best way to go?
1) Create one database for the whole application? That way, every time somebody signs up, everything is added to the same database along with the other tickets, user data and everything else. Or...
2) Every time someone signs up, create a separate database for that account's subscription.
I'm thinking that maybe option number 2 would be the best choice for security and data-integrity purposes. If so, how have you gone about tackling this issue?
It sounds like what you want is Multitenancy:
Multitenancy refers to a principle in software architecture where a single instance of the software runs on a server, serving multiple client organizations (tenants). Multitenancy is contrasted with a multi-instance architecture where separate software instances (or hardware systems) are set up for different client organizations. With a multitenant architecture, a software application is designed to virtually partition its data and configuration, and each client organization works with a customized virtual application instance.
- Wikipedia article on Multitenancy
This article, while a little dated, captures the general idea of how I would go about doing it: Simple Rails Multi-Tenancy. It's clean and efficient, and it saves you from writing code that you don't need to.
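The gist of that approach, paraphrased rather than copied from the article (the Account model and subdomain column are assumptions), is to resolve the tenant from the request's subdomain once and then scope everything through it:

```ruby
class ApplicationController < ActionController::Base
  before_filter :find_current_account

  private

  # e.g. acme.supportapp.com -> the Account whose subdomain is "acme"
  def find_current_account
    @current_account = Account.find_by_subdomain!(request.subdomains.first)
  end
end

class TicketsController < ApplicationController
  def index
    # Never Ticket.all -- always go through the current account.
    @tickets = @current_account.tickets
  end
end
```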
You should go for option #1. Number 2 is almost never an option (there are probably cases where it is good, but I can't think of one at the moment).
You are right about the security benefits (well, in a sense), but it also creates a lot of other problems that you will have to think about.
Having a different database for each user means that for each request (remember, HTTP is stateless) you have to open a new connection to the database, do whatever needs to be done, and then close the connection again, instead of using the connection pooling built into Rails. This affects performance a great deal.
Administration becomes more of a hassle the more databases you have. Also, having multiple databases on a server requires more resources than a single bigger database.
You would have to circumvent Rails' entire connection handling, since it assumes one database per application. It is easy to change the database for specific models, but it adds more places where things can go wrong.
Rails does have good functionality for scoping and separating data within the same database, exactly for the kind of use case you are describing.
I'm currently testing the waters with Mongoid and have so far started on an e-commerce store. Now, of course, Mongoid doesn't have transactions. I'd ideally like to use Mongoid for most of the app, including authentication, authorization, product information, etc.
However, the lack of transactions necessitates falling back to an RDBMS, which would be used purely to record financial transactions.
Is this possible in Rails, and has anyone done it?
I have limited experience with Rails in general, but I imagine having the secure part mounted as an engine, with URLs scoped under secure.myapp.com or myapp.com/secure/, and the user redirected to SSL while Rack takes care of things like shared sessions.
Would this work? Or has anyone found a better way of implementing this?
It is possible to mix MongoDB and a traditional RDBMS, but you may have to do some extra coding on your part if you want ActiveRecord objects to communicate with MongoDB objects, since the ORMs are different. Keep in mind that while it is true that MongoDB does not support transactions across multiple documents, it does support 'transactional' atomic updates - which means that if all the data you are updating is contained within a single document, you don't have to worry about transactions. MongoDB also supports safe updates, allowing you to verify that data has been written to n different replica servers and has been persisted to disk.
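To make the single-document point concrete, here is a small Mongoid sketch (the models are hypothetical): if an order embeds its line items, the order and its items are written together in one document, so no multi-document transaction is needed.

```ruby
class Order
  include Mongoid::Document
  field :status, :type => String
  embeds_many :line_items
end

class LineItem
  include Mongoid::Document
  field :sku, :type => String
  field :quantity, :type => Integer
  embedded_in :order
end

# The order and all of its embedded line items live in a single document,
# so this save is atomic as far as MongoDB is concerned.
order = Order.new(:status => "pending")
order.line_items.build(:sku => "ABC-1", :quantity => 2)
order.save
```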
As for shared sessions between HTTPS and HTTP - this is not something you have to worry about. You'll define your session store as MongoDB, MySQL, Memcached or, my recommendation, cookies. As long as you define your cookie domain as '.myapp.com', the cookies will be shared across all subdomains of your application, regardless of the protocol.
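For example, with the cookie store in a Rails 3 app the shared domain is a one-line setting (the application and key names are placeholders):

```ruby
# config/initializers/session_store.rb
MyApp::Application.config.session_store :cookie_store,
  :key    => '_myapp_session',
  :domain => '.myapp.com'  # shared across www.myapp.com, secure.myapp.com, etc.
```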
While I can't comment directly on the Rails aspect of the question: as noted in the first poster's response, MongoDB does support atomic ('transactional') updates within a single document. It's probably simpler to implement your entire system in Mongo, or entirely in an RDBMS.
The real question is: what is the motivation for using Mongo here? What are you hoping to gain from a document database model? Do you just want to dump RoR objects directly into Mongo?
Just a suggestion (abstractly), but you could strictly define your objects up front and represent that definition in your RDBMS. It will probably save you a lot of time if you don't have a clear motivation for using Mongo. Mongo is an awesome technology, but it's best for sorting through and cataloging data, rather than representing strict data structures (not that it's incapable of doing so, necessarily, but with a document database you have a lot more flexibility in the content of each object within your DB).
Good luck!
I was wondering if somebody has some insight on this issue.
A little background:
We've been using Rails to migrate from an old dBase- and Visual Basic-based system to build an internal company intranet that does things like label printing, inventory control, shipping, etc. - basically an ERP.
The Dilemma
Right now we need to replace an old customer-facing website, done in Java, that connects to our internal system for our clients to use. We want to be able to pull information like inventory, order placement and account statements from our internal system and expose it live on the site. The reason is that we take orders on the website, through fax and phone, and sometimes we have walk-ins. So sometimes (very rarely, though) even a short delay in an inventory update on our old Java site causes us to put an order on backorder, because we sell the same item to two customers within half an hour. It's usually fixed within one day, but we want to avoid this in the future.
Actual Question
Does anyone have any suggestions on how to accomplish this in a better way?
Here are three options that I see:
a) Build a separate Rails app on a web server, that will connect to the same DB that our internal app connects to.
+++ Pluses: Live data - the same thing our internal apps see, i.e. orders are created in real time and inventory is depleted right away.
--- Minuses: Potential security risk, duplication of code - i.e. I need to duplicate all the controllers, models, views, etc. that deal with orders.
b) Build a separate Rails app on a web server, that will connect to a different DB from our internal app.
+++ Pluses: Less security exposure.
--- Minuses: Extra effort to sync the web DB and internal DB (or to use a web service like a REST API), extra code to handle inventory depletion and order number creation, duplication of code - i.e. I need to duplicate all the controllers, models, views, etc. that deal with orders.
c) Expose internal app to the web
+++ Pluses: All the problems from above are eliminated. This is a much DRYer method.
--- Minuses: A lot more security headaches. More complicated login systems - one for web & one for internal users using LDAP.
So, any thoughts? Has anyone had a similar problem to solve? Please keep in mind that our company has limited resources - namely, one developer dedicated to this. So this has to be one of those "right" and "smart" solutions, not a "throw money/people/resources at this" solution.
Thank you.
I would probably create separate controllers for the public site and use ActiveResource to pull data from your internal application. Take a look at:
http://blog.rubybestpractices.com/posts/gregory/rails_modularity_1.html
http://api.rubyonrails.org/classes/ActiveResource/Base.html
Edit - fixed link and added api link
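A minimal sketch of the ActiveResource side (host, credentials and resource names are placeholders): the public app declares resources that point at the internal app's REST API and then uses them much like ActiveRecord models.

```ruby
# In the public-facing app
class Order < ActiveResource::Base
  self.site     = "https://internal.example.com"
  self.user     = "public_site"   # HTTP basic auth credentials
  self.password = "secret"
end

# Reads and writes go over HTTP to the internal app's orders resource.
orders = Order.find(:all)
order  = Order.new(:customer_id => 42, :sku => "ABC-1", :quantity => 2)
order.save
```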
I would go for option (a). You should be able to create the controllers so that they are reusable.
Internal users are as likely to duplicate data as external users.
It's likely that a public UI and an internal, for-the-staff UI will need to be different. The data needs to be consistent, so I would put quite a bit of effort into ensuring that there is exactly one definitive database. So: one database, two UIs?
Have a "service" layer that both UIs can use. If this were Java I would be pretty confident of getting the services done quickly; I wonder how easy it is in Ruby/Rails.
The best outcome would be that your existing Customer Java UI can be adapted to use the Rails service layer.
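In Ruby such a service layer can be a plain class that both UIs call; here is a hypothetical sketch (all names invented) of an order-placement service shared by the staff and customer controllers:

```ruby
# Shared by the internal UI and the public site, so the same rules
# (and the same inventory depletion) apply no matter who places the order.
class OrderPlacement
  def initialize(customer, items)
    @customer = customer
    @items    = items   # e.g. [{ :product => product, :quantity => 2 }, ...]
  end

  def call
    Order.transaction do
      order = @customer.orders.create!(:placed_at => Time.now)
      @items.each do |item|
        order.line_items.create!(:product => item[:product], :quantity => item[:quantity])
        item[:product].decrement!(:stock, item[:quantity])
      end
      order
    end
  end
end
```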
Assuming you trust your programmers not to accidentally expose things in the wrong place, the "right" solution seems to me to be a single application, but with two different sets of controllers and views: one for internal use and one public-facing. This gives you djna's idea of one database, two UIs.
As you say, having two separate databases is going to involve a lot of duplication, as well as the problem of replication.
It doesn't make sense to me to have two totally separate apps using the same database; the ActiveRecord part of a Rails app is an abstraction of the database in Ruby code, therefore having two abstractions for a single database seems a bit wrong.
You can also then have common business rules in your models, to avoid code duplication across the two versions of the site.
If you don't completely trust your programmers, then Mike's ActiveResource approach is pretty good - it would make it a lot harder to expose things by accident (although ActiveResource is a lot less flexible and feature-rich than ActiveRecord).
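One common way to get "one application, two sets of controllers" (a sketch with made-up names, shown with Rails 3-style routes) is to namespace the internal side and give it its own controllers while the models stay shared:

```ruby
# config/routes.rb
MyApp::Application.routes.draw do
  # Public, customer-facing side
  resources :orders, :only => [:index, :show, :create]

  # Staff-only side, served by controllers under app/controllers/internal/
  namespace :internal do
    resources :orders           # Internal::OrdersController
    resources :inventory_items  # Internal::InventoryItemsController
  end
end
```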
What version of Rails are you using? Since version 2.3, Rails Engines are included; they allow you to share common code (models/views/controllers) in a Rails plugin.
See the Railscast for a short introduction.
I use it too. I have developed three applications for different clients, but with all the shared code in a plugin.
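For what it's worth, in Rails 3 the shared plugin just needs an Engine class for its app/ directory to be picked up by the host application (in Rails 2.3 a plugin with an app/ directory is loaded without this); the names below are placeholders:

```ruby
# vendor/plugins/shared_core/lib/shared_core.rb
require "rails"

module SharedCore
  # Declaring an Engine makes Rails load this plugin's own
  # app/models, app/controllers and app/views into the host app.
  class Engine < Rails::Engine
  end
end
```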
Our app currently spawns a new database for each client. We're starting to wonder whether we should consider refactoring this to a multi-tenant system.
What benefits / trade-offs should we be considering? What are the best practices for implementing a multi-tenant app in Rails?
I've been researching the same thing and just found this presentation, which offers an interesting solution: using Postgres schemas (a bit like namespaces) to separate data at the DB level while keeping all tenants in the same DB and staying (mostly) transparent to Rails.
Writing Multi-Tenant Applications in Rails - Guy Naor
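The core trick from that talk boils down to switching PostgreSQL's search_path per request. A bare-bones version (the current_tenant helper and the schema naming are assumptions) looks roughly like this:

```ruby
class ApplicationController < ActionController::Base
  around_filter :scope_to_tenant_schema

  private

  def scope_to_tenant_schema
    connection = ActiveRecord::Base.connection
    # Each tenant has its own schema (e.g. "tenant_42"); "public" holds shared tables.
    connection.schema_search_path = "tenant_#{current_tenant.id}, public"
    yield
  ensure
    connection.schema_search_path = "public"
  end
end
```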
Multi-tenant systems will introduce a whole range of issues for you. My quick thoughts are below
All SQL must be examined and refactored to include a ClientId value.
All indexes must be examined to determine whether the ClientId needs to be included.
An error in a SQL statement by a developer/sysadmin in production will affect all of your customers.
A database corruption/problem will affect all of your customers.
You have some data privacy issues, whereby poor code/implementation could allow CustomerA to see data belonging to CustomerB.
A customer using your system in a heavy/aggressive manner may affect other customers' perception of performance.
Tailoring static data to an individual customer's preferences becomes more complex.
I'm sure there are a number of other issues but these were my initial thoughts.
It really depends upon what you're doing.
We are making a MIS program for the print industry that tracks inventory, employees, customers, equipment, and does some serious calculations to estimate costs of performing jobs based on a lot of input variables.
We are anticipating very large databases for each customer, and we currently have 170 tables. Adding another column to almost every table just to store the client_id hurts my brain.
We are currently in the beta stage of our program, and here are some things that we have encountered:
Migrations: Rails assumes that you will have only one database. You can adapt it for multiple databases, and migrations are one of the things you have to adapt. You need a custom rake task to apply migrations to all existing databases. Be prepared to do a lot of troubleshooting, because a migration may succeed on one DB but fail on another.
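Such a rake task might look roughly like this (the Client model, its connection_config method, and the rest are assumptions; ActiveRecord::Migrator.migrate is the pre-Rails 5.2 API):

```ruby
# lib/tasks/multi_db.rake
namespace :db do
  desc "Run pending migrations against every client database"
  task :migrate_all => :environment do
    Client.all.each do |client|
      puts "Migrating #{client.name}..."
      ActiveRecord::Base.establish_connection(client.connection_config)
      ActiveRecord::Migrator.migrate("db/migrate")
    end
    # Reconnect to the primary database when done.
    ActiveRecord::Base.establish_connection(Rails.env.to_sym)
  end
end
```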
Spawning databases: How do you create a new DB - from a SQL file, by copying an existing DB, or by running all migrations? How do you keep your schema consistent between your table-creation system and your live databases?
Connecting to the appropriate database: We use a cookie to store a unique value that maps to the correct DB. A before filter in an Authorized controller (which inherits from ActionController) looks up the DB from that unique value and uses the establish_connection method on a subclass of ActiveRecord::Base. This allows us to have some models pull from a common DB and others from the client's specific DB.
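A stripped-down version of that pattern (the class names, the token cookie and the database_config method are invented for illustration):

```ruby
# Models stored in a per-client database inherit from this abstract class;
# models on the common database keep inheriting from ActiveRecord::Base.
class ClientRecord < ActiveRecord::Base
  self.abstract_class = true
end

class Invoice < ClientRecord; end          # lives in the client's own DB
class Client < ActiveRecord::Base; end     # lives in the common DB

class AuthorizedController < ActionController::Base
  before_filter :connect_to_client_database

  private

  def connect_to_client_database
    client = Client.find_by_token!(cookies[:client_token])
    # Switches only ClientRecord subclasses; Client itself stays on the common DB.
    ClientRecord.establish_connection(client.database_config)
  end
end
```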
If you have specific questions about any of these, I can help.
I don't have any experience with this personally, but during the lightning talks at the 2009 Ruby Hoedown, Andrew Coleman presented a plugin he designed and uses for multi-tenant databases in Rails with subdomains. You can check out the lightning talk slides, and here's the acts_as_restricted_subdomain repository.
Why would you? Do you have heavy aggregation between users or are you spawning too many DBs? Have you considered using SQLite files per tenant instead of shared DB servers (since multitenant apps often are low-profile and don't need that much concurrency)?