I started off with a single Rails application: a very simple, largely read-only front end for a line-of-business application (using database views to retrieve data), with a few standard tables to augment the views.
I now need to use the same set of data in a new application (the 2 applications, whilst sharing the same data, work differently enough that it's not trivial to merge them into one application).
I figured it would be easiest to split the models I can reuse into their own engine and have the 2 applications share the database.
Adding an API and having both applications query that is an option, but in practice I'm not sure I can provide an API that will satisfy both applications properly, as they use the data in different ways.
On that basis I figured I could give each application a table prefix, or use different schemas to namespace them - that way each application has its own distinct tables for the stuff it doesn't share, but I can easily reuse the existing views without having to duplicate them.
Both options seem to work great, except I forgot about migrations for the common views and data.
So the only things I can think of are:
1. Since the 2 applications are kinda tightly coupled anyway, keep no migrations in my common data engine at all - any changes to the views/tables will be dealt with by the "first" application. This seems a bit nasty, since the models are now contained in a separate engine. (I dislike the variation of having migrations in the engine and then copying them into one application or the other, since that's basically the same thing.)
2. Use Pivotal Labs' advice, but add some code to detect whether the engine is running in the "first" application, and only apply the migrations there. If I fail to do that I end up with both applications including the engine migrations, which results in both applications trying to run the same migrations, causing nothing but pain.
3. Actually split off the common data into its own database. So application #1 uses db #1, application #2 uses db #2, and the common data is housed in db #3 and is accessible by both applications. With a bit of faffing, I'm guessing I can end up with 3 dbs and 3 schema_migrations tables, and I can just blindly leave my migrations in the engine and include them in both applications as per Pivotal Labs - my plan is to do something like this to make it all work, and have the common models set up to connect to their own DB rather than the application DB (see the sketch below).
4. Stick with 1 db and multiple schemas, and somehow set up a task to run the engine migrations only, using an account locked down to its own schema - that way it creates its own schema_migrations.
I kinda feel paralyzed as I'm not sure which is the least shitty option. 3 or 4 feels "best", but not great.
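To illustrate what I mean in option 3 by connecting the common models to their own DB, here's a rough sketch - the CommonData namespace, model names, and the "common" database.yml entry are all made up for illustration:

```ruby
# In the shared engine: everything backed by db #3 inherits from this abstract
# class, so it connects to the common database instead of the host app's one.
module CommonData
  class Base < ActiveRecord::Base
    self.abstract_class = true
    # Looks up a "common_<env>" entry in each host application's config/database.yml
    establish_connection :"common_#{Rails.env}"
  end

  class Product < Base
    # Backed by one of the existing shared database views
    self.table_name = "products_view"
  end
end
```

Each application would then add a common_development / common_production entry to its database.yml pointing at db #3, while its ordinary models keep using the application's own database.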
I think 3 is the best option. However, I'd expose the common db #3 data via an API. That is, have an application that is used to manage the shared data (an HTML admin interface, and JSON feeds for the other apps to connect to).
Keep the shared data app really simple. Just CRUD actions on the data. So the other two apps will grab the data from the shared data app, manipulate it to match local requirements, and then display it; or allow input - manipulate the input to match the shared structure, and then persist it via the shared data app's API update/create actions.
As an update to this, due to how tightly coupled the 2 applications are, and other needs that came up yesterday, I've ended up pulling the migrations out of both and using active_record_migrations to manage and orchestrate the migrations for both applications.
Effectively I could've done the same by having a dummy or master application; however, this feels somewhat cleaner.
However, this comes at the expense of being able to use models effectively in the migrations. For my use case this doesn't matter at all.
Related
Suppose we have this scenario:
www.main.com - Main interface where admins (foo, bar, etc.) can store products for their own e-commerce stores
www.foo.com - Sample store that sells items from the "foo" store
www.bar.com - Sample store that sells items from the "bar" store
The problem is finding a way to centralize the database structure and models.
I prefer to keep every single store in separated apps (so I exclude rails engines).
For instance, if a user buys something in the "foo" store, I need to interact with the main db and update it.
How can I do this?
Rails works much better with a "one database, one app" model. There are lots of ways to share models across apps (gems, engines, git submodules, etc.), but none of them is great. You end up introducing lots of overhead in your development, deployment, and testing process. You also invite lots of hidden dependencies between code, as Rails doesn't give you an easy way to keep clean abstractions (for example, you write a helper for store Foo, then your coworker uses that helper in store Bar, and then every time you change Foo, Bar breaks).
I recommend a centralized API approach instead:
api.foobarmain.com - an app/service that provides a RESTful API for all the functionality of all stores.
This app has all the db models, and it exposes them as resources in the API for other apps to interact with.
This app can have an admin UI for all the stores, if you need it. Alternatively, the admin UI may be another client of the API.
www.foo.com - a full stack app that interacts with the API at api.foobarmain.com
There is no shared database connection to the API's database; everything you need to interact with has to be exposed via the API.
There will be no shared code between www.foo.com and www.bar.com. Code reuse happens only by virtue of using the same API app/service.
From the perspective of www.foo.com, the model layer (in MVC) is powered by the API, not by a database.
www.foo.com can still have its own database if you need to store data specific to www.foo.com only.
www.bar.com - another full stack app that interacts with the API
so on ...
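To make that concrete, here is a minimal sketch of what an API-powered "model" in www.foo.com might look like - the host, path, and JSON fields are assumptions, not a prescribed implementation:

```ruby
require "net/http"
require "json"

# A plain Ruby "model" in www.foo.com, backed by the central API instead of a database.
class Product
  API_HOST = "api.foobarmain.com"

  attr_reader :id, :name, :price

  def initialize(attrs)
    @id    = attrs["id"]
    @name  = attrs["name"]
    @price = attrs["price"]
  end

  # Fetch the store's products from the API and wrap them in Ruby objects.
  def self.for_store(store)
    body = Net::HTTP.get(API_HOST, "/stores/#{store}/products")
    JSON.parse(body).map { |attrs| new(attrs) }
  end
end

# Product.for_store("foo") now plays the role an ActiveRecord model would play.
```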
Another way is to use multiple schemas if you are using PostgreSQL. I have a similar issue with a new project that I'm about to start.
You can use the 'apartment' gem to deal with different schemas. The queries get a bit more complicated, but with different schemas you end up with one database, and you can create different namespaces and respond accordingly.
You can set it up so the app selects the correct schema based on the domain or subdomain.
Here is the link: https://github.com/influitive/apartment
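For reference, the setup is roughly like this (a sketch based on the gem's README; the Account model and tenant names are assumptions):

```ruby
# config/initializers/apartment.rb
Apartment.configure do |config|
  # Models that live in the shared (public) schema rather than a tenant schema
  config.excluded_models = %w[Account]
  # One Postgres schema per tenant
  config.tenant_names = -> { Account.pluck(:subdomain) }
end

# Pick the schema automatically from the request's subdomain
Rails.application.config.middleware.use Apartment::Elevators::Subdomain

# You can also switch manually, e.g. in a console or a background job:
# Apartment::Tenant.switch!("foo")
```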
Hypothetical question (at the moment!)
Suppose I have a great idea for an application. It acts on data which can be well-represented by tables in a relational database, using interlinked objects which represent those tables. It supports a well-defined API for interacting with (Creating, Reading, Updating, Deleting) those objects, and viewing information about them.
In short, it's a perfect fit for Rails... except it doesn't want to be a web-app. Perhaps it wants a Command Line interface; or an OS-native dialog-based interface; or perhaps it wants to present itself as a resource to other apps. Whatever - it just isn't designed to present itself over HTTP.
These questions suggest it's certainly possible, but both approach the problem from the point of view of adapting an existing web-app to have an additional, non-web, interface.
I'm interested in knowing what the best way to create such an app would be. Would it be best to rails new non_web_app, in order to get the skeleton built "for free", then write some "normal" Ruby code that requires config/environment - but then you have a lot of web-centric cruft that you don't need? Or would it be better to roll up your sleeves and build it from whole cloth, taking just the libraries you need and manually writing any required configuration?
If the latter, what exactly is needed to make a Rails app, but without the web bits?
If you want to access the Rails ORM to develop a CRUD non-web application, just include ActiveRecord in your own Ruby script; you will avoid using a lot of Rails modules you probably don't need (routing, template generator, ...). Here is an example of how to do it.
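A minimal sketch of that approach (the SQLite database and Post model here are only illustrative assumptions):

```ruby
require "active_record"

# Connect directly - no Rails application, no HTTP.
ActiveRecord::Base.establish_connection(
  adapter:  "sqlite3",
  database: "app.db"
)

# Define the table inline for the example (a real app would use migrations).
ActiveRecord::Base.connection.create_table :posts, force: true do |t|
  t.string :title
  t.timestamps
end

class Post < ActiveRecord::Base
end

post = Post.create!(title: "Hello from a non-web script")
puts Post.count
post.update!(title: "Updated")
post.destroy
```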
If you prefer to have the full Rails stack, do not run your Rails web app in an application server (WEBrick, Passenger, Mongrel, ...) to avoid any HTTP exposure, and interact with your application using tasks or the rails console.
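For example, a plain rake task can drive the full application without any HTTP involved (the task name and Post model are made up):

```ruby
# lib/tasks/non_web.rake
namespace :non_web do
  desc "Use the app's models from the command line"
  task post_count: :environment do
    # The :environment dependency boots the whole Rails app, so every model is available.
    puts "There are #{Post.count} posts"
  end
end
```

Run it with rake non_web:post_count, or open the rails console for interactive use.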
I would avoid taking Rails too far off the rails. If I were doing this and felt that the gains of Rails without the web stuff were worth it, I'd do the following:
rails new non_web_app
and ignore the webbish cruft, using Rails to generate models. In this way you get the tight, comfortable database behavior and can tie in various gems as you want to augment those models. I'd not bother implementing views, of course, and I'd consider implementing controllers with the various render bits removed; to use one, you instantiate an instance of the controller and call the action directly. This means the controller still represents the API into your business logic, but the "views" it now "renders" are simply the returned data.
Then you could simply strip out the bits you do not need...the public directory, the view structure under app, config/routes.rb, etc. You'll need to test those changes incrementally and make sure that removing some now extraneous bit doesn't throw the Rails world into chaos.
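One concrete bit of stripping that helps: in config/application.rb, replace require "rails/all" with only the frameworks you need. A sketch (adjust to your Rails version; the application name is made up):

```ruby
# config/application.rb
require File.expand_path("../boot", __FILE__)

# Instead of: require "rails/all"
require "active_record/railtie"
require "active_model/railtie"
# Deliberately omitted: action_controller/railtie, action_view/railtie,
# action_mailer/railtie, sprockets/railtie - the web-centric pieces.

Bundler.require(*Rails.groups)

module NonWebApp
  class Application < Rails::Application
  end
end
```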
Rails is for Web apps. That means HTTP. Now, you could package a Web app so that it runs on the desktop instead, or you could use ActiveRecord with a desktop application framework like Monkeybars.
Perhaps you can help me think this through to greater detail.
I need to build or make available a URI for a model instance that can be referenced or used by another application, which may or may not be a Rails application.
e.g.
I create a standard Post with content; I want to build a URL for that post that another application can consume or reference by looking at the model in the database (or in another, less sticky fashion). DataMapper has a URI field; I want to build a canonical URI, store it there, and have another application be able to access, announce, manipulate it, etc.
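For instance, I'm imagining something like the sketch below with ActiveRecord (the Post model, uri column, route, and host are assumptions on my part; presumably the same idea applies to DataMapper's URI property):

```ruby
class Post < ActiveRecord::Base
  after_create :store_canonical_uri

  private

  # Build the canonical URL from this app's routes and persist it, so any other
  # application reading the shared database can pick it up.
  def store_canonical_uri
    url = Rails.application.routes.url_helpers.post_url(self, host: "www.example.com")
    update_column(:uri, url)
  end
end
```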
Basically, I have several applications that may be in different places, that need to access the same model, to do differing things with the model. I need a way to make that happen clearly without putting them all in one monster application.
I've looked at PubSubHubbub, RSS, etc. but haven't found any concrete examples of what I'm trying to do. Do I need to create a common API for the applications, etc.?
DataMapper is very flexible about using existing databases.
Many people come to DataMapper because it can create and tear down the database structures without migrations. However, you do not have to work with it in that way.
I have had good success with using a large set of models owned by a central 'housekeeping' app and then declaring a small subset of the same models in separate 'interface' apps.
Some trial and error is required to figure out what works but it can certainly be done. I'd suggest putting your models in modules and including them across apps if possible.
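Something along these lines, for example (a rough sketch - the module and properties are invented):

```ruby
# Shared between the apps (e.g. packaged as a gem or pulled in as a submodule).
module Shared
  module PostModel
    def self.included(base)
      base.class_eval do
        include DataMapper::Resource

        property :id,    DataMapper::Property::Serial
        property :title, String
        property :uri,   String
      end
    end
  end
end

# The central 'housekeeping' app and each 'interface' app then just declare:
class Post
  include Shared::PostModel
end
```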
A final point: it sounds like you want URIs/URLs to be the primary interface. If that is the case, I strongly suggest you look at Sinatra. It is entirely oriented around URLs (and I find Rails routes very obtuse).
I was wondering if somebody has some insight on this issue.
A little background:
We've been using Rails to migrate from an old dBase and Visual Basic based system to build an internal company intranet that does things like label printing, inventory control, shipping, etc. - basically an ERP.
The Dilemma
Right now we need to replace an old customer-facing website, done in Java, that would connect to our internal system for our clients to use. We want to be able to pull information like inventory, order placement, and account statements from our internal system and expose it to the site live. The reason is that we take orders on the website, through fax & phone, and sometimes we have walk-ins. So sometimes (very rarely, though) even a short delay in an inventory update on our old Java site causes us to put an order on backorder, because we sell the same item to 2 customers within half an hour. It's usually fixed within one day, but we want to avoid this in the future.
Actual Question
Does anyone have any suggestions on how to accomplish this in a better way?
Here are three options that I see:
a) Build a separate Rails app on a web server, that will connect to the same DB that our internal app connects to.
+++ Pluses: Live data - same thing that our internal apps see, i.e. orders are created in real time, inventory is depleted right away
--- Minuses: Potential security risk, duplication of code - i.e. I need to duplicate all the controllers, models, views, etc. that deal with orders.
b) Build a separate Rails app on a web server, that will connect to a different DB from our internal app.
+++ Pluses: Less security exposure.
--- Minuses: Extra effort to sync the web DB and internal DB (or use a web service like a REST API), extra code to handle inventory depletion and order # creation, duplication of code - i.e. I need to duplicate all the controllers, models, views, etc. that deal with orders.
c) Expose the internal app to the web
+++ Pluses: all the problems from above are eliminated. This is a much "DRY"er method.
--- Minuses: A lot more security headaches. More complicated login systems - one for web & one for internal users using LDAP.
So any thoughts? Anyone had a similar problem to solve? Please keep in mind that our company has limited resources - namely one developer dedicated to this. So this has to be one of those "right" and "smart" solutions, not a "throw money/people/resources at this" solution.
Thank you.
I would probably create separate controllers for the public site and use ActiveResource to pull data from your internal application. Take a look at
http://blog.rubybestpractices.com/posts/gregory/rails_modularity_1.html
http://api.rubyonrails.org/classes/ActiveResource/Base.html
Edit - fixed link and added api link
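The gist of the ActiveResource approach, as a sketch (the internal hostname and Order resource are assumptions): the public site's "models" talk to the internal app over HTTP rather than sharing its database.

```ruby
# In the public-facing app - there is no orders table behind this class.
class Order < ActiveResource::Base
  self.site = "https://internal.example.com"
end

# Reads and writes become HTTP calls against the internal app's REST endpoints:
orders = Order.find(:all, params: { customer_id: 42 })

order = Order.new(customer_id: 42, sku: "ABC-123", quantity: 2)
order.save   # POST /orders.xml (or .json, depending on Rails version and format)
```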
I would go for (a). You should be able to create the controllers so that they are re-usable.
Internal users are as likely to duplicate data as external users.
It's likely that a public UI and an internal, for-the-staff UI will need to be different. The data needs to be consistent, so I would put quite a bit of effort into ensuring that there is exactly one, definitive database. So: one database, two UIs?
Have a "service" layer that both UIs can use. If this was Java I would be pretty confident of getting the services done quickly. I wonder how easy it is in Ruby/Rails.
The best outcome would be that your existing Customer Java UI can be adapted to use the Rails service layer.
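In Ruby/Rails a service layer is just plain objects, so it is quite doable - a rough sketch of the idea (all of these names are invented, not a recommendation of specific models):

```ruby
# Called by both the internal UI and the customer-facing UI, so ordering and
# inventory rules live in exactly one place, over exactly one database.
class OrderPlacementService
  def self.place!(customer, items)
    Order.transaction do
      order = customer.orders.create!
      items.each do |sku, quantity|
        product = Product.lock.find_by!(sku: sku)   # row lock to avoid the double-sell race
        product.decrement!(:stock, quantity)
        order.line_items.create!(product: product, quantity: quantity)
      end
      order
    end
  end
end
```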
Assuming you trust your programmers not to accidentally expose things in the wrong place, the 'right' solution seems to me to be a single application with two different sets of controllers and views: one for internal use, and one public-facing. This will give you djna's idea of one database, two UIs.
As you say, having two separate databases is going to involve a lot of duplication, as well as the problem of replication.
It doesn't make sense to me to have two totally separate apps using the same database; the ActiveRecord part of a Rails app is an abstraction of the database in Ruby code, therefore having two abstractions for a single database seems a bit wrong.
You can also then have common business rules in your models, to avoid code duplication across the two versions of the site.
If you don't completely trust your programmers, then Mike's ActiveResource approach is pretty good - it would make it a lot harder to expose things by accident (although ActiveResource is a lot less flexible and feature-rich than ActiveRecord).
What version of Rails are you using? Since version 2.3, Rails Engines are included; this allows you to share common code (models/views/controllers) in a Rails plugin.
See the Railscast for a short introduction.
I use it too. I have developed three applications for different clients, but with all the shared code in a plugin.
Our app currently spawns a new database for each client. We're starting to wonder whether we should consider refactoring this to a multi-tenant system.
What benefits / trade-offs should we be considering? What are the best practices for implementing a multi-tenant app in Rails?
I've been researching the same thing and just found this presentation, which offers an interesting solution: using Postgres schemas (a bit like namespaces) to separate data at the DB level while keeping all tenants in the same DB and staying (mostly) transparent to Rails.
Writing Multi-Tenant Applications in Rails - Guy Naor
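As I understand it, the core trick boils down to pointing Postgres' search_path at the tenant's schema for each request. A rough sketch (the Tenant lookup and schema naming are assumptions):

```ruby
class ApplicationController < ActionController::Base
  around_filter :use_tenant_schema   # around_action in newer Rails

  private

  def use_tenant_schema
    tenant = Tenant.find_by_subdomain!(request.subdomain)
    # Subsequent queries hit the tenant's schema; shared tables stay in public.
    ActiveRecord::Base.connection.schema_search_path = "#{tenant.schema_name}, public"
    yield
  ensure
    ActiveRecord::Base.connection.schema_search_path = "public"
  end
end
```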
Multi-tenant systems will introduce a whole range of issues for you. My quick thoughts are below:
All SQL must be examined and refactored to include a ClientId value.
All indexes must be examined to determine if the ClientId needs to be included.
An error in a SQL statement by a developer/sysadmin in production will affect all of your customers.
A database corruption/problem will affect all of your customers.
You have some data privacy issues whereby poor code/implementation could allow CustomerA to see data belonging to CustomerB.
A customer using your system in a heavy/aggressive manner may affect other customers' perception of performance.
Tailoring static data to an individual customer's preferences becomes more complex.
I'm sure there are a number of other issues, but these were my initial thoughts.
It really depends upon what you're doing.
We are making a MIS program for the print industry that tracks inventory, employees, customers, equipment, and does some serious calculations to estimate costs of performing jobs based on a lot of input variables.
We are anticipating very large databases for each customer, and we currently have 170 tables. Adding another column to almost every table just to store the client_id hurts my brain.
We are currently in the beta stage of our program, and here are some things that we have encountered:
Migrations: A Rails assumption is that you will only have 1 database. You can adapt it for multiple databases, and migrations are one of the things you need to adapt. You need a custom rake task to apply migrations to all existing databases. Be prepared to do a lot of troubleshooting, because a migration may succeed on one DB but fail on another.
Spawning Databases: How do you create a new db? From a SQL file, by copying an existing db, or by running all migrations? How do you keep your schema consistent between your table-creation system and your live databases?
Connecting to the appropriate database: We use a cookie to store a unique value that maps to the correct DB. We use a before filter in an Authorized controller (which inherits from ActionController) that looks up the DB from that unique value and uses the establish_connection method on a subclass of ActiveRecord::Base. This allows us to have some models pull from a common db and others from the client's specific db.
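A simplified sketch of what such a task can look like (CLIENT_DB_CONFIGS is a placeholder for however you store each client's connection settings, and ActiveRecord::Migrator.migrate is the older, pre-Rails-5.2 API):

```ruby
# lib/tasks/client_migrations.rake
namespace :db do
  desc "Run pending migrations against every client database"
  task migrate_clients: :environment do
    CLIENT_DB_CONFIGS.each do |config|
      puts "Migrating #{config['database']}..."
      ActiveRecord::Base.establish_connection(config)
      begin
        ActiveRecord::Migrator.migrate("db/migrate")
      rescue StandardError => e
        # A migration can succeed on one DB and fail on another - log it and move on.
        puts "  FAILED: #{e.message}"
      end
    end
    # Reconnect to the app's own database when done.
    ActiveRecord::Base.establish_connection(Rails.env.to_sym)
  end
end
```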
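Stripped down, that pattern looks something like this (the names and the config lookup are illustrative, not our exact code):

```ruby
# Abstract base class for everything stored in a client-specific database.
class ClientRecord < ActiveRecord::Base
  self.abstract_class = true
end

class Job < ClientRecord; end             # lives in the client's own DB
class Setting < ActiveRecord::Base; end   # lives in the common DB

class AuthorizedController < ActionController::Base
  before_filter :connect_to_client_db   # before_action in newer Rails

  private

  def connect_to_client_db
    # The cookie holds a unique value that maps to that client's connection config.
    config = CLIENT_DB_CONFIGS.fetch(cookies[:client_key])
    ClientRecord.establish_connection(config)
  end
end
```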
If you have specific questions about any of these, I can help.
I don't have any experience with this personally, but during the lightning talks at the 2009 Ruby Hoedown, Andrew Coleman presented a plugin he designed and uses for multi-tenant databases in Rails with subdomains. You can check out the lightning talk slides, and here's the acts_as_restricted_subdomain repository.
Why would you? Do you have heavy aggregation between users, or are you spawning too many DBs? Have you considered using SQLite files per tenant instead of shared DB servers (since multi-tenant apps are often low-profile and don't need that much concurrency)?