Building consumable URIs/URLs for a model (Rails/DataMapper/SOA)

Perhaps you can help me think this through to greater detail.
I need to build or make available a URI for a model instance that can be referenced or used by another application, which may or may not be a Rails application.
e.g.
I create a standard Post with content; I want to build a URL for that post that another application can consume or reference, either by looking at the model in the database or in some less tightly coupled fashion. DataMapper has a URI property type; I want to build a canonical URI, store it there, and have another application be able to access, announce, manipulate it, etc.
Basically, I have several applications, which may be in different places, that need to access the same model to do different things with it. I need a way to make that happen cleanly without putting them all in one monster application.
I've looked at PubSubHubbub, RSS, etc., but haven't found any concrete examples of what I'm trying to do. Do I need to create a common API for the applications?

DataMapper is very flexible about using existing databases.
Many people come to DataMapper because it can create and tear down the database structures without migrations. However, you do not have to work with it in that way.
I have had good success with using a large set of models owned by a central 'housekeeping' app and then declaring a small subset of the same models in separate 'interface' apps.
Some trial and error is required to figure out what works, but it can certainly be done. I'd suggest putting your models in modules and including them across apps if possible.
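For illustration, a minimal sketch of that shared-module idea, assuming dm-core and dm-types are available (the Shared::Post model and its properties are placeholders, not anything from the question):
require 'dm-core'
require 'dm-types'

module Shared
  # Declared once; the central 'housekeeping' app and each 'interface' app
  # can require this same file. Each app still does its own
  # DataMapper.setup(:default, ENV['DATABASE_URL']) and DataMapper.finalize.
  class Post
    include DataMapper::Resource

    property :id,      Serial
    property :content, Text
    property :uri,     URI   # canonical URI other applications can consume
  end
end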
A final point: it sounds like you want URIs/URLs to be the primary interface. If that is the case, I strongly suggest you look at Sinatra. It is entirely oriented around URLs (and I find Rails routes very obtuse).

Related

Rails, handle two sites with different url and design but with the same db

I'm looking for the best way to solve a problem.
At this moment I have a site for a customer, example.domain.com
My customer asks me to create another website with some changes in design, but the content is the same as the first website. I don't want to duplicate the website, because every feature I add to site A must also be deployed to site B, and I'm looking for a smart way to handle the situation.
I need to keep two different domains, and I also need custom mailers and other small tweaks in the controllers (and maybe in some models).
My idea is to put a before filter like this in the application controller:
before_action :detect_domain

private

def detect_domain
  case request.env['HTTP_HOST']
  when "example.domain.com"
    request.variant = :host1
  when "example1.domain.com"
    request.variant = :host2
  end
end
Then I use the variant with some conditionals to choose the mailer, customize the views, and apply some code changes.
Any other ideas?
Using a before filter and a per-request variable like your proposal will work, with a couple caveats that I'll mention below. I'd recommend a tool like the request_store gem to actually store the per-request value of which "skin" is selected.
Now, for the caveats. First, the main problem with per-request variables is that your Rails app does not always exist in the context of a request. Background jobs and console sessions operate outside of the usual request/response flow of your app. You will need to think about what happens when your models or other non-controller/view code is executed when that variable isn't set. I would suggest simply not having your models depend on RequestStore at all -- have the controllers pass any request-specific information down into the models, if needed.
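As a rough sketch of that combination (the request_store gem's RequestStore.store hash, with a :skin key that is purely illustrative): the controller resolves the variant once and records it, and only request-scoped code reads it back.
class ApplicationController < ActionController::Base
  before_action :detect_domain

  private

  def detect_domain
    skin = request.host == "example1.domain.com" ? :host2 : :host1
    request.variant = skin
    # Available to views/mailers for this request only; models should not read it.
    RequestStore.store[:skin] = skin
  end
end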
Secondly, it's not clear from your description if you want any data or logical separation between the two domains, or if you just want different look-and-feels. If the former, you might consider the apartment gem, which aims to make database multi-tenancy easier.
EDIT: I also want to mention that, as an alternative to the multi-tenant solution above, you have the option of a multi-instance solution: you use an environment variable to indicate which version of the site should be displayed, and you spin up multiple instances of your app (either on the same server behind a reverse proxy, or on separate servers with separate DNS entries or a reverse proxy). The downside is increased infrastructure cost, but the context problem I mentioned above no longer exists (everything always has access to environment variables).
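A minimal sketch of that multi-instance idea (the SITE_VARIANT variable name is an assumption, and config.x requires Rails 4.2+): each deployment sets one environment variable, and any code, request-bound or not, can read the result.
# config/initializers/site_variant.rb
Rails.application.config.x.site_variant = ENV.fetch("SITE_VARIANT", "host1").to_sym

# Later, in any context (models, background jobs, console):
#   Rails.application.config.x.site_variant  #=> :host1 or :host2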

How do you allow two Ruby apps to use the same model?

I'm working on a project that has two different Rails apps. One is the web app, and the other is the API app. They both use the same database. What is the proper way for them to share the same models and schema? Currently, it appears that the model files are copied from one app to the other, and they are woefully out of sync when I diff them.
A few options come to mind:
Git submodules. Store your models in a separate repo (or repos), and then include it as a Git submodule in each app. This allows you to reuse them without redundancy, though you'll likely have to make sure you keep the revision each app uses in sync.
Make it one app, with two distinct parts. Maybe your API could be a simple Sinatra (or other microframework) app that resides within the Rails app and is mounted at a path or subdomain (see the sketch after this list). This may be the least amount of work and the simplest, but it's also probably the most cluttered.
Use your own API. You already have an API, so why not use it? Your front-end app can just talk to that and doesn't have to have any of the model logic inside it. This also has the nice side effect of ensuring that your API works and is full-featured, since you're now a consumer of it instead of just third parties.
Which one you pick is up to you, though I happen to prefer the last. While it’s probably more work, you’ll likely end up with an overall better design by making your business logic external (and making MVC violations harder, if not impossible).
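As a sketch of the "one app, two distinct parts" option (class names and paths here are assumptions, using the Rails 3+ routing DSL), a small Sinatra API can live inside the Rails app and be mounted in the router:
# lib/api/app.rb
require 'sinatra/base'

module Api
  class App < Sinatra::Base
    get '/posts/:id' do
      content_type :json
      Post.find(params[:id]).to_json   # reuses the Rails app's models directly
    end
  end
end

# config/routes.rb (make sure lib/api/app.rb is required or eager-loaded)
Rails.application.routes.draw do
  mount Api::App => '/api'
end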

Rails app with non-HTTP access

Hypothetical question (at the moment!)
Suppose I have a great idea for an application. It acts on data which can be well-represented by tables in a relational database, using interlinked objects which represent those tables. It supports a well-defined API for interacting with (Creating, Reading, Updating, Deleting) those objects, and viewing information about them.
In short, it's a perfect fit for Rails... except it doesn't want to be a web-app. Perhaps it wants a Command Line interface; or an OS-native dialog-based interface; or perhaps it wants to present itself as a resource to other apps. Whatever - it just isn't designed to present itself over HTTP.
These questions suggest it's certainly possible, but both approach the problem from the point of view of adapting an existing web-app to have an additional, non-web, interface.
I'm interested in knowing what the best way to create such an app would be. Would it be best to run rails new non_web_app, in order to get the skeleton built "for free", and then write some "normal" Ruby code that requires config/environment - but then you have a lot of web-centric cruft that you don't need? Or would it be better to roll up your sleeves and build it from whole cloth, taking just the libraries you need and manually writing any required configuration?
If the latter, what exactly is needed to make a Rails app, but without the web bits?
If you want to access the Rails ORM to develop a CRUD non-web application, just use ActiveRecord in your own Ruby script; you will avoid loading a lot of Rails modules you probably don't need (routing, template generation, ...). Here is an example of how to do it.
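For instance, a minimal self-contained sketch along those lines (the adapter, table, and model names are placeholders):
require 'active_record'

ActiveRecord::Base.establish_connection(adapter: 'sqlite3', database: 'cli_app.sqlite3')

# Create the table directly; a larger script might keep proper migrations instead.
ActiveRecord::Schema.define do
  create_table :posts, force: true do |t|
    t.string :title
    t.timestamps
  end
end

class Post < ActiveRecord::Base
end

Post.create!(title: 'Hello from a non-web app')
puts Post.count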
If you prefer to have the full Rails stack, do not run your Rails web app in an application server (WEBrick, Passenger, Mongrel, ...) to avoid any HTTP exposure, and interact with your application using tasks or the rails console.
I would avoid taking Rails too far off the rails. If I were doing this and felt that Rails without the web stuff was still a win, I'd do the following:
rails new non_web_app
and ignore the webbish cruft, using Rails to generate models. In this way you get the tight, comfortable database behavior and can tie in various gems as you want to augment those models. I'd not bother implementing views, of course, and I'd consider implementing controllers in which the various render bits are removed; to use one, you instantiate the controller and call the action directly. This means the controller still represents the API into your business logic, but the "views" it now "renders" are simply the data it returns.
Then you could simply strip out the bits you do not need...the public directory, the view structure under app, config/routes.rb, etc. You'll need to test those changes incrementally and make sure that removing some now extraneous bit doesn't throw the Rails world into chaos.
Rails is for Web apps. That means HTTP. Now, you could package a Web app so that it runs on the desktop instead, or you could use ActiveRecord with a desktop application framework like Monkeybars.

What is the best strategy to combine IntrAnet and Web-exposed website?

I was wondering if somebody has some insight on this issue.
A little background:
We've been using Rails to migrate from an old dBase and Visual Basic based system to build an internal company IntrAnet that does things like label printing, inventory control, shipping, etc. - basically an ERP.
The Dilemma
Right now we need to replace an old customer-facing website, done in Java, that connects to our internal system for our clients to use. We want to be able to pull information like inventory, order placement, and account statements from our internal system and expose it live on the site. The reason is that we take orders on the website, through fax and phone, and sometimes we have walk-ins. So sometimes (very rarely, though) even a short delay in the inventory update on our old Java site causes us to put an order on backorder, because we sell the same item to two customers within half an hour. It's usually fixed within one day, but we want to avoid this in the future.
Actual Question
Does anyone have any suggestion on how to accomplish this in a better
way?
Here are three options that I see:
a) Build a separate Rails app on a web server, that will connect to the same DB that our internal app connects to.
+++ Pluses: Live data - the same thing that our internal apps see, i.e. orders are created in real time, inventory is depleted right away.
--- Minuses: Potential security risk, duplication of code - i.e. I need to duplicate all the controllers, models, views, etc. that deal with orders.
b) Build a separate Rails app on a web server, that will connect to a different DB from our internal app.
+++ Pluses: Less security exposure.
--- Minuses: Extra effort to sync the web DB and the internal DB (or to use a web service / REST API), extra code to handle inventory depletion and order number creation, and duplication of code - i.e. I need to duplicate all the controllers, models, views, etc. that deal with orders.
c) Expose internal app to the web
+++ Pluses: All the problems from above are eliminated. This is a much DRYer method.
--- Minuses: A lot more security headaches. More complicated login systems - one for web & one for internal users using LDAP.
So any thoughts? Has anyone had a similar problem to solve? Please keep in mind that our company has limited resources - namely, one developer dedicated to this. So this has to be one of those "right" and "smart" solutions, not a "throw money/people/resources at it" solution.
Thank you.
I would probably create separate controllers for the public site and use ActiveResource to pull data from your internal application. Take a look at:
http://blog.rubybestpractices.com/posts/gregory/rails_modularity_1.html
http://api.rubyonrails.org/classes/ActiveResource/Base.html
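A rough sketch of the ActiveResource route (the host and resource names are assumptions): the public site declares lightweight resource classes that talk to the internal app's REST endpoints instead of its database.
# In the public-facing app:
class Order < ActiveResource::Base
  self.site = "https://internal.example.com"
end

# In a public controller action:
@orders = Order.find(:all, :params => { :customer_id => 42 })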
I would go for (a). You should be able to create the controllers so that they are reusable.
Internal users are as likely to duplicate data as external users.
It's likely that a public UI and an internal, for-the-staff UI will need to be different. The data needs to be consistent, so I would put quite a bit of effort into ensuring that there is exactly one definitive database. So: one database, two UIs?
Have a "service" layer that both UIs can use. If this were Java I would be pretty confident of getting the services done quickly; I wonder how easy it is in Ruby/Rails.
The best outcome would be that your existing Customer Java UI can be adapted to use the Rails service layer.
Assuming you trust your programmers to not accidentally expose things in the wrong place, the 'right' solution seems to me to have a single application, but two different sets of controllers and views, one for internal use, and one for public-facing. This will give you djna's idea of one database, two UIs.
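For illustration, one way to express the single-app, two-UI split is with host-based routing constraints (Rails 3+ routing DSL shown; the hostnames and namespaces are placeholders):
Rails.application.routes.draw do
  constraints host: "shop.example.com" do
    namespace :public_site do
      resources :orders, only: [:show, :create]   # customer-facing subset
    end
  end

  constraints host: "erp.example.internal" do
    namespace :internal do
      resources :orders                            # full staff-facing CRUD
    end
  end
end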
As you say having two separate databases is going to involve a lot of duplication, as well as the problem of replication.
It doesn't make sense to me to have two totally separate apps using the same database; the ActiveRecord part of a Rails app is an abstraction of the database in Ruby code, therefore having two abstractions for a single database seems a bit wrong.
You can also then have common business rules in your models, to avoid code duplication across the two versions of the site.
If you don't completely trust your programmers, then Mike's ActiveResource approach is pretty good - it would make it a lot harder to expose things by accident (although ActiveResource is a lot less flexible and feature-rich than ActiveRecord).
What version of Rails are you using? Since version 2.3, Rails Engines are included; this allows you to share common code (models/views/controllers) in a Rails plugin.
See the Railscast for a short introduction.
I use it too: I have developed three applications for different clients, with all the shared code in a plugin.
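A sketch of that layout (Rails 2.3-era engine-style plugin; the paths and model are illustrative): each client app vendors the same plugin, and Rails picks up its app/ directories automatically.
# vendor/plugins/shared_core/app/models/post.rb
class Post < ActiveRecord::Base
  validates_presence_of :title
end
# Each client app vendors shared_core; Rails 2.3 engines add the plugin's
# app/models, app/controllers, and app/views to the load path, and an app
# can still reopen Post locally for client-specific tweaks.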

Generate new models and schema at runtime

Let's say your app enables users to create their own tables in the database to hold their own custom data. Each table would have its own schema. What are some good approaches?
My first stab involved dynamically creating migration files and model files, but I'd like to run this on Heroku, where you can't write to the filesystem.
I'm thinking eval may be the way to go to create and run the migration class and the model class. But I want to make sure the model class exists when a new process of the app is spawned. I can probably do this by storing these class definitions with each user as they create new tables and then running through them all at startup. But now it's convoluted enough that I may be missing something obvious.
It's probably a better idea not to generate new classes at runtime. Besides all of the security risks, startup time for each new process will be abominable if you ever get a significant number of users.
I would suggest rethinking your app design and aiming at generic tables to hold the users' custom data. If you have examples of data structures that users can create, we might be able to help.
Have you thought about a non-sql database for those tables? Look at CouchDB - there are several plugins on Github that integrate it with rails. Records in the database are JSON documents, with arbitrary key-value structure. May be perfect for a user-defined schema.
There is (was?) a cool Wiki project, called Informl. It was a Wiki, not just for web pages but for web applications. (Get it? It's informal because it's a Wiki, it's got forms because it is an application, and it's user-generated, thus Web 2.0, which means that according to an official UN resolution it is legally required to have a name which is missing a vwl.)
So, in other words, it was not just about user-generated content, but also user-generated structured data.
They did this by generating PostgreSQL-specific SQL at runtime to create new tables and then having ActiveRecord reload the schema.
The code is up on RubyForge. It's based on Rails 1.2.3. I guess you could do much better than that today, especially with the upcoming extensibility interfaces in Rails 3.
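A rough sketch of that runtime-DDL idea in ActiveRecord terms (the table and model names are placeholders, and self.table_name= assumes Rails 3.2+):
# Create the user-defined table on the fly...
ActiveRecord::Base.connection.create_table(:user_widgets) do |t|
  t.string  :name
  t.integer :quantity
end

# ...then define a model for it and make ActiveRecord re-read the columns.
user_widget_class = Class.new(ActiveRecord::Base) do
  self.table_name = 'user_widgets'
end
Object.const_set(:UserWidget, user_widget_class)
UserWidget.reset_column_information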
