Read-only web apps with Rails/Sinatra - ruby-on-rails

I am working on an administrative web app in Rails. Because of various implementation details that are not really relevant, the database backing this app will have all of the content needed to back another separate website. It seems like there are two obvious options:
Build a web app that somehow reads from the same database in a read-only fashion.
Add a RESTful API to the original app and build the second site in such a way that it takes its content from the API.
My question is this: is either of these options feasible? If so, which seems like the better option? Do Rails, Sinatra, or any of the other Rack-based web frameworks lend themselves particularly well to this sort of project? (I am leaning towards Sinatra because it seems more lightweight than Rails, and I think my Rails experience will carry over to it nicely.)
Thanks!

Both of those are workable and I have employed both in the past, but I'd go with the API approach.
Quick disclaimer: one thing that's not clear is how different these apps are in function. For example, I can imagine the old one being a CRUD app that works on individual records and the new one being a reporting app that does big complicated aggregation queries. That makes the shared DB (maybe) more attractive because the overlap in how you access the data is so small. I'm assuming below that's not the case.
Anyway, the API approach. First, the bad:
One more dependency (the old app). When it breaks, it takes down both apps.
One more hop to get data, so higher latency.
Working with existing code is less fun than writing new code. Just is.
But on the other hand, the good:
Much more resilient to schema changes. Your "old" app's API can have tests, and you can muck with the database to your heart's content (in the context of the old app) and just keep your API to its spec. Your new app won't know the difference, which is good. Abstraction FTW. This is the opposite side of the "one more dependency" coin.
Same point, but from a different angle: in the we-share-the-database approach, your schema plus all of SQL is effectively your API, and it has two clients, the old app and the new. Unless your two apps are doing very different things with the same data, there's no way that's the best API. It's too poorly defined.
The DB admin/instrumentation is better. Let's say you mess up some query and hose your database. Which app was it? Where are these queries coming from? Basically, the fewer things that can interact with your DB, the better. Related: optimize your read queries in one place, not two.
If you used RESTful routes in your existing app for the non-API actions, I'm guessing your API needs will have a huge overlap with your existing controller code. It may be a matter of just converting your data to JSON instead of passing it to a view. Rails makes it very easy to use an action to respond to both API and user-driven requests. So that's a big DRY win if it's applicable.
What happens if you find out you do want some writability in your new app? Or at least access to some field your old app doesn't care about (maybe you added it with a script)? In the shared DB approach, it's just gross. With the other, it's just a matter of extending the API a bit.
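The dual-purpose-action point above is essentially Rails's respond_to idiom. Here is a minimal stand-in in plain Ruby (no Rails; Article and its fields are hypothetical) showing the shape of "one action, two formats":

```ruby
require 'json'

# Stand-in for the Rails respond_to idiom: one action's worth of logic
# serving both an HTML view and a JSON API response. Article and its
# fields are hypothetical.
Article = Struct.new(:id, :title, :body)

def show(article, format)
  case format
  when :html then "<h1>#{article.title}</h1><p>#{article.body}</p>"
  when :json then JSON.generate(article.to_h)
  end
end

article = Article.new(1, "Hello", "World")
puts show(article, :json)   # {"id":1,"title":"Hello","body":"World"}
```

In real Rails the branching is done for you by a respond_to block keyed off the request's format, so the API and the human-facing pages share the same data-fetching code.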
Basically, the only way I'd go for the shared DB approach is if I hated the old code and wanted to start fresh. That's understandable (and I've done exactly that), but it's not the architecturally soundest option.
A third option to consider is sharing code between the two apps. For example, you could gem up the model code. Now your API is really some Ruby classes that know how to talk to your database. Going even further, you could write a Sinatra app and mount it inside the existing Rails app and reuse big sections of it. Then just work out the routing so that they look like separate apps to the outside world. Whether that's practical obviously depends on your specifics.
In terms of specific technologies, both Sinatra and Rails are fine choices. I tend towards Rails for bigger projects and Sinatra for smaller ones, but that's just me. Do what feels good.

Is it sensible to run Backbone on both server (Node.js and Rails) and client for complex validations?

This is verbose, I apologise if it’s not in accordance with local custom.
I’m writing a web replacement for a Windows application used to move firefighters around between fire stations to fill skill requirements, enter sick leave, withdraw firetrucks from service, and so on. Rails was the desired back-end, but I quickly realised I needed a client-side framework and chose Backbone.js.
Only one user will be on it at a time, so I don’t have to consider keeping clients in sync.
I’ve implemented most of the application and it’s working well. I’ve been avoiding facing a significant shortcoming, though: server-side validations. I have various client-side procedures ensuring that invalid updates can’t be made through the interface; for instance, the user can’t move someone who isn’t working today to another station. But nothing is stopping a malicious user from creating a record outside of the UI and saving it to the server, hence the need for server-side validation.
The client receives all of today's relevant records and processes them. When a new record is created, it's sent to the server, and processed on the client if it saved successfully.
The process of determining who is working today is complex: someone could be scheduled to work, but have gone on holidays, but then been called in, but then been sent home sick. Untangling all this on the server (on each load?!) in Ruby/Rails seems an unfortunate duplication of business logic. It would also have significant overhead in a specific case involving calculating who is to be temporarily promoted to a higher rank based on station shortages and union rules: it could mean reloading and processing almost all of today's data, over and over, as each promotion is performed.
So, I thought, I have all this Backbone infrastructure that’s building an object model and constraining what models can be created, why not also use it on the server side?
Here is my uncertainty:
Should I abandon Rails and just use Node.js or some other way of running Backbone on the server?
Or can I run Node.js alongside Rails? When a user opens the application, I could feed the same data to the browser and Node, and Rails would check with the server-side Backbone to make sure the proposed new object was valid before saving it and returning it to the browser.
One factor is how deeply Rails is entrenched in this application. There isn't that much server-side Ruby for the creation/deletion of changes, but I made a sort of adaptation layer for loading the data to compensate for the legacy database model. Rails is mostly just serving JSON, plus CSS, JavaScript, and template assets. I do have a lot of Cucumber features, but maybe only the data-creation ones would need to be updated?
Whew! So, I’m looking for reassurance: is it reasonable, as suggested in this answer, to run both Rails and Node on the server, with some kind of inter-process communication? Or has Rails’s usefulness shrunk so much (it is pretty much a single-page application, as mentioned in that answer) that I should just get rid of it entirely and suffer some rewriting to a Node environment?
Thanks for reading.
It doesn't sound like you're worried about concurrency so much as about being able to push data, which both platforms are perfectly capable of. If you have a big investment in Ruby code now, and no one is complaining about its use, then what is the concern? If you want Node for push and for the singularity of using JavaScript throughout the stack, then it might be worth moving code over. From your comments, I really feel this is more about what is interesting to you, but you'll have to support whatever language(s) you choose. If you're the only one on the team, it's pretty easy to slip into a refactor to Node just because it is interesting. It comes back to who's complaining more: you or the customer. To summarize: Node lets you move toward a single language in your code base, but you have to worry about the pitfalls server-side JavaScript currently has. Rails is nice because you have all the ability to quickly generate features and prototype them.

Advice on communication between rails front-end and scala backend

We are developing the following setup:
A Rails web app allows users to make requests for tasks that are passed on to a Scala backend to complete, which can take 10 seconds or more. While this is happening, the page the user used to make the request periodically polls Rails using AJAX to see if the task is complete, and if so returns the result.
From the user's point of view the request is synchronous, except their browser doesn't freeze and they get a nice spinny thing.
The input data needed by the backend is large and has a complex structure, as does the output. My initial plan was simply to have the two apps share the same DB (which will be MongoDB): the Rails app would write an id to a 'jobs' collection, which would be picked up by the Scala backend running as a daemon. But the more I think about it, the more I worry that there may be lots of potential gotchas in this approach.
The two things that worry me the most is the duplication of model code, in two different languages, which would need to be kept in sync, and the added complexity of dealing with this at deployment. What other possible problems should I take into account when assessing this approach?
Some other possibilities I'm looking at are 1) making the Scala backend a RESTful service or 2) implementing a message queue. However, I'm not fully convinced about either option as they will both require more development work and it seems to me that in both cases the model code is effectively duplicated anyway, either as part of the RESTful API or as a message for the message queue - am I wrong about this? If one of these options is better, what is a good way to approach it?
I have used Resque several times for similar problems, and I've always been very happy with it. It gives you all you need to implement a job queue and is backed by Redis. I would strongly suggest you take a look at it.
Don't get me wrong, but: for this project, what does Rails provide that Scala does not?
Put another way, can you do without Scala entirely and do it all in Rails? Obviously yes, but you're developing the back end in Scala for a reason, right? Now, what does Scala provide that Rails does not?
I'm not saying you should do one or the other, but maintaining duplicate code is asking for trouble/complication.
If you have a mountain of Ruby code already invested in the project, then you're committed to Rails, barring a complete Twitter-esque overhaul (doesn't sound like that's an option nor desired).
Regardless, I'm not sure how you are going to get around the model duplication. Hitting a Mongo backend does buy you BSON result sets, which you could incorporate with Spray, an excellent Akka-based toolkit with out-of-the-box REST and JSON utilities.
Tough problem, sounds like you're caught between 2 paradigms!

Best practices for accessing data between Rails applications?

I have two Rails applications whose data are pretty interdependent. In Django, if I needed to access another application's models or otherwise share them, it was trivial: I would simply include both applications in my project and import the other application's models. At that point, I could access objects in the other app's database using Django's ORM, and it was consistent and awesome.
In Rails, I'm getting the overwhelming vibe that the way to do this is by creating RESTful API hooks in each application for each thing you want the other application to request. Seems clean and modular enough, but it's starting to get messy because I have a situation as follows:
Make GET request to application A
Application A needs to fetch data from application B, so it makes a GET request to an API hook on application B.
As part of the function in B's controller, it needs data that is part of A's data model, so it makes requests to A.
Steps 2-3 are happening in succession
It's obvious this is creating a scalability nightmare, and in our case requests are timing out because of this circular wait: the new requests are waiting for the original request to complete, but it never will until the new ones complete. We can spin up more web server processes, but I feel like there has got to be a better way to do this that doesn't involve making these superfluous GET requests when I maintain both of these applications.
Ultimately, my questions are 1) is there a more direct approach to getting another Rails app's data than issuing a GET request, or am I just trying to use what's actually a great design in a way that is killing it? and 2) If I am fighting against the grain of the design pattern and this really is a great approach, is there any general advice you can offer that will help me keep the data in these apps from having a dependency on one another that causes race conditions like this? I realize I could pass the information that app B will need from A up front (it is just one field right now but could later be more), but is that right?
If there is a large portion of shared data/models, put those models into a gem/plugin and use them in both apps. Keep the DB config per-app, though, IMO.
If they're that tightly interwoven, I'm not 100% convinced that one or both apps are doing it right, though.
I also happened to discover that Mongoid has support for multiple databases, which allows us to do exactly what I was aiming to do: http://mongoid.org/docs/installation/replication.html

One big Rails application vs separate application

I am working on one big project. Now we need to add new functionality: scheduler management.
It's not the main task of the application, but it is quite a complicated part of it.
Is it good idea to extract it as a separate application?
It will share some data (users and some other personalities) and it will use the same database.
The main reason I want to do it is to simplify main application.
I understand that this may be too broad a question, but maybe you can share your experience of developing this kind of application, and point me to any articles or worldwide best practices I can read.
While others have mentioned some of the benefits of separating the applications, I'll touch on a couple of reasons why you might NOT want to separate the code.
You're going to need to maintain a single set of tests, especially if both applications are sharing the same database. If you don't do this, it's hard to predict when changing one application would break the other, especially if the applications start to need different things out of the database.
The two applications are obviously going to have a lot of overlap (users, for example). Separating into two applications could potentially force you to duplicate code, since Rails by default has some pretty specific ideas about how a Rails application should be structured. If your applications share certain views, for example, what will you do to coordinate changes in both applications when one application wants to modify the view?
Neither of these is insurmountable, but Rails is easiest to develop with when you follow Rails conventions. As you begin to deviate, you end up having to do more work. Also, don't take either of these points as invalidating the other answers here; they are merely counterpoints that you need to think about.
If you can use the functionality in other projects too, then I would separate it.
Maybe you can create a Rails engine to share it easily between projects.
Consider asking yourself "What about re-usability?" Is the new scheduling functionality likely to be re-usable in another context with another application? If the answer is "yes," then perhaps making the scheduling management more modular in design will save you time in the future. If the answer is "no," then I would think you have more leeway in how tightly you integrate scheduling management with your existing app.
The practical difference here would be writing generalized scheduling-management functionality that has assignable tables and methods upon which to act, versus 'hard coding' it to the data/code scheme of your one big project.
hth -
Perry
Adding management tools to a web app often complicates deployment, in my experience. Especially when the use of your application grows and you need to performance-tune it, dragging along a huge "backend" may be problematic.
For the sake of deploy-, scale-, and testability, I prefer my applications to be small and focused. Sometimes it even pays off to serve the entire admin environment over REST/XML services.
But as other answers point out: this is more a "it depends" solution. These are my €0.02.

Breaking up a Rails app into two

We have a very large Rails app that has two distinct sections: the front end and the CMS/admin. We would like to break the app up into two pieces (for maintenance, as we have distinct teams that work on the front end vs. the back end, and they could have different release cycles).
One thought was to start a new Admin 2.0 app that has access to the models/schema from the original application, but has its own controllers/views and its own models that extend the original models until it is safe to fully decouple. Is this advisable? If not, what would be an appropriate plan to migrate away from one monolithic codebase?
Warning: this is a bit ranty, and does not really go anywhere.
Having worked on a very large app that operates in the manner you describe (for scalability reasons), I still have mixed opinions (and no conclusive answers).
Currently we operate 3 major apps (+ one or two smaller ones that use a fragment of the schema).
RVW (our admin app): This is the only app that writes, runs on a single server, and is responsible for maintaining the schema.
reevoo.com: e-commerce, price comparison, stuff like that. This (for historic reasons) runs on a slightly different schema, on a read-only slave of RVW, with database views to map between the schemas. All writes are done by sticking things on queues that RVW picks up and acts on. This works very well, although the number of random DB-related issues (mostly related to the views) is a problem. The main difficulty with this app is sharing code (gems work well; I've often dreamed of bringing the schemas into line and sharing the core models in a gem!). We share code between apps using Ruby gems, and test using lots of integration tests that cross app boundaries (using drunit; a presentation on this is available).
reevoomark: a very high-load B2B app. This has many servers, each with a full stack (one DB server and one app server per node). Their databases are populated by a DB export/import batch job. This works very well in the short term, and the sheer flexibility of it is just ace, but integration testing between apps is very hard.
My advice would be to avoid splitting the apps at all costs; keeping things DRY quickly becomes a major challenge otherwise. Stick with one app and two sets of routes (selected at startup by environment variables).
This gives you all the advantages of the other solutions while making code sharing implicit. Splitting your test packs out would make your test cycles shorter and things more manageable for the two teams. I would avoid working on different code bases, as doing so promotes the apps drifting apart and makes code sharing tricky (as in .com).
If you do decide to split, have a good set of high-level cross-app tests. Custom (per-app) extensions to a core set of models sounds like a good plan, although with distinct code bases and teams you may still end up with duplicate code. Rails engines should be a good way of sharing the models, but be prepared for model reloading to become a little schizophrenic.
Good luck!
Have you namespaced your admin controllers? That would be a relatively easy point of subdivision and also avoid many of the negative side effects of forking your code into two apps.
Have you considered looking at Rails Engines? They were added to Rails core in 2.3.