I'm developing an app (basically an intranet) which has a few small sets of users, each a company using the app internally.
Up to now, each set of users has its own deployment with a separate domain name and database, but all living on the same server. This means that each time I have to push an upgrade I need to deploy once per client. Also, each new client means adding a new deploy target, for which I'm currently using Capistrano's multistage plugin, but it's getting a bit ridiculous.
This is a less than ideal setup, so after some thought I came up with the idea of modifying the app so that it handles multiple domains, each mapped to a different database, but on a single deployment. I created a small proof-of-concept app which basically has a before_filter in ApplicationController acting as a multiplexer for domains/databases, connecting ActiveRecord to each domain's database on each request. This worked really well, but I haven't applied this to the big app yet and I can think of at least one problem down the road: running migrations across all databases. I'm pretty sure I can work around that one though, maybe I'll tweak the rake task a little, but I'm worried that might not be the last of problems with it.
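For what it's worth, the multiplexer in my proof of concept looks roughly like this (the domain-to-database mapping and its names are made up for illustration):

    # app/controllers/application_controller.rb
    # Per-request multiplexer: pick the tenant's database from the
    # request's host name. TENANT_DATABASES is illustrative.
    class ApplicationController < ActionController::Base
      before_filter :connect_to_tenant_database

      TENANT_DATABASES = {
        "client-a.example.com" => "app_client_a",
        "client-b.example.com" => "app_client_b"
      }

      private

      def connect_to_tenant_database
        database = TENANT_DATABASES[request.host]
        raise ActionController::RoutingError, "unknown tenant" unless database

        ActiveRecord::Base.establish_connection(
          ActiveRecord::Base.configurations[Rails.env].merge("database" => database)
        )
      end
    end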
Has anyone ever tried this, or can think of any major reasons why this would be a bad idea? I would like to listen to some opinions.
Thanks!
This is usually called multi-tenancy. Here is a presentation or video about doing it in Rails; I couldn't tell if it was any good, as it was blocked here at work.
And no, there is nothing wrong with it as an idea. I'm not sure about your particular implementation, but I have worked on multi-tenant apps in the past and can't say we ever had much difficulty, except when troublesome clients wanted to stay on a legacy version of the product and we wanted to move forward.
I have a similar app with the same problem as you. After many attempts (and before a proper core solution came along), I ended up with one environment file per domain and a filter much like yours.
I've been running this in production for almost a year, and the only problem I've detected is that Rails expects the main database (even if you never use it) to be at the same migration level as the others. (This only arises under certain conditions.)
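For migrations, my workaround amounts to a rake task that walks every tenant database; something like this sketch (database names are illustrative):

    # lib/tasks/multitenant.rake
    namespace :db do
      desc "Run pending migrations against every tenant database"
      task :migrate_all_tenants => :environment do
        %w[app_client_a app_client_b].each do |database|
          ActiveRecord::Base.establish_connection(
            ActiveRecord::Base.configurations[Rails.env].merge("database" => database)
          )
          ActiveRecord::Migrator.migrate("db/migrate")
        end
      end
    end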
If you need further details, just let me know.
I hope this helps.
This is verbose; I apologise if it’s not in accordance with local custom.
I’m writing a web replacement for a Windows application used to move firefighters around between fire stations to fill skill requirements, enter sick leave, withdraw firetrucks from service, and so on. Rails was the desired back-end, but I quickly realised I needed a client-side framework and chose Backbone.js.
Only one user will be on it at a time, so I don’t have to consider keeping clients in sync.
I’ve implemented most of the application and it’s working well. I’ve been avoiding facing a significant shortcoming, though: server-side validations. I have various client-side procedures ensuring that invalid updates can’t be made through the interface; for instance, the user can’t move someone who isn’t working today to another station. But nothing is stopping a malicious user from creating a record outside of the UI and saving it to the server, hence the need for server-side validation.
The client receives all of today’s relevant records and processes them. When a new record is created, it’s sent to the server, then processed on the client if it saved successfully.
The process of determining who is working today is complex: someone could be scheduled to work, but have gone on holidays, but then been called in, but then been sent home sick. Untangling all this on the server (on each load?!) in Ruby/Rails seems an unfortunate duplication of business logic. It would also carry significant overhead: one specific case, calculating who is to be temporarily promoted to a higher rank based on station shortages and union rules, could mean reloading and reprocessing almost all of today’s data, over and over as each promotion is performed.
So, I thought, I have all this Backbone infrastructure that’s building an object model and constraining what models can be created, why not also use it on the server side?
Here is my uncertainty:
Should I abandon Rails and just use Node.js or some other way of running Backbone on the server?
Or can I run Node.js alongside Rails? When a user opens the application, I could feed the same data to the browser and Node, and Rails would check with the server-side Backbone to make sure the proposed new object was valid before saving it and returning it to the browser.
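To make that second option concrete, here is the rough shape I imagine on the Rails side; the Node endpoint, port, and the Assignment model are all hypothetical:

    # Hypothetical sketch: ask a local Node process (running the shared
    # Backbone validation logic) whether a proposed record is valid.
    require "net/http"
    require "json"

    class AssignmentsController < ApplicationController
      def create
        uri = URI.parse("http://localhost:5000/validate/assignment")
        response = Net::HTTP.post_form(uri, "payload" => params[:assignment].to_json)

        if JSON.parse(response.body)["valid"]
          render :json => Assignment.create!(params[:assignment])
        else
          render :json => { :errors => "rejected by validator" }, :status => 422
        end
      end
    end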
One factor is how deeply Rails is entrenched in this application. There isn’t that much server-side Ruby for creating and deleting records, but I made a sort of adaptation layer for loading the data, to compensate for the legacy database model. Rails is mostly just serving JSON, plus CSS, JavaScript, and template assets. I do have a lot of Cucumber features, but maybe only the data-creation ones would need to be updated?
Whew! So, I’m looking for reassurance: is it reasonable, as suggested in this answer, to be running both Rails and Node on the server, with some kind of inter-process communication? Or has Rails’s usefulness shrunk so much (this is pretty much a single-page application, as mentioned in that answer) that I should just get rid of it entirely and suffer some rewriting to move to a Node environment?
Thanks for reading.
It doesn't sound like you're worried about concurrency so much as about being able to push data, which both platforms are completely capable of. If you have a big investment in Ruby code now, and no one is complaining about its use, then what's the concern? If you just want Node for push and the singularity of using JavaScript throughout the stack, then it might be worth moving code over to it. From your comments, I really feel it is more about what is interesting to you, but you'll have to support the language(s) of choice. If you're the only one on the team, then it's pretty easy to slip into a refactor to Node just because it is interesting. It goes back to who's complaining more, you or the customer.

So to summarize: Node lets you move toward a single language in your code base, but you have to worry about what pitfalls JavaScript on the server currently has. Ruby on Rails is nice because it gives you the ability to quickly generate features and prototype them.
We are developing the following setup:
A Rails webapp allows users to make requests for tasks that are passed on to a Scala backend to complete, which can take 10 seconds or more. While this is happening, the page the user made the request from periodically polls Rails via AJAX to see if the task is complete, and if so returns the result.
From the user's point of view the request is synchronous, except their browser doesn't freeze and they get a nice spinny thing.
The input data needed by the backend is large and has a complex structure, as does the output. My initial plan was simply to have the two apps share the same DB (which will be MongoDB): the Rails app would write an id to a 'jobs' table, which would be picked up by the Scala backend running as a daemon. But the more I think about it, the more I worry that there may be lots of potential gotchas in this approach.
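The handoff I had in mind is roughly this (the Job model and its fields are made up for illustration):

    # Shared 'jobs' collection sketch using Mongoid; the Scala daemon
    # would poll for pending documents and fill in :result.
    class Job
      include Mongoid::Document

      field :status,  :type => String, :default => "pending"
      field :payload, :type => Hash
      field :result,  :type => Hash
    end

    # Rails side: enqueue the task and hand its id to the polling page.
    job = Job.create(:payload => { "input" => "..." })

The AJAX poll would then just check job.reload.status until the daemon flips it to "done".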
The two things that worry me most are the duplication of model code in two different languages, which would need to be kept in sync, and the added complexity of dealing with this at deployment. What other possible problems should I take into account when assessing this approach?
Some other possibilities I'm looking at are 1) making the Scala backend a RESTful service or 2) implementing a message queue. However, I'm not fully convinced about either option as they will both require more development work and it seems to me that in both cases the model code is effectively duplicated anyway, either as part of the RESTful API or as a message for the message queue - am I wrong about this? If one of these options is better, what is a good way to approach it?
I have used Resque several times for similar problems and I've always been very happy with it; it gives you everything you need to implement a job queue and is backed by Redis. I would strongly suggest you take a look at it.
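A minimal sketch of what that looks like (the worker class and queue name are illustrative):

    # The long-running work happens in a Resque worker, outside the
    # request cycle.
    class TaskRunner
      @queue = :tasks

      def self.perform(task_id)
        # ... do the heavy lifting, then record the result somewhere
        # the polling action can read it.
      end
    end

    # Enqueue from a controller:
    Resque.enqueue(TaskRunner, task.id)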
Don't take this the wrong way, but for this project, what does Rails provide that Scala does not?
Put another way, can you do without Scala entirely and do it all in Rails? Obviously yes, but you're developing the back end in Scala for a reason, right? Now, what does Scala provide that Rails does not?
I'm not saying you should do one or the other, but maintaining duplicate code is asking for trouble/complication.
If you have a mountain of Ruby code already invested in the project, then you're committed to Rails, barring a complete Twitter-esque overhaul (doesn't sound like that's an option nor desired).
Regardless, I'm not sure how you are going to get around the model duplication. Hitting a Mongo backend does buy you BSON result sets, which you could incorporate with Spray, an excellent Akka-based toolkit with out-of-the-box REST and JSON utilities.
Tough problem, sounds like you're caught between 2 paradigms!
I am reasonably new to Ruby on Rails, so I am not sure how to implement this. My understanding is that Rails is not designed with multiple databases in mind, although I could use establish_connection etc. to make it work.
My main problem is:
I have a SaaS application that will serve several businesses. Each business will have several database tables such as users, comments, messages, transfers, navigation history, logs, etc. It seems I have 3 options:
1: Store everybody's data in one database, with every object belonging_to a business or tagged with something like a business ID/name. Use this tag to fetch the appropriate data (see the sketch after this list) and worry about scaling/performance later as my app grows. (Would I have to worry about this pretty early on?)
2: One database per business. No need to store associations, and DB queries perform consistently throughout the application's life (possibly a bad assumption here).
3: Have separate instances of my app each running some number of businesses (not sure this is any good).
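For option 1, the kind of scoping I have in mind looks roughly like this (model and column names are just for illustration):

    # Option 1: single database, everything hangs off a Business record.
    class Business < ActiveRecord::Base
      has_many :users
      has_many :comments
    end

    class User < ActiveRecord::Base
      belongs_to :business
    end

    # In a controller, every lookup goes through the current tenant:
    business = Business.find_by_subdomain!(request.subdomains.first)
    business.users.where(:active => true)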
What I have seen used in other frameworks/businesses is just option 2, multiple DBs.
I am also really interested in what the best practice is in Rails. I know several applications have this same problem, and hearing how it has been solved will help.
Any help is much appreciated. Thank you so much.
Env.
Ruby 1.9.2
Rails 3.1
Production: Heroku or EY (still deciding; currently running on Heroku)
According to this page, you'd need to apply some metaprogramming to handle multiple databases.
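Something along these lines, for example; the per-business entries in database.yml are illustrative:

    # database.yml would hold one entry per business
    # ("business_acme", "business_globex", ...).
    class BusinessDatabase
      def self.connect(business_key)
        config = ActiveRecord::Base.configurations["business_#{business_key}"]
        raise ArgumentError, "no database for #{business_key}" unless config
        ActiveRecord::Base.establish_connection(config)
      end
    end

    BusinessDatabase.connect("acme")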
Why not make your deployment script deploy to different directories with different database settings? One branch per business? It might require some more maintenance, but it allows for per-business code if you need it.
I've been following a lot of good tutorials on building Rails apps, but I seem to be missing the whole part about specifying and validating DB field sizes. I love not having to think about it when roughing out an app (I would never have done this with a PHP or ASP.net app). However, now that I'm ready to go to production, I think I might have done myself a disservice by not specifying field sizes as I went. My production DB will be MySQL. What is the best practice here? Do I need to go through all of my migration files and specify sizes, update all the models with validations, and update all my form partials with input max widths? Or am I missing a critical step in my development process?
I think you're fine. Launching a web application is a process and the first step is to expose your app to the user and get the ball rolling. Refinements can follow. You are NOT fubared.
In general, I've NOT specified max field lengths for each and every field. Specify them only on those fields where it makes sense; otherwise the default is good.
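Where a limit does make sense, it's just a migration option plus a matching validation; the column and length here are illustrative:

    class LimitUsernameLength < ActiveRecord::Migration
      def up
        change_column :users, :username, :string, :limit => 40
      end
    end

    class User < ActiveRecord::Base
      validates :username, :length => { :maximum => 40 }
    end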
You're not making applications that come into play as a matter of life and death, are you? If yes, then I'm not qualified to answer, I guess :)
There is a massive database (GB) that I am working with now and all of the previous development has been done on a slicehost slice. I am trying to get ready for more developers to come in and work so I need each person to be able to setup his own machine for development, which means potentially copying this database. Selecting only the first X rows in each table to cut size could be problematic for data consistency. Is there any way around this, or is a 1 hour download for each developer going to be necessary? And beyond that, what if I need to copy the production DB down for dev purposes in the future?
Sincerely,
Tyler
Databases required for development and testing rarely need to be full size; it is often easier to work on a small copy. A database subsetting tool like Jailer ( http://jailer.sourceforge.net/ ) might help you here.
Why not have a dev server that each dev connects to?
Yes, all devs develop against the same database. No development is ever done except through scripts that are checked into Subversion. If a couple of people making changes run into each other, all the better: they find out as soon as possible that they are doing things which might conflict.
We also periodically load a prod backup to dev and rerun any scripts for things which have not yet been loaded to prod, to keep our data up to date. Developing against the full data set is critical once you have a medium-sized database, because coding techniques which appear fine to a dev alone on a box with a smaller dataset will often fail miserably against prod-sized data and multiple users.
To make downloading the production database more efficient, be sure you're compressing it as much as possible before transmission, and further, that you're stripping out any records that aren't relevant for development work.
You can also create a patch against an older version of your database dump to ship over only the differences rather than an entirely new copy. This works best when each INSERT statement is recorded one per line, an option you may need to enable in your dump tool; with MySQL this is the --skip-extended-insert option.
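For example, something like this sketch (paths and the database name are illustrative):

    # lib/tasks/db_dump.rake
    namespace :db do
      desc "Dump with one INSERT per line, then diff against the last dump"
      task :diffable_dump do
        sh "mysqldump --skip-extended-insert myapp_production > tmp/dump_new.sql"
        # diff exits non-zero when the files differ, hence the trailing 'true'
        sh "diff tmp/dump_old.sql tmp/dump_new.sql > tmp/dump.patch; true"
      end
    end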
A better approach is to have a fake data generator that can roll out a suitably robust version of the database for testing and development. This is not too hard to do with things like Factory Girl, which can automate routine record creation.
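A sketch of what that looks like (the model and attributes are assumptions):

    # Define a factory once, then mass-produce records for development.
    FactoryGirl.define do
      factory :user do
        sequence(:email) { |n| "user#{n}@example.com" }
        name "Dev User"
      end
    end

    # Seed a reasonably sized development database:
    500.times { FactoryGirl.create(:user) }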
In case anyone's interested in an answer to the question of "how do I copy data between databases", I found this:
http://justbarebones.blogspot.com/2007/10/copy-model-data-between-databases.html
It answered the question I asked when I found this S.O. question.