I'm quite new to Rails, but in my current assignment I have no choice but to use RoR. My problem is that my app needs to create, connect to, and destroy databases automatically on user demand, and as far as I understand this is quite hard to accomplish with ActiveRecord. It would be nice to hear some advice from more experienced RoR developers on this issue.
The problem in detail:
I have a main database (which I access with ActiveRecord). In this database I store a list of my active programs (and some template data for creating new programs). I would like to create a separate database for each of these programs (when a user creates a new program in my app).
In the programs' databases I would like to store the state and basic info of the particular program, plus a huge amount of program-related data (which is used to calculate the state and which must be kept for audit reasons).
My problem is that, for example, I want a dashboard listing all the active programs and their state data. So first I have to get the list from my main DB, and after that I have to connect to each of the required program databases and get the state data.
My question is: what is the best practice to accomplish this? What should I use (ActiveRecord, a particular gem, etc.)?
Hi, thanks for your answers so far. I would like to add a couple of details to make my problem clearer:
First of all, I'm not confusing database and table. In my case there is a tool which processes log files. It's a legacy tool (written in Ruby 1.8.6), and before running it I have to run an SQL script which creates a database with prefilled tables and empty tables for this tool. The tool then processes the logs and inserts the calculated data into different tables in this database. The catch is that the new system should support running programs in parallel, which means I have to create a different database for each program. (This was not an issue so far because the tool was configured by hand before each run, but now the configuration must be automated by my tool.) There is no way to change the legacy tool, as that would be too complicated in the given time frame; it's also a validated tool. So this is why I cannot use different tables for different programs: my solution has to build on this other tool.
Summing up my task:
I have to create a complex tool using RoR and Ruby 2.0.0 which:
- creates a specific database for the legacy tool every time a user wants to start a new program
- configures this old tool on a daily basis to process the required logs and insert the calculated data into the appropriate database
- accesses these databases and shows dashboards based on their data
The database I'm using is MySQL.
I cannot use another framework, because the future owner of my tool wouldn't be able to manage/change/update it. So I have to go with RoR, which is quite painful for me right now, and I really hope some of you guys can give me a little guidance.
Ok, this is certainly outside of the typical use case scenario, BUT it is very doable within Rails and ActiveRecord.
First of all, you're going to want to execute some SQL directly, which is fine, but you'll have to take extra care if you're using user input to determine the name of the new database, for instance, and do your own escaping. (Or use one of ActiveRecord's lower-level escaping methods that we normally don't worry about.) The basic idea, though, is something like:
create_sql = <<SQL
CREATE TABLE foo ...
SQL
ActiveRecord::Base.connection.execute(create_sql)
Although now that I look at ActiveRecord::ConnectionAdapters::Mysql2Adapter, there's a #create_database method that might help you.
The next step is actually doing different things in the context of different databases. The key there is ActiveRecord::Base.establish_connection. Using that, and passing in the params for the database you just created, you should be able to do what you need to for that particular db. If the db's weren't being created dynamically, I'd put that line at the top of a standard ActiveRecord model so that that model would always connect to that db instead of the main one. If you want to use the same class, and connect it to different db's (one at a time of course), you would probably remove_connection before calling establish_connection to the next one.
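To make that concrete, here's a rough sketch of both steps, assuming MySQL; the database name "program_42" and the "states" table are made-up examples, and the quoting uses the connection's own helpers instead of raw string interpolation:
db_name = "program_42"

conn = ActiveRecord::Base.connection
conn.execute("CREATE DATABASE IF NOT EXISTS #{conn.quote_table_name(db_name)}")

# A model for the per-program data, pointed at the new database by
# reusing the main settings from database.yml with the name swapped out.
class ProgramState < ActiveRecord::Base
  self.table_name = "states"
end

base_config = ActiveRecord::Base.configurations[Rails.env]
ProgramState.establish_connection(base_config.merge("database" => db_name))
ProgramState.count # queries program_42.states

# Later, to point the same class at another program's database:
ProgramState.remove_connection
ProgramState.establish_connection(base_config.merge("database" => "program_43"))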
I hope this points you in the right direction. Good luck!
Sorry if the question is awkwardly phrased -- still a Mongo/Mongoid/Rails newbie here.
What I'm trying to ask:
In my development, I've been changing around the design of my Models as I go, adding some fields here, removing some fields there (one of the great things about MongoDB/Mongoid is that you can do this very quickly and easily!)
Everything is working, but in browsing through the development database, I've got some "detritus" -- documents with the old fields (and data) that aren't being used. It's no big deal other than to my garbage-collective sensibilities. I could, in theory, drop the DB and start from scratch, but that's messy.
Is there a utility / gem / etc. that will, essentially, look at the current document design and drop any fields in the live DB that don't match up to the data model?
I know this can be done manually, and I know about the mongoid migrations gems that are out there -- those are both good and, ultimately, more thorough solutions (which I'll look at).
For now, though, I'm wondering if there's a simple "quick shot" type of utility to simply sync up the DB and drop any fields that aren't explicitly specified in my models.
Thanks!
I don't know of any tools that do this for you. It's not a common ask because most people see this flexibility as a useful feature.
For your development database, you should probably clear it out and start over.
In production you have a couple choices:
Write your code to be robust against fields being missing and the database documents not matching your mongoid model. In other words, be prepared for the two to get out of sync.
Get in the habit of migrating your data every time you change the model. It's a bit of a hassle and not strictly necessary if you follow the first suggestion, but if the untidiness bothers you, this is a fine idea. See Managing mongoid migrations for strategies.
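If you go the migration route, a one-off rake task along these lines could do the pruning; this is just a sketch against the modern Mongo driver API (update_many), with User standing in for your model:
namespace :db do
  desc "Drop document fields no longer declared on the User model"
  task :prune_user_fields => :environment do
    declared = User.fields.keys # includes _id, _type, etc.

    # Keys actually present in the collection, sampled from the first
    # 100 documents for brevity; a real task would scan them all.
    stored = User.collection.find.limit(100).flat_map(&:keys).uniq

    stray = stored - declared
    unless stray.empty?
      unset = stray.each_with_object({}) { |k, h| h[k] = "" }
      User.collection.update_many({}, "$unset" => unset)
    end
  end
end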
I am reasonably new to Ruby on Rails, so I am not sure how to implement this. My understanding is that Rails is not designed with multiple databases in mind, although I could use establish_connection etc. to make it work.
My main problem is:
I have a SaaS application that will serve several businesses. Each business will have several database tables such as: users, comments, messages, transfers, navigation history, logs, etc. It seems I have 3 options:
1: Store everybody's data in one database, with every object belonging_to a business or just tagged with something like a business ID/name. Use this tag to fetch the appropriate data, and worry about scaling/performance later as my app grows. (Would I have to worry about this pretty early on?) A sketch of this option follows the list.
2: One database per business. No need to store the business associations, and DB queries perform consistently throughout the application's life (possibly a bad assumption here).
3: Have separate instances of my app, each running some number of businesses (not sure this is any good).
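To make option 1 concrete, it would look something like this (model and column names are made up):
class Business < ActiveRecord::Base
  has_many :users
  has_many :comments
end

class Comment < ActiveRecord::Base
  belongs_to :business
end

# Every query is scoped through the owning business:
current_business = Business.where(:subdomain => request.subdomain).first
current_business.comments.where("created_at > ?", 1.week.ago)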
What I have seen used in other frameworks/businesses is just option (2), multiple DBs.
I am also really interested in what the best practice in Rails is. I know several applications have this same problem, and hearing how it has been solved will help.
Any help is much appreciated. Thank you so much.
Env.
Ruby 1.9.2
Rails 3.1
Production: Heroku or EY (still deciding, now running on Heroku)
According to this page, you'd need to apply some metaprogramming to work with multiple databases.
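One way that metaprogramming could look (a sketch only; every name below is invented) is to define a model class per business table on demand and bind it to that business's database:
def tenant_model(business, table)
  const_name = "#{business.subdomain.camelize}#{table.camelize}"
  return Object.const_get(const_name) if Object.const_defined?(const_name)

  klass = Class.new(ActiveRecord::Base) { self.table_name = table }
  # Name the class before connecting: anonymous classes can confuse
  # ActiveRecord's per-class connection handling.
  Object.const_set(const_name, klass)
  klass.establish_connection(
    ActiveRecord::Base.configurations[Rails.env]
      .merge("database" => "tenant_#{business.subdomain}")
  )
  klass
end

users = tenant_model(some_business, "users")
users.where(:active => true).count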
Why not make your deployment script deploy to different directories with different database settings? One branch per business? It might require more maintenance, but it allows for per-business code if you need it.
Let's say, for example, I'm managing a Rails application that has static content that's relevant in all of my environments, but that I still want to be able to modify if needed. Examples: states, questions for a quiz, wine varietals, etc. There are relations between user content and this static data, and I want to be able to modify it live if need be, so it has to be stored in the database.
I've always managed that with migrations, in order to keep my team and all of my environments in sync.
I've had people tell me dogmatically that migrations should only be for structural changes to the database. I see the point.
My counterargument is that this mostly-static data is essential for the app to function, and if I don't keep it up to date automatically (everyone's already trained to run migrations), someone's going to hit failures and search around for the problem before figuring out that a new mandatory field has been added to a table and that they need to import something. So I just do it in the migration. This also makes deployments much simpler and safer.
The way I've concretely been doing it is to keep my test fixture files up to date with the good data (which has the side effect of letting me write more realistic tests) and re-import it whenever necessary. I do it with connection.execute "some SQL" rather than with the models, because I've found that Model.reset_column_information plus a bunch of Model.create calls sometimes worked if everyone updated immediately, but would eventually explode in my face when I pushed to prod, say, a few weeks later, because by then I'd have newer validations on the model that conflicted with the two-week-old migration.
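Concretely, such a migration looks something like this sketch (the wine_varietals table and fixture path are made-up examples; raw SQL keeps the import immune to whatever validations the model grows later):
require "yaml"

class ReimportWineVarietals < ActiveRecord::Migration
  def up
    execute "DELETE FROM wine_varietals"

    fixtures = YAML.load_file(Rails.root.join("test/fixtures/wine_varietals.yml"))
    fixtures.each_value do |row|
      cols = row.keys.map { |c| connection.quote_column_name(c) }.join(", ")
      vals = row.values.map { |v| connection.quote(v) }.join(", ")
      execute "INSERT INTO wine_varietals (#{cols}) VALUES (#{vals})"
    end
  end

  def down
    # data-only migration; nothing structural to undo
  end
end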
Anyway, I think this YAML + SQL process explodes a little less, but I also find it pretty kludgey. I was wondering how people manage that kind of data. Are there other tricks available right in Rails? Are there gems to help manage static data?
In an app I work with, we use a concept we call "DictionaryTerms" that works as lookup values. Every term has a category that it belongs to. In our case it's demographic terms, including terms having to do with gender, race, and location (e.g. state), among others.
You can then use the typical CRUD actions to add/remove/edit dictionary terms. If you need to migrate terms between environments, you could write a rake task to export/import the data from one database to another via a CSV file.
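For instance, a pair of rake tasks along these lines (DictionaryTerm and its category/name columns are assumptions based on the description above):
require "csv"

namespace :dictionary do
  desc "Export dictionary terms to CSV"
  task :export => :environment do
    CSV.open("dictionary_terms.csv", "w") do |csv|
      csv << %w[category name]
      DictionaryTerm.find_each { |t| csv << [t.category, t.name] }
    end
  end

  desc "Import dictionary terms from CSV"
  task :import => :environment do
    CSV.foreach("dictionary_terms.csv", :headers => true) do |row|
      attrs = { :category => row["category"], :name => row["name"] }
      DictionaryTerm.create!(attrs) unless DictionaryTerm.exists?(attrs)
    end
  end
end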
If you don't want to have to import/export, then you might want to host that data separate from the app itself, accessible via something like a JSON request, and have your app pull the terms from that request. That seems like a lot of extra work if your case is a simple one.
I'm trying to figure out the cut-off with respect to when a "text entry" should be stored in the database vs. as a static file. Are there any rules of thumb here? The text entries will be at most several paragraphs and will have links to images and tables (and hyperlinks to other text entries). Some criteria for the text entries:
I'm thinking of using DITA as the content format
The text should be searchable
If the text is revised, a new version will be created
thanks in advance, Chuck
The "rails way" would be using a database.
The solution will be more scalable, and therefore faster, and probably easier to develop with (using migrations and so on). Using the file system, you would have to build lots of functionality on your own that is already implemented for databases.
You could create a model (e.g. Document) and easily use an existing versioning system like paper_trail. For indexed search, you can use a has_many relation to capture the dependencies between the models (destroying a record also destroys its search index entries).
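A minimal sketch of that model, assuming the paper_trail gem is installed (SearchEntry and the columns are hypothetical):
class Document < ActiveRecord::Base
  has_paper_trail # writes a version row on create/update/destroy

  # Dependent search-index rows go away with the document.
  has_many :search_entries, :dependent => :destroy
end

doc = Document.create!(:title => "Entry", :body => "First draft")
doc.update_attributes!(:body => "Revised draft")
doc.versions.count # => 2, one per change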
Rather than a cut-off, you could look at what databases provide and ask yourself if those features would be useful. Take Isolation (the I in ACID): if you have any worries that multiple people could be trying to edit an entry at the same time, a database would handle that well while you'd have to handle the locks yourself working with files. Or Atomicity: you might want to update two things at once (e.g. an index page and an entry page) and know they will either both succeed or both fail.
Databases do a number of things beyond ACID, such as taking advantage of multiple datatypes, making querying easier, and allowing for scaling. It's a question worth asking since most databases end up having data stored in a bunch of files on disk. Would you end up writing a mini-database if you used files yourself?
Besides, if you're using Rails you might as well take advantage of its ActiveRecord functionality, and make it possible to use the many plugins that expect a database.
I'd use a database for even a small, single user rails app.
After much reading on Ruby on Rails and multiple database connections, it seems that I have found something that not many folks do, at least not with RoR. I am used to querying many different databases and schemas and pulling back the information, either for a report or for one seamless page, so a user doesn't have to log on to several different systems. I can bring all the systems together on one or two web pages.
Is that not a normal occurrence in the web and database driven design?
EDIT: Is this because most all my original code is in classic asp?
I really honestly think that most ORM designers don't take into account that users may want to access more than one database. This seems to be a pretty common limitation in the ORM universe.
Our client website runs across 3 databases, so I do this too. Actually, I'm condensing everything into views off of one central database, which then connects to the others.
I never considered this to be "normal" behavior though. I would guess that most of the time you would be designing for one system and working against that.
EDIT: Just to elaborate, we use LINQ to SQL for our data layer and we define the objects against the database views. This way we keep reports and application code working off the same data model. There is some extra work setting up the LINQ entities, because you have to manually define primary keys and set up associations; however, so far it has definitely proven worthwhile. We tried to do the same with Entity Framework, but had a lot of trouble getting the relationships set up appropriately and had to give up. The funny thing is, I had thought Entity Framework was supposed to be designed for more advanced scenarios like ours...
It is not uncommon to hit multiple databases during a single part of an application's workflow. However, in every instance that I have done it, this has been performed through several web service calls, which among other things wrap the databases in question.
I have not, to my knowledge, ever had a need to hit multiple databases directly at once and merge results into a single report.
I've seen this kind of architecture in corporate portals, where lots of data is pulled in via different data sources. The whole point of a portal is to bring siloed systems together; users might not want to use lots of systems in isolation (especially if they have to sign into each one). In that sort of scenario it is normal, particularly in a large company that has expanded rapidly and has a large number of heterogeneous systems.
In your case, whether this is the right thing to do depends on why you have these separate DBs.
With ORMs it may be a little difficult, but it can be done. Pull the objects as needed from the various databases, then use them as a composite to create the new object that is actually desired. If you can skip the ORM part of the process, you can query the databases directly and build your object from the results.
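In ActiveRecord terms, the composite approach might look like this sketch (class and column names invented; legacy_db would be a second entry in database.yml):
class LegacyOrder < ActiveRecord::Base
  establish_connection :legacy_db
  self.table_name = "orders"
end

ReportRow = Struct.new(:user_name, :order_total)

report = User.all.map do |user|
  total = LegacyOrder.where(:user_email => user.email).sum(:total)
  ReportRow.new(user.name, total)
end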
Pulling data from two databases and compiling a report is not uncommon, but because cross-database queries cannot be optimized by the query engine of either database, OLTP systems typically use a single database, to keep the application performant.
If you are building the system from the ground up, it is not advisable to do it this way. If you are working with a system you didn't design, there is not much choice, and it is not uncommon (that is the difference between "organic" and "planned" growth).
Not counting master and various test instances, I hit nine databases on a regular basis. Yes, I inherited it, and yes, "Classic" ASP figures prominently. Of course, all the "brillant" designers of this mess are long gone. We're replacing it with things more sane as quickly as we safely can.
I would think that if you're building a new system and keep adding databases, by the time you reach two or three it's probably time to rethink your design. OTOH, if you're aggregating data from multiple, disparate systems, then no, it's not that strange. Depending on the timeliness you need, your budget for throwing hardware at the problem, and whether your data is mostly static, this could be a good scenario for a "reporting server" that pulls the data down from the live server periodically.