Migration from Informatica to Ab Initio - data-warehouse

We have a migration project from Informatica to Ab Initio. Does anyone know of an automated tool that can migrate at least 30-40% of the easier Informatica mappings to Ab Initio? I think the complex ones will have to be done manually anyway. Thanks in advance.

Related

Best Practice for DB Migrations that Affect Many Tables

Say you have a database of 50 tables. You are making a change to the name of one of the columns that, through relationships, affects 20 of those tables. How would you set up this migration? I see at least three possibilities:
1. a separate migration for the change on each table
2. a single migration for all of them
3. editing the original migrations that created the tables
I'm quite confident 3 is the worst approach, because then nobody can simply migrate up; everyone would have to rebuild the entire schema. But I'm stuck between 1 and 2. 2 is probably the best approach, because you are creating one migration for one change that just so happens to affect a lot of tables. That's what I'm leaning towards. On the other hand, it feels very messy.
Any resources on this would be appreciated as well. Thanks
It makes more sense to go for option 2.
Say you take option 1 and write x separate migrations. You'll end up with x migrations that, by themselves, break your database's integrity, so you'll have to run them all together (or roll them all back together if you want to undo the change). If all your changes need to be made together, then it makes sense to put them together in the same file.
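A sketch of what option 2 looks like: one migration for the one logical change, looping over the affected tables. The stub base class below stands in for ActiveRecord::Migration[7.1] only so the shape runs outside a Rails app; the table and column names are hypothetical.

```ruby
# Stand-in for ActiveRecord::Migration so this sketch runs outside Rails;
# in a real app you would inherit from ActiveRecord::Migration[7.1].
class StubMigration
  attr_reader :renames

  def initialize
    @renames = []
  end

  def rename_column(table, from, to)
    @renames << [table, from, to]
  end
end

class RenameCustRefToCustomerRef < StubMigration
  AFFECTED = %i[orders invoices shipments].freeze # ...and the other 17

  # rename_column is reversible, so Rails can invert this on rollback.
  def change
    AFFECTED.each { |t| rename_column(t, :cust_ref, :customer_ref) }
  end
end
```

One file, one intent: reading the migration's name and its loop tells you exactly which change touched which tables, and rolling it back undoes the whole change at once.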

Active record migrations and refactoring

I'm in the midst of a fairly steep bit of refactoring on my current project. Before reaching this crossroads I had two models that I came to realize are really the same model, just in different states, and I want to represent the system that way. As a result I have to take all the records of the soon-to-be-defunct model, move them into the other model, and set the new status column correctly. The problem is simple code-wise, especially since the models are so similar as is.
The pain point for me is that I have to make these changes at some midpoint in my migration in both directions. The path from here to there will be sort of like:
add_column :model_ones, :status, :string
make_all_model_two_records_into_model_one_records()
drop_table :model_twos
Clearly the other direction is easy to define as well:
create_table :model_twos do |t|
...
end
move_model_ones_with_status_x_into_model_twos_table
remove_column :model_ones, :status
This is all good, but when I get to that magical moment that I remove ModelTwo.rb from my repo, the whole thing goes to pot. At that point I can't ever migrate from the ground up without that source file. My reaction is either to write straight SQL to move the data back and forth, or to take the data conversion out of the migration entirely. If I take it out, where the heck does it go? How do I ensure it happens at the right time when migrating?
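For reference, the straight-SQL route can look like the sketch below: the data move happens inside execute, so the migration never loads the ModelTwo class and keeps working after ModelTwo.rb is deleted. The stub base class stands in for ActiveRecord::Migration only so the shape runs outside Rails; the column names are hypothetical.

```ruby
# Stand-in for ActiveRecord::Migration so this sketch runs outside Rails.
class StubMigration
  attr_reader :log

  def initialize
    @log = []
  end

  def add_column(table, column, type)
    @log << "add_column #{table}.#{column} (#{type})"
  end

  def drop_table(table)
    @log << "drop_table #{table}"
  end

  def execute(sql)
    @log << sql.strip
  end
end

class MergeModelTwosIntoModelOnes < StubMigration
  def up
    add_column :model_ones, :status, :string
    # Pure SQL: no Ruby model class is required, now or after
    # ModelTwo.rb leaves the repo.
    execute <<~SQL
      INSERT INTO model_ones (name, status, created_at, updated_at)
      SELECT name, 'two', created_at, updated_at FROM model_twos
    SQL
    drop_table :model_twos
  end
end
```

The down direction would be the mirror image: recreate model_twos, copy rows WHERE status = 'two' back with another INSERT ... SELECT, then remove the column.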
And let's say I surmount that aspect of the problem and I can now migrate happily from zero to present. I can NEVER migrate down, right? Does this represent some moment in time where the concept of staged migrations is simply dead to me?
I suppose I could go back and massage the earlier migrations to convince the world that ModelTwo never existed at all, but the thought of violating the sanctity of existing migrations makes my skin crawl.
People have to be doing this sort of refactor with Rails already somewhere. It has to be feasible, right? I can't figure out how to do it.
Thanks in advance,
jd
I would:
Create a migration that adds the status column
Run a rake task to move your data across
Test all the data moved correctly
Run another migration for removing the old table that isn't needed.
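Step 2 above could be a one-off rake task along these lines (it would live in lib/tasks in a Rails app; the task name, model names, and status value are all hypothetical, and `extend Rake::DSL` is only needed because this sketch is not inside a Rakefile):

```ruby
require "rake"
extend Rake::DSL # not needed inside a real Rakefile

namespace :data do
  desc "Copy every ModelTwo into model_ones with status 'two'"
  task convert_model_twos: :environment do
    ModelTwo.find_each do |two|
      ModelOne.create!(two.attributes.except("id").merge("status" => "two"))
    end
  end
end
```

You would run it once, between the column-adding migration and the table-dropping migration, and verify the record counts before dropping anything.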
Sometimes you are going to need to alter old migrations to ensure you can build development environments easily. I am not sure why you think it's such a problem. The migrations are there to help you, not some magic rule you should feel obliged to abide by.
Sometimes you can get too hung up on best practice and forget that it's very hard to have "best practice rules" that apply to every situation. They are good as a guide but ultimately best practice is doing what is best for your project.

Best practice for maintaining migration files

Currently, for each table in my database, I add columns in several steps (i.e. I create and run new migration files on multiple occasions). This results in a large number of migration files (around 50), which seems very un-DRY.
I end up with large "add_details_to" files mixed with single-entry "add_(column_name)_to" files, making it difficult to tell which file migrated which column.
Is there a way to DRY up the migration files so that I have a single migration file for each table?
For example, if I add multiple columns in a single migration, then decide I want to remove one of those columns, what is the best practice?
1) create a down migration for the one column I want to remove
2) rollback the entire multiple-column migration, then create a new up migration with only the columns I want.
I currently follow 1, but it seems to me that 2 would allow me to get rid of my initial mistake migration files, thereby avoiding the lots-of-migration-files-for-each-table problem.
Any thoughts would be appreciated!
I think in general it's a good option to just let your migration files grow and just manage the growing requirements through tests. shoulda-matchers is a great tool for this.
I definitely do not like the idea of down migrations, especially after the up has been run on the server (with a few exceptions, e.g. when the down reverses the immediately preceding migration). I would rather create another migration that does what the down would have done. Though, I will admit there are times when down is the way to go.
But in the end this all depends on where you are in your app. If you're working on a feature locally and want to consolidate, I could see you doing that, running db:migrate:redo until your current migration does what you need. However, once you push something up (especially to production), I'd add another migration.

Rails.Vim :RInvert

How reliable is this plugin for writing down migrations? Some people in the Rails community I have spoken with swear by it, and others tell me to just stay away. Any and all thoughts will be appreciated.
It is phenomenal, but I have had it not work quite right before. However, I would highly recommend doing a rake db:migrate:redo after running a migration for the first time anyway, to make sure that both the up and the down work. Even if it only writes 90% of the down migration for you, I don't know why you would stay away.
From Rails 3.1 onwards, for most cases, you don't need to write a down method. The migrations will have one change method and Rails automatically does the down migration in case of rollbacks.
Refer: http://edgeguides.rubyonrails.org/migrations.html#writing-your-change-method
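A minimal sketch of such a change-method migration is below. The stub base class stands in for ActiveRecord::Migration[7.1] only so the shape runs outside a Rails app; table and column names are hypothetical.

```ruby
# Stand-in for ActiveRecord::Migration so this sketch runs outside Rails.
class StubMigration
  attr_reader :ops

  def initialize
    @ops = []
  end

  def add_column(*args)
    @ops << [:add_column, *args]
  end
end

# One change method, no up/down pair; on rollback Rails infers the
# inverse (remove_column :model_ones, :status) automatically.
class AddStatusToModelOnes < StubMigration
  def change
    add_column :model_ones, :status, :string
  end
end
```

This only works for operations Rails knows how to invert (add_column, create_table, rename_column, and so on); an irreversible step still needs explicit up and down methods.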
If you're just generating DDL changes (adding columns, etc), it has always been rock solid for me. However, if you're removing columns or generating DML statements such as copying data from one field to another, translating data, etc... :RInvert will not handle those. But there's still no reason I can think of to not use what they do generate as a starting point. If you don't like the generated down migration by :RInvert, just delete it and you're no worse off than before you ran it.

Foreign Key constraints not copying into testdb during rake

Here is an issue I am unable to debug. When I run rake db:test:clone_structure, the foreign keys are not copied from the development database to the test database. Is there anything I am missing?
Your problem is that Rails (or ActiveRecord) doesn't understand foreign keys inside the database, nor does it understand CHECK constraints or anything else fancier than a unique index. Rails is generally nice to work with but sometimes there is more attitude than good sense in Rails.
There is Foreigner for adding FK support to ActiveRecord but that doesn't know about Oracle. You might be able to adapt the PostgreSQL support to Oracle but I don't know my way around Oracle so that might not be a good idea. Foreigner also doesn't support CHECK constraints (yet).
The quick solution would be to dump the FKs and CHECKs as raw SQL and pump that SQL into your test and production databases. Then wrap a quick script around that that does a rake db:test:clone_structure followed by the raw SQL FK and CHECK copying.
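That wrapper could be a rake task along these lines: clone the structure, then replay the FK/CHECK DDL kept in a hand-maintained SQL file. The task name and the db/constraints.sql path are hypothetical, and `extend Rake::DSL` is only needed because this sketch is not inside a Rakefile.

```ruby
require "rake"
extend Rake::DSL # not needed inside a real Rakefile

namespace :db do
  namespace :test do
    desc "clone_structure, then add the FKs and CHECKs Rails ignores"
    task clone_with_constraints: "db:test:clone_structure" do
      # constraints.sql holds the hand-dumped FK/CHECK DDL, one
      # statement per semicolon.
      sql = File.read(Rails.root.join("db", "constraints.sql"))
      ActiveRecord::Base.establish_connection(:test)
      sql.split(";").map(&:strip).reject(&:empty?).each do |statement|
        ActiveRecord::Base.connection.execute(statement)
      end
    end
  end
end
```

Then the team runs rake db:test:clone_with_constraints instead of the bare clone_structure, and the test database ends up with the same constraints as development.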
Sorry that there's no easy way to do this but once you get outside the bounds of what a framework wants to do things get ugly (and the more comprehensive the framework, the uglier things get). A little bit of SQL wrangling wrapped around the usual rake command isn't that nasty though.
