How to generate SQL from fixture YMLs in Symfony 1.0? - symfony1

I am trying to build a script to automate the update process for symfony projects: for example, applying patches from one revision to another, including any model updates and new fixtures, without reloading the entire database.
I have completed most of it, but I have been unable to find a way to generate an SQL file containing the INSERT statements that the propel-load-data task executes when it inserts the fixtures.
I know that I could dump the database to get that information, but that's not the point, and it wouldn't work in my case because I'd need an empty db with no other data in it.
So in short, my question is: Is there a way to take a fixture YML and generate the SQL inserts necessary?
I know this could be done by parsing the YAML and rebuilding the logic myself, but the load process already does this somewhere, and there's no reason to write it again.
Any help would be much appreciated.

Perhaps I did not understand your question correctly, but couldn't you just call the task that loads the fixtures into the db, or alternatively write code that calls sfPropelData::loadData() (I'm assuming Propel here)?
// load every YAML fixture under data/fixtures through the ORM
$loader = new sfPropelData();
$loader->loadData(sfConfig::get('sf_data_dir').'/fixtures');
Note, however, that even sfPropelData and its parent sfData do not generate SQL when loading fixtures; they insert the data through the ORM (as can be seen in sfPropelData::loadDataFromArray()).
If you explain your problem in more detail and why this solution isn't enough, I may be able to help.

Related

Grails 3 script or command with domain classes

All I want is a simple script to update some db tables.
My first try was with create-script, but those scripts do not seem to be able to load domain classes. Then I found people saying that you have to create a command.
But in order to create a command you need to create a plugin.
This does not seem very straightforward for a simple db-update script.
Can somebody enlighten me on this?
Thanks
Torsten
Alright! It looks like this is more about how to perform certain db-related operations that are not straightforward db migrations.
Though there are an enormous number of possible approaches, I would like to discuss the ones most commonly used:
Groovy Shell: You could open a Groovy shell for your Grails project and run a script you created there, whether it is for a data update, a migration, or just a report.
The basic idea is to have a command line available to run scripts that perform a certain task. This link should help more.
DB migration: Though in a comment you already said it's not a db migration, I think your definition of db migration is restricted to schema-related operations only. That's not true: db migrations include schema changes plus CRUD (including complicated versions with joins, etc.). We tend not to put CRUD operations into db migrations because they can be done at bootstrap time, in a service method, or even with an external query or tool.
Application code: Just as you perform any other operation through your application, you could perform this action too. Suppose you have to run some updates and inserts on a table: simply create a service, and a controller that calls this service to produce the desired result. I know this makes the least sense of the three, but the overall idea is to have code that performs the desired action.
I would suggest going ahead with the first option: create a new shell pointing to the same database and perform the action there.

Why use migrations in RoR or Sinatra?

Perhaps a stupid question, and one without code, so I'm not sure I'm on the right StackExchange site. If so, sorry; please let me know in a comment.
I am starting to program in Sinatra (only intranet apps until now), and the samples almost always use migrations with ActiveRecord; the same goes for RoR.
I have enough experience with ActiveRecord itself, and it is very helpful, but I'm not sure why migrations are always used. If I create a project, I just write a SQL script or a Ruby ActiveRecord script that creates the DB for me, and that's it.
If I need the site or script somewhere else, I just execute that script and I'm ready.
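For concreteness, a one-shot creation script of the kind described here might look roughly like this (the adapter, table, and columns are invented for illustration, and the exact ActiveRecord API varies a little between versions):
# standalone_setup.rb: hypothetical "create the whole DB in one go" script
require 'active_record'

ActiveRecord::Base.establish_connection(
  adapter:  'sqlite3',   # any supported adapter would do
  database: 'app.db'
)

ActiveRecord::Schema.define do
  create_table :articles, force: true do |t|
    t.string :title
    t.text   :body
    t.timestamps
  end
end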
Obviously I'm missing a lot here, so can someone explain the other benefits or point me to a good explanation?
From Rails docs:
Migrations are a convenient way to alter your database schema over time in a consistent and easy way. They use a Ruby DSL so that you don't have to write SQL by hand, allowing your schema and changes to be database independent.
So, the two main benefits are:
It's like Git for your db schema; you won't realize how useful that is until you are in a mid-size or big project with many contributors and someone makes a boo-boo :)
You write Ruby code to build your db schema, so if you decide to move from MySQL to PostgreSQL, for example, you don't need to open the PostgreSQL manual and check code compatibility (see the sketch below).
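As a rough illustration of that second point, a migration written in the Ruby DSL (the table and columns here are invented) runs unchanged against MySQL, PostgreSQL, or SQLite; note that newer Rails versions use a versioned superclass such as ActiveRecord::Migration[7.0], and very old ones use self.up/self.down instead of change:
class CreateProducts < ActiveRecord::Migration
  def change
    create_table :products do |t|
      t.string  :name
      t.decimal :price, precision: 8, scale: 2
      t.timestamps
    end
  end
end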
Update
In the API docs for migrations you will find many nice use cases (to be honest, I didn't know about half of them); check them out: http://api.rubyonrails.org/classes/ActiveRecord/Migration.html
Building a creation script for your database is great, provided two things:
All your database deployments are on new machines
You get your database schema exactly right the first time
In our modern agile environment we not only don't believe it is possible for a project larger than a few lines of code, we also don't encourage aspiring to it.
Active Record's Migrations offer a way to describe your database incrementally. Each "migration" defines how to add its features to the database ("upgrade"), and how to remove them if necessary ("downgrade").
When running migrations, on top of actually running the scripts, Active Record also maintains its own table (schema_migrations), to remember which migrations have been applied to the database, and which are pending.
This enables you to build your database alongside the features as you develop them. It also facilitates working in teams, since each team member develops her own migrations, and Active Record "stitches" everything together, so you don't have to maintain a monolithic creation script.
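A minimal sketch of such an incremental migration (the table and column are invented; adjust the superclass to your Rails version):
class AddPublishedToPosts < ActiveRecord::Migration
  # "upgrade": applied by rake db:migrate
  def up
    add_column :posts, :published, :boolean, default: false
  end

  # "downgrade": applied by rake db:rollback
  def down
    remove_column :posts, :published
  end
end
Once it has run, its timestamp is recorded in schema_migrations, so migrating again is a no-op.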

How to unit test database migrations?

I am about to perform database migrations in MVC .NET, and I was wondering how I can unit test this.
For example, I would like to apply the migration in this question: Rename a db column. How could I unit test this?
My thoughts:
Migrate all existing migrations, except my latest one which I am going to test
Add data to the context
Apply new migration
Test that the data is still there
If my thoughts make sense, any ideas on how to apply them in MVC .NET? Thanks!
This is really an integration test. Just my rough idea; the details probably depend on your environment (build server, DB server, how deployments are made, ...).
You always have to start from the same (known) database state. Either create that state manually, commit it into VCS, and always update it manually to M-1 (M = the migration under test), or assume all migrations up to M-1 already work (because they were tested before) and create the database automatically, e.g. with migrate.exe. Then run your steps and verify, preferably through a different "channel" than EF, that the data is there, that the column is there, and so on. Plain SQL through good old ADO.NET is enough here. Because it doesn't need to be versatile, you can write a few simple helpers that run the query over the well-known Connection-Command-Reader path and return the raw data as a simple IEnumerable (I did that myself, with dynamic, to keep it super simple).
My advice is just to keep it simple, nothing fancy and clever. It's just to support testing.

Rails: transfer data from one schema to another

I'm migrating a PHP app to Rails. The new app has a significantly different schema.
Does anybody have experience moving data from one schema to another? Right now, I'm looking at dumping CSV files and writing Ruby scripts to handle the insertion on the other side. I've also considered using Navicat to export/import into a temp database with the new schema (if it's simple enough), then dumping that database and inserting the values into the new database using db:seed.
I figure this is gonna be a total pain whichever way I go -- just hoping to minimize the angst. Thanks in advance!
UPDATE:
Decided to export from Navicat into XML, then use Nokogiri to create seed files for Seed_fu.
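For what it's worth, a rough sketch of that Nokogiri step, assuming an invented XML layout and seed-fu's Model.seed block form (paths and model names are guesses):
require 'nokogiri'

# parse the Navicat XML export and emit a seed-fu fixture file
doc = Nokogiri::XML(File.read('export/users.xml'))

File.open('db/fixtures/users.rb', 'w') do |out|
  doc.xpath('//user').each do |node|
    out.puts 'User.seed(:id) do |s|'
    out.puts "  s.id    = #{node['id'].to_i}"
    out.puts "  s.name  = #{node.at('name').text.inspect}"
    out.puts "  s.email = #{node.at('email').text.inspect}"
    out.puts 'end'
  end
end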
Check out Dr. Nic's Magic Models gem: http://magicmodels.rubyforge.org/dr_nic_magic_models/
Then use rake tasks to iterate through your CSVs and insert using the newly generated models.
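A sketch of such a rake task, under the assumption that there is one CSV per table and the CSV headers already match the new column names (file layout and task name are invented):
# lib/tasks/legacy_import.rake
require 'csv'

namespace :legacy do
  desc 'Import legacy CSV dumps, one file per table'
  task :import => :environment do
    Dir.glob('db/legacy/*.csv').each do |file|
      model = File.basename(file, '.csv').classify.constantize  # users.csv -> User
      CSV.foreach(file, headers: true) do |row|
        model.create!(row.to_h)   # headers must match column names
      end
    end
  end
end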

Rails: Proper work flow to shred XML data into relational SQL database

I have a Rails project and I used migrations to set up the database schema (I'm using sqlite3). I have an XML file that I want to insert into my database. What's the best way to approach this? I'm thinking there's some Ruby script that I can write once and use to parse the XML file and insert it into my database, but intuitively it feels like this is a common problem that should already have been automated in the Ruby/Rails world. I guess some people would call this XML shredding, but querying Google hasn't turned up much for me.
Any thoughts?
db/seeds.rb could be a good place to put the data.
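A minimal sketch of what that could look like with Nokogiri (the element, file, and model names are invented):
# db/seeds.rb
require 'nokogiri'

doc = Nokogiri::XML(File.read(Rails.root.join('db', 'data.xml')))

doc.xpath('//book').each do |node|
  Book.create!(
    title:  node.at('title').text,
    author: node.at('author').text,
    isbn:   node['isbn']
  )
end
rake db:seed then loads it once the schema is in place.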
