Testing PG Constraint in Rails [duplicate] - ruby-on-rails

SO...
I am adding history tables populated by triggers for auditing in my project via something like...
execute <<-SQL
  CREATE OR REPLACE FUNCTION process_history_table() RETURNS TRIGGER AS $history_table$
  BEGIN
    IF (TG_OP = 'DELETE') THEN
      INSERT INTO history_table VALUES (DEFAULT, 'D', now(), OLD.*);
      RETURN OLD;
    ELSIF (TG_OP = 'UPDATE') THEN
      INSERT INTO history_table VALUES (DEFAULT, 'U', now(), NEW.*);
      RETURN NEW;
    ELSIF (TG_OP = 'INSERT') THEN
      INSERT INTO history_table VALUES (DEFAULT, 'I', now(), NEW.*);
      RETURN NEW;
    END IF;
    RETURN NULL; -- result is ignored since this is an AFTER trigger
  END;
  $history_table$ LANGUAGE plpgsql;

  CREATE TRIGGER history_table
  AFTER INSERT OR UPDATE OR DELETE ON table
  FOR EACH ROW EXECUTE PROCEDURE process_history_table();
SQL
...and this will work for production and other environments. The problem is when someone runs bundle exec rake db:drop db:create db:schema:load db:migrate RAILS_ENV=test or something similar (the db:schema:load portion is the important part): this bypasses trigger creation, because triggers are not saved in the db/schema.rb file.
Perhaps the correct solution is to say that, when using Rails, developers should never run db:schema:load and should always run db:migrate instead, so that all migrations can continuously be re-run. However, we have not been operating that way for a long time, and I believe switching now would be quite painful, as we may need to update several dozen migrations or more. Any thoughts on how I could incorporate triggers into my application incrementally, while the developer/test environments continue to be built and re-created the same way as today, would be very helpful.
Thanks!

If you need or want database-specific features that ActiveRecord doesn't understand then you should switch to db/structure.sql for keeping track of your schema. db/structure.sql is pretty much a raw dump of your schema made using the database's native tools so it will contain triggers, CHECK constraints, indexes on function results, and everything else.
Switching is easy:
Update your config/application.rb to contain config.active_record.schema_format = :sql.
Do a rake db:structure:dump to get an initial db/structure.sql.
Delete db/schema.rb from your directory tree and revision control.
Add db/structure.sql to revision control.
Adjust your rake habits:
Use db:structure:dump instead of db:schema:dump
Use db:structure:load instead of db:schema:load
Everything else should work as usual (assuming, of course, that you're sane and using PostgreSQL for development, testing, and production).
With this change made, your triggers will be tracked in db/structure.sql and recreating the database won't lose them.
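In practice the whole switch is one config line plus a change of habits; a rough sketch (the MyApp module name is a placeholder for your own application):

# config/application.rb
module MyApp
  class Application < Rails::Application
    # Dump/load the schema as raw SQL so triggers, functions,
    # CHECK constraints, etc. are preserved
    config.active_record.schema_format = :sql
  end
end

# one-time switch:
#   rake db:structure:dump    # writes db/structure.sql
#   git rm db/schema.rb
#   git add db/structure.sql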

You can use the gem 'paper_trail', which tracks changes to your models, for auditing or versioning.
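The basic setup, roughly as described in the gem's README (the Widget model here is just an example), looks like:

# Gemfile
gem 'paper_trail'

# then:
#   bundle install
#   rails generate paper_trail:install
#   rake db:migrate

# app/models/widget.rb
class Widget < ActiveRecord::Base
  has_paper_trail   # records a version row on create/update/destroy
end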

Isn't https://github.com/jenseng/hair_trigger what you need?
It lets you register your triggers in your project and then create or update them with a command.
NOTE: I always wanted to use it but never got the chance for various reasons, so I can't vouch for the quality of the gem.
EDIT: No, they should always use rake db:schema:load for an existing DB.
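From the gem's README, usage looks roughly like the following: you declare the trigger in the model and generate a migration from it (the AccountUser model and counter column are the README's example, not something from this app):

# app/models/account_user.rb
class AccountUser < ActiveRecord::Base
  trigger.after(:insert) do
    "UPDATE accounts SET user_count = user_count + 1 WHERE id = NEW.account_id;"
  end
end

# then create/update the trigger migrations with:
#   rake db:generate_trigger_migration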

Related

Rails, PostgreSQL, and History Triggers

Can I write PostgreSQL functions on Ruby on Rails?

We are starting a project based on Ruby on Rails. We used to work with Perl and PostgreSQL functions, and with Rails and Active Record I haven't seen how we are supposed to create functions in PostgreSQL and keep track of them alongside Active Record and the models.
I know we can create them manually in PostgreSQL, but the "magic" with Active Record is that the database can be recreated along with all the models.
Is there any way to create PostgreSQL functions using Rails and keep them with the models?
This part of your question:
I know we can create it manually in PostgreSQL, but the "magic" with Active Record is that the database can be recreated with all the models.
tells me that you're really looking for a way to integrate PostgreSQL functions with the normal Rails migration process and Rake tasks such as db:schema:load.
Adding and removing functions in migrations is easy:
def up
  connection.execute(%q(
    create or replace function ...
  ))
end

def down
  connection.execute(%q(
    drop function ...
  ))
end
You need separate up and down methods instead of a single change method because ActiveRecord has no idea how to reverse a raw execute, so a plain change method couldn't be rolled back. You use connection.execute to feed the raw function definition to PostgreSQL. You can also do this with a reversible block inside change:
def change
  reversible do |dir|
    dir.up do
      connection.execute(%q(
        create or replace function ...
      ))
    end

    dir.down do
      connection.execute(%q(
        drop function ...
      ))
    end
  end
end
but I find that noisier than up and down.
However, schema.rb and the usual Rake tasks that work with schema.rb (such as db:schema:load and db:schema:dump) won't know what to do with PostgreSQL functions and other things that ActiveRecord doesn't understand. There is a way around this, though: you can use a structure.sql file instead of schema.rb by setting:
config.active_record.schema_format = :sql
in your config/application.rb file. After that, db:migrate will write a db/structure.sql file (which is just a raw SQL dump of your PostgreSQL database without your data) instead of db/schema.rb. You'll also use different Rake tasks for working with structure.sql:
db:structure:dump instead of db:schema:dump
db:structure:load instead of db:schema:load
Everything else should work the same.
This approach also lets you use other things in your database that ActiveRecord won't understand: CHECK constraints, triggers, non-simple-minded column defaults, ...
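For instance, once you're on structure.sql a migration like the following survives a dump and reload (the table, column, and constraint names are made up for illustration):

class AddPriceCheckToProducts < ActiveRecord::Migration
  def up
    execute "ALTER TABLE products ADD CONSTRAINT price_must_be_positive CHECK (price > 0)"
  end

  def down
    execute "ALTER TABLE products DROP CONSTRAINT price_must_be_positive"
  end
end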
If your only requirement is creating them somewhere in your Rails app, this is possible through ActiveRecord::Base.connection.execute, which you can use to execute raw SQL queries.
stmt = 'CREATE FUNCTION...'
ActiveRecord::Base.connection.execute stmt
You would then call the function using ActiveRecord::Base.connection.execute as well (I'd imagine you'd have methods in your model to handle this).
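For example, a model method wrapping such a call might look like this (the active_user_count function and the method name are hypothetical):

class User < ActiveRecord::Base
  def self.active_count
    # select_value returns the first column of the first row
    ActiveRecord::Base.connection.select_value("SELECT active_user_count()").to_i
  end
end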

What generates the structure.sql file in Rails?

I have 3 different schemas in one Rails application. My preparation of the test database using rake db:test:prepare fails with:
psql:/Users/me/myapp/db/structure.sql:7417: ERROR: relation "schema_migrations" does not exist
LINE 1: INSERT INTO schema_migrations (version) VALUES ('20131213203...
That's because it is not setting the Postgres search_path properly before doing all the insertions into the schema_migrations table. I haven't messed with this code in about 8 months and can't remember what I did. I haven't the faintest idea how I even got those other schemas to dump.
You may want to try rake db:structure:dump and/or rake db:schema:dump and then re-run rake db:test:prepare. The former should create structure.sql and the latter schema.rb.
I was able to accomplish what I needed by doing two things:
Overriding the purge task under db:test in AR's railties to call a custom drop_database_objects method.
Using a little-known attribute in my database.yml: schema_search_path: public
The first thing lets me drop only the database objects I want, leaving my other support databases intact.
The second thing just creates the structure from my main database and doesn't try to create the structure from the other databases. It looks like a bug that structure:dump doesn't set the schema search path appropriately at the end of the structure.sql script, right before the inserts into the schema_migrations table, in a multi-schema instance. These fixes make that unnecessary.
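For reference, that setting lives in config/database.yml under the relevant environment; a minimal sketch (the database name is a placeholder):

# config/database.yml
test:
  adapter: postgresql
  database: myapp_test
  schema_search_path: public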

How to keep overview of the migrations?

I have a question regarding my migrations in Rails.
Normally when I want to add a column to a model I don't create an extra migration; instead I perform these steps:
rake db:rollback
Next I change the migration file in db/migrate and rerun:
rake db:migrate
The biggest problem is that when I do this I lose my data.
Previously I wrote migrations from the command line with, for example:
rails g migration Add_Column_House_to_Users house:string
The problem with this approach is that my db/migrate folder afterwards gets very large and not very clear! I mean, in the end I don't know which attributes the object has! I'm not an expert in Rails and would like to ask you how to keep an overview of the migrations! Thanks
Just a minor thought - I just use the file db/schema.rb to determine what's in the database, as opposed to tracking through the migrations.
You definitely shouldn't use db:rollback on a table with existing data.
I have a few production RoR apps with a ton of data, and there are 100+ entries in the migrations table; adding new migrations to tweak tables is the Rails way to do things. Not sure what you mean by "not very clear", but your schema and data model are going to change over time, and that is OK and expected.
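For example, a small additive migration like the one the question's generator command would produce (class and column names mirror the question) stays easy to read on its own:

class AddColumnHouseToUsers < ActiveRecord::Migration
  def change
    add_column :users, :house, :string
  end
end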
One tip: the migrations are great, but they are just the beginning; you can include complex logic as needed to fix your existing data, like so.
Changing data in an existing table:
def up
  add_column :rsvps, :column_name_id, :integer
  update_data
end

def update_data
  rsvps = Rsvp.where("other_column is not null")
  rsvps.each do |rsvp|
    invite = Blah.find(rsvp.example_id)
    ...
    rsvp.save
  end
end
Another tip: back up your production database often (you should do this anyway), but use a backup to test all of your migrations before deploying. I run scripts like this all the time for local testing:
mysql -u root -ppassword
drop database mydatabase_dev;
create database mydatabase_dev;
use mydatabase_dev;
source /var/www/bak/mydatabase_backup_2013-10-04-16.28.06.sql
exit
rake db:migrate

Create Sequence In Migration Not Reflected In Schema

I have an application that requires a sequence to be present in the database. I have a migration that does the following:
class CreateSequence < ActiveRecord::Migration
  def self.up
    execute "CREATE SEQUENCE sequence"
  end

  def self.down
    execute "DROP SEQUENCE sequence"
  end
end
This does not modify the schema.rb and thus breaks rake db:setup. How can I force the schema to include the sequence?
Note: The sequence exists after running rake db:migrate.
Rails migrations don't capture this, because they aim toward a schema of tables and fields rather than a complete database representation including stored procedures, functions, and seed data.
When you run rake db:setup, this will create the db, load the schema and then load the seed data.
A few solutions for you to consider:
Choice 1: create your own rake task that does these migrations independent of the Rails Migration up/down. Rails Migrations are just normal classes, and you can make use of them however you like. For example:
rake db:create_sequence
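A task like that might be sketched as follows (the task name and sequence name simply match this example):

# lib/tasks/sequence.rake
namespace :db do
  desc "Create the sequence the application requires"
  task :create_sequence => :environment do
    ActiveRecord::Base.connection.execute("CREATE SEQUENCE sequence")
  end
end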
Choice 2: run your specific migration after you load the schema like this:
rake db:setup
rake db:migrate:up VERSION=20080906120000
Choice 3: create your sequence as seed data, because it's essentially providing data (rather than altering the schema).
db/seeds.rb
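A guarded version in db/seeds.rb keeps rake db:setup re-runnable; a sketch (the pg_class lookup just checks whether a sequence named "sequence" already exists):

# db/seeds.rb
conn = ActiveRecord::Base.connection
unless conn.select_value("SELECT 1 FROM pg_class WHERE relkind = 'S' AND relname = 'sequence'")
  conn.execute("CREATE SEQUENCE sequence")
end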
Choice 4 and my personal preference: run the migrations up to a known good point, including your sequence, and save that blank database. Change rake db:setup to clone that blank database. This is a bit trickier and it sacrifices some capabilities - having all migrations be reversible, having migrations work on top of multiple database vendors, etc. In my experience these are fine tradeoffs. For example:
rake db:fresh #=> clones the blank database, which you store in version control
All the above suggestions are good. However, I think I found a better solution: basically, in your development.rb put
config.active_record.schema_format = :sql
For more info see my answer to this issue -
rake test not copying development postgres db with sequences
Check out the pg_sequencer gem. It manages Pg sequences for you as you wish. The one flaw that I can see right now is that it doesn't play nicely with db/schema.rb -- Rails will generate a CREATE SEQUENCE for your tables with a serial field, and pg_sequencer will also generate a sequence itself. (Working to fix that.)
