Rails: Oracle constraint violation

I'm doing maintenance work on a Rails site that I inherited; it's driven by an Oracle database, and I've got access to both development and production installations of the site (each with its own Oracle DB). I'm running into an Oracle error when trying to insert data on the production site, but not the dev site:
ActiveRecord::StatementInvalid (OCIError: ORA-00001: unique constraint (DATABASE_NAME.PK_REGISTRATION_OWNERSHIP) violated: INSERT INTO registration_ownerships (updated_at, company_ownership_id, created_by, updated_by, registration_id, created_at) VALUES ('2006-05-04 16:30:47', 3, NULL, NULL, 2920, '2006-05-04 16:30:47')):
/usr/local/lib/ruby/gems/1.8/gems/activerecord-oracle-adapter-1.0.0.9250/lib/active_record/connection_adapters/oracle_adapter.rb:221:in `execute'
app/controllers/vendors_controller.rb:94:in `create'
As far as I can tell (I'm using Navicat as an Oracle client), the DB schema for the dev site is identical to that of the live site. I'm not an Oracle expert; can anyone shed light on why I'd be getting the error in one installation and not the other?
Incidentally, both the dev and production registration_ownerships tables are populated with lots of data, including duplicate entries for company_ownership_id (driven by the PK_REGISTRATION_OWNERSHIP index). Please let me know if you need more information to troubleshoot; I'm sorry I haven't given more already, but I just wasn't sure which details would be helpful.
UPDATE: I've tried dropping the constraint on the production server but it had no effect; I didn't want to drop the index as well because I'm not sure what the consequences might be and I don't want to make production less stable than it already is.
Curiously, I tried executing the failing SQL by hand, and Oracle accepted the insert statement (though I had to wrap the date literals in to_date() calls to get around an "ORA-01861: literal does not match format string" error). What might be going on here?
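The hand-run version looked something like the following (the exact to_date() format mask here is a guess; the column list and values are copied from the error above):
-- Same statement as in the error, with the date literals wrapped in to_date()
INSERT INTO registration_ownerships
  (updated_at, company_ownership_id, created_by, updated_by, registration_id, created_at)
VALUES
  (to_date('2006-05-04 16:30:47', 'YYYY-MM-DD HH24:MI:SS'), 3, NULL, NULL, 2920,
   to_date('2006-05-04 16:30:47', 'YYYY-MM-DD HH24:MI:SS'));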

Based on the name of the constraint, PK_REGISTRATION_OWNERSHIP, you have a primary key violation. If these databases aren't maintained in lockstep, something or someone has already inserted a record into the registration_ownerships table in your production database with company_ownership_id=3 and registration_id=2920 (I'm guessing at the specifics based on the names).
If this particular set of values needs to exist in the production database:
1) Check that what's already there isn't what you're trying to insert (a quick check is sketched below); if it is, you're done.
2) If you need to insert your sample data as-is, you need to modify the existing data and re-insert it (along with all the dependent/referring records); then you can insert your values.
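A quick check against production, assuming those two columns make up the primary key (values taken from the failing INSERT):
-- If this returns a row, the INSERT is colliding with data that is already there
SELECT *
FROM registration_ownerships
WHERE company_ownership_id = 3
  AND registration_id = 2920;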

If you query the table and find no matching rows, then one of the following may be the cause:
The session is trying to insert the row twice.
Another session has inserted the row, but hasn't committed yet.
Also, check that the state of the unique constraint is the same between dev and prod. Perhaps the one on dev is marked as not validated; check that the index exists on dev and is a unique index (note: in Oracle it is possible for a unique constraint to be enforced by a non-unique index).
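A quick way to compare the constraint's state, using the standard ALL_CONSTRAINTS dictionary view (run it against both databases):
-- STATUS and VALIDATED should read ENABLED / VALIDATED in both environments
SELECT constraint_name, status, validated, index_name
FROM all_constraints
WHERE constraint_name = 'PK_REGISTRATION_OWNERSHIP';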

Take a hard look at the underlying unique index for the constraint. The reason dropping the constraint doesn't change anything is because the index remains, and it's a unique index. What does the following tell you about the indexes in both environments? Are both indexes valid? Are both defined the same? Are they both actually unique?
SELECT ai.table_name, ai.index_name, ai.uniqueness, aic.column_name, ai.status
FROM all_constraints ac JOIN all_indexes ai ON (ac.index_name = ai.index_name)
JOIN all_ind_columns aic ON (ai.index_name = aic.index_name)
WHERE ac.owner = 'YOUR_USER'
AND ac.constraint_name = 'PK_REGISTRATION_OWNERSHIP'
ORDER BY ai.index_name, aic.column_position;

As it happens, there was a spare copy of the "registrations" model lying around the directory; even though it had a different name ("registrations_2349871.rb" or something like that), Rails was running all model functionality (saving, validating, etc.) twice, hence the key constraint violation! I've never seen behavior like this before. Deleting the rogue file solved the problem.

Related

RailsAdmin: ActiveRecord::RecordNotUnique, inserts id when creating user #2972

I seem to be getting a problem when inserting into the users table. I am not sure why, only that it is inserting the current user's id (confirmed by seeding an additional user). I know the solution would be to stop adding the id when adding users, but I don't know how, and I have been trying to find the right file for 30 minutes. I am using MySQL. The error is below:
ActiveRecord::RecordNotUnique in RailsAdmin::MainController#new
Mysql2::Error: Duplicate entry '1' for key 'PRIMARY': INSERT INTO `users`
Any possible solution to this? I am willing to fix if someone just points me to the right file(s). Thanks!
This is my first answer, so take this with many grains of salt. I've had similar issues in the past when I've messed around with the database directly in SQL and ignored callbacks in my models; that messes up the primary key sequence. Some version of resetting the primary key usually helped. Something like:
https://apidock.com/rails/ActiveRecord/ConnectionAdapters/PostgreSQLAdapter/reset_pk_sequence%21
Should look something like: ActiveRecord::Base.connection.reset_pk_sequence!('users')
That might be PostgreSQL-only, however; you might have to find a MySQL way to do it. Hope that helps!
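For reference, a rough sketch of the MySQL way (the users table name is taken from the error message, so adjust to your schema as needed):
-- First, see whether the counter really is behind the data:
-- Auto_increment in the first result should be larger than MAX(id) from the second
SHOW TABLE STATUS LIKE 'users';
SELECT MAX(id) FROM users;

-- If it is behind, realign it. On InnoDB a value at or below the current MAX(id)
-- is silently bumped to MAX(id) + 1, so this resets the counter to just past the data
ALTER TABLE users AUTO_INCREMENT = 1;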

Rails add_foreign_key limited to a single reference field

I am working with a database that uses a lot of constraints defined within the schema. This is necessary, to ensure that other services and clients that use the database do not break the data model (please don't reply that this level of DB definition is inappropriate for a Rails application). Unfortunately this seems to take Rails beyond its ability to define, dump and subsequently recreate schemas, unless somebody knows something that I have missed.
The specific issue that I have encountered is with add_foreign_key statements in schema.rb, and I am looking to see if anybody knows a workaround that will save me embedding SQL directly into the schema.rb definition.
The Postgres DDL that I need to represent is:
ALTER TABLE ONLY trackers
ADD CONSTRAINT valid_protocol_sub_process
FOREIGN KEY (protocol_id, sub_process_id)
REFERENCES sub_processes(protocol_id, id) MATCH FULL;
Unfortunately, when I rake db:schema:dump the existing database to schema.rb this results in the following:
add_foreign_key "trackers", "sub_processes",
column: "protocol_id",
primary_key: "protocol_id",
name: "valid_protocol_sub_process"
When recreating the database, this produces an invalid specification that only includes a single field and (fortunately) fails to run, since the resulting schema constraints would be incorrect.
I have attempted to change the primary_key and column option strings to include both fields to match the required SQL, but ActiveRecord puts quotes around the whole lot, making the SQL statement invalid. I also attempted to use an array of columns, but it appears to just #to_s the array.
Is this just beyond the ability of add_foreign_key, or is there a way to use multiple fields in a foreign key specification?
It appears that there is no check that schema.rb can validly represent the full database when you use database-specific DDL. Although I understand that schema.rb may not be able to represent every possibility, it is unfortunate that no error is produced to indicate that the schema.rb generated by rake is invalid.
In order to get a full SQL dump of the database, performed by the database's own schema dumping tool, I added:
config.active_record.schema_format = :sql
to application.rb. This ensures that in future I get a valid, usable database schema to rebuild an environment.

Rails on heroku: after push, get "PG::UniqueViolation: ERROR: duplicate key value violates unique constraint"

This has been asked several times before (here and here, and more).
Every time I push my rails app to Heroku (for at least the last few months, I'd say), I have to reset my keys using the familiar
ActiveRecord::Base.connection.tables.each { |t| ActiveRecord::Base.connection.reset_pk_sequence!(t) }
incantation. Otherwise I get postgresql failures like this when I try to create new records:
PG::UniqueViolation: ERROR: duplicate key value violates unique constraint "users_clients_pkey" DETAIL: Key (id)=(401) already exists. : INSERT INTO "users_clients" ("user_id", "client_id") VALUES (20, 46) RETURNING "id"
(This is an example; it happens on various tables, depending on what the first action is that's done on the app after a push.)
Once I do the reset-keys incantation, it's fine until my next push to heroku... even when my push does not include any migrations.
I'm a little baffled as to why this is happening and what can be done to prevent it.
No, there's no database manipulation code in my deployment tasks.
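One way to see whether the sequence really is behind the data after a push (the sequence name is assumed to follow the Rails default of users_clients_id_seq):
-- If last_value here is below MAX(id), the next INSERT will collide
SELECT last_value FROM users_clients_id_seq;
SELECT MAX(id) FROM users_clients;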
It's happening because the primary key (id) value already exists. Why? Because the primary key sequence in Postgres is out of sync with the data. Without looking at the database or knowing the schema it's difficult to suggest a solution, but if your database can afford 10-15 minutes of downtime and the problem is confined to one table, you can try the following:
1) Disable writes to your app.
2) Export all of that table's data into a new table, with a new name, omitting the id column.
3) Drop the existing table and rename the newly created table to the old table's name.
4) Enable writes to your app again.
But if the entire DB is in a mess, it will need something more elaborate, and I can't tell without looking at the schema.
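If the data itself is fine and only the sequence is lagging, a lighter fix than rebuilding tables is to realign the sequence directly; this is essentially what the reset_pk_sequence! incantation above does under the hood. The sequence name below assumes the Rails default of table_name_id_seq:
-- Point the sequence at the current maximum id so the next INSERT gets a fresh value
SELECT setval('users_clients_id_seq', COALESCE((SELECT MAX(id) FROM users_clients), 1));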

Scaffolding user ID resetting

In the application I am currently creating in Ruby on Rails, I am trying to do some tests in the Rails console where I have to destroy data in the database (the database is connected to a server). I am importing an XML file, parsing it, and putting it into the database through scaffolding.
What I need: basically, I am attempting to destroy the data and replace it with a new set every week. The problem I am getting is that the user id has gone up to 700+ even though there are only 50 records, because it doesn't reset.
To delete all records I am currently using "whatever.destroy_all", which does the trick.
Any help?
By the way, I am using SQLite.
The ID column created in the table usually is set as unique and to increment by 1 for each new record, which is why each time you destroy and add new data the ID keeps getting higher.
The fact that the ID # is getting larger and larger is not an issue at all.
If you really want to start back at zero, I would think you could drop the table and recreate it, but that seems like overkill for a trivial issue.
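If you really do want the count to start over without dropping anything, SQLite keeps per-table counters in its sqlite_sequence table, and clearing the relevant row resets it (this assumes the id column was created with AUTOINCREMENT, which the Rails SQLite adapter uses by default; 'whatevers' stands in for your actual table name):
-- Run after destroy_all; the next insert will start the id back at 1
DELETE FROM sqlite_sequence WHERE name = 'whatevers';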
Regarding the connection to the other scaffold, how are you connecting the two and what do they both represent?
Ideally, data population for testing should be done through fixtures (or tools such as FactoryGirl).
The main advantage of having a fixed data set is that you can run your tests in any environment. But given your requirement, you can do something like this:
When you populate the data through Active Record, pass the id parameter as well.
Ex: User.create(:id => 1, :name => "sameera")
This way you can have constant ids, but make sure you increment the id accordingly.

Symfony schema works on my local server, but gives foreign_key constraint issues moving to Dreamhost

I'm using Symfony 1.4 and Doctrine. I've spent the last couple of days playing with my schema, and I've gotten it to load up, build, and behave properly, but only on my local machine. When I copy the files to an account on Dreamhost and change the configuration to connect to that database (and nothing else), I get the following error when trying to delete something that should cascade (and does when I delete it on my local machine):
SQLSTATE[23000]: Integrity constraint violation: 1451 Cannot delete or update a parent
row: a foreign key constraint fails (`ezshirtdb`.`item_options`, CONSTRAINT
`item_options_item_id_items_id` FOREIGN KEY (`item_id`) REFERENCES `items` (`id`))
This is my schema: http://pastie.org/1097068
These are my fixtures: http://pastie.org/1097072
The tables in the Dreamhost DB are all InnoDB; the server's default storage engine seems to be MyISAM. Is that an issue? In this case, I can't delete Item #1, which has ItemOptions associated with it, or any of the categories (which have items associated with them).
I'm totally lost, and could use a couple pointers. Thanks y'all.
I got errors like this a while back, and it was due to the foreign key column being generated with a different integer size from the column it references.
Take a look at the database and ensure that the items.id field and the item_options.item_id field are the same type.
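A quick way to compare the two columns on the Dreamhost database (plain MySQL; table and column names taken from the error message):
-- The type, size, signedness and NULL-ability should match between the two
SHOW COLUMNS FROM items LIKE 'id';
SHOW COLUMNS FROM item_options LIKE 'item_id';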
Delete the database and create it again. Whether it is MyISAM or InnoDB is irrelevant.
