I have a situation where a call to myobject.save! results in this error:
PG::UniqueViolation: ERROR: duplicate key value violates unique constraint "things_pkey"
DETAIL: Key (id)=(12345) already exists.
: INSERT INTO "things" ("id", ...) VALUES (12345, ...) RETURNING "id"
So Rails has a persisted record but tries to do an INSERT instead of an UPDATE, and includes the id in the INSERT (because, I'm guessing, in the insert case it isn't accustomed to excluding any columns).
Further up in the code there is a save! on the same object, which may or may not have fired in the case I'm looking at. The only notable thing about this save is that it happens inside a rescue block. I did some simple tests in the console to see if an object is somehow not considered persisted when it's created inside a rescue block, and didn't find any such behavior.
What could be causing rails to think my object isn't persisted?
Figured it out!
I was building the object with user.things.build. User#things is not an association; it's a method that returns an ActiveRecord::Relation, and it had recently changed to return Things.where(id: ...). So Rails was obediently copying as much of the query's conditions as possible onto the new object, including the id.
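Here is a minimal sketch of that failure mode (model and column names are illustrative, not taken from the real app). Building off a relation copies its equality conditions onto the new record, and a relation scoped by id copies the id:

class User < ActiveRecord::Base
  # Not a has_many -- just a plain method returning a relation
  def things
    Thing.where(id: thing_id)   # assume thing_id points at an existing row
  end
end

new_thing = user.things.build(name: "example")
new_thing.id          # => the id from the where clause, e.g. 12345
new_thing.new_record? # => true
new_thing.save!       # => INSERT ... ("id", ...) VALUES (12345, ...) -> PG::UniqueViolation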
Related
I am trying to overwrite a record in rails 4.0.
old_p.update_attributes(new_p.attributes)
old_p is the record pulled from the database, new_p is the record the user has created that will replace the record from the database. new_p actually has its own record in the database, but is only stored there temporarily.
This seems to work some of the time, but most of the time it comes back with
ActiveRecord::RecordNotUnique in Controller#overwrite
PG::UniqueViolation: ERROR: duplicate key value violates unique constraint
This is an upgrade from Rails 2, and it seemed to work as expected when it was a Rails 2 app. There is little documentation on update_attributes, but it appears to copy the id of the object as well.
I have also tried assign_attributes with .save later, but to the same effect.
If it is copying the id of the object as well, is there a way to easily leave out the id? The record has 20+ attributes I would otherwise have to enter manually, and they could change often. Or is there something else that I am missing?
You can use
old_p.update_attributes(new_p.attributes.tap { |h| h.delete('id') })
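An equivalent that reads a little more cleanly, assuming ActiveSupport's Hash#except is available (it is in Rails 4):

old_p.update_attributes(new_p.attributes.except('id'))

# If the timestamps should not be copied over either:
old_p.update_attributes(new_p.attributes.except('id', 'created_at', 'updated_at'))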
I have an ActiveRecord object that I load from database.
When I call valid? on this object, it returns false because a uniqueness validation is not met, or at least that's what the validation says.
I checked the database schema and the unique field also has an index defined, so the uniqueness is also ensured on the database level.
What is going on here and how is this even possible in the first place?
You should check @object.errors.inspect to see what's going on, and then fix accordingly.
Also, it matters when you check the validity of the object, i.e. before save or after save.
The more elegant way is to use @object.save!, which should raise an exception telling you what went wrong during the attempt to save the object.
If you do not have unique indexes defined on your database tables, this is what happens!
To be a bit more elaborate: I thought the database had a unique index on the column, but that turned out to be a 'regular' index.
The problem occurred because at some point in the application the model got saved without being validated first, which led to non-unique entries in the database. Calling valid? triggers the Rails-internal routine that checks for uniqueness (however that is implemented), which correctly returned false.
Lesson learned: Always make sure to add a unique index at the database level.
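A minimal sketch of that migration (table and column names are placeholders, since the question doesn't name them):

class AddUniqueIndexToUsersEmail < ActiveRecord::Migration
  def change
    add_index :users, :email, unique: true
  end
end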
I have a table with the uniqueness constraint set up in the table itself, e.g.
create table posts (
  id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY UNIQUE,
  title varchar(255) unique,
  content text
);
Here title is unique. Do I also need to inform the model class about this uniqueness? If I don't and I insert a duplicate title, it gives me an error. How do I catch that? Currently Rails shows me the backtrace and I could not display my own error message.
def create
  @f = Post.new(params[:post])
  if @f.save
    redirect_to posts_path
  else
    flash.now[:message] = "Duplicated title"
    render :action => 'new'
  end
end
I am not being redirected to new; instead I see a big backtrace.
Use the validates_uniqueness_of validation. "When the record is created, a check is performed to make sure that no record exists in the database with the given value for the specified attribute (that maps to a column)"
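A minimal sketch for the model in the question:

class Post < ActiveRecord::Base
  validates_uniqueness_of :title
  # equivalent newer syntax:
  # validates :title, uniqueness: true
end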
You will have to add all of the validations to your models. There is a gem called schema_validations, which will inspect your db for validations and create them in your models for you. https://github.com/lomba/schema_validations
Yes, you do, as noted in other answers; the answer is validates_uniqueness_of - http://ar.rubyonrails.org/classes/ActiveRecord/Validations/ClassMethods.html#M000086. Note that even though you have a validation in your model, a race condition still exists where Rails may try to do two inserts, unaware that a matching record is already in the table:
When the record is created, a check is performed to make sure that no
record exists in the database with the given value for the specified
attribute (that maps to a column). When the record is updated, the
same check is made but disregarding the record itself.
Because this check is performed outside the database there is still a
chance that duplicate values will be inserted in two parallel
transactions. To guarantee against this you should create a unique
index on the field. See add_index for more information.
So what you have done by creating a unique index on the database is right, though you may still get database driver exceptions in your exception log. There are workarounds for this, such as rescuing the duplicate-insert exception when it happens (for example, when a double click submits the same form twice).
The Michael Hartl Rails Tutorial covers uniqueness validation (re. the "email" field) here. It appears the full uniqueness solution is:
1. Add the :uniqueness validation to the model.
2. Use a migration to add the unique index to the DB.
3. Trap the DB error in the controller. Michael's example is the Insoshi people_controller -- search for the rescue ActiveRecord::StatementInvalid statement.
Re. #3, it looks like Michael just redirects to the home page on any DB statement exception, so it's not as complex (nor as accurate) as the parsing suggested by @Ransom Briggs, but maybe it's good enough if, as @Omar Qureshi says, the uniqueness constraint covers 99% of the cases.
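A hedged sketch of step 3 for the posts example above: rescue the database-level violation in the controller. ActiveRecord::RecordNotUnique (a subclass of StatementInvalid, available since Rails 2.3) is what the adapter raises when the unique index rejects the insert:

def create
  @f = Post.new(params[:post])
  if @f.save
    redirect_to posts_path
  else
    flash.now[:message] = "Duplicated title"
    render :action => 'new'
  end
rescue ActiveRecord::RecordNotUnique
  flash.now[:message] = "Duplicated title"
  render :action => 'new'
end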
I just realized one of my models has object_id as a column, and that's going to cause problems.
Any suggestions on possible alternatives for the name object_id?
What is this column supposed to be mapping to? Is it a foreign key to an objects table?
Figure out what you're really trying to represent. It's probably not just any generic thing in the whole world. (If it is, maybe things is a better name.)
If you're working under constraints and you absolutely must have that object_id column, you could still refer to it directly with attributes['object_id'] and bypass Rails's magic methods.
As a last resort, you could overwrite the method with your own #object_id method that simply returns that attribute from your database (this is what Rails did with the #id method). I can't think of anything that would definitely break off the top of my head, but it's probably not a great idea. The object ID is used for a lot of miscellaneous things, so you may get strange behavior if you do object comparisons, use your object as a hash key, etc.
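A hedged illustration of that direct access (the model name is a placeholder; the column is object_id as in the question):

record = Widget.first
record.attributes['object_id']   # raw column value; attributes uses string keys
record[:object_id]               # ActiveRecord's [] reader also reads the column
record[:object_id] = 42          # and []= writes it
record.object_id                 # still Ruby's object id, not the column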
You don't need to use object_id at all in your model. And there should be no column named object_id in the database.
object_id is just a default method that all objects (except BasicObject in Ruby 1.9) have (see docs).
Returns an integer identifier for obj. The same number will be returned on all calls to id for a given object, and no two active objects will share an id. Replaces the deprecated Object#id.
2.object_id # 5 or anything, but the same
2.id # NoMethodError: undefined method `id' for 2:Fixnum
2.object_id # 5
"anything has object_id".object_id # 22522080
"anything has object_id".object_id # 22447200 - string has different object_id because it's a new instance everytime
So just use id to access the database identifier of ActiveRecord objects; that is the method Ruby on Rails creates for model objects.
Or, if you do need the database column to be called object_id, then you can create a method on the ActiveRecord model like this:
def general_id
  read_attribute(:object_id)
end
I'm kind of surprised that Rails doesn't create __object_id__ as a reference to the original form of object_id, like send has a Ruby variant called __send__.
Edit: The Ruby method __id__ appears to be the __send__ equivalent for object_id. It may be safe to use object_id as a method for your foreign key, or it may not. I don't actually use Rails in my current job.
I've set up a trigger-based partitioning scheme on one of our pg 8.3 databases according to the pg docs here. Basically, I have a parent table along with several child tables. An insert trigger on the parent redirects any inserts on the parent into the appropriate child table -- this works well.
The ActiveRecord pg adapter, however, seems to rely on the postgres INSERT ... RETURNING "id" extension to get the id of the returned row after the initial insert. But the trigger seems to break the RETURNING clause -- no id is returned, although the row is created correctly.
While I suppose this behavior makes sense -- after all, nothing is being inserted into the main table -- I really need to find some kind of work-around, as other child records will be inserted that require the row id of the just-inserted row.
I suppose I could add some kind of unique id to the row prior to insert and then re-read it using this key after insert, but this seems pretty kludgy. Does anyone have a better work-around?
Since Rails v2.2.1, you can turn off the 'returning id' behavior just by overriding the #supports_insert_with_returning? method in PostgreSQLAdapter.
class ActiveRecord::ConnectionAdapters::PostgreSQLAdapter
  # With RETURNING disabled, the adapter falls back to reading the new id
  # from the table's sequence (currval) after the insert.
  def supports_insert_with_returning?
    false
  end
end
Currently it looks like my best option is to just change the table prefix in a before_create callback so that the insert happens on the underlying partition table directly, bypassing the insert trigger altogether. This is not a perfect solution, but it seems to be the most performant and the simplest.
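A rough sketch of that table-prefix idea (Rails 2.x style; the model, partition naming scheme, and date column are all assumptions). Note that set_table_name changes class-level state, so every write must go through this path and reads may need the parent table name restored; this illustrates the idea rather than a drop-in solution:

class Measurement < ActiveRecord::Base
  before_create :target_partition_directly

  private

  def target_partition_directly
    # Insert straight into the child table so the parent's insert trigger
    # (and the broken INSERT ... RETURNING) is bypassed entirely.
    self.class.set_table_name("measurements_#{recorded_at.strftime('%Y%m')}")
  end
end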
The only other solution I can come up with is to add a guid column to each table and re-read the row from the partition table by guid immediately after insert to get the id.
Any other suggestions are welcome. Thanx -- m