I'd like to create a unique index for a table, consisting of 3 columns, but enforced only when one of them has a specific value:
something like
add_index :table, [:col1, :col2, :col3], unique: true
but only when col3 = true;
otherwise (col3 = false) I don't care about the uniqueness of col1 and col2.
Is there a way to do this in a migration so it's enforced at the DB level, or can I only validate this case in the model?
I don't believe you can have conditional uniqueness constraints at the database layer (via migrations). You can add this as a conditional validation at the AR layer, though, which should be sufficient for your purposes (though it should be noted this can introduce some race conditions), i.e.:
validates :col1, uniqueness: { scope: [:col2, :col3] }, if: :col3
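For illustration, here is how that behaves in the console, using a hypothetical Widget model carrying the validation above (the model name is only an example):
Widget.create!(col1: "a", col2: "b", col3: true)
Widget.new(col1: "a", col2: "b", col3: true).valid?   # => false, duplicate while col3 is true
Widget.new(col1: "a", col2: "b", col3: false).valid?  # => true, uniqueness not checked when col3 is false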
Hope that helps.
Related
I'm using the schema_plus gem in Rails, and I want to prevent duplicate data from entering my database when the combination of two columns is the same.
Shouldn't this work?
t.string :name, null: false, index: { with: :deleted_at, unique: true }
:deleted_at is definitely a column in my migration,
but this still allows me to enter the same name twice on the MySQL side.
Also, here is the index info that MySQL gives me:
Index: index_plans_on_name_and_deleted_at
Type: BTREE
Unique: Yes
Columns: name, deleted_at
EDIT:
It looks like I needed a default value for deleted_at (MySQL treats NULLs as distinct in a unique index, so two rows with a NULL deleted_at never conflict), like:
t.datetime :deleted_at, default: '2000-01-01'
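In plain Rails migration syntax (without the schema_plus index shorthand), a sketch of the working combination could look like this, assuming a plans table:
create_table :plans do |t|
  t.string   :name,       null: false
  t.datetime :deleted_at, default: '2000-01-01'  # non-NULL default so the unique index actually catches duplicates
end
add_index :plans, [:name, :deleted_at], unique: true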
Not sure about the gem, but you can directly achieve this in ActiveRecord using:
validates :name, :uniqueness => {:scope => :deleted_at}
assuming you want the combination of name and deleted_at to be unique.
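As a quick sanity check in the console (assuming a Plan model with that validation), note that, unlike the MySQL unique index, the ActiveRecord check also treats two NULL deleted_at values as a conflict:
Plan.create!(name: "basic", deleted_at: nil)
Plan.new(name: "basic", deleted_at: nil).valid?  # => false, "name has already been taken"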
I have the following validator in my model:
validates_uniqueness_of :item_id, conditions: -> { where.not(status: "published") }
What does the condition target here? Does it prevent the validator from running at all on a row with status: "published", or does it extend the uniqueness validator so that rows with status: "published" are excluded from the duplicate lookup (yes, there is a difference)?
Is there also a difference between the above validator and the following, assuming that status_published? is a method that checks whether the status is "published"?
validates_uniqueness_of :item_id, :unless => lambda { status_published? }
And finally, if there is no difference, how can I accomplish the second case, where the uniqueness validator checks that the value is unique only among the rows that match the condition?
According to the documentation, conditions limit the uniqueness lookup to the set of records that match them, so duplicates are only searched for among those rows; an :unless/:if option, by contrast, skips the validation entirely for records that don't satisfy it.
On a side-note, if you're doing this in Rails 4, you may want to look at the new validates syntax:
validates :item_id, uniqueness: true, unless: :status_published?
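To make the difference concrete, here is a sketch using a hypothetical Listing model with item_id and status columns (the class name, columns, and status_published? helper are only illustrative):
class Listing < ActiveRecord::Base
  # conditions: the validation runs for every record, but duplicates are
  # only searched for among rows that are NOT published
  validates :item_id, uniqueness: { conditions: -> { where.not(status: "published") } }

  # unless: the uniqueness check is skipped entirely for published records
  # validates :item_id, uniqueness: true, unless: :status_published?

  def status_published?
    status == "published"
  end
end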
I'm using the validates_overlap gem (https://github.com/robinbortlik/validates_overlap) in a Rails app. Here is the Model code:
validates :start_time, :end_time, overlap: { scope: "device_id", exclude_edges: ["start_time", "end_time"] }
And here is the SQL it triggers:
SELECT 1 AS one FROM "bookings" WHERE
((bookings.end_time IS NULL OR bookings.end_time > '2014-04-11 13:00:00.000000') AND
(bookings.start_time IS NULL OR bookings.start_time < '2014-04-11 16:00:00.000000') AND
bookings.device_id = 20) LIMIT 1
I just want to know whether I should be adding an index in my Postgres database that covers start_time, end_time and device_id, or something similar, e.g. something like this:
add_index :bookings, [:device_id, :start_time, :end_time], unique: true
Adding the above unique index to ensure database consistency would make no sense: you are validating a range AND excluding the actual edges, whereas a unique index would check exactly the edges.
Adding a non-unique index to speed up the validation is a good idea, though. To choose one, analyze your data and the queries your app runs.
The easiest approach is to simply add a single index for each column; Postgres can still combine these for the multicolumn query (see the Heroku Dev Center).
A multicolumn index is only necessary if performance really matters (or you do not query the columns in other combinations). If so, device_id should come first in the index. Rule of thumb: index for equality first, then for ranges.
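A sketch of both options, assuming the bookings table from the question:
# Option 1: one single-column index per column; Postgres can combine them
add_index :bookings, :device_id
add_index :bookings, :start_time
add_index :bookings, :end_time

# Option 2: one multicolumn index, equality column first, then the ranges
add_index :bookings, [:device_id, :start_time, :end_time]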
I would like to set an index based on 4 columns in my db in order to ensure fast lookups and also to ensure that no two rows have identical entries in ALL 4 columns, thus ensuring uniqueness based on the 4 columns. I know this can be done in other languages and frameworks, but can it be done in Rails?
I have seen the following command to set an index in rails:
add_index "users", ["email"], :name => "index_users_on_email", :unique => true
However, can something similar be done for more than 1 column?
Also, if this cannot be done for more than one column, how do people handle uniqueness based on multiple columns in Rails?
Yes, you can create an index on multiple columns in Rails.
add_index "users", ["email", "col1", "col2", "col3"], :name => "my_name", :unique => true
should work. As long as you specify the name you should be good.
PostgreSQL (not sure about MySQL) limits identifier names to 63 characters, so when using add_index for multiple columns, make sure you're either giving a custom name, or that your column names are short enough that the auto-generated index_users_on_col1_and_col2_and_col3 fits under the limit, because otherwise it could screw things up for you.
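For completeness, a model-level counterpart (a sketch, assuming a User model with those columns) scopes the uniqueness check to the same combination:
class User < ActiveRecord::Base
  validates :email, :uniqueness => { :scope => [:col1, :col2, :col3] }
end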
I have a table called scheduled_sessions and a Boolean column called instructor_performed to check if the instructor did teach the class or not.
So I need to find all the records where the instructor didn't teach the class. But I can't do this: ScheduledSession.where(:instructor_performed => false) because if the column is NULL for a row, that record won't be returned. I just need all records that are NOT true.
It sounds like your instructor_performed column can be true, false, or NULL, so you need to query for false or NULL, like this:
ScheduledSession.where(instructor_performed: [false, nil])
You could avoid this complexity if you'd set up your database table to disallow null values in that column. You can specify this constraint when you create the table in a migration:
add_column :scheduled_sessions, :instructor_performed, :boolean,
  null: false, default: false
or
create_table :scheduled_sessions do |t|
t.boolean :instructor_performed, null: false, default: false
...
end
Or you can change the constraint for an existing column:
change_column_null :scheduled_sessions, :instructor_performed, false, false
In all of the above, we're setting the column to allow only true or false values, and we're telling it to use a default value of false. (Without setting a default, or without the final false argument to change_column_null that replaces existing NULLs, you couldn't add the not-null constraint because your existing data would violate it.)
I almost always disallow nulls when I'm setting up boolean columns (unless I truly want tri-state attributes), because it lets me do this to find everything that's not true:
ScheduledSession.where(instructor_performed: false)
Note that other answers (now deleted) that encouraged use of an SQL fragment like "instructor_performed != true" won't work because SQL won't let you use = or != to match a NULL value. Kind of weird, but them's the rules. Instead SQL makes you do this:
SELECT * from scheduled_sessions WHERE instructor_performed IS NULL
OR instructor_performed = FALSE;
which the above Rails where query hides from you somewhat, as long as you're still aware that you're searching for two values.
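If this lookup is needed in several places, it can be wrapped in a scope; a sketch, assuming the ScheduledSession model from the question:
class ScheduledSession < ActiveRecord::Base
  # matches rows where instructor_performed is false OR NULL
  scope :not_performed, -> { where(instructor_performed: [false, nil]) }
end
ScheduledSession.not_performed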