How to enforce uniqueness of an entire row?

I've seen other SO questions, like How do you validate uniqueness of a pair of ids in Ruby on Rails?, which describe adding a scope parameter to enforce uniqueness of a key pair, i.e. (from the answer):
validates_uniqueness_of :user_id, :scope => [:question_id]
My question is how do you do this kind of validation for an entire row of data?
In my case, I have five columns and the data should only be rejected if all five are the same. This data is not user entered and the table is essentially a join table (no id or timestamps).
My current thought is to search for a record with all of the column values and only create it if the query returns nil, but this seems like a bad workaround. Is there an easier 'Rails way' to do this?

You'll need to create a custom validator (http://guides.rubyonrails.org/active_record_validations.html#performing-custom-validations):
class TotallyUniqueValidator < ActiveModel::Validator
  def validate(record)
    if record.attributes_for_uniqueness.values.uniq.size == 1
      record.errors[:base] << 'All fields must be unique!'
    end
  end
end

class User < ActiveRecord::Base
  validates_with TotallyUniqueValidator

  def attributes_for_uniqueness
    # `attributes` returns a hash with string keys, so except needs strings
    attributes.except 'created_at', 'updated_at', 'id'
  end
end
The important line here is:
if record.attributes_for_uniqueness.values.uniq.size == 1
This grabs a hash of all the attributes you want to check for uniqueness (in this case, everything except id and the timestamps), converts it to an array of just the values, then calls uniq on it, which returns only the unique values. If the resulting size is 1, all the values were the same.
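The check itself can be seen in plain Ruby, away from ActiveRecord; the attribute hash below is made-up data:

```ruby
# Take the attribute hash's values, de-duplicate them, and see
# whether only one distinct value remains.
attributes = { 'a' => 1, 'b' => 1, 'c' => 1 }

all_same = attributes.values.uniq.size == 1
# => true; with { 'a' => 1, 'b' => 2, 'c' => 1 } it would be false
```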
Update based on your comment that your table doesn't have an id or timestamps:
You can then simply do:
if record.attributes.except('id').values.uniq.size == 1
...because I'm pretty sure it still has an id. If you're certain it doesn't, just remove the except part.

You can add a unique index to the table in a migration:
add_index :widgets, [:column1, :column2, :column3, :column4, :column5], unique: true
The resulting index will require that each combination of the 5 columns must be unique.
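What the unique index guarantees can be sketched in plain Ruby with a Set of value tuples (the rows below are made-up data): a row is rejected only when the full five-column combination already exists.

```ruby
require 'set'

seen = Set.new
rows = [
  [1, 2, 3, 4, 5],
  [1, 2, 3, 4, 6],  # differs in one column only: accepted
  [1, 2, 3, 4, 5]   # exact duplicate of the first row: rejected
]

# Set#add? returns nil when the element is already present
accepted = rows.select { |row| seen.add?(row) }
# keeps the first two rows, drops the duplicate
```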


How to check if there is an index on a column through Rails console?

Let's say I do Image.column_names and that shows all the columns, such as post_id, but how do I check if post_id has an index on it?
There is an index_exists? method on one of the ActiveRecord "connection adapters" classes.
You can use it in one of the following ways:
ActiveRecord::Migration.connection.index_exists? :images, :post_id
ActiveRecord::Base.connection.index_exists? :images, :post_id
If you know the name of the index instead, you'll need to use index_name_exists?:
ActiveRecord::Base.connection.index_name_exists? :images, :index_images_on_join_key
As others have mentioned, you can use the following to check if an index on the column exists:
ActiveRecord::Base.connection.index_exists?(:table_name, :column_name)
It's worth noting, however, that this only returns true if an index exists that indexes that column and only that column. It won't return true if you're using compound indices that include your column. You can see all of the indexes for a table with
ActiveRecord::Base.connection.indexes(:table_name)
If you look at the source code for index_exists?, you'll see that internally it uses indexes to figure out whether or not your index exists. So if, like me, its logic doesn't fit your use case, you can loop through these indexes yourself and see if one of them works. In my case, the logic was:
ActiveRecord::Base.connection.indexes(:table_name).select { |i| i.columns.first == column_name.to_s }.any?
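The same filtering can be exercised against stubbed data; the Index struct below mimics the shape of the objects connection.indexes returns (name and columns), and the index names are invented:

```ruby
Index = Struct.new(:name, :columns)
indexes = [
  Index.new('index_images_on_post_id_and_user_id', %w[post_id user_id]),
  Index.new('index_images_on_user_id', %w[user_id])
]

column_name = :post_id
covered = indexes.select { |i| i.columns.first == column_name.to_s }.any?
# => true: the compound index leading with post_id satisfies the check
```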
It's also important to note that indexes does not return the index that Rails automatically generates for ids, which explains why some people above were having problems with calls to index_exists?(:table_name, :id).
The following worked for me:
ActiveRecord::Base.connection.index_exists?(:table_name, :column_name)
For an updated answer: as of Rails 3+, multi-column, uniqueness, and custom-name options are all supported by the #index_exists? method.
# Check an index exists
index_exists?(:suppliers, :company_id)
# Check an index on multiple columns exists
index_exists?(:suppliers, [:company_id, :company_type])
# Check a unique index exists
index_exists?(:suppliers, :company_id, unique: true)
# Check an index with a custom name exists
index_exists?(:suppliers, :company_id, name: "idx_company_id")
Source: Rails 6 Docs

Rails + validates_overlap gem: When to add indexes to the database?

I'm using the validates_overlap gem (https://github.com/robinbortlik/validates_overlap) in a Rails app. Here is the Model code:
validates :start_time, :end_time, overlap: { scope: "device_id", exclude_edges: ["start_time", "end_time"] }
And here is the SQL it triggers:
SELECT 1 AS one FROM "bookings" WHERE
((bookings.end_time IS NULL OR bookings.end_time > '2014-04-11 13:00:00.000000') AND
(bookings.start_time IS NULL OR bookings.start_time < '2014-04-11 16:00:00.000000') AND
bookings.device_id = 20) LIMIT 1
I just want to know if I should be adding an index in my postgres database that covers start_time, end_time and device_id, or something similar? e.g. something like this:
add_index :bookings, [:device_id, :start_time, :end_time], unique: true
Adding the above unique index to ensure database consistency makes no sense. After all, you are validating the range AND excluding the actual edges (a unique index would check exactly the edges!).
Adding a non-unique index to speed up the validation is a good idea. Before doing so, you should analyze your data and your app's queries.
The easiest approach is to simply add a single index for each column. Postgres can still use these for the multicolumn query (see the Heroku Dev Center).
A multicolumn index is only necessary if it really matters (or if you do not query the columns in other combinations). If so, device_id should come first in the index. Rule of thumb: index for equality first, then for ranges.

Make array of values from table attribute that matched another tables attribute

In my Ruby on Rails 3 app controller, I am trying to make an instance variable array to use in my edit view.
The User table has a user_id and reseller_id.
The Certificate table has a user_id.
I need to get the reseller_id(s) from the User table for users whose user_id appears in both the User table and the Certificate table.
Here is my User model:
class User < ActiveRecord::Base
  attr_accessible :email, :name, :password, :password_confirmation, :remember_token, :reseller_id, :validate_code, :validate_url, :validated, :admin, :avatar

  belongs_to :reseller
  has_one :certificate
end
Here is my Certificate model:
class Certificate < ActiveRecord::Base
  attr_accessible :attend, :pass, :user_id

  validates :user_id, presence: true
end
Here is my controller; it seems to only store the last user_id from the Certificate table.
# @train should be the reseller_id(s) of all users in the Certificate table.
@certs = Certificate.all
@certs.each do |user|
  @id = []
  @id << user.user_id
  @id.each do |id|
    if User.find(id)
      @train = []
      @train << User.find(id).reseller_id
    end
  end
end
Thank you
1) Correct version of your code
@certs = Certificate.all
@reseller_id = [] # 1
@certs.each do |user|
  id = user.user_id # 2
  if u = User.find_by_id(id) # find would raise instead of returning nil
    @reseller_id << u.reseller_id # 3
  end
end
2) Rails way
Something like this:
@reseller_id = User.joins(:certificate).select('users.reseller_id').map { |u| u['reseller_id'] }
PS
Don't keep this code in the controller, please :-)
Well, first of all, you should not nest your ids' each block inside the each block for certificates. You should build your ids array first, then loop over it afterwards. The reason you are only getting the last user_id is that @id will only ever have a single element in it as your code is currently written. You will run into the same problem with your @train array: because you are declaring the array inside the iterator, it is re-created (with nothing in it) on every iteration. Using your existing code, this should work:
@certs = Certificate.all
@ids = []
@certs.each do |user|
  @ids << user.user_id
end

@train = []
@ids.each do |id|
  user = User.find_by_id(id) # find would raise if the record is missing
  @train << user.reseller_id if user
end
A more Rubyish and concise way would be the following:
cert_user_ids = Certificate.all.map(&:user_id)
reseller_ids = cert_user_ids.map { |id| User.find(id).reseller_id rescue nil }.compact
map is an Enumerable method that returns a new array of the same size as the receiver: on each iteration, whatever the block returns replaces that element in the returned array. In other words, it maps the values of one array onto a new array of equal size. The first map call gets the user_ids of all certificates (&:user_id is a shortcut for Certificate.all.map { |cert| cert.user_id }). The second map call returns the reseller_id of the user if a user is found, and nil if no user with that id exists. Finally, compact removes all nil values from the newly mapped array of reseller_ids, leaving just the reseller ids.
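The map + compact pipeline can be run against an in-memory Hash standing in for the User table (user_id => reseller_id; the ids below are made-up):

```ruby
users = { 1 => 10, 2 => 20, 4 => 40 }
cert_user_ids = [1, 2, 3, 4]  # user_ids pulled from certificates

reseller_ids = cert_user_ids.map { |id| users[id] }.compact
# => [10, 20, 40] -- user 3 is missing, maps to nil, and compact drops it
```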
If you want to do this in the most efficient and railsy way possible, minimizing database calls and allowing the database to do most of the heavy lifting, you will likely want to use a join:
reseller_ids = User.joins(:certificate).all.map(&:reseller_id)
This grabs all users for which a certificate with that user's id exists. Then it utilizes map again to map the returned users to a new array that just contains user.reseller_id.
Ruby tends to be slower at this type of filtering than RDBM systems (like mysql), so it is best to delegate as much work as possible to the database.
(Note that this join will compare user.id to certificate.user_id by default. So if the primary key in your users table is named 'user_id', this will not work. To get it to work, you should either use the standard "id" as the primary key, or specify that 'user_id' is the primary key in the User model.)

Set Primary Key value 0 and auto increment on Migration PostgreSQL

I have a model with 2 fields => :name and :age
I need to do a migration that adds a column :position, which needs to auto-increment and start with 0 (zero).
I tried it this way:
class AddPosition < ActiveRecord::Migration
  def up
    add_column :clientes, :position, :integer, :default => 0, :null => false
    execute "ALTER TABLE clientes ADD PRIMARY KEY (position);"
  end
end
But it doesn't work, because it does not auto-increment. If I try to use primary key as the type:
class AddPosition < ActiveRecord::Migration
  def up
    add_column :clientes, :position, :primary_key, :default => 0, :null => false
  end
end
rake db:migrate doesn't run, because the table would end up with multiple primary keys.
Anyone could explain a way to have zeros and autoincrement on Primary Key w/ Rails 3.2?
Here's how you can set up auto increment column in PostgreSQL:
# in migration:
def up
  execute <<-SQL
    CREATE SEQUENCE clients_position_seq START WITH 0 MINVALUE 0;
    ALTER TABLE clients ADD COLUMN position INTEGER NOT NULL DEFAULT nextval('clients_position_seq');
  SQL
end
But unfortunately it may not be what you need. The above would work if you inserted values into the clients table with SQL like this: INSERT INTO clients(name, age) VALUES('Joe', 21), and Rails doesn't work that way.
The first problem is that Rails expects the primary key to be called id. And while you can override this convention, it would cause more problems than it would solve. If you want to bind position to the primary key value, a better option is to add a virtual attribute to your model, a method like this:
def position
  id.nil? ? nil : id - 1
end
But let's suppose you already have a conventional primary key id and want to add a position field so that you can reorder clients after they have been created, without touching their ids (which is always a good idea). Here comes the second problem: Rails won't recognize and respect DEFAULT nextval('clients_position_seq'); i.e., it won't pull values from the PG-backed sequence and will try to put NULL in position by default.
I'd like to suggest looking at the acts_as_list gem as a better option. It would make the DB sequence manipulations unnecessary. Unfortunately it uses 1 as the initial value for position, but that can be cured by setting a custom name for the list position field and defining a method like the one I showed above.
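The virtual-attribute idea can be tried as a plain method outside ActiveRecord (the method name and ids here are illustrative): it derives a 0-based position from a 1-based id and passes nil through for unsaved records.

```ruby
def position_for(id)
  id.nil? ? nil : id - 1
end

position_for(1)   # => 0
position_for(nil) # => nil
```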

Validates_uniqueness_of does not work when doing a large Transaction

I have a validates_uniqueness_of :field inside my ActiveRecord model. When I do a single create/update it works nicely, but I have to do some large batch creation from CSV files inside a transaction.
When I am in the transaction, validates_uniqueness_of does not detect the error and the model is saved!
Could it be that the non-unique values are created during the transaction?
The validate methods run their check before the inserts commit, and at that point the other values are still not present in the table and thus appear unique.
Edit: Create an index with the unique property turned on for your field, and the transaction will fail, thus preventing the addition of non-unique elements.
To do so, you should add something like this in your migration file:
add_index("tablename", "fieldname", { :name => "fieldname_index", :unique => true })
Edit 2: A transaction like this will give something like an "ActiveRecord::StatementInvalid: Mysql::Error: Duplicate entry '123' for key 1: <sql statement here>" error.
Table.transaction do
  i1 = Table.new
  i1.fieldname = "123"
  i1.save

  i2 = Table.new
  i2.fieldname = "123"
  i2.save
end
validates_uniqueness_of is subject to race conditions, and you still need to have the appropriate unique constraints on your database. You are describing this situation. The link provides a few solutions.
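The race can be sketched deterministically in plain Ruby (the array stands in for the table; the value is made up): both "requests" run the uniqueness check before either insert commits, so both checks pass and a duplicate slips in. Only a database-level unique index stops this.

```ruby
table = []
value = '123'

check_a = !table.include?(value)  # request A validates: looks unique
check_b = !table.include?(value)  # request B validates: also looks unique

table << value if check_a  # A inserts
table << value if check_b  # B inserts the duplicate

table.count(value)  # => 2
```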
