I have a boolean field with a default value of false on the join table of a has_many :through relationship between tags and tag lists:
add_column :taggings, :tag_visible, :boolean, :default => false
The theory is that a tag list can have many tags (and vice versa), but a tag's visibility can be turned on/off per tag list.
This is also part of a nested resource: Document has_one :tag_list
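For reference, a minimal sketch of the associations being described (class and association names are assumed from the question):
class Document < ActiveRecord::Base
  has_one :tag_list
end

class TagList < ActiveRecord::Base
  has_many :taggings
  has_many :tags, :through => :taggings
end

class Tagging < ActiveRecord::Base
  # tag_visible defaults to false via the migration above
  belongs_to :tag_list
  belongs_to :tag
end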
For the most part this is all working. The default value is set on creation and I am updating each instance with an ajax call.
However, when I update the document (which includes the tag_list as a token input field), it resets all of the taggings' visibility back to false, regardless of what it was previously.
Any leads would be greatly appreciated.
It turns out that in my tags token_input setter inside my tag_list model, I was deleting and recreating the records in the tagging model instead of updating them.
old code:
self.taggings = Tag.ids_from_tokens(tokens, user_id).each_with_index.map {|t,i| Tagging.new(:tag_id => t, :tag_colour => tag_colours[i % tag_colours.size]) }
fix:
self.tag_ids = Tag.ids_from_tokens(tokens, user_id)
self.taggings.each_with_index {|t,i| t.update_attributes(:tag_colour => tag_colours[i % tag_colours.size]) }
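For context, the corrected setter roughly looks like this inside the tag_list model (the setter name, tag_colours and user_id are assumptions pieced together from the snippets above); assigning tag_ids lets Rails keep the existing Tagging rows, and with them their tag_visible values:
def tag_tokens=(tokens)
  # Only adds/removes join rows as needed, so existing taggings keep their flags.
  self.tag_ids = Tag.ids_from_tokens(tokens, user_id)
  self.taggings.each_with_index do |tagging, i|
    tagging.update_attributes(:tag_colour => tag_colours[i % tag_colours.size])
  end
end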
I'm using sunspot-solr in RoR and I need help creating a searchable block using two tables (a join of two tables).
The query I want executed when the index is built is:
SELECT a.id,a.title
FROM table_one a,table_two b
WHERE a.status=1
AND a.id=b.id
AND b.status=1
I want the "title" field to be searchable(text), only if the id exists in both tables and both have status 1.And I want them to be stored fields(no db hits).
class TableOne < ActiveRecord::Base
  has_many :table_twos

  searchable do
    text :title, :stored => true
    string :status, :stored => true
    string :id, :multiple => true, :stored => true do
      table_twos.map(&:id)
    end
  end
end
When I searched for a word, I got 5 results.
But when I deleted the table_two entry for one of those results and searched for the same word again, I still got 5 results, when I should only get the other 4.
Any help?
If you delete an associated record whose data was stored in a Solr/Sunspot document, you will have no choice but to reindex the record that stored it.
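A sketch of one way to trigger that reindexing automatically, assuming TableTwo belongs_to :table_one as the inverse of the has_many above:
class TableTwo < ActiveRecord::Base
  belongs_to :table_one

  # When a row is destroyed, push the parent back into Solr so the stored
  # table_twos ids it indexed stay in sync.
  after_destroy do
    Sunspot.index(table_one) if table_one
    Sunspot.commit
  end
end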
So to solve the issue I did something like without(:id, nil) in my controller and I got the results as I wanted them.
I'm not sure it's the right way to go about it, though.
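For reference, that filter sits in the search block roughly like this (a sketch; the field name comes from the searchable block above and params[:q] is a placeholder for the query):
TableOne.search do
  fulltext params[:q]
  without(:id, nil)  # drop documents whose stored table_twos ids are empty
end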
Let's say I do Image.column_names and that shows all the columns, such as post_id, but how do I check whether post_id has an index on it?
There is an index_exists? method on one of the ActiveRecord "connection adapters" classes.
You can use it like one of the following:
ActiveRecord::Migration.connection.index_exists? :images, :post_id
ActiveRecord::Base.connection.index_exists? :images, :post_id
If you know the name of the index instead, you'll need to use index_name_exists?:
ActiveRecord::Base.connection.index_name_exists? :images, :index_images_on_join_key
As others have mentioned, you can use the following to check if an index on the column exists:
ActiveRecord::Base.connection.index_exists?(:table_name, :column_name)
It's worth noting, however, that this only returns true if an index exists that indexes that column and only that column. It won't return true if you're using compound indices that include your column. You can see all of the indexes for a table with
ActiveRecord::Base.connection.indexes(:table_name)
If you look at the source code for index_exists?, you'll see that internally it's using indexes to figure out whether or not your index exists. So if, like me, its logic doesn't fit your use case, you can loop through these indexes and see if one of them will work. In my case, the logic was:
ActiveRecord::Base.connection.indexes(:table_name).select { |i| i.columns.first == column_name.to_s }.any?
It's also important to note that indexes does not return the index that is automatically created for id (the primary key), which explains why some people were having problems with calls to index_exists?(:table_name, :id).
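If you just want to know whether a column appears in any index at all (compound or not), a small helper along these lines works; this is a sketch, not a built-in API:
# True if the column shows up anywhere in any index on the table,
# including compound indexes where it isn't the first column.
def column_indexed_anywhere?(table_name, column_name)
  ActiveRecord::Base.connection.indexes(table_name).any? do |index|
    index.columns.include?(column_name.to_s)
  end
end

column_indexed_anywhere?(:images, :post_id)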
The following worked for me:
ActiveRecord::Base.connection.index_exists?(:table_name, :column_name)
For an updated answer: as of Rails 3+, multi-column, uniqueness, and custom-name checks are all supported by the #index_exists? method.
# Check an index exists
index_exists?(:suppliers, :company_id)
# Check an index on multiple columns exists
index_exists?(:suppliers, [:company_id, :company_type])
# Check a unique index exists
index_exists?(:suppliers, :company_id, unique: true)
# Check an index with a custom name exists
index_exists?(:suppliers, :company_id, name: "idx_company_id")
Source: Rails 6 Docs
I've seen other SO questions, like "How do you validate uniqueness of a pair of ids in Ruby on Rails?", which describe adding a :scope parameter to enforce uniqueness of a key pair, i.e. (from the answer):
validates_uniqueness_of :user_id, :scope => [:question_id]
My question is how do you do this kind of validation for an entire row of data?
In my case, I have five columns and the data should only be rejected if all five are the same. This data is not user entered and the table is essentially a join table (no id or timestamps).
My current thought is to search for a record with all of the column values and only create one if the query returns nil, but this seems like a bad workaround. Is there an easier 'rails way' to do this?
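Roughly, that workaround would look like this (model and column names are placeholders, and without a database-level constraint it is still open to race conditions):
attrs = { :col1 => a, :col2 => b, :col3 => c, :col4 => d, :col5 => e }
JoinRecord.create(attrs) unless JoinRecord.where(attrs).exists?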
You'll need to create a custom validator (http://guides.rubyonrails.org/active_record_validations.html#performing-custom-validations):
class TotallyUniqueValidator < ActiveModel::Validator
  def validate(record)
    if record.attributes_for_uniqueness.values.uniq.size == 1
      record.errors[:base] << 'All fields must be unique!'
    end
  end
end
class User < ActiveRecord::Base
  validates_with TotallyUniqueValidator

  def attributes_for_uniqueness
    # attributes returns a string-keyed hash, so pass strings to except
    attributes.except 'created_at', 'updated_at', 'id'
  end
end
The important line here is:
if record.attributes_for_uniqueness.values.uniq.size == 1
This grabs a hash of all the attributes you want to check for uniqueness (in this case everything except id and the timestamps), converts it to an array of just the values, then calls uniq on it, which returns only the unique values; if the size is 1, then they were all the same value.
Update based on your comment that your table doesn't have an id or timestamps:
You can then simply do:
if record.attributes.except('id').values.uniq.size == 1
...because I'm pretty sure it still has an id; if you're sure it doesn't, then just remove the except part.
You can add a unique index to the table in a migration:
add_index :widgets, [:column1, :column2, :column3, :column4, :column5], unique: true
The resulting index requires each combination of the five columns to be unique.
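If you also want friendly validation errors instead of a raw database exception when the index rejects a duplicate, you can pair it with a scoped uniqueness validation (model and column names assumed):
class Widget < ActiveRecord::Base
  # Rejects a record when another row already has the same values in all five columns.
  validates :column1, uniqueness: { scope: [:column2, :column3, :column4, :column5] }
end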
I'm using "acts_as_tree" plugin for my user messaging thread feature on my website. I have a method that makes deleting selected messages possible. The messages don't actually get deleted. Their sender_status or recipient_status columns get set to 1 depending on what user is the sender or recipient of the message.
Anyway if both users have those status's set to one then that last line makes sure the message row is completely moved from the database. Now this is fine as long as it's not the parent message being deleted. If the parent message deleted then the children that haven't been selected for deletion won't be accessible anymore.
Here is the method:
def delete_all_users_selected_messages(message_ids, user_id, parent_id)
  Message.where(:id => message_ids, :sender_id => user_id).update_all(:sender_status => 1)
  Message.where(:id => message_ids, :recipient_id => user_id).update_all(:recipient_status => 1)
  Message.delete_all(:sender_status => 1, :recipient_status => 1, :parent_id => parent_id).where("id != ?", parent_id)
end
It's quite obvious what I'm trying to do. I need to have the parent ignored. So where the primary key is equal to the parent_id means that row is a parent (normally the parent_id is nil, but I needed it set to the primary key's value for some other reason; long story and not important). Anyway, is there an SQL statement I can add on to the end of the last line in that method? To make sure it only deletes messages where the id of the row is not equal to the parent_id?
I can arrange for the parent_id row to never be permitted for deletion unless the actual thread (the MessageThreads table that references the messages table's conversations) is deleted.
Anyway, how can I make it so this parent row is ignored when that delete_all method is run?
Kind regards
Nowadays in Rails 4, you can do:
Model.where.not(attr: "something").delete_all
and
Model.where.not(attr: "something").destroy_all
And don't forget about the difference:
destroy_all: instantiates every matching record and calls destroy on each one, so callbacks and :dependent options run; with a large dataset this can be slow.
delete_all: deletes the matching rows directly with a single SQL statement; the records are not instantiated and no callbacks are called.
Why not use the association from the parent record, something like this?
Message.where(:id => parent_id).first
       .children.where(:sender_status => 1, :recipient_status => 1)
       .delete_all
This worked for me in the end.
Message.where('id != ? AND parent_id = ?', parent_id, parent_id).where(:sender_status => 1, :recipient_status => 1).delete_all
Basically this returns all messages of that particular conversation except the one where id == parent_id; whenever id == parent_id, it's a parent message.
I would consider a slightly different approach and have the model with the has_many relationship use :dependent => :destroy, e.g.
User has_many :messages, :dependent => :destroy
That way you don't get the 'dangling orphan record' issue you describe.
I would try and approach it this way rather than thinking 'all records except'.
I don't know if there is something I am not addressing, but this is what comes to mind for the issue described.
I want to have a "Customer" Model with a normal primary key and another column to store a custom "Customer Number". In addition, I want the db to handle default Customer Numbers. I think, defining a sequence is the best way to do that. I use PostgreSQL. Have a look at my migration:
class CreateAccountsCustomers < ActiveRecord::Migration
  def up
    say "Creating sequence for customer number starting at 1002"
    execute 'CREATE SEQUENCE customer_no_seq START 1002;'

    create_table :accounts_customers do |t|
      t.string :type
      t.integer :customer_no, :unique => true
      t.integer :salutation, :limit => 1
      t.string :cp_name_1
      t.string :cp_name_2
      t.string :cp_name_3
      t.string :cp_name_4
      t.string :name_first, :limit => 55
      t.string :name_last, :limit => 55
      t.timestamps
    end

    say "Adding NEXTVAL('customer_no_seq') to column customer_no"
    execute "ALTER TABLE accounts_customers ALTER COLUMN customer_no SET DEFAULT NEXTVAL('customer_no_seq');"
  end

  def down
    drop_table :accounts_customers
    execute 'DROP SEQUENCE IF EXISTS customer_no_seq;'
  end
end
If you know a better "rails-like" approach to adding sequences, it would be awesome to let me know.
Now, if I do something like
cust = Accounts::Customer.new
cust.save
the field customer_no is not pre-filled with the next value of the sequence (it should be 1002).
Do you know a good way to integrate sequences? Or is there a good plugin?
Cheers to all answers!
I have no suggestions for a more 'rails way' of handling custom sequences, but I can tell you why the customer_no field appears not to be populated after a save.
When ActiveRecord saves a new record, the SQL statement will only return the ID of the new record, not all of its fields. You can see where this happens in the current Rails source here: https://github.com/rails/rails/blob/cf013a62686b5156336d57d57cb12e9e17b5d462/activerecord/lib/active_record/persistence.rb#L313
In order to see the value you will need to reload the object...
cust = Accounts::Customer.new
cust.save
cust.reload
If you always want to do this, consider adding an after_create hook into your model class...
class Accounts::Customer < ActiveRecord::Base
  after_create :reload
end
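If the database default does fire for the INSERT, the hook makes the generated number visible straight after creation, for example (the value shown is just the sequence's starting point from the migration):
cust = Accounts::Customer.create!
cust.customer_no  # => 1002, or whatever the sequence handed out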
I believe that roboles' answer is not correct.
I tried to implement this in my application (exactly the same environment: RoR + PostgreSQL), and I found that when save is issued in RoR with the object having empty attributes, it performs an INSERT that explicitly sets all VALUES to NULL. The problem is the way PostgreSQL handles NULLs: in this case the new row is created, but with all values empty, i.e. the DEFAULT is ignored. If save only wrote the attributes actually set in RoR into the INSERT statement, this would work fine.
In other words, and focusing only on the type and customer_no attributes mentioned above, this is the way PostgreSQL behaves:
SITUATION 1:
INSERT INTO accounts_customers (type, customer_no) VALUES (NULL, NULL);
(this is how Rails' save works)
Result: a new row with empty type and empty customer_no
SITUATION 2:
INSERT INTO accounts_customers (type) VALUES (NULL);
Result: a new row with empty type and customer_no filled with the sequence's NEXTVAL
I have a thread going on about this, check it out at:
Ruby on Rails+PostgreSQL: usage of custom sequences
I faced a similar problem, but I also put :null => false on the field, hoping that it would be auto-populated with nextval.
Well, in my case AR was still trying to insert NULL if no attribute was supplied in the request, and this resulted in an exception for a not-null constraint violation.
Here's my workaround: I just deleted this attribute key from @attributes and @changed_attributes, and in that case Postgres correctly used the expected sequence nextval.
I put this in the model:
before_save do
  if (@attributes["customer_no"].nil? || @attributes["customer_no"].to_i == 0)
    @attributes.delete("customer_no")
    @changed_attributes.delete("customer_no")
  end
end
Rails 3.2 / Postgres 9.1
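An alternative sketch (assuming the sequence name from the migration in the question): fetch the next value from the sequence yourself before the insert, so the attribute always carries a concrete value and the column DEFAULT never matters.
class Accounts::Customer < ActiveRecord::Base
  before_create :assign_customer_no

  private

  # Ask PostgreSQL for the next sequence value explicitly, so the INSERT
  # never sends NULL for customer_no.
  def assign_customer_no
    self.customer_no ||= self.class.connection.select_value("SELECT NEXTVAL('customer_no_seq')").to_i
  end
end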
If you're using PostgreSQL, check out the gem I wrote, pg_sequencer:
https://github.com/code42/pg_sequencer
It provides a DSL for creating, dropping and altering sequences in ActiveRecord migrations.