I have a model Graph with fields name and version. I want the combination of name and version to be unique, so I have declared
validates_uniqueness_of :name, scope: :version
When a new Graph object is created, if it has the same name as a previous Graph object, then I want the version to be incremented. However, I do not want the version incremented upon update. So far I have implemented this with a callback:
class Graph < ApplicationRecord
  enum preservation_status: [:unlocked, :locked]

  validates_presence_of :name
  validates_presence_of :version
  validates_uniqueness_of :name, scope: :version

  has_many :graph_points, dependent: :destroy
  belongs_to :data_set

  before_validation :set_version, on: :create

  private

  def set_version
    graphs = Graph.where(name: name)
    return if graphs.empty?
    self.version = graphs.maximum(:version) + 1
  end
end
This does not work. If a graph of that name exists, the code appears to enter an infinite loop and I have to restart the server. How do I fix this?
For instance, if I have one existing graph with name 'Plot_quick Male' and version = 1, and then try to create a new graph with the same name, the resulting SQL is below:
(0.9ms) SELECT COUNT(*) FROM "graphs" WHERE "graphs"."name" = $1 [["name", "Plot_quick Male "]]
(0.7ms) SELECT MAX("graphs"."version") FROM "graphs" WHERE "graphs"."name" = $1 [["name", "Plot_quick Male "]]
Graph Exists (0.5ms) SELECT 1 AS one FROM "graphs" WHERE "graphs"."name" = $1 AND "graphs"."version" = $2 LIMIT $3 [["name", "Plot_quick Male "], ["version", 2], ["LIMIT", 1]]
and then the server hangs.
The default value for version is set in the schema i.e.
t.integer "version", default: 0
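Isolated from ActiveRecord, the intended numbering rule is small enough to sketch in plain Ruby. This is only an illustration: next_version is a hypothetical helper, and existing_max stands in for Graph.where(name: name).maximum(:version), which returns nil when no graph with that name exists.

```ruby
# Hypothetical helper mirroring the arithmetic inside the set_version callback.
# existing_max stands in for Graph.where(name: name).maximum(:version).
def next_version(existing_max)
  # A brand-new name keeps the schema default of 0; an existing name
  # gets one more than the highest version seen so far.
  existing_max.nil? ? 0 : existing_max + 1
end

next_version(nil) # => 0 (first graph with this name)
next_version(1)   # => 2 (matches "version" = 2 in the log above)
```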
I managed to eliminate the problem in development by deleting all versions of the graph and recreating new ones. It might have been something strange in some of the graphs that were created with earlier versions of the code.
I will mark this as the answer for the time being, but if anyone comes up with an answer that makes sense, I will have a look.
I'm working on a Rails project and trying to create a scope in my model instead of an instance method, and to preload that scope when needed. But I'm not very experienced with scopes and am having trouble getting it to work, or maybe I'm doing it all wrong. I had an instance method doing the same thing, but noticed some N+1 issues with it. I was inspired by the article How to preload Rails scopes and wanted to try scopes.
(As a side note, I'm using ancestry gem)
I have tried three different ways to create the scope. They all work for Channel.find_by(name: "Travel").depth, but error out for Channel.includes(:depth) or eager_load.
First try:
has_one :depth, -> { parent ? (parent.depth+1) : 0 }, class_name: "Channel"
Second try:
has_one :depth, -> (node) {where("id: = ?", node).parent ? (node.parent.depth+1) : 0 }, class_name: "Channel"
Third try:
has_one :depth, -> {where("id: = channels.id").parent ? (parent.depth+1) : 0 }, class_name: "Channel"
All three work fine in the console for:
Channel.find_by(name: "Travel").depth
Channel Load (0.4ms) SELECT "channels".* FROM "channels" WHERE "channels"."name" = $1 LIMIT $2 [["name", "Travel"], ["LIMIT", 1]]
=> 2
..but
Channel.includes(:depth) gives me three different errors, one for each scope (1st, 2nd, 3rd):
Error for first scope:
NameError (undefined local variable or method `parent' for #<Channel::ActiveRecord_Relation:0x00007fdf867832d8>)
Error for 2nd scope:
ArgumentError (The association scope 'depth' is instance dependent (the scope block takes an argument). Preloading instance dependent scopes is not supported.)
Error for 3rd scope:
Object doesn't support #inspect
What am I doing wrong? Or, what is the best approach? I appreciate your time and help.
I think the .depth method is returning an integer value, not associated records. Eager loading is the mechanism for loading the associated records of the objects returned by Model.find using as few queries as possible.
If you want to speed up the depth method, you need to enable the :cache_depth option. According to the ancestry gem documentation:
:cache_depth
Cache the depth of each node in the 'ancestry_depth' column (default: false)
If you turn depth_caching on for an existing model:
- Migrate: add_column [table], :ancestry_depth, :integer, :default => 0
- Build cache: TreeNode.rebuild_depth_cache!
In your model:
class [Model] < ActiveRecord::Base
has_ancestry cache_depth: true
end
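To make concrete what :cache_depth precomputes: a node's depth is simply the number of ancestors above it. A plain-Ruby sketch, with no ActiveRecord involved; Node and its parent link are hypothetical stand-ins for an ancestry-backed model:

```ruby
# Minimal stand-in for an ancestry-backed node: each node knows only its
# parent, and depth counts the parent links up to the root.
Node = Struct.new(:parent) do
  def depth
    parent ? parent.depth + 1 : 0 # a root node has depth 0
  end
end

root  = Node.new(nil)
child = Node.new(root)
grand = Node.new(child)

grand.depth # => 2 — this recursive walk is what ancestry_depth caches per row
```

Caching the result in a column is what lets eager loading avoid walking the chain for every record.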
The guide does not say what the return value of association= methods is. For example, the has_one association=:
For the simple case, it returns the assigned object. However this is only when assignment succeeds.
Sometimes association= persists the change to the database immediately, for example when a persisted record sets its has_one association.
How does association= react to assignment failure? (Can I tell if it fails?)
Is there a bang! version in which failure raises exception?
How does association= react to assignment failure? (Can I tell if it fails?)
It can't fail. Whatever you assign, it will either work as expected:
Behind the scenes, this means extracting the primary key from this object and setting the associated object's foreign key to the same value.
or it will save the association as a string representation of the passed-in object, if the object is "invalid".
Is there a bang! version in which failure raises exception?
Nope, there is not.
The association= method should not be able to fail. It is a simple assignment to an attribute on your object. There are no validations called by this method, and the connection doesn't get persisted to the database until you call save.
The return value of an assignment is the value you pass to it.
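This follows from Ruby itself rather than from ActiveRecord: an assignment expression always evaluates to its right-hand side, and whatever the writer method returns internally is discarded. A minimal illustration with a hypothetical plain-Ruby class:

```ruby
# Hypothetical class illustrating plain-Ruby assignment semantics.
class Widget
  attr_reader :owner

  def owner=(value)
    @owner = value
    :whatever_the_setter_returns # discarded by the caller
  end
end

w = Widget.new
result = (w.owner = "alice")
result # => "alice" — the right-hand side, no matter what owner= returned
```

This is why the return value of association= cannot be used to signal failure: the caller always sees the object it assigned.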
http://guides.rubyonrails.org/association_basics.html#has-one-association-reference-when-are-objects-saved-questionmark
So another part of the guide does talk about the return behavior of association assignment.
If association assignment fails, it returns false.
There is no bang version of this.
Update
The behavior of has_many :through and has_one :through seems to be different.
Demo repository: https://github.com/lulalalalistia/association-assignment-demo
In the demo I seeded some data in the first commit and hard-coded a validation error in the second commit. The demo uses Rails 4.2.
has_many through
class Boss < ActiveRecord::Base
has_many :room_ownerships, as: :owner
has_many :rooms, through: :room_ownerships
end
When I add a room, an exception is raised:
irb(main):008:0> b.rooms << Room.first
Boss Load (0.2ms) SELECT "bosses".* FROM "bosses" ORDER BY "bosses"."id" ASC LIMIT 1
Room Load (0.1ms) SELECT "rooms".* FROM "rooms" ORDER BY "rooms"."id" ASC LIMIT 1
(0.1ms) begin transaction
(0.1ms) rollback transaction
ActiveRecord::RecordInvalid: Validation failed: foo
irb(main):014:0> b.rooms
=> #<ActiveRecord::Associations::CollectionProxy []>
has_one through
class Employee < ActiveRecord::Base
has_one :room_ownership, as: :owner
has_one :room, through: :room_ownership
end
When I assign a room, I don't get an exception:
irb(main):021:0> e.room = Room.first
Room Load (0.2ms) SELECT "rooms".* FROM "rooms" ORDER BY "rooms"."id" ASC LIMIT 1
RoomOwnership Load (0.1ms) SELECT "room_ownerships".* FROM "room_ownerships" WHERE "room_ownerships"."owner_id" = ? AND "room_ownerships"."owner_type" = ? LIMIT 1 [["owner_id", 1], ["owner_type", "Employee"]]
(0.1ms) begin transaction
(0.1ms) rollback transaction
=> #<Room id: 1, created_at: "2016-10-03 02:32:33", updated_at: "2016-10-03 02:32:33">
irb(main):022:0> e.room
=> #<Room id: 1, created_at: "2016-10-03 02:32:33", updated_at: "2016-10-03 02:32:33">
This makes it difficult to tell whether the assignment succeeded or not.
We have over 29K users in our database. User is one of our tables and has a unique field, email, with an index defined in one of our migrations:
#3434324_devise_create_user.rb
class DeviseCreateUsers < ActiveRecord::Migration
def change
create_table(:users) do |t|
t.string :email, :null => false, :default => ""
end
add_index :users, :email, :unique => true
end
end
We don't have any issues creating users... for the most part. However, with some emails (potentially those that were deleted from the DB at one point or were created a long time ago), we are encountering a rare issue:
We can't create them because the "unique" validation fails, however, there is not a single user with that email.
This is an example of what is happening (which does not make any sense, if you ask me).
When we try to create a user with the email "example#example.com":
$ p = Devise.friendly_token[0,20]
$ User.create!({email: "example#example.com", password: p})
We get the following result:
(11.6ms) BEGIN
User Exists (11.2ms) SELECT 1 AS one FROM "users" WHERE "users"."email" = 'example#example.com' LIMIT 1
(10.3ms) ROLLBACK
ActiveRecord::RecordInvalid: Validation failed: Email has already been taken
from /Users/example/.rvm/gems/ruby-2.0.0-p451#example/gems/activerecord-4.1.6/lib/active_record/validations.rb:57:in `save!'
At the same time, if we look for that email in the database:
$ User.find_by_email("example#example.com")
It does not exist!
User Load (12.7ms) SELECT "users".* FROM "users" WHERE "users"."disabled" = 'f' AND "users"."email" = 'example#example.com' LIMIT 1
=> nil
Does this make sense for any of you?
There is a big difference between the query you get when you run User.find_by_email and the one run by the validation: the former has an extra condition on the disabled column. It would seem that when you've been deleting users, you've just been flagging them as disabled rather than actually removing them.
Given what you've posted I can't tell where that condition is coming from (perhaps a default scope or an extension to Rails), but it would certainly account for the difference in results. You could confirm this by searching for the user in the psql shell.
You could change the validation to ignore these disabled rows:
validates_uniqueness_of :email, conditions: -> { where(disabled: false) }
However, you still wouldn't be able to create this user, since the unique index on email would prevent it. If you want to be able to have multiple users with the same email (but only one that is not disabled), you would have to make this a multi-column index on email plus some other attribute that would differ across all the "old" users but be the same for active users.
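On PostgreSQL there is also a middle ground worth noting: a partial unique index that only covers active rows, so disabled users can share an email while active users stay unique. A hedged sketch, assuming the disabled boolean column visible in the query above (the migration name is hypothetical; the where: option on add_index requires Rails 4+ and PostgreSQL):

```ruby
# Hypothetical migration: replace the full unique index on email with a
# partial index that ignores disabled rows (PostgreSQL-only feature).
class ScopeUniqueEmailToActiveUsers < ActiveRecord::Migration
  def change
    remove_index :users, :email
    add_index :users, :email, unique: true, where: "disabled = false"
  end
end
```

Combined with the conditional validates_uniqueness_of above, this keeps database-level enforcement while allowing re-use of emails from disabled accounts.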
I just noticed that one of the attributes of an object was updated when the object was appended to an array. This behavior looks very surprising to me, and I thought I might be missing something fundamental about ActiveRecord.
In my app, every idea has a father (through its father_id attribute). This association is set up in models/idea.rb with the following:
class Idea < ActiveRecord::Base
belongs_to :father, :class_name => "Idea" # , :foreign_key => "idea_id"
has_many :children, :class_name => "Idea", :foreign_key => "father_id", :dependent => :destroy
[...]
Here is what happens in rails console :
I first select a given idea :
irb(main):003:0> n = Idea.find(1492)
Idea Load (1.1ms) SELECT "ideas".* FROM "ideas" WHERE "ideas"."id" = $1 LIMIT 1 [["id", 1492]]
=> #<Idea id: 1492, father_id: 1407, [...]>
I then retrieve its children through the association:
irb(main):004:0> c = n.children
Idea Load (0.5ms) SELECT "ideas".* FROM "ideas" WHERE "ideas"."father_id" = 1492
=> []
It doesn't have any, which is fine.
I then want to append the idea itself to the c variable, but this triggers an unwanted UPDATE in the database:
irb(main):005:0> c << n
(0.1ms) BEGIN
(0.9ms) UPDATE "ideas" SET "father_id" = 1492, "updated_at" = '2013-12-06 12:57:25.982619' WHERE "ideas"."id" = 1492
(0.4ms) COMMIT
=> [#<Idea id: 1492, father_id: 1492, [...]>]
The father_id attribute, which had the value 1407 now has the value 1492, i.e. the idea's id.
Can anyone explain why this happens, and how I can create an array that includes an object's children and the object itself without altering the object's attributes?
NB : I'm using ruby 1.9.3p448 (2013-06-27 revision 41675) [x86_64-darwin13.0.0]
This is expected behavior. You're adding a new idea to the set of ideas belonging to a specific father. This happens because it's not an array you're appending to; it's an ActiveRecord association. In your console, try n.children.class.
If you want a flat array which won't modify objects appended to it, you want:
c = n.children.to_a
c << n
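The distinction is easy to demonstrate without ActiveRecord: appending to a plain Array only grows the array and never writes to the appended object, which is why converting with to_a sidesteps the UPDATE. A hash stands in for the Idea record here:

```ruby
# Plain-Ruby illustration (no ActiveRecord): Array#<< stores a reference to
# the appended object and never writes to the object itself.
idea     = { id: 1492, father_id: 1407 } # stand-in for the Idea record
children = []                            # analogous to n.children.to_a

children << idea

children.size    # => 1
idea[:father_id] # => 1407, unchanged — no callback, no UPDATE
```

An association's <<, by contrast, exists precisely to establish the relationship, so writing the foreign key is its job.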
I have a Comment model that belongs_to a Message. In comments.rb I have the following:
class Comment < ActiveRecord::Base
belongs_to :message, :counter_cache => true, :touch => true
end
I've done this because updating the counter_cache doesn't update the updated_at time of the Message, and I'd like it to, for the cache_key.
However, when I looked in my log, I noticed that this causes two separate SQL updates:
Message Load (4.3ms) SELECT * FROM `messages` WHERE (`messages`.`id` = 552)
Message Update (2.2ms) UPDATE `messages` SET `comments_count` = COALESCE(`comments_count`, 0) + 1 WHERE (`id` = 552)
Message Update (2.4ms) UPDATE `messages` SET `updated_at` = '2009-08-12 18:03:55', `delta` = 1 WHERE `id` = 552
Is there any way this can be done with only one SQL call?
Edit: I also noticed that it does a SELECT of the Message beforehand. Is that also necessary?
It probably does two queries because it hasn't been optimised yet.
Why not branch and create a patch? :D