I have a Rails app with a users table. PostgreSQL is the database. For some reason, all updates to one of the user records succeed and then silently revert. What is going on?
Broken:
> u = User.find_by(username: 'alice')
> u.last_access
=> Thu, 19 Jul 2018 17:59:35 UTC +00:00
> u.last_access -= 20.days
=> Fri, 29 Jun 2018 17:59:35 UTC +00:00
> u.save!
(1.6ms) BEGIN
...
User Update (1.4ms) UPDATE "users" SET "updated_at" = $1, "last_access" = $2 WHERE "users"."id" = $3 [["updated_at", "2019-01-14 19:02:56.271382"], ["last_access", "2018-06-29 17:59:35"], ["id", 1]]
(0.8ms) COMMIT
=> true
> reload!
> User.find_by(username: 'alice').last_access
=> Thu, 19 Jul 2018 17:59:35 UTC +00:00
> # WHY NOT 29 JUN???
The same operations work for a different user:
> u = User.find_by(username: 'bob')
> u.last_access
=> Mon, 24 Dec 2018 03:33:47 UTC +00:00
> u.last_access -= 20.days
=> Tue, 04 Dec 2018 03:33:47 UTC +00:00
> u.save!
(1.8ms) BEGIN
...
User Update (6.3ms) UPDATE "users" SET "updated_at" = $1, "last_access" = $2 WHERE "users"."id" = $3 [["updated_at", "2019-01-14 18:59:56.087223"], ["last_access", "2018-12-04 03:33:47"], ["id", 2]]
(2.0ms) COMMIT
=> true
> reload!
> User.find_by(username: 'bob').last_access
=> Tue, 04 Dec 2018 03:33:47 UTC +00:00
> # GOOD
I'm using the paper_trail gem for versioning, but I can't find any feature in that gem for freezing objects.
paper_trail is configured to ignore the last_access column:
has_paper_trail ignore: %i[created_at last_access last_login updated_at]
There is a PostgreSQL index on the column:
t.index ["last_access", "last_login"], name: "index_users_on_last_access_and_last_login", using: :btree
The broken user record isn't frozen in ActiveRecord:
> User.find_by(username: 'alice').frozen?
=> false
If you execute the update on the database directly, does the change take effect? For instance, you could run:
User.connection.execute %q{UPDATE "users" SET "last_access" = '2018-12-04 03:33:47' WHERE "username" = 'alice'}
If the change doesn't take effect at that point, then I would suspect that there is a trigger or something similar in your database causing the strange behavior.
Edit:
It's very strange to me that your log shows a successful update from Rails but the record isn't actually being changed. I'm still not convinced that there isn't something going on in the database.
Try doing a schema dump of that table and carefully look through it for any triggers that might be causing the value to be changed out from under you.
pg_dump -t 'public.users' --schema-only DB_NAME
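If the Rails console is handier than pg_dump, the same trigger hunt can be sketched with a catalog query (assumes PostgreSQL; the tgisinternal filter hides the triggers Postgres creates itself for foreign-key constraints):

```ruby
# Build the catalog query; pg_get_triggerdef returns the full
# CREATE TRIGGER statement for each user-defined trigger on "users".
sql = <<~SQL
  SELECT tgname, pg_get_triggerdef(oid)
  FROM pg_trigger
  WHERE tgrelid = 'users'::regclass
    AND NOT tgisinternal
SQL

# In the console you would run:
# User.connection.execute(sql).each { |row| puts row.values.join(' | ') }
puts sql
```

Any row this returns is a trigger that could be rewriting last_access behind ActiveRecord's back.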
Related
I have a table with two foreign key relationships, to a Timeline and a Phase. When I create a record in my development database, it all works 100% as expected, but when I do it in test mode it refuses to add the Timeline. You can see from the INSERT statement that it flatly refuses: it doesn't even try to add it. When I run the exact same sequence below in development, it's fine.
I can add/update timeline_id, but then it doesn't reference the timeline through the parent phase_timeline object as it should. I repeat: this all works fine in development, but not in test. It's driving me mad. Is it possibly failing a validation, or could the database be corrupt? Are there console commands I could run to check the foreign key relationship further?
[33] pry(main)> t = Timeline.last
Timeline Load (0.3ms) SELECT "timelines".* FROM "timelines" ORDER BY "timelines"."id" DESC LIMIT $1 [["LIMIT", 1]]
=> #<Timeline:0x0055fd716dcfa8 id: 1, title: "England", timelineable_type: "Global", timelineable_id: 1, created_at: Thu, 24 Sep 2020 14:46:28 UTC +00:00, updated_at: Thu, 24 Sep 2020 14:46:28 UTC +00:00>
[34] pry(main)> p = Phase.last
Phase Load (1.3ms) SELECT "phases".* FROM "phases" WHERE "phases"."deleted_at" IS NULL ORDER BY "phases"."id" DESC LIMIT $1 [["LIMIT", 1]]
=> #<Phase:0x0055fd717f8450
id: 1,
name: "First phase",
development_id: 1,
created_at: Thu, 24 Sep 2020 14:46:28 UTC +00:00,
updated_at: Thu, 24 Sep 2020 14:46:28 UTC +00:00,
developer_id: 1,
division_id: 1,
number: 1,
deleted_at: nil,
total_snags: 0,
unresolved_snags: 0,
business: "core">
[35] pry(main)> pt = PhaseTimeline.create(phase: p, timeline: t)
(0.2ms) BEGIN
SQL (0.5ms) INSERT INTO "phase_timelines" ("phase_id") VALUES ($1) RETURNING "id" [["phase_id", 1]]
(1.8ms) COMMIT
=> #<PhaseTimeline:0x0055fd719ef9c0 id: 5, phase_id: 1, timeline_id: nil>
After a LOT of head scratching and diving into the bowels, this problem turned out to be caused by having two model classes with the same name. The classes were in two separate folders but had the same scope and were identical; removing the errant one sorted the problem.
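One way to catch this kind of duplicate-definition problem earlier (Ruby 2.7+) is to ask Ruby which file actually defined the constant; with two identically named model files on the load path, this shows which definition won. DemoModel below is a stand-in defined inline; in the app you would pass 'PhaseTimeline':

```ruby
# Stand-in class so the snippet is self-contained; in a real app the
# constant of interest would be the model, e.g. 'PhaseTimeline'.
class DemoModel; end

# const_source_location returns [file, line] of the live definition.
path, line = Object.const_source_location('DemoModel')
puts "DemoModel defined at #{path}:#{line}"
```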
I'm somewhat struggling with Rails' exists? method. I'm trying to check whether a Location with a given longitude and latitude exists. My code looks like this:
Location.exists?(longitude: 9.81639, latitude: 52.4003)
Location Exists (0.3ms) SELECT 1 AS one FROM `locations` WHERE `locations`.`longitude` = 9.81639 AND `locations`.`latitude` = 52.4003 LIMIT 1
=> false
Now this always returns false, even when an object with this exact data exists.
When using
Location.exists?(country: "Germany")
I get the following output, everything works just fine, and I get the desired results:
Location.where(country: "Germany").first.attributes
Location Load (0.4ms) SELECT `locations`.* FROM `locations` WHERE `locations`.`country` = 'Germany' ORDER BY `locations`.`id` ASC LIMIT 1
{"id"=>1, "spot_id"=>nil, "street"=>"Garbeweg 2", "zip"=>"30655", "city"=>"Hanover", "country"=>"Germany", "longitude"=>9.81639, "latitude"=>52.4003, "created_at"=>Wed, 09 Jul 2014 13:08:48 UTC +00:00, "updated_at"=>Wed, 09 Jul 2014 13:08:48 UTC +00:00}
I just can't figure out what I am doing wrong.
EDIT: Some additional output
Location.where(longitude: 9.81639, latitude: 52.4003).first
Location Load (0.3ms) SELECT `locations`.* FROM `locations` WHERE `locations`.`longitude` = 9.81639 AND `locations`.`latitude` = 52.4003 ORDER BY `locations`.`id` ASC LIMIT 1
=> nil
When checking the equality manually, I get this:
l = Location.first
{"id"=>1, "spot_id"=>nil, "street"=>"Garbeweg 2", "zip"=>"30655", "city"=>"Hanover", "country"=>"Germany", "longitude"=>9.81639, "latitude"=>52.4003, "created_at"=>Wed, 09 Jul 2014 13:08:48 UTC +00:00, "updated_at"=>Wed, 09 Jul 2014 13:08:48 UTC +00:00}
l.longitude == 9.81639 && l.latitude == 52.4003
=> true
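The backticks in the SQL log indicate MySQL. A likely cause (an assumption, since the schema isn't shown) is that longitude and latitude are single-precision FLOAT columns: the stored value is then not bit-identical to the double 9.81639 in the query, so the exact SQL equality never matches, while the adapter hands Ruby back the shortened, matching value, which is why the in-Ruby comparison succeeds. A pure-Ruby stand-in for the round-trip:

```ruby
# Round-trip the double through 32-bit storage, as a FLOAT column would.
stored = [9.81639].pack('f').unpack1('f')

puts stored             # slightly different from 9.81639
puts stored == 9.81639  # exact comparison fails, like the SQL WHERE clause

# Workaround idea: compare with a tolerance instead of exact equality,
# e.g. Location.exists?(longitude: (9.81639 - 1e-5)..(9.81639 + 1e-5))
eps = 1e-5
puts (stored - 9.81639).abs < eps  # tolerance comparison succeeds
```

If this is the cause, switching the columns to DECIMAL, or querying with a small range instead of exact equality, avoids the problem.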
I have a create action in my ProductsController (the params come from a form in my view):
def create
vendor = #current_vendor
product = Product.create(:name => params[:product][:name])
vendor.products << product
vendor.belongings.create(:product_id => product.id, :count => params[:belonging][:count], :detail => params[:belonging][:detail])
if vendor.save
flash[:notice] = "Produkt hinzugefügt!"
redirect_back_or_default root_url
else
render :action => :new
end
end
It creates a variable vendor, which stores the currently logged-in vendor (Authlogic).
A new Product is created (the product name comes from the input field in the form) and stored in the variable product.
The product is connected to the current vendor.
In the belongings table, additional information about the product is stored.
Then it saves the whole thing.
It's a many-to-many relationship through the belongings table.
My problem is that the create action always creates the product twice!
Thanks for your help! :)
My console log when I create a new object through my form is:
Started POST "/products" for 127.0.0.1 at 2013-09-15 20:40:26 +0200
Processing by ProductsController#create as HTML
Parameters: {"utf8"=>"✓", "authenticity_token"=>"lNk/qQMP0xhlCuGgHtU+d5NEvIlCFcPSKB0FxDZH0zY=", "product"=>{"name"=>"Erdbeeren"}, "belonging"=>{"count"=>"20", "detail"=>"Rot"}, "commit"=>"Create"}
DEPRECATION WARNING: ActiveRecord::Base#with_scope and #with_exclusive_scope are deprecated. Please use ActiveRecord::Relation#scoping instead. (You can use #merge to merge multiple scopes together.). (called from current_vendor_session at /Users/reto_gian/Desktop/dici/app/controllers/application_controller.rb:11)
Vendor Load (0.3ms) SELECT "vendors".* FROM "vendors" WHERE "vendors"."persistence_token" = '04f75db0e2ef108ddb0ae1be1da167536d47b4d79c60ecb443ad2ea5717ecd752388e581f9379746568c72372be4f08585aa5581915b1be64dc412cded73a705' LIMIT 1
(0.1ms) begin transaction
SQL (0.8ms) INSERT INTO "products" ("created_at", "name", "updated_at") VALUES (?, ?, ?) [["created_at", Sun, 15 Sep 2013 18:40:26 UTC +00:00], ["name", "Erdbeeren"], ["updated_at", Sun, 15 Sep 2013 18:40:26 UTC +00:00]]
(0.8ms) commit transaction
(0.1ms) begin transaction
SQL (0.6ms) INSERT INTO "belongings" ("created_at", "product_id", "updated_at", "vendor_id") VALUES (?, ?, ?, ?) [["created_at", Sun, 15 Sep 2013 18:40:26 UTC +00:00], ["product_id", 7], ["updated_at", Sun, 15 Sep 2013 18:40:26 UTC +00:00], ["vendor_id", 1]]
(0.9ms) commit transaction
(0.1ms) begin transaction
SQL (0.6ms) INSERT INTO "belongings" ("count", "created_at", "detail", "product_id", "updated_at", "vendor_id") VALUES (?, ?, ?, ?, ?, ?) [["count", "20"], ["created_at", Sun, 15 Sep 2013 18:40:26 UTC +00:00], ["detail", "Rot"], ["product_id", 7], ["updated_at", Sun, 15 Sep 2013 18:40:26 UTC +00:00], ["vendor_id", 1]]
(0.9ms) commit transaction
Redirected to http://localhost:3000/
Completed 302 Found in 30ms (ActiveRecord: 5.1ms)
I think the problem could be in the line
vendor.products << product
This is adding the variable product (which is Product.create(:name => params[:product][:name])) to vendor.products a second time, which is unnecessary and likely the source of your problem.
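A minimal sketch of the corrected action (same models and Authlogic helper as in the question; the persisted? success check is my own addition, not from the original). Since vendor.belongings.create already writes the join row with count and detail, the bare `vendor.products << product` line, which wrote the second, attribute-less belongings row, can simply be deleted:

```ruby
def create
  vendor  = current_vendor
  product = Product.create(:name => params[:product][:name])
  # This one call both links the product to the vendor and stores the
  # extra attributes; no separate `vendor.products << product` needed.
  belonging = vendor.belongings.create(
    :product_id => product.id,
    :count      => params[:belonging][:count],
    :detail     => params[:belonging][:detail]
  )
  if product.persisted? && belonging.persisted?
    flash[:notice] = "Produkt hinzugefügt!"
    redirect_back_or_default root_url
  else
    render :action => :new
  end
end
```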
Update 2
Incidentally, as I was typing Update 1, @muffinista posted the same conclusion I came to there, and I just want to clarify that I am not sniping his answer and updating the question randomly. After thinking it through some more after posting the question, I realized what the issue was. While @muffinista posted the cause, his solution was inadequate; that's why I am leaving Update 1 intact. I found a better solution, and it makes sense for others to get the full context.
Update 1
So I figured what is causing this error, an infinite loop.
I am trying to save the client record from within an after_save callback on the Client model. So it keeps trying to save the client record, which executes the after_save callback, which tries to save the client record again, and so on.
How can I achieve what I want, i.e. update this client.weighted_score attribute whenever the record is updated, without falling into this loop?
Original Question
I have this callback:
after_save :calculate_weighted_score, :if => Proc.new { |c| c.score.present? }
def calculate_weighted_score
# Sum products of weight & scores of each attribute
client = self
weight = Weight.first
score = self.score
client.weighted_score = (weight.firm_size * score.firm_size) + (weight.priority_level * score.priority_level) +
(weight.inflection_point * score.inflection_point) + (weight.personal_priority * score.personal_priority) +
(weight.sales_priority * score.sales_priority) + (weight.sales_team_priority * score.sales_team_priority) +
(weight.days_since_contact * score.days_since_contact) + (weight.does_client_vote * score.does_client_vote) +
(weight.did_client_vote_for_us * score.did_client_vote_for_us) + (weight.days_until_next_vote * score.days_until_next_vote) +
(weight.does_client_vote_ii * score.does_client_vote_ii) + (weight.did_client_vote_ii_for_us * score.did_client_vote_ii_for_us) +
(weight.days_until_vote_ii * score.days_until_vote_ii)
# self.save
# client.update_attributes(:weighted_score => weighted_score)
end
This is an example of the state of a Client record before this callback is run:
#<Client:0x007fe00dbcea90> {
:id => 10,
:name => "Manta-Jar Gale",
:email => "mj#gmail.com",
:phone => 8769876435,
:firm_id => 1,
:created_at => Fri, 23 Nov 2012 23:50:09 UTC +00:00,
:updated_at => Tue, 27 Nov 2012 17:50:01 UTC +00:00,
:user_id => 1,
:personal_priority => true,
:last_contact => Sat, 08 Jan 2011,
:vote => true,
:vote_for_user => false,
:next_vote => Thu, 02 Jan 2014,
:vote_ii => true,
:vote_ii_for_us => true,
:next_vote_ii => Mon, 01 Jul 2013,
:weighted_score => nil,
:firm_size => 100.0
}
Notice the weighted_score => nil attribute.
After the callback, this same record looks like this:
#<Client:0x007fe00dbcea90> {
:id => 10,
:name => "Manta-Jar Gale",
:email => "mj#gmail.com",
:phone => 8769876435,
:firm_id => 1,
:created_at => Fri, 23 Nov 2012 23:50:09 UTC +00:00,
:updated_at => Tue, 27 Nov 2012 17:50:01 UTC +00:00,
:user_id => 1,
:personal_priority => true,
:last_contact => Sat, 08 Jan 2011,
:vote => true,
:vote_for_user => false,
:next_vote => Thu, 02 Jan 2014,
:vote_ii => true,
:vote_ii_for_us => true,
:next_vote_ii => Mon, 01 Jul 2013,
:weighted_score => 9808,
:firm_size => 100.0
}
Notice the weighted_score => 9808 attribute.
So I know that the callback calculate_weighted_score is being run, and the entire callback seems to be correct up until the assignment of the client.weighted_score. The issue is, the log shows no UPDATE db transaction for that attribute:
Started PUT "/clients/10" for 127.0.0.1 at 2012-11-27 12:50:01 -0500
Processing by ClientsController#update as HTML
Parameters: {"utf8"=>"✓", "authenticity_token"=>"J172LuZQ=", "client"=>{"name"=>"Manta-Jar Gale", "email"=>"mj#gmail.com", "phone"=>"8769876435", "firm_id"=>"1", "topic_ids"=>["1", "2", "9"], "personal_priority"=>"1", "last_contact"=>"2011-01-08", "vote"=>"1", "vote_for_user"=>"0", "next_vote"=>"2014-01-02", "vote_ii"=>"1", "vote_ii_for_us"=>"1"}, "commit"=>"Update Client", "id"=>"10"}
User Load (0.3ms) SELECT "users".* FROM "users" WHERE "users"."id" = 1 LIMIT 1
Client Load (0.2ms) SELECT "clients".* FROM "clients" WHERE "clients"."user_id" = 1 AND "clients"."id" = ? LIMIT 1 [["id", "10"]]
Topic Load (0.3ms) SELECT "topics".* FROM "topics"
(0.1ms) begin transaction
Topic Load (0.2ms) SELECT "topics".* FROM "topics" WHERE "topics"."id" IN (1, 2, 9)
Topic Load (0.1ms) SELECT "topics".* FROM "topics" INNER JOIN "clients_topics" ON "topics"."id" = "clients_topics"."topic_id" WHERE "clients_topics"."client_id" = 10
(0.5ms) UPDATE "clients" SET "name" = 'Manta-Jar Gale', "updated_at" = '2012-11-27 17:50:01.856893' WHERE "clients"."id" = 10
Firm Load (0.2ms) SELECT "firms".* FROM "firms" WHERE "firms"."id" = 1 LIMIT 1
User Load (0.2ms) SELECT "users".* FROM "users" WHERE "users"."id" = 1 LIMIT 1
Score Load (0.2ms) SELECT "scores".* FROM "scores" WHERE "scores"."client_id" = 10 LIMIT 1
SalesTeam Load (0.1ms) SELECT "sales_teams".* FROM "sales_teams" WHERE "sales_teams"."id" = 2 LIMIT 1
PriorityLevel Load (0.1ms) SELECT "priority_levels".* FROM "priority_levels" WHERE "priority_levels"."id" = 1 LIMIT 1
CACHE (0.0ms) SELECT "scores".* FROM "scores" WHERE "scores"."client_id" = 10 LIMIT 1
Max Load (0.2ms) SELECT "maxes".* FROM "maxes" WHERE "maxes"."user_id" = 1 LIMIT 1
CACHE (0.0ms) SELECT "scores".* FROM "scores" WHERE "scores"."client_id" = 10 LIMIT 1
Weight Load (0.2ms) SELECT "weights".* FROM "weights" LIMIT 1
(3.2ms) commit transaction
Redirected to http://localhost:3000/clients
Completed 302 Found in 796ms (ActiveRecord: 26.4ms)
The only UPDATE transaction is for the edit to the name I made to test the callback.
I know there is no UPDATE transaction, because technically I am not saving the record.
But when I try any of the commented-out statements (i.e. client.save or client.update_attributes(...)), I get a "stack level too deep" error.
What is causing this and how can I save this record?
As indicated by @muffinista (and specified in the update to the question itself), the issue here is that the callback tries to save the record, which calls the callback, which tries to save the record... creating a Stack Overflow (I stole that joke from @ivan on this similar question).
It seems the best answer is to use update_column - the docs can be seen here.
It does the same thing update_attributes does, but skips validations and callbacks, which is fine for what I want in this particular instance.
Of special note, the format of update_column is not the same as update_attributes.
What you do is:
client.update_column(:weighted_score, weighted_score)
Where :weighted_score is my column name, and weighted_score is the local variable I set in the code example above that did my calculation for me.
For what it's worth, I got this answer from this SO answer.
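Putting it together, a sketch of the callback from the question with the recursive save replaced by update_column (the middle terms are elided; `total` is a local name I introduced):

```ruby
def calculate_weighted_score
  weight = Weight.first
  total  = (weight.firm_size * score.firm_size) +
           (weight.priority_level * score.priority_level) +
           # ... the remaining weight * score products from the question ...
           (weight.days_until_vote_ii * score.days_until_vote_ii)
  # Writes the column directly: no validations, no callbacks, no loop.
  update_column(:weighted_score, total)
end
```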
Your callback runs after the original record is saved, but you're modifying the record and trying to re-save it, which triggers the callback again, ad infinitum. This is basically a duplicate question; it's covered here, for example: when to update record after save?
How about using a before_save filter instead?
In my application I have: config.time_zone = 'Warsaw'
A strange issue I have: it seems Rails has problems comparing datetime fields.
If I change the datetime one hour back (and Warsaw is currently at +01:00), Rails won't update the database, even though the field has changed. However, if I change the field once again, then the update does go to the database.
Example:
(Rails 3.1.0, ruby-1.9.2-p290, fresh rails app):
$ rails g model User starts_at:datetime
$ rake db:migrate
$ rails c
Loading development environment (Rails 3.1.0)
ruby-1.9.2-p290 :001 > u = User.create({:starts_at => "2011-01-01 10:00"})
SQL (21.3ms) INSERT INTO "users" ("created_at", "starts_at", "updated_at") VALUES (?, ?, ?) [["created_at", Tue, 13 Dec 2011 11:32:50 CET +01:00], ["starts_at", Sat, 01 Jan 2011 10:00:00 CET +01:00], ["updated_at", Tue, 13 Dec 2011 11:32:50 CET +01:00]]
=> #<User id: 1, starts_at: "2011-01-01 09:00:00", created_at: "2011-12-13 10:32:50", updated_at: "2011-12-13 10:32:50">
ruby-1.9.2-p290 :002 > u.starts_at
=> Sat, 01 Jan 2011 10:00:00 CET +01:00 # datetime created
ruby-1.9.2-p290 :003 > u.starts_at = "2011-01-01 09:00:00" # new datetime with one hour back
=> "2011-01-01 09:00:00"
ruby-1.9.2-p290 :004 > u.starts_at
=> Sat, 01 Jan 2011 09:00:00 CET +01:00 # changed datetime
ruby-1.9.2-p290 :005 > u.save
=> true
ruby-1.9.2-p290 :006 > u.starts_at = "2011-01-01 09:00:00"
=> "2011-01-01 09:00:00"
ruby-1.9.2-p290 :007 > u.save
(0.3ms) UPDATE "users" SET "starts_at" = '2011-01-01 08:00:00.000000', "updated_at" = '2011-12-13 10:33:17.919092' WHERE "users"."id" = 1
=> true
I've tested it in this fresh app because I have the same problem in a larger application. What is going on? I've tried browsing the Rails code and re-copying the relevant code 'by hand' in the console (update, assign_attributes; I even checked time_zone_conversion), and there it worked, but not in the 'real world'.
Looks like you stumbled onto a similar issue.
The problem appears to be here:
https://github.com/rails/rails/blob/3-1-stable/activerecord/lib/active_record/attribute_methods/dirty.rb#L62
When Rails is testing whether the value has changed, it compares old and new:
old = the value from the cache (a Time in your current timezone)
new = a Time in UTC (+00:00), as it will be saved in the database
When the difference between the times equals the UTC offset, the comparison above erroneously succeeds (luckily, the newly cached value holds the intended change).
The next save/update compares against the new (and correct) cached value and marks the field as changed.
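A pure-Ruby stand-in for the effect (times taken from the transcript; this is an illustration of the zone-blind comparison, not the actual Rails internals): the old 10:00 CET value lives in the cache as 09:00 UTC, and the newly assigned wall-clock string "2011-01-01 09:00:00" compares equal to it when no zone is applied, so the attribute is never marked dirty:

```ruby
require 'time'

cached_db_value = Time.parse('2011-01-01 09:00:00 UTC') # old 10:00 CET, stored in UTC
new_assignment  = '2011-01-01 09:00:00'                 # what the user assigned

# Zone-blind comparison: the new wall-clock time read as UTC matches the
# cached value exactly, so the record looks unchanged.
puts Time.parse("#{new_assignment} UTC") == cached_db_value    # true

# Zone-aware comparison: applying the +01:00 zone shows a real change.
puts Time.parse("#{new_assignment} +01:00") == cached_db_value # false
```

The two instants differ by exactly the UTC offset, which is why only changes of that particular size get swallowed.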
EDIT:
Done some tests, this works well for me:
https://github.com/rails/rails/blob/3-1-stable/activerecord/lib/active_record/attribute_methods/time_zone_conversion.rb#L50
Change
write_attribute(:#{attr_name}, original_time)
to
write_attribute(:#{attr_name}, time.in_time_zone('UTC').to_s)
Boris