One thing that keeps me banging my head against the wall in Rails is that there seems to be no clean way of saving a single attribute back to the model. Or at least, that is my understanding of it.
From what I know, the closest method is update_attribute (which is now deprecated?). However, it has a major drawback: it performs an update on all model fields.
If I'm mistaken, please tell me what the best way is to do this cleanly. If I'm correct, I seriously don't understand why there is no clean method that does this for a single attribute.
I just tested this out:
Code:
Order.update(1, :description => 'fff')
SQL executed:
UPDATE `orders` SET `updated_at` = '2011-04-25 05:23:29', `description` = 'fff' WHERE `orders`.`id` = 1
So yes, Rails' update does almost what you need (except that it also updates updated_at).
Tip: In IRB, you can execute ActiveRecord::Base.logger = Logger.new(STDOUT) to see the log output of Rails (including SQL statements).
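For example, putting that tip and the update together in a console session (everything here comes from the example above):
ActiveRecord::Base.logger = Logger.new(STDOUT)   # route Active Record's log, including SQL, to the console
Order.update(1, :description => 'fff')           # prints the UPDATE statement as it runs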
Related
For an ActiveRecord query, I can see the SQL generated without actually executing it:
SomeModel.where(something: "something").to_sql
No query was sent to the DB, but I can see the SQL as a string.
Is there anything similar that can be done for the update SQL that will be generated by some_model.save?
I think maybe not, I can't find it!
@jrochkind, if we check the API for .save here, it just calls the create_or_update method, so methods like this that return boolean values won't generate queries via to_sql.
Also, I think a similar topic is covered here: to_sql not working on update_attributes or .save
There is probably also an alternate approach, such as overriding ActiveRecord's execute method.
I have not seen any method that generates the SQL for save yet either.
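The answer above suggests overriding ActiveRecord's execute; a lighter alternative, sketched here, is to subscribe to the 'sql.active_record' instrumentation event that Rails publishes for every statement (the event name and the payload[:sql] key are standard Rails instrumentation; treat the exact block signature as version-dependent):
# Print every SQL statement Active Record sends in this process.
ActiveSupport::Notifications.subscribe('sql.active_record') do |_name, _start, _finish, _id, payload|
  puts payload[:sql]
end
some_model.save   # the UPDATE/INSERT it produces is printed as it runs
Combine this with a transaction that you roll back (or the sandbox console mentioned below) if you don't want the data to actually change.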
I also faced a situation like this earlier, and I found that we can use the sandbox mode of the Rails console to verify the queries; any changes made in the session are rolled back once we close the console.
rails console --sandbox
From the documentation:
If you wish to test out some code without changing any data, you can do that by invoking rails console --sandbox.
@Rohan Daxini has added the reason why .to_sql is not available on save.
I have this situation too.
I'm considering doing a restore of a production backup in a local environment, doing the update there and using mysqldump.
Do the save within a transaction with a rollback at the end, e.g. in the console:
Post.transaction do
  p = Post.create(user_id: User.first.id, text: '11', group_id: 1)
  raise ActiveRecord::Rollback
end
I've got the following update query running in a function called by a before_destroy callback in a Rails model:
Annotation.joins(:annotation_groups)
.where({'annotation_groups.group_id' => self.id})
    .update_all({qc_approved: false}) if doc.in_qc?
(I've also tried the following simpler version to see if another angle works: self.annotations.update_all({qc_approved: false}))
Both generate the SQL query below in the "Server development log" (debugging in RubyMine):
UPDATE "annotations" SET "qc_approved" = 'f' WHERE "annotations"."id" IN (SELECT "annotations"."id" FROM "annotations" INNER JOIN "annotation_groups" ON "annotation_groups"."annotation_id" = "annotations"."id" WHERE "annotation_groups"."group_id" = 159)
However, as far as I can tell, that SQL never causes a DB update, even though the destroy process afterwards works fine. I can set a breakpoint directly after the statement and look at the database, and the qc_approved fields are still true. However, I can copy and paste the statement into a Postgres console and run it, and it updates the fields correctly.
Does anyone know what would cause this behavior? Does before_destroy exist in its own strange alternate transactional universe that causes odd behavior like this? What scenario would cause the SQL to show up in the server log but never make it to the DB?
Thanks to the quick and helpful comments above confirming the nature of the callback inside the larger transaction, I figured it out: despite the name, before_destroy was actually executing after the dependent destroy calls, so the joined annotation_groups rows were destroyed before the UPDATE statement that relied on them ran in the transaction.
To be more specific, I added :prepend => true to the before_destroy definition so that it runs before the destroys, as intended.
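For illustration, the fixed callback might look roughly like this; the Group class name, the unapprove_annotations method name, and the doc association are assumptions pieced together from the question:
class Group < ActiveRecord::Base
  has_many :annotation_groups, :dependent => :destroy

  # :prepend => true runs this callback before the dependent-destroy callbacks,
  # so the annotation_groups rows still exist when the UPDATE below runs.
  before_destroy :unapprove_annotations, :prepend => true

  def unapprove_annotations
    Annotation.joins(:annotation_groups)
              .where('annotation_groups.group_id' => id)
              .update_all(:qc_approved => false) if doc.in_qc?
  end
end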
I have a Rails 3 app with several hundred records in a MySQL DB that need to be updated multiple times each hour. The actual updating is done through delayed_job, which is triggered in controller logic (it checks whether enough time has passed since the last update; only then does anything happen).
Each update is slow; it can take up to a second in some cases (although it averages 3 - 5 updates/sec).
The code looks like this:
class Thing < ActiveRecord::Base
...
def self.scheduled_update
Thing.all.each do |t|
...
t.some_property = new_value
t.save
end
end
end
I've observed that execution stalls after 300 - 400 records; the delayed job then seems to hang and eventually times out (there are entries in delayed_job.log). After a while the next job starts, also fails, and so forth, so not all records get updated.
What is the proper way to do this?
How does Rails handle database connections when used like that? Could it be some timeout issue that is not detected/handled properly?
There must be a standard way to do this, but I couldn't find anything so far.
Any help is appreciated.
Another option is update_all.
Rails is a bad choice for mass updates of records. See if you can write a SQL stored procedure or find some other approach that avoids Active Record.
Use object.save(:validate => false) if you are OK with skipping validations altogether (save_with_validation(false) is the older Rails 2 spelling).
When finding records, use :select => 'a,b,c,other_fields' to limit the fields you fetch ('a', 'b', 'c' and 'other_fields' in this example).
Use :include for eager loading when you are initially selecting and joining across multiple tables.
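Putting a few of those tips together, one possible shape for the hourly job is sketched below; compute_new_value_for is a hypothetical placeholder for however new_value is actually derived:
class Thing < ActiveRecord::Base
  def self.scheduled_update
    # Walk the table in batches instead of loading every row into memory at once,
    # and skip validations if that is acceptable for this column.
    Thing.find_each(:batch_size => 100) do |t|
      t.some_property = compute_new_value_for(t)  # hypothetical helper
      t.save(:validate => false)
    end
  end
end
If the new value happens to be the same for every row, a single Thing.update_all(:some_property => new_value) is far cheaper than saving records one by one.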
So I solved my problem.
There was an issue with the Rails version I was using (3.0.3); I suspect the timeout was caused by a bug. Updating to a later version of the 3.0.x branch solved it, and everything runs perfectly now.
I've got pretty much the same problem as this guy: http://www.ruby-forum.com/topic/197440.
I'm trying to touch a column (:touched_at) without having it auto-update :updated_at, but watching the SQL queries, it always updates both to the current time.
I thought it might be something to do with the particular model I was using it on, so I tried a couple different ones with the same result.
Does anyone know what might be causing it to always set :updated_at when touching a different column? touch uses write_attribute internally, so it shouldn't be doing this.
Edit:
Some clarification... the Rails 2.3.5 docs for touch state that "If an attribute name is passed, that attribute is used for the touch instead of the updated_at/on attributes." But mine isn't acting that way. Perhaps it's a case of the docs having drifted away from the actual state of the code?
You pretty much want to write custom SQL:
def touch!
self.class.update_all({:touched_at => Time.now.utc}, {:id => self.id})
end
This will generate this SQL:
UPDATE posts SET touched_at = '2010-01-01 00:00:00.0000' WHERE id = 1
which is what you're after. If you call #save, this will end up calling #create_with_timestamps or #update_with_timestamps, and these are what update the updated_on/updated_at/created_on/created_at columns.
By the way, the source for #touch says it all.
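If you would rather stay inside Active Record than write SQL, another option (not from the original answer, and note that record_timestamps is a class-wide flag, so this is not thread-safe) is to switch off automatic timestamping around the one write. A rough sketch:
def touch!
  # Temporarily disable timestamping so updated_at is left alone,
  # then restore the previous setting whatever happens.
  old = self.class.record_timestamps
  self.class.record_timestamps = false
  update_attribute(:touched_at, Time.now.utc)
ensure
  self.class.record_timestamps = old
end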
I'm trying to update one of my objects in my Rails app and the changes just don't stick. There are no errors, and stepping through with the debugger suggests that everything is updating.
Anyway, here is the code in question...
qm = QuestionMembership.find(:first, :conditions => ["question_id = ? AND form_id = ?", q_id, form_id])
qm.position = x
qm.save
For reference's sake, QuestionMembership has question_id, form_id, and position fields. All are integers and have no DB constraints.
It is basically my join table between Forms and Questions.
Stepping through the code, qm gets a valid object, the position of the object does get changed to the value of x, and save returns 'true'.
However, after the method exits, the object in the db is unchanged.
What am I missing?
You may not be finding the object that you think you are. Some experimenting in irb might be enlightening.
Also, as a general rule when changing only one attribute, it's better to write
qm.update_attribute(:position, x)
instead of setting and saving. Rails will then update only that column instead of the entire row (note that update_attribute also skips validations), and you get the benefit of the data being scrubbed as well.
Is there an after_save?
Is the correct SQL being emitted?
In the development log, you can actually see the SQL that is generated.
For something like this:
qm = QuestionMembership.find(:first, :conditions => ["question_id = ? AND form_id = ?", q_id, form_id])
qm.position = x
qm.save
You should see something to the effect of:
SELECT * FROM question_memberships WHERE question_id=2 AND form_id=6 LIMIT 1
UPDATE question_memberships SET position = x WHERE id = 5
Can you post the SQL you are actually seeing so we can compare?
Either update the attribute or call:
qm.reload
after the qm.save
What is the result of qm.save? True or false? And what about qm.errors, does that provide anything that makes sense to you? And what does the development.log say?
I have run into this problem rather frequently. (I was about to say consistently, but I cannot, as that would imply that I would know when it was about to happen.)
While I have no solution to the underlying issue, I have found that it seems to happen to me only when I am trying to update MySQL text fields. My workaround has been to set the field to something like this:
qm.position = ""
qm.save
qm.position = x
qm.save
And to answer everyone else: when I run qm.save! I get no errors. I have not tried qm.save?
When I run through my code in the Rails console, everything works perfectly, as evidenced by re-finding the object with the same query and getting the expected results.
I have the same issue when using qm.update_attribute(... as well.
My workaround has gotten me limping this far, but hopefully someone on this thread will be able to help.
Try changing qm.save to qm.save! and see if you get an exception message.
Edit: What happens when you watch the log on the call to .save!? Does it generate the expected SQL?
Use ./script/console and run this script step by step.
See whether the position field of the object is updated or not when you run line 2,
then call qm.save or qm.save! to test,
and see what happens. Also, as mentioned by Tim, check the logs.
Check your QuestionMembership class and verify that position does not have something like
attr_readonly :position
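To illustrate what that check is looking for (the associations here are guesses based on the question):
class QuestionMembership < ActiveRecord::Base
  belongs_to :question
  belongs_to :form

  # If this line were present, ActiveRecord would silently drop position from
  # UPDATE statements: save still returns true, but the column never changes.
  # attr_readonly :position
end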
The best way to debug this is to run
tail -f log/development.log
and then open another console and run the code that executes the save. Verify that the actual SQL UPDATE statement is executed.
Check to make sure your database settings are correct. If you're working with multiple databases (or haven't changed the default sqlite3 database to MySQL) you may be working with the wrong database.
Run the commands in ./script/console to see if you see the same behavior.
Verify that a similar object (say a Form or Question) saves.
If the Form or Question saves, find the difference between the QuestionMembership and Form or Question object.
Turns out that it was emitting the wrong SQL. Basically, it was looking up the QuestionMembership object by the id column, which didn't exist.
I was under the impression that that column was unnecessary with has_many :through relationships, but it seems I was misguided.
To fix it, I simply added an id column to the table as a primary key. Thanks for all the pointers.
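For completeness, the fix amounts to something like the migration below; the table name comes from the question, everything else is an assumption:
class AddIdToQuestionMemberships < ActiveRecord::Migration
  def self.up
    # Add a surrogate primary key so ActiveRecord can address rows by id.
    add_column :question_memberships, :id, :primary_key
  end

  def self.down
    remove_column :question_memberships, :id
  end
end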