I'm unable to create a row in the DB. Rails apparently starts a transaction and then immediately rolls it back without any errors. I'm using sqlite3.
logger.debug("creating billing name...")
BillingName.create() #also tried BillingName.new.save
logger.debug("...created")
Log file:
creating billing name...
  (0.1ms)  begin transaction
  (0.1ms)  rollback transaction
...created
select * from billing_name shows that indeed no entry has been added. How can I tell why the transaction is being rolled back?
You can check the errors after calling save or valid?:
billing_name = BillingName.new
billing_name.save # or billing_name.valid?
puts billing_name.errors.inspect
These are good answers that help you find the error by inspecting the model. I use those too.
But I've found that ActiveRecord doesn't always provide useful information, especially when the rollback comes from a callback: valid? will be true, errors will be empty, and the console won't show anything except the rollback. So a good place to look is at what is happening in those callbacks, specifically the before_create filter.
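For illustration, a minimal sketch of how a callback can cause exactly this kind of silent rollback (the BillingName model matches the question, but the callback, its condition, and the name column are hypothetical):

class BillingName < ActiveRecord::Base
  before_create :check_billing_details

  private

  def check_billing_details
    # Rails 5+: throwing :abort halts the callback chain and rolls back the save.
    # Rails 4.x and earlier: returning false from the callback has the same effect.
    throw(:abort) if name.blank?   # hypothetical condition on an assumed `name` column
  end
end

When the callback halts the chain, save returns false, the log shows only the begin transaction / rollback transaction pair, and errors is typically empty, which is why the rollback looks silent.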
As someone who always uses the bang methods, I recently had a similar issue and could not debug it.
The only thing was, I had upgraded to Rails 7.
Apparently, using return inside a transaction block is no longer supported and will result in a rollback.
GitHub: Deprecate committing a transaction exited with return or throw #29333
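A minimal sketch of the pattern being described (the method and calls are mine, purely to illustrate the early return, not code from the answer):

def charge(order)
  ActiveRecord::Base.transaction do
    order.update!(status: "charged")
    return true if order.already_notified?   # exiting the transaction block with `return`
    order.notify_customer!                   # hypothetical follow-up work
  end
end

How that early return is treated changed around the deprecation linked above, so the same code can behave differently across Rails versions; the safest fix is to restructure the method so the block runs to completion instead of returning out of it.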
Related
For an ActiveRecord query, I can see the SQL generated without actually executing it:
SomeModel.where(something: "something").to_sql
No query was sent to the DB, but I can see the SQL as a string.
Is there anything similar that can be done for the update SQL that will be generated by some_model.save?
I think maybe not, I can't find it!
#jrochkind, if we check the API for .save here, it just calls the create_or_update method. Such methods, which return boolean values, will not expose their queries through to_sql.
Also, I think a similar topic is covered here: to_sql not working on update_attributes or .save
And there is probably an alternate approach, such as overriding the ActiveRecord execute method.
I have not seen any method that generates the SQL for save yet, either.
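Not a way to get the UPDATE SQL without touching the database, but one workaround I would consider (the notification subscription below is my suggestion, not something from the answers above): run save inside a transaction that you roll back, and capture the statements it emits via the sql.active_record notification.

captured_sql = []
subscriber = ActiveSupport::Notifications.subscribe("sql.active_record") do |_name, _start, _finish, _id, payload|
  captured_sql << payload[:sql]
end

ActiveRecord::Base.transaction do
  some_model.save                # the INSERT/UPDATE is generated and sent to the DB...
  raise ActiveRecord::Rollback   # ...but rolled back, so nothing is persisted
end

ActiveSupport::Notifications.unsubscribe(subscriber)
puts captured_sql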
I also faced this situation earlier, and I found that we can use the sandbox mode of the Rails console to verify the queries; it rolls back the changes made in the session once we close the console.
rails console --sandbox
documentation
If you wish to test out some code without changing any data, you can do that by invoking rails console --sandbox.
#Rohan Daxini has added the reason why .to_sql is not available on save.
I have this situation too.
I'm considering doing a restore of a production backup in a local environment, doing the update there and using mysqldump.
Do the save within a transaction with a rollback at the end, e.g.:
irb(main):024:0> Post.transaction do
irb(main):025:1* p=Post.create(user_id: User.first.id, text: '11', group_id: 1)
irb(main):026:1> raise ActiveRecord::Rollback
irb(main):027:1> end
I have a case where I need to create around 10,000 entries in a table, and after some research I decided to use a transaction to do it.
My problem is that I haven't found any documentation or guide that tells me where to put a transaction or how to execute it.
This can be achieved very easily:
ActiveRecord::Base.transaction do
... your code ...
end
The code inside the block will run within a database transaction. If an exception is raised during execution, all of the changes will be rolled back.
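For the bulk-creation case in the question, a sketch might look like this (Entry and its name attribute are placeholders for your own model):

ActiveRecord::Base.transaction do
  10_000.times do |i|
    Entry.create!(name: "entry #{i}")   # create! raises on failure, rolling back every row
  end
end

Note that the bang version matters here: a plain create that fails validation just returns false without raising, so it would not trigger the rollback.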
I am wondering: what does querying whether the connection is in a transaction actually do?
Example:
....
try
  if not DATA_MODULE.ACRDatabase1.InTransaction then
    DATA_MODULE.ACRDatabase1.StartTransaction;
  ....
  DATA_MODULE.ACRDatabase1.Commit();
except
  DATA_MODULE.ACRDatabase1.Rollback;
end;
Does it temporarily pause the current work if it detects that another transaction is in progress, wait for that transaction to complete, and only then execute? Or does it just fail (roll back) if another transaction is detected?
Attempting to start a transaction that has already been started will raise an exception. The call to InTransaction simply determines whether a transaction has already been started and returns True or False.
I prefer this: it protects you if you have any problems while editing, and only rolls back if there is a problem with the Commit. If any exception is raised after StartTransaction, you will never get to the Commit; the finally block always runs and makes sure that, if you are still in a transaction, it is rolled back. I try not to use try/except, so I don't have to worry about the Raise.
try
  DATA_MODULE.ACRDatabase1.StartTransaction;
  ....
  DATA_MODULE.ACRDatabase1.Commit();
finally
  if DATA_MODULE.ACRDatabase1.InTransaction then
    DATA_MODULE.ACRDatabase1.Rollback;
end;
At the very least, the code should be re-raising the exception, or your user will never know why the data is not saving. ReRaising Exception
I've got the following update query running in a function called by a before_destroy callback in a Rails model:
Annotation.joins(:annotation_groups)
.where({'annotation_groups.group_id' => self.id})
.update_all({qc_approved: false}) if doc.in_qc?
(I've also tried the following simpler version to see if another angle works: self.annotations.update_all({qc_approved: false}))
Both generate the SQL query below in the "Server development log" (debugging in RubyMine):
UPDATE "annotations" SET "qc_approved" = 'f' WHERE "annotations"."id" IN (SELECT "annotations"."id" FROM "annotations" INNER JOIN "annotation_groups" ON "annotation_groups"."annotation_id" = "annotations"."id" WHERE "annotation_groups"."group_id" = 159)
However, as far as I can tell, that SQL never causes a DB update, even though the destroy process afterwards works fine. I can set a breakpoint directly after the statement and look at the database, and the qc_approved fields are still true. However, I can copy and paste the statement into a Postgres console and run it, and it updates the fields correctly.
Is anyone aware of what would cause this behavior? Does before_destroy exist in its own strange alternate transactional universe that causes odd behavior like this? What scenario would cause the SQL to show up in the server log but never make it to the DB?
Thanks to the quick and helpful comments above confirming the nature of the callback inside the larger transaction, I figured it out: despite the name, before_destroy was actually executing after the dependent destroy calls, so the joined annotation_groups rows were destroyed before the UPDATE statement that relied on them ran in the transaction.
To be more specific, I added :prepend => true to the before_destroy definition so that it runs before the dependent destroys, as intended.
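For reference, a sketch of the resulting callback declaration (the Group model and its associations are assumptions based on the question; the doc.in_qc? check comes from the original snippet):

class Group < ActiveRecord::Base
  has_many :annotation_groups, dependent: :destroy

  # prepend: true runs this callback before the dependent destroys,
  # while the annotation_groups rows still exist for the join below
  before_destroy :unapprove_annotations, prepend: true

  def unapprove_annotations
    Annotation.joins(:annotation_groups)
              .where('annotation_groups.group_id' => id)
              .update_all(qc_approved: false) if doc.in_qc?
  end
end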
I have the following code in a rails model:
foo = Food.find(...)
foo.with_lock do
if bar = foo.bars.find_by_stuff(stuff)
# do something with bar
else
bar = foo.bars.create!
# do something with bar
end
end
The goal is to make sure that a Bar of the type being created is not being created twice.
Testing with_lock works at the console confirms my expectations. However, in production, it seems that in either some or all cases the lock is not working as expected, and the redundant Bar is being attempted -- so, the with_lock doesn't (always?) result in the code waiting for its turn.
What could be happening here?
Update
So sorry to everyone who was saying "locking foo won't help you"!! My example initially didn't have the bar lookup; this is fixed now.
You're confused about what with_lock does. From the fine manual:
with_lock(lock = true)
Wraps the passed block in a transaction, locking the object before yielding. You can pass the SQL locking clause as argument (see lock!).
If you check what with_lock does internally, you'll see that it is little more than a thin wrapper around lock!:
lock!(lock = true)
Obtain a row lock on this record. Reloads the record to obtain the requested lock.
So with_lock is simply doing a row lock and locking foo's row.
Don't bother with all this locking nonsense. The only sane way to handle this sort of situation is to use a unique constraint in the database; nothing but the database can ensure uniqueness, unless you want to do absurd things like locking whole tables. Then just go ahead and blindly try your INSERT or UPDATE, and trap and ignore the exception that is raised when the unique constraint is violated.
The correct way to handle this situation is actually right in the Rails docs:
http://apidock.com/rails/v4.0.2/ActiveRecord/Relation/find_or_create_by
begin
CreditAccount.find_or_create_by(user_id: user.id)
rescue ActiveRecord::RecordNotUnique
retry
end
("find_or_create_by" is not atomic, its actually a find and then a create. So replace that with your find and then create. The docs on this page describe this case exactly.)
Why don't you use a unique constraint? It's made for uniqueness.
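A sketch of adding such a constraint (the bars table, its foo_id/stuff columns, and the migration version are assumptions based on the question's code):

class AddUniqueIndexToBars < ActiveRecord::Migration[5.2]
  def change
    # the database, not the application, now guarantees one Bar per Foo and stuff value
    add_index :bars, [:foo_id, :stuff], unique: true
  end
end

With the index in place, a second concurrent insert raises ActiveRecord::RecordNotUnique, which is exactly the exception the rescue/retry pattern above handles.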
One reason a lock might not be working in a Rails app is the query cache.
If you try to obtain an exclusive lock on the same row multiple times in a single request, the query cache kicks in, so subsequent locking queries never reach the DB itself.
The issue has been reported on GitHub.
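If that is what is happening, one possible workaround (my sketch, not something from the linked issue) is to disable the query cache around the locking block:

# Force every locking SELECT ... FOR UPDATE to hit the database
# instead of being answered from the per-request query cache.
ActiveRecord::Base.uncached do
  foo.with_lock do
    # find or create the bar here, as in the question
  end
end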