how to lock record in RoR - ruby-on-rails

I'm developing a reservation system for stuff that you could rent.
I would like to restrict multiple users from reserving the same item.
I display a list, which user can click on the item to check the details.
If any user has already opened the detail view then other user can not open it at the same time.
I am maintaining a flag called is_lock to check whether the record is already locked, but I was facing an issue when multiple users clicked on the same item at the same time.
So I implemented a pessimistic lock, which reduced how often multiple users could open the same item, but it did not completely fix the issue; I am still seeing the same thing.
begin
  Item.transaction do
    item = Item.lock.where(id: item_id, is_lock: false)
    item.is_lock = true
    item.save!
  end
rescue Exception => e
  # Something went wrong.
end
Above is the code that I have implemented.
Please let me know if I am doing anything wrong.
EDIT:
I've tried the solution provided by @rmlockerd in the following way:
Run rails in 2 separate consoles.
Fetch record with lock that has id:100 from console-1.
Try to fetch the same record from console-2.
But the above test failed: I was able to fetch the same record from both consoles even though the record was locked from console-1.

It might be misleading to judge from just the snippet you provided, but there does seem to be a possible race condition due to your .where predicate.
If User2 attempts to get a lock on the same item after User1 but before the first transaction commits, the .where will still match the original record with is_lock false. The default behaviour for .lock is simply to wait its turn for the lock, so User2 would block until the original transaction commits, then get the lock and proceed to set is_lock to true as well.
The good news is that when you get a lock, Rails reloads the record so you are getting the latest data. Checking is_lock after obtaining the lock should eliminate that race condition, like so:
Item.transaction do
  item = Item.lock.find_by(id: item_id, is_lock: false) # only 1, so where is unnecessary
  return if item.blank? || item.is_lock # re-check after the lock reloads the record
  item.update!(is_lock: true)
end
# I have the lock; do stuff...
The .lock method also takes an optional 'locking clause' -- which varies based on the database you use -- that can be used to configure the locking behaviour. For example, if you use Postgres, you could do:
Item.transaction do
  item = Item.lock('FOR UPDATE SKIP LOCKED').find_by(id: item_id, is_lock: false)
  return if item.blank?
  item.update!(is_lock: true)
end
The SKIP LOCKED clause directs Postgres to automatically skip any record that is already locked. In the race condition described above, the second call to .lock would bail immediately and return nil, so a simple check of item presence would suffice. Check out the Postgres or MySQL documentation if you're interested in database-specific locking clauses.
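As an aside, the blocking-versus-skipping distinction can be sketched in plain Ruby, with a Mutex standing in for the row lock (no database involved; this is purely an analogy): Mutex#lock waits like a plain FOR UPDATE, while Mutex#try_lock gives up immediately like SKIP LOCKED.

```ruby
lock = Mutex.new # stands in for the row lock on the item

# First "user" holds the lock for a while (their transaction is open).
holder = Thread.new { lock.synchronize { sleep 0.2 } }
sleep 0.05 # let the holder acquire the lock first

# FOR UPDATE behaviour: lock.lock here would block until the holder commits.
# FOR UPDATE SKIP LOCKED behaviour: give up immediately if the row is taken.
if lock.try_lock
  puts "got the lock"
  lock.unlock
else
  puts "row already locked, skipping"
end

holder.join
```

With the holder still inside its critical section, try_lock returns false and the second "user" skips the item instead of queueing behind the first.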

Related

find_or_create ActiveRecord::RecordNotUnique rescue retry not working

Using Ruby 2.1.2 and Rails 4.1.4. I have a unique constraint on the email column in my database. If there's a race condition where one thread is creating a user and another attempts to create one at the same time, an ActiveRecord::RecordNotUnique exception is raised. I am trying to handle this exception by retrying, so that on the retry it will find the existing record during the SELECT in the find_or_create.
However, it does not seem to be working. It runs out of retries and then re-raises the exception.
Here's what I was doing originally:
retries = 10
begin
  user = User.find_or_create_by(:email => email)
rescue ActiveRecord::RecordNotUnique
  retry unless (retries -= 1).zero?
  raise
end
I then thought maybe the database connection was caching the SELECT query result causing it to think the user still does not exist and to continue to try to create it. To fix that I tried using Model.uncached to disable query caching:
retries = 10
begin
  User.uncached do
    user = User.find_or_create_by(:email => email)
  end
rescue ActiveRecord::RecordNotUnique
  retry unless (retries -= 1).zero?
  raise
end
This is also not working, and I'm not sure what else to do to fix this issue. Should I increase the retries? Add a sleep delay between retries? Is there a better way to clear the query cache (if that's the issue)?
Any ideas?
Thank you!
If you are using Postgres, the active_record_upsert gem (https://github.com/jesjos/active_record_upsert) makes it really easy to add "ON CONFLICT DO NOTHING/UPDATE" in a scenario like this. You would be able to replace your code with:
User.upsert(email: email)
If you want to add or update other data in the same operation as part of the same upsert call, you should first specify that email is the unique key for the model:
class User < ActiveRecord::Base
  upsert_keys [:email]
end
Edit: To identify the problem with your approach, try posting the relevant section of the log so we can see which SQL statements are being issued, including the transaction BEGIN/COMMITs.
This issue is fixed in Rails 6+ with the method create_or_find_by.
Related PR: https://github.com/rails/rails/pull/31989
Documentation: https://apidock.com/rails/v6.0.0/ActiveRecord/Relation/create_or_find_by
This is similar to #find_or_create_by, but avoids the problem of stale reads between the SELECT and the INSERT, as that method needs to first query the table, then attempt to insert a row if none is found.
Warning: There are several drawbacks to #create_or_find_by, so consider reading the docs for your use case.
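The insert-first idea behind create_or_find_by can be sketched in plain Ruby, with a Hash standing in for the table and its unique index (all names here are illustrative, not ActiveRecord API):

```ruby
require "monitor"

TABLE        = {} # toy "users" table, keyed by the unique email column
TABLE_LOCK   = Monitor.new
DuplicateKey = Class.new(StandardError) # stands in for ActiveRecord::RecordNotUnique

def insert!(email)
  TABLE_LOCK.synchronize do
    raise DuplicateKey if TABLE.key?(email) # the unique index fires
    TABLE[email] = { email: email }
  end
end

# create_or_find_by: attempt the INSERT first; on a uniqueness violation,
# fall back to SELECTing the row that beat us to it.
def create_or_find_by(email)
  insert!(email)
rescue DuplicateKey
  TABLE_LOCK.synchronize { TABLE[email] }
end

first_row  = create_or_find_by("bob@example.com") # inserts a new row
second_row = create_or_find_by("bob@example.com") # finds the existing row
```

Unlike the find-then-insert order of find_or_create_by, a concurrent duplicate here always resolves to the row that won the insert, so no retry loop is needed.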

Database not updating correctly in Rails

I was hoping you could help me with a problem I've been stuck on for quite a while now. I have a database with tickets. These tickets contain information, like a status. My application uses the Zendesk API to get info from support tickets and store them into my database.
What I want to do is store the previous and current status of a ticket into my database. I am trying to accomplish this by storing the old values before updating my database. At first this seemed to work great. Whenever I change the status in Zendesk, my app changes the previous_state to the old state value and the actual state to the one it gathers from Zendesk.
However, it goes wrong whenever I refresh my page. When that happens (and the method gets called again), for some reason it sets both previous_state and state to the same value. I must be doing something wrong in one of my update or store lines, but I can't figure out what. I hope someone can help me out.
Ticket is the Ticket model, client is the Zendesk connection. The last loop checks whether status and previous_status are the same and, if so, tries to put the previous state back to what it was before the big Zendesk update. The idea is that the previous state remains unchanged until the actual state changes.
previousTickets = Ticket.all
Ticket.all.each do |old|
  old.update(:previous_status => old.status)
end

client.tickets.each do |zt|
  Ticket.find_by(:ticket_id => zt.id).update(
    subject: zt.subject,
    description: zt.description,
    ticket_type: zt.type,
    status: zt.status,
    created: zt.created_at,
    organization_id: zt.organization_id,
  )
end

Ticket.all.each do |newTicket|
  if newTicket.status == newTicket.previous_status
    b = previousTickets.find_by(:ticket_id => newTicket.ticket_id)
    c = b.previous_status
    newTicket.update(:previous_status => c)
  end
end
Your last loop isn't working because previousTickets does not contain the previous tickets, but the current ones. This is because Ticket.all returns only an ActiveRecord relation, and such a relation loads data lazily: until you use the content of the relation, nothing is loaded from the database.
You could explicitly load all tickets by converting the relation to an array:
previousTickets = Ticket.all.to_a
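The pitfall can be sketched without ActiveRecord, with a lambda standing in for the unevaluated relation (purely an analogy):

```ruby
db = { 1 => "open" } # the tickets table

relation = -> { db.dup } # like Ticket.all: a query description, not data
snapshot = db.dup        # like Ticket.all.to_a: rows loaded right now

db[1] = "closed"         # the big Zendesk update runs

relation.call[1] # => "closed" -- evaluating the "relation" sees current data
snapshot[1]      # => "open"   -- the snapshot kept the pre-update value
```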
But I think you could achieve everything in a single loop: instead of populating every previous_status in the first loop and reverting it in the last, simply change previous_status at the moment you change the current status:
client.tickets.each do |zt|
  ticket = Ticket.find_by(:ticket_id => zt.id)
  previous_status = ticket.previous_status
  previous_status = ticket.status if zt.status != ticket.status
  ticket.update(
    subject: zt.subject,
    description: zt.description,
    ticket_type: zt.type,
    previous_status: previous_status,
    status: zt.status,
    created: zt.created_at,
    organization_id: zt.organization_id,
  )
end

Rails 3.2 ActiveRecord concurrency

I have one application that is a task manager.
Each user can select a new task to be assigned to himself.
Is there a problem of concurrency if 2 users accept the same task at the same moment?
My code looks like this:
if @user.task == nil
  @task.user = @user
  @task.save
end
If 2 different users on 2 different machines open this URL at the same time, will I have a problem?
You can use optimistic locking to prevent other "stale" records from being saved to the database. To enable it, your model needs to have a lock_version column with a default value of 0.
When the record is fetched from the database, the current lock_version comes along with it. When the record is modified and saved to the database, the database row is updated conditionally, by constraining the UPDATE on the lock_version that was present when the record was fetched. If it hasn't changed, the UPDATE will increment the lock_version. If it has changed, the update will do nothing, and an exception (ActiveRecord::StaleObjectError) will be raised. This is the default behavior for ActiveRecord unless turned off as follows:
ActiveRecord::Base.lock_optimistically = false
You can (optionally) use a column-name other than lock_version. To use a custom name, add a line like the following to your model-class:
set_locking_column :some_column_name
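The conditional UPDATE described above can be sketched in plain Ruby, with a toy row and a hypothetical StaleRecord error standing in for ActiveRecord::StaleObjectError:

```ruby
StaleRecord = Class.new(StandardError) # stands in for ActiveRecord::StaleObjectError

row = { user_id: nil, lock_version: 0 } # the task row as loaded from the database

# Like: UPDATE tasks SET user_id = ?, lock_version = lock_version + 1
#       WHERE id = ? AND lock_version = ?
def optimistic_update(row, expected_version, attrs)
  raise StaleRecord unless row[:lock_version] == expected_version
  row.merge!(attrs)
  row[:lock_version] += 1
  row
end

seen = row[:lock_version]                # both users fetched version 0
optimistic_update(row, seen, user_id: 1) # user 1 saves; version becomes 1

begin
  optimistic_update(row, seen, user_id: 2) # user 2's copy is now stale
rescue StaleRecord
  # user 2 must reload the record and decide what to do
end
```

The second save fails precisely because the version it read no longer matches the stored one, which is the whole mechanism behind lock_version.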
An alternative to optimistic locking is pessimistic locking, which relies on table- or row-level locks at the database level. This mechanism will block out all access to a locked row, and thus may negatively affect your performance.
I've never tried it, but you may use pessimistic locking: http://api.rubyonrails.org/classes/ActiveRecord/Locking/Pessimistic.html
You should be able to acquire a lock on your specific task, something like this:
@task = Task.find(some_id)
@task.with_lock do
  # Then let's check if there's still no one assigned to this task
  if @task.user.nil? && @user.task.nil?
    @task.user = @user
    @task.save
  end
end
Again, I never used this, so I'd test it with a big sleep inside the lock to make sure it actually locks everything the way you want.
Also, I'm not sure about reloading here. Since the row is locked, it may fail. But you have to make sure your object is fresh from the db after acquiring the lock; there may be another way to do it.
Edit: No need to reload; I checked the source code and with_lock does it for you.
https://github.com/rails/rails/blob/4c5b73fef8a41bd2bd8435fa4b00f7c40b721650/activerecord/lib/active_record/locking/pessimistic.rb#L61

Database lock not working as expected with Rails & Postgres

I have the following code in a rails model:
foo = Food.find(...)
foo.with_lock do
  if bar = foo.bars.find_by_stuff(stuff)
    # do something with bar
  else
    bar = foo.bars.create!
    # do something with bar
  end
end
The goal is to make sure that a Bar of the type being created is not being created twice.
Testing with_lock at the console confirms my expectations. However, in production it seems that in at least some cases the lock is not working as expected and a redundant Bar creation is being attempted; so with_lock doesn't (always?) make the code wait its turn.
What could be happening here?
update
So sorry to everyone who was saying "locking foo won't help you"! My example initially didn't have the bar lookup; this is fixed now.
You're confused about what with_lock does. From the fine manual:
with_lock(lock = true)
Wraps the passed block in a transaction, locking the object before yielding. You can pass the SQL locking clause as an argument (see lock!).
If you check what with_lock does internally, you'll see that it is little more than a thin wrapper around lock!:
lock!(lock = true)
Obtain a row lock on this record. Reloads the record to obtain the requested lock.
So with_lock is simply doing a row lock and locking foo's row.
Don't bother with all this locking nonsense. The only sane way to handle this sort of situation is with a unique constraint in the database; nothing but the database can ensure uniqueness, unless you want to do absurd things like locking whole tables. Then just go ahead and blindly try your INSERT or UPDATE, and trap and ignore the exception that will be raised when the unique constraint is violated.
The correct way to handle this situation is actually right in the Rails docs:
http://apidock.com/rails/v4.0.2/ActiveRecord/Relation/find_or_create_by
begin
  CreditAccount.find_or_create_by(user_id: user.id)
rescue ActiveRecord::RecordNotUnique
  retry
end
(find_or_create_by is not atomic; it's actually a find and then a create. So replace that with your find and then your create. The docs on this page describe this case exactly.)
Why don't you use a unique constraint? It's made for uniqueness
One reason a lock wouldn't work in a Rails app is the query cache.
If you try to obtain an exclusive lock on the same row multiple times in a single request, the query cache kicks in, so subsequent locking queries never reach the DB itself.
The issue has been reported on Github.

validates_uniqueness_of failing on heroku?

In my User model, I have:
validates_uniqueness_of :fb_uid (I'm using facebook connect).
However, at times, I'm getting duplicate rows upon user sign up. This is Very Bad.
The creation time of the two records is within 100ms. I haven't been able to determine if it happens in two separate requests or not (heroku logging sucks and only goes back so far and it's only happened twice).
Two things:
Sometimes the request takes some time, because I query FB API for name info, friends, and picture.
I'm using bigint to store fb_uid (backend is postgres).
I haven't been able to replicate in dev.
Any ideas would be extremely appreciated.
The signin function
def self.create_from_cookie(fb_cookie, remote_ip = nil)
  return nil unless fb_cookie
  return nil unless fb_hash = authenticate_cookie(fb_cookie)
  uid = fb_hash["uid"].join.to_i

  # Make user and set data
  fb_user = FacebookUser.new
  fb_user.fb_uid = uid
  fb_user.fb_authorized = true
  fb_user.email_confirmed = true
  fb_user.creation_ip = remote_ip
  fb_name_data, fb_friends_data, fb_photo_data, fb_photo_ext = fb_user.query_data(fb_hash)
  return nil unless fb_name_data
  fb_user.set_name(fb_name_data)
  fb_user.set_photo(fb_photo_data, fb_photo_ext)

  # Save user and friends to the db
  return nil unless fb_user.save
  fb_user.set_friends(fb_friends_data)
  return fb_user
end
I'm not terribly familiar with Facebook Connect, but is it possible to get two of the same uid if two separate users from two separate accounts post a request in very quick succession, before either request has completed (otherwise known as a race condition)? validates_uniqueness_of can still suffer from this sort of race condition; details can be found here:
http://apidock.com/rails/ActiveModel/Validations/ClassMethods/validates_uniqueness_of
Because this check is performed outside the database there is still a chance that duplicate values will be inserted in two parallel transactions. To guarantee against this you should create a unique index on the field. See add_index for more information.
You can really make sure this will never happen by adding a database constraint. Add this to a database migration and then run it:
add_index :users, :fb_uid, :unique => true
Now a user would get an error instead of being able to complete the request, which is usually preferable to generating illegal data in your database which you have to debug and clean out manually.
From Ruby on Rails v3.0.5 Module ActiveRecord::Validations::ClassMethods
http://s831.us/dK6mFQ
Concurrency and integrity
Using this [validates_uniqueness_of] validation method in conjunction with ActiveRecord::Base#save does not guarantee the absence of duplicate record insertions, because uniqueness checks on the application level are inherently prone to race conditions. For example, suppose that two users try to post a Comment at the same time, and a Comment's title must be unique. At the database-level, the actions performed by these users could be interleaved in the following manner: ...
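The interleaving the docs describe can be reproduced in plain Ruby: two threads both pass the application-level "does it exist?" check before either inserts (the sleep just widens the race window; no database or Rails involved):

```ruby
rows = [] # toy users table with no unique index

# validates_uniqueness_of in essence: SELECT to check, then INSERT.
check_then_insert = lambda do |email|
  exists = rows.include?(email) # the application-level uniqueness check
  sleep 0.05                    # window in which the other request also checks
  rows << email unless exists   # the INSERT; nothing stops a duplicate here
end

threads = 2.times.map { Thread.new { check_then_insert.call("bob@example.com") } }
threads.each(&:join)
rows.size # both checks ran before either insert, so the table has 2 rows
```

A unique index closes the window because the second INSERT itself fails at the database, no matter how the checks interleave.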
It seems like there is some sort of race condition inside your code. To check this, I would first change the code so that the Facebook values are extracted first, and only then create the new Facebook user object.
Then I would highly suggest writing a test to check whether your function gets executed only once; it seems that it's executed twice.
And beyond this, there seems to be a race condition while waiting for the Facebook results.
