How to ensure atomicity when updating a field in a table? - ruby-on-rails

I want to write a piece of code such that it is guaranteed that at any one time, only one process can update a field for a certain record in the posts table.
Is this the correct way to do it?
# Make a check before entering transaction, so that a transaction
# is not entered into needlessly (this check is just for avoiding
# using DB resources that will be used when starting a transaction)
if @post.can_set_to_active?
  ActiveRecord::Base.transaction do
    # Make a check again, this time after entering transaction, to be
    # sure that post can be marked active.
    # Expectation is that inside a transaction, it is guaranteed that no other
    # process can set the status of this post.
    if @post.can_set_to_active?
      # now set the post to active
      @post.status = :active
      @post.save
    end # end of check inside transaction
  end # end of transaction
end # end of check outside transaction
Also, is there some way to test this scenario using RSpec or even some other method?
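One rough way to exercise this in RSpec is to hit the same record from two threads and assert that only one caller wins. The sketch below assumes a hypothetical Post#activate! that performs the guarded update and returns true or false, and an assumed initial :pending status; transactional test fixtures would need to be disabled so the threads can see the record, and a test like this is inherently timing-sensitive:

# spec/models/post_activation_spec.rb (sketch)
it "lets only one of two concurrent callers activate the post" do
  post = Post.create!(status: :pending)
  results = Array.new(2)
  threads = 2.times.map do |i|
    Thread.new do
      # each thread checks out its own DB connection
      ActiveRecord::Base.connection_pool.with_connection do
        results[i] = Post.find(post.id).activate!
      end
    end
  end
  threads.each(&:join)
  expect(results.count(true)).to eq(1)
end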

class Post
  @@activation_lock = Mutex.new

  def activate
    self.status = :active
    self.save
  end
  synchronize :activate, :with => :@@activation_lock
end
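Note that a class-level Mutex like the one above only serializes callers inside a single Ruby process; it does nothing when several app processes or servers touch the same row. A database row lock covers that case too. A minimal sketch, reusing the can_set_to_active? check from the question:

@post.with_lock do
  # with_lock reloads the record and takes a row-level lock (SELECT ... FOR UPDATE)
  # inside a transaction, so no other process can update this row until we commit
  if @post.can_set_to_active?
    @post.status = :active
    @post.save!
  end
end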

Related

Rails & postgresql, notify/listen to when a new record is created

I'm experimenting & learning how to work with PostgreSQL, namely its Notify/Listen feature, in the context of making Server-Sent Events according to this tutorial.
The tutorial publishes a NOTIFY to the user's channel (via its id) whenever a user is saved and an attribute, authy_status, is changed. The LISTEN method then yields the new authy_status. Code:
class Order < ActiveRecord::Base
  after_commit :notify_creation

  def notify_creation
    if created?
      ActiveRecord::Base.connection_pool.with_connection do |connection|
        execute_query(connection, ["NOTIFY user_?, ?", id, authy_status])
      end
    end
  end

  def on_creation
    ActiveRecord::Base.connection_pool.with_connection do |connection|
      begin
        execute_query(connection, ["LISTEN user_?", id])
        connection.raw_connection.wait_for_notify do |event, pid, status|
          yield status
        end
      ensure
        execute_query(connection, ["UNLISTEN user_?", id])
      end
    end
  end
end
I would like to do something different, but haven't been able to find information on how to do this. I would like to NOTIFY when a user is created in the first place (i.e., inserted into the database), and then in the LISTEN, I'd like to yield up the newly created user itself (or rather its id).
How would I modify the code to achieve this? I'm really new to writing SQL, so for example I'm not very sure how to change ["NOTIFY user_?, ?", id, authy_status] to a statement that looks not at a specific user but at the entire USER table, listening for new records (something like... ["NOTIFY USER on INSERT", id] ??).
CLARIFICATIONS
Sorry about not being clear. The after_save was a copy error, have corrected to after_commit above. That's not the issue though. The issue is that the listener listens to changes in a SPECIFIC existing user, and the notifier notifies on changes to a SPECIFIC user.
I instead want to listen for any NEW user creation, and therefore notify of that. How does the Notify and Listen code need to change to meet this requirement?
I suppose, unlike my guess at the code, the notify code may not need to change, since notifying on an id when it's created seems to make sense still (but again, I don't know, feel free to correct me). However, how do you listen to the entire table, not a particular record, because again I don't have an existing record to listen to?
For broader context, this is how the listener is used in the SSE in the controller from the original tutorial:
def one_touch_status_live
  response.headers['Content-Type'] = 'text/event-stream'
  @user = User.find(session[:pre_2fa_auth_user_id])
  sse = SSE.new(response.stream, event: "authy_status")
  begin
    @user.on_creation do |status|
      if status == "approved"
        session[:user_id] = @user.id
        session[:pre_2fa_auth_user_id] = nil
      end
      sse.write({status: status})
    end
  rescue ClientDisconnected
  ensure
    sse.close
  end
end
But again, in my case this doesn't work: I don't have a specific @user I'm listening to; I want the SSE to fire when any user has been created... Perhaps it's this controller code that also needs to be modified? But this is where I'm very unclear. If I have something like...
User.on_creation do |u|
A class method makes sense, but again, how do I get the listen code to listen to the entire table?
Please use after_commit instead of after_save. This way, the user record is guaranteed to have been committed to the database.
There are two additional callbacks that are triggered by the completion of a database transaction: after_commit and after_rollback. These callbacks are very similar to the after_save callback except that they don't execute until after database changes have either been committed or rolled back.
https://guides.rubyonrails.org/active_record_callbacks.html#transaction-callbacks
Actually it's not relevant to your question; you can use either.
Here's how I would approach your use case. You want to get notified when a user is created:
# app/models/user.rb
class User < ActiveRecord::Base
  after_commit :notify_creation

  def notify_creation
    if id_previously_changed?
      ActiveRecord::Base.connection_pool.with_connection do |connection|
        self.class.execute_query(connection, ["NOTIFY user_created, '?'", id])
      end
    end
  end

  def self.on_creation
    ActiveRecord::Base.connection_pool.with_connection do |connection|
      begin
        execute_query(connection, ["LISTEN user_created"])
        connection.raw_connection.wait_for_notify do |event, pid, id|
          yield self.find id
        end
      ensure
        execute_query(connection, ["UNLISTEN user_created"])
      end
    end
  end

  def self.clean_sql(query)
    sanitize_sql(query)
  end

  def self.execute_query(connection, query)
    sql = self.clean_sql(query)
    connection.execute(sql)
  end
end
So then you can use:
User.on_creation do |user|
  # do something with the user
  # check user.authy_status or whatever attribute you want
end
One thing I'm not sure about is why you want to do this, because it could lead to a race condition where two users are being created and the unwanted one finishes first.
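For the controller side, the SSE action from the tutorial could then be adapted along these lines (a sketch; the action and event names are assumptions, and since on_creation as written returns after a single notification, a long-lived stream would need to wrap the call in a loop):

def user_created_live
  response.headers['Content-Type'] = 'text/event-stream'
  sse = SSE.new(response.stream, event: "user_created")
  begin
    User.on_creation do |user|
      # push the newly created user's id to the client
      sse.write({ id: user.id })
    end
  rescue ClientDisconnected
  ensure
    sse.close
  end
end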

ActiveRecord is not reloading nested object after it's updated inside a transaction

I'm using Rails 4 with Oracle 12c, and I need to update the status of a User and then use the new status in a validation for another model I also need to update:
class User
  has_many :posts

  def custom_update!(new_status)
    relevant_posts = posts.active_or_something
    ActiveRecord::Base.transaction do
      update!(status: new_status)
      relevant_posts.each { |post| post.update_stuff! }
    end
  end
end

class Post
  belongs_to :user
  validate :pesky_validation

  def update_stuff!
    # I can call this from other places, so I also need a transaction here
    ActiveRecord::Base.transaction do
      update!(some_stuff: 'Some Value')
    end
  end

  def pesky_validation
    if user.status == OLD_STATUS
      errors.add(:base, 'Nope')
    end
  end
end
However, this is failing and I receive the validation error from pesky_validation, because the user inside Post doesn't have the updated status.
The problem is that when I first update the user, the already-instantiated user objects inside the relevant_posts variable are not yet updated. Normally all I'd need to fix this would be to call reload; however, maybe because I'm inside a transaction, that isn't working and pesky_validation fails.
relevant_posts.first.user.reload, for example, reloads the user to the same old status it had before the update, and I'm assuming it's because the transaction is not yet committed. How can I solve this and update all references to the new status?
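One possible workaround (a sketch, not necessarily the only fix): each post caches its own copy of the user association, so rather than relying on reload, hand each post the already-updated in-memory user before saving it:

def custom_update!(new_status)
  relevant_posts = posts.active_or_something
  ActiveRecord::Base.transaction do
    update!(status: new_status)
    relevant_posts.each do |post|
      post.user = self # reuse this updated instance instead of the post's cached user
      post.update_stuff!
    end
  end
end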

track params changes in rails active record

So I have my form, and in my controller I have my update method as follows:
def update
  @student = Student.find(params[:id])
  if @student.update_attributes!(student_params)
    @student.read_notes = true
    # here i check if the records changed or not?
    ap @student.name_changed?
  end
end

def student_params
  params.require(:student).permit(:name, :email, :age, :class)
end
This fails: I always get false, even though I have actually made changes to the name attribute.
How do I actually track changes to my record if I'm updating it this way?
When you save the record (which update_attributes!, update!, and update will all do), Rails' "dirty tracking" resets and you lose the ability to easily tell if anything changed. What you could do instead is use assign_attributes, like so:
def update
  @student = Student.find(params[:id])
  @student.assign_attributes(student_params)
  if @student.name_changed?
    # ...
  end
  @student.save!
end
There's also an ActiveRecord method called previous_changes, which stores changes made after a save. This article goes into detail on how to use that.
You could also simply track if the name parameter differs from the record's name, or store the value prior to the update and compare it afterward, depending on your needs.
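A small sketch of the previous_changes approach, which keeps the original update call; in Rails 5.1+ there are also per-attribute helpers such as saved_change_to_name?:

def update
  @student = Student.find(params[:id])
  if @student.update(student_params)
    if @student.previous_changes.key?("name")
      # previous_changes["name"] holds [old_value, new_value] for the save that just ran
    end
  end
end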

Can Sidekiq be performed for more than 1 task?

We already use Sidekiq to insert records into our table asynchronously, and we frequently check the production Sidekiq dashboard to monitor the number of processed, queued, retry, and busy jobs for these inserts.
Now we have a new requirement to delete records asynchronously as well (e.g. for the users table: delete expired users), and we also need to monitor processed, queued, and retry counts for these jobs on the Sidekiq dashboard.
For inserting records we use the following. In my User controller:
def create_user
  CreateUserWorker.perform_async(@client_info, @input_params)
end
In my lib/workers/createuser_worker.rb
class CreateUserWorker
  include Sidekiq::Worker

  def perform(client_info, input_params)
    begin
      @client_info = client_info
      @user = User.new(@client_info)
      @user.create(input_params)
    rescue
      raise
    end
  end
end
If I do the same for deleting users asynchronously with Sidekiq, how can I differentiate the insert jobs from the delete jobs without mixing them up?
First, if you want the begin/rescue block to catch errors when creating, you should use the create! method, not create.
The create method does not raise an error.
Check here
The same goes for destroying: use destroy with a bang (destroy!).
Of course, you should add a new worker for destroying users, since a worker class has only one perform method.
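A minimal sketch of such a worker (the class, queue, and file names here are just placeholders); giving it its own Sidekiq queue also makes its jobs easy to tell apart on the dashboard:

# lib/workers/delete_expired_users_worker.rb
class DeleteExpiredUsersWorker
  include Sidekiq::Worker
  sidekiq_options queue: :user_deletion # separate queue, visible on the dashboard

  def perform(user_id)
    User.find(user_id).destroy! # raises if the delete fails
  end
end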
If you do not want to add a new worker, try the pattern below:
class UserWorker
  include Sidekiq::Worker

  def perform(user_info, flag)
    # flag indicates whether to create or destroy
    is_success = false # result of creating or destroying
    # create or destroy here
    # ..
    LogModel.create({}) # log user info with is_success and flag
  end
end
P.S.
Calling create right after new looks a bit awkward to me.
I recommend either
@user = User.create(client_info)
or
@user = User.new(client_info)
@user.save! # the bang means the same as above
And there is no need for the begin/rescue block; just use the create and destroy methods with a bang:
def perform(client_info, input_params)
  User.create!(client_info) # raises an error if it fails
end
Added for the comments:
If you have many users to create or destroy, pass an array of user_ids (or user infos) to the worker's perform method and loop over it there; for any record that fails to be created or destroyed, write a log file or log model entry.
If all the user_ids must be created or destroyed at once, use a transaction block:
def perform(params)
  begin
    ActiveRecord::Base.transaction do
      # loop: create or destroy
    end
  rescue
  end
end
If not, just loop:
def perform(params)
  # loop
  if ... # create or destroy method (without bang) succeeds
    # success
  else
    # failed
  end
end
XWorker.perform_async would presumably be called from an admin page(?).

Rails before_destroy callback db changes always rolled back

I'm trying to prevent deletion of models from the db and pretty much follow this guide (see 9.2.5.3 Exercise Your Paranoia with before_destroy) from a Rails 4 book.
I have a simple model:
class User < ActiveRecord::Base
  before_destroy do
    update_attribute(:deleted_at, Time.current)
    false
  end
end
and in the controller:
def destroy
  @user = User.find(params[:id])
  # @user.update!(deleted_at: Time.zone.now) # if I do it here it works
  @user.destroy # if I also comment this line...
  render :show
end
The callback gets called and the attribute gets set, but then the database transaction always gets rolled back. If I leave out the returning of false, the model gets deleted because the destroy is not halted.
As you can see in the comments I can get it to work but what I really want to do is use a Service Object and put the logic out of the controller.
If your callback returns false, the transaction will always be rolled back.
For what you want, you should not call the destroy method on your ActiveRecord object.
Instead, make your own method, like soft_destroy or something like that, and update your attribute there.
And to prevent others from calling the destroy method on the object, just add a callback that raises an exception, for instance.
Your model is just an object. If you really want to change the concept of destroy, change it:
def destroy
  condition ? alt_action : super
end
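A sketch of that soft-delete approach (the method name and error message are my own; note that from Rails 5 onward a callback halts the chain with throw(:abort) rather than by returning false):

class User < ActiveRecord::Base
  def soft_destroy
    update!(deleted_at: Time.current)
  end

  before_destroy do
    # block hard deletes entirely; callers should use soft_destroy instead
    raise "hard-deleting users is not allowed, use #soft_destroy"
  end
end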
