Rails & PostgreSQL: NOTIFY/LISTEN whenever a new record is created, on an admin dashboard, without a race condition - ruby-on-rails

I have an admin dashboard where I want an alert to be fired whenever a user is created (on a separate page). The code below works; however, there's a race condition: if 2 users are created very close together, it will only fire once.
class User < ApplicationRecord
  after_commit :notify_creation, on: :create

  def notify_creation
    ActiveRecord::Base.connection_pool.with_connection do |connection|
      self.class.execute_query(connection, ["NOTIFY user_created, '?'", id])
    end
  end

  def self.listen_to_creation
    ActiveRecord::Base.connection_pool.with_connection do |connection|
      begin
        execute_query(connection, ["LISTEN user_created"])
        connection.raw_connection.wait_for_notify do |event, pid, id|
          yield id
        end
      ensure
        execute_query(connection, ["UNLISTEN user_created"])
      end
    end
  end

  def self.clean_sql(query)
    sanitize_sql(query)
  end

  private

  def self.execute_query(connection, query)
    sql = self.clean_sql(query)
    connection.execute(sql)
  end
end
class AdminsController < ApplicationController
  include ActionController::Live

  def update
    response.headers['Content-Type'] = 'text/event-stream'
    sse = SSE.new(response.stream, event: 'notice')
    begin
      User.listen_to_creation do |user_id|
        sse.write({user_id: user_id})
      end
    rescue ClientDisconnected
    ensure
      sse.close
    end
  end
end
This is my first time doing this, so I followed this tutorial, which, like most tutorials, focuses on updates to a single record rather than listening to an entire table for new creations.

This is happening because you send only one update per request, and then the request ends. When you make a request to AdminsController#update, you have one subscriber waiting for your notification. Look at this block:
begin
  execute_query(connection, ["LISTEN user_created"])
  connection.raw_connection.wait_for_notify do |event, pid, id|
    yield id
  end
ensure
  execute_query(connection, ["UNLISTEN user_created"])
end
As soon as you get one notification, the block yields and then you close the channel. So if you are relying on the frontend to make another connection attempt once it gets a result, and a record gets created before you start listening to the channel again on the new connection, you won't get a notification, because no listener was attached to Postgres at that time.
This is a common issue in any realtime notification system. You would ideally want a pipe to the frontend (WebSocket, SSE, or even long polling) that is always open. When you get a new item you send it down that pipe, and you keep the pipe open, as WebSockets and SSE are designed to do. Right now you are treating your SSE connection as a long poll.
So your code should look something like this:
# Snippet 2
def self.listen_to_creation
  ActiveRecord::Base.connection_pool.with_connection do |connection|
    begin
      execute_query(connection, ["LISTEN user_created"])
      loop do
        connection.raw_connection.wait_for_notify do |event, pid, id|
          yield id
        end
      end
    ensure
      execute_query(connection, ["UNLISTEN user_created"])
    end
  end
end
But this runs into a problem: it keeps the thread alive forever, even after the client connection is closed, until a notification arrives and the write back to the closed stream raises an error. You can either run it a fixed number of times with short-lived notify intervals, or you can add a sort of heartbeat to it. There are two simple ways of accomplishing a heartbeat; I will add them as quick-hack snippets.
# Snippet 3
def self.listen_to_creation(heartbeat_interval = 10)
  ActiveRecord::Base.connection_pool.with_connection do |connection|
    begin
      execute_query(connection, ["LISTEN user_created"])
      last_heartbeat = Time.now
      loop do
        connection.raw_connection.wait_for_notify(heartbeat_interval) do |event, pid, id|
          yield({id: id})
        end
        if Time.now - last_heartbeat >= heartbeat_interval
          yield({heartbeat: true})
          last_heartbeat = Time.now
        end
      end
    ensure
      execute_query(connection, ["UNLISTEN user_created"])
    end
  end
end
In the example above you will be sending something down the pipe at least every heartbeat_interval seconds. So if the pipe has been closed, the write should error out, freeing up the thread.
This approach does add controller-related logic to the model, though. If you want to keep the Postgres wait_for_notify call without a timeout, the other way to do a heartbeat is to launch a thread in the controller itself: a thread that sleeps for heartbeat_interval and calls sse.write({heartbeat: true}) after waking up. You can leave the model code the same as Snippet 2 in that case.
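A rough sketch of that controller-side heartbeat (my illustration, not code from the answer; it assumes the Snippet 2 model code and the AdminsController from the question, and the 10-second interval is arbitrary):

def update
  response.headers['Content-Type'] = 'text/event-stream'
  sse = SSE.new(response.stream, event: 'notice')
  listener = Thread.current
  # Heartbeat thread: writing to a closed stream raises, and we forward
  # that error to the listening thread to break it out of wait_for_notify.
  heartbeat = Thread.new do
    loop do
      sleep 10
      begin
        sse.write({heartbeat: true})
      rescue StandardError => e
        listener.raise(e)
        break
      end
    end
  end
  User.listen_to_creation do |user_id| # Snippet 2 version, loops forever
    sse.write({user_id: user_id})
  end
rescue ClientDisconnected, IOError
ensure
  heartbeat&.kill
  sse.close
end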
Also, I covered the other things to watch out for with SSEs with Puma & Rails in an answer to your other question.

Related

Rails & postgresql, notify/listen to when a new record is created

I'm experimenting & learning how to work with PostgreSQL, namely its Notify/Listen feature, in the context of making Server-Sent Events according to this tutorial.
The tutorial publishes NOTIFY to the user's channel (keyed by its id) whenever a user is saved and an attribute, authy_status, is changed. The LISTEN method then yields the new authy_status. Code:
class Order < ActiveRecord::Base
  after_commit :notify_creation

  def notify_creation
    if created?
      ActiveRecord::Base.connection_pool.with_connection do |connection|
        execute_query(connection, ["NOTIFY user_?, ?", id, authy_status])
      end
    end
  end

  def on_creation
    ActiveRecord::Base.connection_pool.with_connection do |connection|
      begin
        execute_query(connection, ["LISTEN user_?", id])
        connection.raw_connection.wait_for_notify do |event, pid, status|
          yield status
        end
      ensure
        execute_query(connection, ["UNLISTEN user_?", id])
      end
    end
  end
end
I would like to do something different, but haven't been able to find information on how to do this. I would like to NOTIFY when a user is created in the first place (i.e., inserted into the database), and then in the LISTEN, I'd like to yield up the newly created user itself (or rather its id).
How would I modify the code to achieve this? I'm really new to writing SQL, so for example I'm not sure how to change ["NOTIFY user_?, ?", id, authy_status] into a statement that looks not at a specific user but at the entire users table, listening for new records (something like... ["NOTIFY USER on INSERT", id] ??)
CLARIFICATIONS
Sorry about not being clear. The after_save was a copy error; I have corrected it to after_commit above. That's not the issue, though. The issue is that the listener listens for changes to a SPECIFIC existing user, and the notifier notifies on changes to a SPECIFIC user.
I instead want to listen for any NEW user creation, and therefore notify of that. How does the Notify and Listen code need to change to meet this requirement?
I suppose, unlike my guess at the code, the notify code may not need to change, since notifying on an id when it's created still seems to make sense (but again, I don't know; feel free to correct me). However, how do you listen to the entire table rather than a particular record, given that I don't have an existing record to listen to?
For broader context, this is how the listener is used in the SSE controller from the original tutorial:
def one_touch_status_live
  response.headers['Content-Type'] = 'text/event-stream'
  @user = User.find(session[:pre_2fa_auth_user_id])
  sse = SSE.new(response.stream, event: "authy_status")
  begin
    @user.on_creation do |status|
      if status == "approved"
        session[:user_id] = @user.id
        session[:pre_2fa_auth_user_id] = nil
      end
      sse.write({status: status})
    end
  rescue ClientDisconnected
  ensure
    sse.close
  end
end
But again, in my case this doesn't work: I don't have a specific @user I'm listening to; I want the SSE to fire when any user has been created... Perhaps it's this controller code that also needs to be modified? But this is where I'm very unclear. If I have something like...
User.on_creation do |u|
A class method makes sense, but again how do I get the listen code to listen to the entire table?
Please use after_commit instead of after_save. That way, the user record has definitely been committed to the database:
There are two additional callbacks that are triggered by the completion of a database transaction: after_commit and after_rollback. These callbacks are very similar to the after_save callback except that they don't execute until after database changes have either been committed or rolled back.
https://guides.rubyonrails.org/active_record_callbacks.html#transaction-callbacks
Actually, it's not relevant to your question; you can use either.
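As an aside, a minimal sketch (my illustration, not part of the answer) of the :on option, which restricts the callback to inserts directly:

class User < ApplicationRecord
  # Fires only after an INSERT has been committed, not after updates or destroys.
  after_commit :notify_creation, on: :create
end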
Here's how I would approach your use case: you want to get notified when a user is created:
# app/models/user.rb
class User < ActiveRecord::Base
  after_commit :notify_creation

  def notify_creation
    if id_previously_changed?
      ActiveRecord::Base.connection_pool.with_connection do |connection|
        self.class.execute_query(connection, ["NOTIFY user_created, '?'", id])
      end
    end
  end

  def self.on_creation
    ActiveRecord::Base.connection_pool.with_connection do |connection|
      begin
        execute_query(connection, ["LISTEN user_created"])
        connection.raw_connection.wait_for_notify do |event, pid, id|
          yield self.find(id)
        end
      ensure
        execute_query(connection, ["UNLISTEN user_created"])
      end
    end
  end

  def self.clean_sql(query)
    sanitize_sql(query)
  end

  def self.execute_query(connection, query)
    sql = self.clean_sql(query)
    connection.execute(sql)
  end
end
So you can use:
User.on_creation do |user|
  # do something with the user
  # check user.authy_status or whatever attribute you want
end
One thing: I am not sure why you want to do this, because it could run into a race condition where two users are created and the unwanted one finishes first.

Continue a loop after rescuing from an external API error in Rails

How can I use rescue to continue a loop? I'll give an example:
def self.execute
  Foo.some_scope.each do |foo|
    # This calls an external API, and can sometimes raise an error if the account is not active
    App::Client::Sync.new(foo).start!
  end
end
Normally rescue Bar::Web::Api::Error => e would go at the end of the method, and the loop would stop. If I could update an attribute of the foo that was rescued and call the method again, that foo would not be included in the scope and I would be able to start the loop again. The issue with that is I only want this to run once for each foo, and that way would loop through all of the existing foos again.
What's another way I could do this? I could make a private method that is called at the top of the execute method, which loops through the foos and updates the attribute so they aren't part of the scope. But that sounds like an endless loop.
Does anyone have a good solution to this?
You can put a begin and rescue block within the loop. You talk about "updating an attribute of the foo", but it seems you only want that to ensure this foo is not processed on a restart of the loop, and you don't need to restart the loop:
def self.execute
  Foo.some_scope.each do |foo|
    # This calls an external API, and can sometimes raise an error if the account is not active
    begin
      App::Client::Sync.new(foo).start!
    rescue Bar::Web::Api::Error
      foo.update(attribute: :new_value) # if you still need this
    end
  end
end
You could use retry. It re-executes the whole begin block when called from a rescue block. If you only want to retry a limited number of times, you can use a counter. Something like:
def self.execute
  Foo.some_scope.each do |foo|
    num_tries = 0
    begin
      App::Client::Sync.new(foo).start!
    rescue
      num_tries += 1
      retry if num_tries < 3 # give up on this foo after three attempts
    end
  end
end
Documentation here.

How can I override a class with proc and yield in a test on Rails?

I have the classes below (only as an example):
class Background
  def self.add_thread(&block)
    Thread.new do
      yield
      ActiveRecord::Base.connection.close
    end
  end
end
class Email
  def send_email_in_other_thread
    Background.add_thread do
      send_email
    end
  end

  def send_email
    UserMailer.greeting_email.deliver_now
  end
end
And the code below is for the tests:
class EmailTest < ActiveSupport::TestCase
  class Background
    def self.add_thread(&block)
      yield
    end
  end

  test 'should send email' do
    assert_difference 'ActionMailer::Base.deliveries.size', 1 do
      Email.new.send_email_in_other_thread
    end
  end
end
But this test fails: "ActionMailer::Base.deliveries.size" didn't change by 1. It succeeds only about 1 time in 20.
I think it is because of the modified Background class. Maybe overriding in a test doesn't work, or the yielded proc is not executed instantly but with a delay.
I tried block.call instead of yield, but the result is the same.
How can I make this test always succeed?
This looks like a classic race condition. Thread.new returns as soon as the thread is spawned, not when its work is completed.
Because your main thread doesn't halt execution, most of the time your assertion runs before the mail has been sent.
You could use the join method to wait for the sending thread to finish execution before returning, but then it would essentially be equivalent to a single thread again, as it blocks the calling (main) thread until the work is done:
Thread.new do
  yield
  ActiveRecord::Base.connection.close
end.join
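Since add_thread already returns the Thread (Thread.new is the last expression in the method), the test could also keep the handle and join it explicitly. A hedged sketch, not from the original answer:

test 'should send email' do
  assert_difference 'ActionMailer::Base.deliveries.size', 1 do
    handle = Background.add_thread { UserMailer.greeting_email.deliver_now }
    handle.join # block until the mail has actually been sent
  end
end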
There are already some great gems, however, for dealing with background jobs in Rails. Check out Sidekiq and Resque, for example.
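For mail in particular, ActiveJob's deliver_later covers this without hand-rolled threads; a minimal sketch (my addition, assuming an ActiveJob adapter such as Sidekiq is configured):

# Queues the mail as a background job instead of sending it inline.
UserMailer.greeting_email.deliver_later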

Can Sidekiq be used for more than 1 task?

We already use Sidekiq to insert records into our table asynchronously, and we very often check the production Sidekiq dashboard to monitor the number of processed, queued, retrying, and busy jobs for inserting records.
We now have a new requirement to delete records asynchronously (say, in the users table: delete expired users). We also need to monitor the Sidekiq dashboard for processed, queued, and retrying jobs very often.
For inserting records we use the following. In my User controller:
def create_user
  CreateUserWorker.perform_async(@client_info, @input_params)
end
In my lib/workers/createuser_worker.rb
class CreateUserWorker
  include Sidekiq::Worker

  def perform(client_info, input_params)
    begin
      @client_info = client_info
      @user = User.new(@client_info)
      @user.create(input_params)
    rescue
      raise
    end
  end
end
If I do the same to delete users asynchronously using Sidekiq, how can I differentiate the insert jobs from the delete jobs without any mix-up?
First, if you want creation errors to be raised inside the begin-rescue block, you should use the create! method, not create.
The create method does not raise an error on failure.
Check here
The destroy method behaves the same way as create: use destroy with a bang (destroy!).
Of course, you should add a new worker for destroying users, because a worker class can have only one perform method.
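A minimal sketch of such a separate worker (my illustration of that suggestion; the class name, queue name, and file path are hypothetical):

# lib/workers/delete_expired_users_worker.rb
class DeleteExpiredUsersWorker
  include Sidekiq::Worker
  # A dedicated queue also keeps the two kinds of jobs apart on the dashboard.
  sidekiq_options queue: :deletions

  def perform(user_id)
    User.find(user_id).destroy! # raises on failure, so Sidekiq will retry
  end
end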
If you do not want to add a new worker, try the pattern below!
class UserWorker
  include Sidekiq::Worker

  # flag indicates whether to create or destroy
  def perform(user_info, flag)
    is_success = false # result of creating or destroying
    # create or destroy here
    # ...
    LogModel.create({}) # log the user info along with is_success and flag
  end
end
P.S.
I think calling create() right after new() is somewhat awkward. I recommend:
@user = User.create(client_info)
or
@user = User.new(client_info)
@user.save! # with the bang it raises on failure, like create!
And there's no need for the begin-rescue block. Just use the create and destroy methods with a bang:
def perform(client_info, input_params)
  User.create!(client_info) # raises an error if creation fails
end
Added from the comments:
If you have many users to create or destroy, pass an array of user_ids (or user_infos) to the worker's perform method and loop over it there, creating or destroying each one (if a record fails to be created or destroyed, write a log file or a log model entry about it).
If all user_ids must be created or destroyed at once, use a transaction block:
def perform(params)
  begin
    ActiveRecord::Base.transaction do
      # loop here, creating or destroying each record
    end
  rescue
    # the whole batch rolls back if any record raised
  end
end
If not, just loop:
def perform(params)
  params.each do |user_info|
    record = User.create(user_info) # no bang: never raises
    if record.persisted?
      # success
    else
      # failed: log this record
    end
  end
end
The XWorker.perform_async() call would presumably be made from the admin page.

sidekiq - fall back to standard sync ruby code when redis server is not running

I'm using Sidekiq in a Rails app to send some emails asynchronously. How can I ensure that the code (the job itself) is executed even when the Redis server is not running?
CommentsWorker.perform_async(@user.id, @comment.id)
In the comments worker, I'm fetching the user and the comment, and I send an email:
def perform(user_id, comment_id)
  user = User.find(user_id)
  comment = Comment.find(comment_id)
  CommentMailer.new_comment(user, comment).deliver
end
If I stop the Redis server, my app raises a Redis::CannotConnectError exception.
I still want to send that email even when the server is stopped, using old-fashioned sync code. I tried to rescue from that exception, but for some reason it doesn't work.
Figured it out. The solution was to test for a Redis connection and rescue from the exception, but before the call to perform_async. There's now only the minor issue of having to wait for the connection to time out, but I guess I can live with that.
redis_available = true
Sidekiq.redis do |connection|
  begin
    connection.info
  rescue Redis::CannotConnectError
    redis_available = false
  end
end

if redis_available
  CommentsWorker.perform_async(user.id, @comment.id, @award.id)
else
  # sync code
  user = User.find(user_id)
  comment = Comment.find(comment_id)
  CommentMailer.new_comment(user, comment).deliver
end
I know this has already been answered and I liked @mihai's approach, but we wanted to reuse the code elsewhere in our application for multiple workers, and we wanted to make it generic enough to work with any worker.
We decided to extend Sidekiq::Worker with an additional method, perform_async_with_failover, defined below:
module Sidekiq
  module Worker
    module ClassMethods
      def perform_async_with_failover(*args)
        redis_available = true
        Sidekiq.redis do |connection|
          begin
            connection.info
          rescue Redis::CannotConnectError
            redis_available = false
          end
        end
        if redis_available
          # process the job asynchronously
          perform_async(*args)
        else
          # otherwise, instantiate and perform synchronously
          self.new.perform(*args)
        end
      end
    end
  end
end
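Any worker that includes Sidekiq::Worker picks this method up through ClassMethods. A hypothetical usage sketch, reusing the CommentsWorker from the question:

# Runs through Redis when it is available, inline otherwise.
CommentsWorker.perform_async_with_failover(@user.id, @comment.id)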
Improving Kyle's solution above: I guess connection.info is used to check whether a connection is available, but my understanding is that info sends an extra command when Redis is online, which is unnecessary. I would just catch the exception from perform_async instead:
module Sidekiq
  module Worker
    module ClassMethods
      def perform_async_with_failover(*args)
        begin
          # process the job asynchronously
          perform_async(*args)
        rescue Redis::CannotConnectError => e
          # otherwise, instantiate and perform synchronously
          self.new.perform(*args)
        end
      end
    end
  end
end
Edited: you may want to have a look at the Sidekiq error handling section.
Implementing @mihai's answer, I ended up creating a service to encapsulate the action of "gracefully delivering email".
Example of calling the class:
message = MyMailer.order_confirmation(email_arguments)
GracefullyDeliverEmail.call(message)
The class:
class GracefullyDeliverEmail
  ###
  # Attempts to queue the email for async sending, but fails
  # over gracefully to delivering it immediately.
  #
  # @param message {ActionMailer::MessageDelivery}
  ###
  def self.call(message)
    validate!(message)
    if redis_available?
      message.deliver_later(wait: 2.minutes)
    else
      message.deliver_now # Fallback to inline delivery
    end
  end

  # == Private Methods ======================================================

  # https://stackoverflow.com/questions/15993080/sidekiq-fall-back-to-standard-sync-ruby-code-when-redis-server-is-not-running/42247913#42247913
  def self.redis_available?
    redis_available = true
    Sidekiq.redis do |connection|
      begin
        connection.info
      rescue Redis::CannotConnectError
        redis_available = false
      end
    end
    redis_available
  end

  def self.validate!(message)
    if !message.is_a?(ActionMailer::MessageDelivery)
      raise "message must be of class ActionMailer::MessageDelivery"
    end
  end

  # `private` alone would not hide class methods, so mark them explicitly.
  private_class_method :redis_available?, :validate!
end
You should check whether there is an active Redis connection; something like:
def perform(user_id, comment_id)
  user = User.find(user_id)
  comment = Comment.find(comment_id)
  redis_info = Sidekiq.redis { |conn| conn.info }
  CommentMailer.new_comment(user, comment).deliver
rescue Redis::CannotConnectError
  CommentMailer.new_comment(user, comment).deliver # deliver synchronously as the fallback
end
should do it.
