How to run some action every few seconds for 10 minutes in rails? - ruby-on-rails

I am trying to build a QuizUp-like app and want to send a broadcast with a random question every 10 seconds for 2 minutes. How do I do that in Rails? I am using Action Cable for sending broadcasts. I could use rufus-scheduler to run an action every few seconds, but I am not sure it makes sense for my use case.

The simplest solution would be to spawn a new thread:

Thread.new do
  duration = 2.minutes
  interval = 10.seconds
  number_of_questions_left = duration.seconds / interval.seconds

  while number_of_questions_left > 0
    ActionCable.server.broadcast(
      "some_broadcast_id", { random_question: 'How are you doing?' }
    )
    number_of_questions_left -= 1
    sleep(interval)
  end
end
Notes:
This simple solution will actually run slightly longer than 2 minutes in total, because each loop iteration sleeps for the interval plus however long the broadcast itself takes. If that discrepancy is not important, the solution above is sufficient.
Also, this kind of scheduler lives only in memory, unlike a dedicated background worker such as Sidekiq. If the Rails process is terminated, any currently running loop dies with it, which may or may not be what you want.
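Back to the timing discrepancy: if the drift does matter, one option is to schedule each broadcast against a fixed start time instead of sleeping a fixed interval, so the time spent broadcasting is not added on top of each sleep. A minimal sketch in plain Ruby (Process.clock_gettime with a monotonic clock, so wall-clock adjustments don't skew the timing):

Thread.new do
  interval = 10               # seconds
  ticks = (2 * 60) / interval # 12 questions over 2 minutes
  started_at = Process.clock_gettime(Process::CLOCK_MONOTONIC)

  ticks.times do |i|
    ActionCable.server.broadcast(
      "some_broadcast_id", { random_question: 'How are you doing?' }
    )
    # sleep until the next interval boundary measured from the start,
    # so broadcast time does not accumulate as drift
    elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - started_at
    sleep_for = (i + 1) * interval - elapsed
    sleep(sleep_for) if sleep_for > 0
  end
end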
If using rufus-scheduler:

require 'rufus-scheduler'

number_of_questions_left = 12
scheduler = Rufus::Scheduler.new

# `first_in` is set so that the first job runs immediately
scheduler.every '10s', first_in: 0.1 do |job|
  ActionCable.server.broadcast(
    "some_broadcast_id", { random_question: 'How are you doing?' }
  )
  number_of_questions_left -= 1
  job.unschedule if number_of_questions_left == 0
end

Related

Can Sidekiq run a loop with wait and see a change to the db?

I have a sidekiq worker that waits for a change to happen to a record made by a remote client. Something like the following:
# my worker: async process to wait for client to confirm status
def perform(myRecordID)
  sendClient(myRecordID)
  didClientAcknowledge = false

  while !didClientAcknowledge
    didClientAcknowledge = myRecords.find(myRecordID).status == :ACK_OK
    break if didClientAcknowledge
    # wait for client to perform an update on the record to confirm status
    sleep 5.seconds
  end

  Rails.logger.info("client got the message")
end
My problem is that although I can see the client has in fact performed the acknowledgement and updated the record with the correct status (ACK_OK), my Sidekiq thread continues to see the old status for myRecord.
I'm sure my logic is flawed here, but it seems like the Sidekiq process does not "see" changes to the DB, even though from the Rails console I can see that the client has in fact updated the DB as expected...
Thanks!
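One possible culprit (an assumption; the thread never confirms the cause) is ActiveRecord's query cache: if the worker's loop runs inside a cached block, the repeated identical find can be served from cache rather than the database. Forcing an uncached read on each poll would rule that out; a minimal sketch against the loop above, where MyRecord stands in for the actual model class:

# bypass the query cache so each poll hits the database
didClientAcknowledge = MyRecord.uncached do
  MyRecord.find(myRecordID).status == :ACK_OK
end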
Edit 1
OK, so here's a thought: instead of the loop, I'll schedule another call to the worker in 5 seconds... here's the updated code:
def perform(myRecordID, retry_count)
  retry_count -= 1
  return if retry_count < 1

  sendClient(myRecordID)

  if myRecords.find(myRecordID).status == :ACK_OK
    Rails.logger.info("client got the message")
    return
  end

  # wait for client to perform an update on the record to confirm status,
  # then check again in 5 seconds (the arguments must be passed through
  # to the rescheduled job)
  MyWorker.perform_in(5.seconds, myRecordID, retry_count)
end
This seems to work, but I will test a bit more... One challenge is having a retry count, which means I need to maintain some sort of state between calls to the worker...
Edit 2: possibly this can be done by passing the start time into the first call and then checking whether a timeout has been exceeded before invoking the next instance... assuming time does not stand still inside the async call as well...
Edit 3: adding the retry_count argument allows us to control how many times this worker will be re-spawned...
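A minimal sketch of the deadline idea from Edit 2 (the worker and model names are hypothetical; Sidekiq arguments must be JSON-serializable, so the deadline is passed as an epoch integer rather than a Time object):

class AckWaitWorker
  include Sidekiq::Worker

  # deadline_epoch = Time.now.to_i + timeout, set once by the first caller
  def perform(my_record_id, deadline_epoch)
    return if Time.now.to_i > deadline_epoch # timed out, stop re-enqueueing

    if MyRecord.find(my_record_id).status == :ACK_OK
      Rails.logger.info("client got the message")
    else
      # not acknowledged yet: check again in 5 seconds
      AckWaitWorker.perform_in(5.seconds, my_record_id, deadline_epoch)
    end
  end
end

The first call would then be something like AckWaitWorker.perform_async(record.id, 2.minutes.from_now.to_i), and no retry counter needs to be threaded through the calls.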

How to split a long-lived Sidekiq job into many short-lived jobs in a Ruby on Rails app

So I'm building a website that calls a third-party API that can take from 20 seconds to 30 minutes to return a result. I can't know this duration in advance, so I need to poll frequently to check whether the work is done (returns "COMPLETE" and the result) or not (returns "IN_PROGRESS"). This API might also be called many times by many users at the same time.
So I created a Sidekiq worker that checks the API every 5 seconds until it receives "COMPLETE", and only then does it end. But I've read that Sidekiq should only run short-lived jobs, and I'm struggling to get my head around how I should do this. I've been trying to search for an answer, but I suspect I don't know the words to find what I'm looking for.
I'm sure there is a way to tell my workers to call the API once and, if the result is "IN_PROGRESS", end while making sure another worker does another API call to check, and so on until the result is "COMPLETE".
I guess this would also help distribute the load in case many users demand the use of said API, because fewer workers could handle more of these short-lived jobs.
This is my worker, which I hope clarifies what I'm doing right now:
class ThingProgressWorker
  include Sidekiq::Worker

  def perform(id)
    @thing = Thing.find(id)
    @thing_api_call = ThingAPICall.new # This uses the ruby library of the API

    completed = false
    until completed
      result = @thing_api_call.get_result({ thing_job_name: @thing.job_name })
      if result.include? "COMPLETED"
        completed = true
        @thing.status = "completed"
        @thing.save
      else
        sleep 5
      end
    end
  end
end
So if the API takes ten minutes to go from "IN_PROGRESS" to "COMPLETED", this worker will be busy for that long, which I reckon is not advised at all.
I've been thinking about this for some hours now and can't work out how to make each API call its own job without having a worker busy until the API is done.
The only solution I've thought of so far is having a master worker that calls another worker for each API call, but then I'd still have a worker busy for as long as the API takes to send the result.
I'd appreciate any help or directions!
Thanks in advance
Try calling the worker with a delay. For example:
class ThingProgressWorker
  include Sidekiq::Worker

  def perform(id)
    @thing = Thing.find(id)
    @thing_api_call = ThingAPICall.new # This uses the ruby library of the API

    result = @thing_api_call.get_result({ thing_job_name: @thing.job_name })
    if !result.include? "COMPLETED"
      # not done yet: re-enqueue this worker to check again in a minute
      ThingProgressWorker.perform_in(1.minute, id)
    else
      @thing.status = "completed"
      @thing.save
    end
  end
end
This adds the job to the queue but runs it only after the delay you specify, rather than immediately.
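To start the polling, the first job is enqueued right after kicking off the third-party work; a usage sketch (the surrounding create call is an assumption, not from the answer):

# e.g. in the controller or service that starts the third-party job
thing = Thing.create!(job_name: job_name, status: "in_progress")
ThingProgressWorker.perform_async(thing.id)

Each status check is now a job lasting a couple of seconds at most, so a small pool of Sidekiq threads can interleave checks for many users instead of being pinned down for the full 20-seconds-to-30-minutes span.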

specifying different retry limit and retry delay for different jobs in backburner

I am using beaneater/beanstalk in my app for maintaining the job queues.
https://github.com/nesquena/backburner
My global config file for Backburner looks like:
Backburner.configure do |config|
  config.beanstalk_url = ["beanstalk://#{CONFIG['beanstalk']['host']}:#{CONFIG['beanstalk']['port']}"]
  config.tube_namespace = CONFIG['beanstalk']['tube_name']
  config.on_error = lambda { |e| puts e }
  config.max_job_retries = 5 # default 0 retries
  config.retry_delay = 30 # default 5 seconds
  config.default_priority = 65536
  config.respond_timeout = 120
  config.default_worker = Backburner::Workers::Simple
  config.logger = Logger.new('log/backburner.log')
  config.priority_labels = { :custom => 50, :useless => 1000 }
  config.reserve_timeout = nil
end
I want to set a different retry limit and retry delay for different jobs.
I was looking at the rubydoc for the corresponding variable/function. As per this rubydoc link, I tried configuring retry_limit locally in a worker.
One specific worker looks like:
class AbcJob
  include Backburner::Queue
  queue "abc_job"           # defaults to 'backburner-jobs' tube
  queue_priority 10         # most urgent priority is 0
  queue_respond_timeout 300 # number of seconds before job times out
  queue_retry_limit 2

  def self.perform(abc_id)
    .....Task to be done.....
  end
end
However, it is still picking up the retry limit from the global config file and retrying 5 times instead of 2. Is there anything I am missing here?
How can I override the retry limit and retry delay locally?
I could not find the right way to do it, but I found a workaround.
I put the entire body of perform in a begin-rescue block and, in case of failure, re-enqueue the job with a custom delay. To keep track of the number of retries, I made the attempt count an argument that I pass along when enqueueing.
class AbcJob
  include Backburner::Queue
  queue "abc_job"           # defaults to 'backburner-jobs' tube
  queue_priority 10         # most urgent priority is 0
  queue_respond_timeout 300 # number of seconds before job times out

  def self.perform(abc_id, attempt = 1)
    begin
      .....Task to be done.....
    rescue StandardError => e
      # Any notification method, so that you can learn the failure reason and
      # fix it before the next retry (I am using NotificationMailer with
      # e.message as the body to debug)
      # Any function you want your retry delay to be; I am using quadratic
      delay = attempt * attempt
      if attempt + 1 < GlobalConstant::MaxRetryCount
        Backburner::Worker.enqueue(AbcJob, [abc_id, attempt + 1], delay: delay.minute)
      else
        raise # if you want your jobs to be buried eventually
      end
    end
  end
end
I have kept the default value of attempt at 1 so that the magic number 1 does not appear in calling code, which might raise questions about why a constant is being passed. For enqueueing from other places in the code you can use a simple enqueue:
Backburner::Worker.enqueue(AbcJob, abc_id)

Resque retry retries without delay

This is the code that I have
class ExampleTask
  extend Resque::Plugins::ExponentialBackoff

  @backoff_strategy = [0, 20, 3600]
  @queue = :example_tasks

  def self.perform
    raise
  end
end
I am running into a problem where, whenever I enqueue this task locally, Resque seems to retry the task immediately without respecting the backoff strategy. Has anyone ever experienced this problem before?
Upgrading to 1.0.0 actually solves this problem.
For any future readers, the first integer in the @backoff_strategy array is how long resque-retry will wait before retrying the first time. From the GitHub readme:

# key: m = minutes, h = hours
#   no delay, 1m, 10m,  1h,    3h,    6h
@backoff_strategy = [0, 60, 600, 3600, 10800, 21600]
@retry_delay_multiplicand_min = 1.0
@retry_delay_multiplicand_max = 1.0

The first delay will be 0 seconds, the second will be 60 seconds, etc. Again, tweak to your own needs.
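If a flat delay is all you need, the same gem's base Retry plugin supports a fixed retry_delay instead of a backoff array; a minimal sketch, assuming resque-retry is installed (values here are illustrative):

class ExampleTask
  extend Resque::Plugins::Retry

  @queue = :example_tasks
  @retry_limit = 3  # give up after 3 retries...
  @retry_delay = 20 # ...waiting 20 seconds before each one

  def self.perform
    raise
  end
end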

Server Side Timers with Juggernaut 2

I am writing a rails app with Juggernaut 2 for real-time push notifications and am not sure how to approach this problem. I have a number of users in a chat room and I would like to run a timer so that a push can go out to each browser in the chat room every 30 seconds. Juggernaut 2 is built on node.js, so I'm assuming I need to write this code there. I just have no idea where to start in terms of integrating this with Juggernaut 2.
I just browsed through Juggernaut briefly so take my answer with a grain of salt...
You might be interested in the Channel object (https://github.com/maccman/juggernaut/blob/master/lib/juggernaut/channel.js). You'll notice that Channel.channels is an object (think of a Ruby hash) of all the channels that exist. You can set a 30-second recurring timer (setInterval - http://nodejs.org/docs/v0.4.2/api/timers.html#setInterval) to do something with all your channels.
What to do in each loop iteration? Well, the link to the aforementioned Channel code has a publish method:
publish: function(message){
  var channels = message.getChannels();
  delete message.channels;
  for(var i=0, len = channels.length; i < len; i++) {
    message.channel = channels[i];
    var clients = this.find(channels[i]).clients;
    for(var x=0, len2 = clients.length; x < len2; x++) {
      clients[x].write(message);
    }
  }
}
So you basically have to create a Message object with message.channels set to Channel.channels and if you pass that message to the publish method, it will send out to all your clients.
As to the contents of your message, I dunno what you are using client side (socket.io? a chat client someone already built for you off Juggernaut and socket.io?) so that's up to you.
As for where to put the code that creates the interval and fires off the callback publishing your message to all channels, you might want to check the code that creates the actual server listening on the given port (https://github.com/maccman/juggernaut/blob/master/lib/juggernaut/server.js). If you attach the interval within init(), then as soon as you start the server it will publish your given message to every channel every 30 seconds.
Here is a sample client in Ruby which publishes every 30 seconds.
Install Juggernaut with Redis and Node: install Ruby and RubyGems, then run gem install juggernaut, and:
#!/usr/bin/env ruby
require "rubygems"
require "juggernaut"

loop do
  Juggernaut.publish("channel1", "some Message")
  sleep 30
end
We implemented a quiz system which pushed out questions on a variable time interval. We did it as follows:
def start_quiz
  Rails.logger.info("*** Quiz starting at #{Time.now}")
  $redis.flushall # Clear all scores from the database
  quiz = Quiz.find(params[:quizz] || 1)
  @quiz_master = quiz.user
  quiz_questions = quiz.quiz_questions.order("question_no ASC")

  spawn_block do
    quiz_questions.each do |q|
      Rails.logger.info("*** Publishing question #{q.question_no}.")
      time_alloc = q.question_time
      Juggernaut.publish(select_channel("/quiz_stream"),
                         { :q_num => q.question_no, :q_txt => q.text, :time => time_alloc })
      sleep(time_alloc)
      scoreboard = publish_scoreboard
      Juggernaut.publish(select_channel("/scoreboard"), { :scoreboard => scoreboard })
    end
  end

  respond_to do |format|
    format.all { render :nothing => true, :status => 200 }
  end
end
The key in our case was using 'spawn' to run a background process for the quiz timing so that we could still process the incoming scores.
I have no idea how scalable this is.
