I am writing a rails app with Juggernaut 2 for real-time push notifications and am not sure how to approach this problem. I have a number of users in a chat room and I would like to run a timer so that a push can go out to each browser in the chat room every 30 seconds. Juggernaut 2 is built on node.js, so I'm assuming I need to write this code there. I just have no idea where to start in terms of integrating this with Juggernaut 2.
I just browsed through Juggernaut briefly so take my answer with a grain of salt...
You might be interested in the Channel object (https://github.com/maccman/juggernaut/blob/master/lib/juggernaut/channel.js). You'll notice that Channel.channels is an object (think Ruby's hash) of all the channels that exist. You can set a 30-second recurring timer (setInterval - http://nodejs.org/docs/v0.4.2/api/timers.html#setInterval) to do something with all your channels.
What to do in each loop iteration? Well, the link to the aforementioned Channel code has a publish method:
publish: function(message){
  var channels = message.getChannels();
  delete message.channels;

  for(var i=0, len = channels.length; i < len; i++) {
    message.channel = channels[i];
    var clients = this.find(channels[i]).clients;
    for(var x=0, len2 = clients.length; x < len2; x++) {
      clients[x].write(message);
    }
  }
}
So you basically have to create a Message object with message.channels set to Channel.channels; if you pass that message to the publish method, it will be sent out to all your clients.
As to the contents of your message, I dunno what you are using client side (socket.io? a chat client someone already built for you off Juggernaut and socket.io?) so that's up to you.
As for where to put the code that creates the interval and fires off the callback to publish your message to all channels, you might want to check the code that creates the actual server listening on the given port (https://github.com/maccman/juggernaut/blob/master/lib/juggernaut/server.js). If you attach the interval within init(), then as soon as you start the server it will fire every 30 seconds and publish your given message to every channel.
Here is a sample client which pushes every 30 seconds in Ruby.
Install Juggernaut with Redis and Node: install Ruby and RubyGems, then run gem install juggernaut and use a script like this:
#!/usr/bin/env ruby
require "rubygems"
require "juggernaut"

loop do
  Juggernaut.publish("channel1", "some Message")
  sleep 30
end
We implemented a quiz system which pushed out questions on a variable time interval. We did it as follows:
def start_quiz
  Rails.logger.info("*** Quiz starting at #{Time.now}")
  $redis.flushall # Clear all scores from database
  quiz = Quiz.find(params[:quizz] || 1)
  @quiz_master = quiz.user
  quiz_questions = quiz.quiz_questions.order("question_no ASC")

  spawn_block do
    quiz_questions.each { |q|
      Rails.logger.info("*** Publishing question #{q.question_no}.")
      time_alloc = q.question_time
      Juggernaut.publish(select_channel("/quiz_stream"), { :q_num => q.num, :q_txt => q.text, :time => time_alloc })
      sleep(time_alloc)
      scoreboard = publish_scoreboard
      Juggernaut.publish(select_channel("/scoreboard"), { :scoreboard => scoreboard })
    }
  end

  respond_to do |format|
    format.all { render :nothing => true, :status => 200 }
  end
end
The key in our case was using 'spawn' to run a background process for the quiz timing so that we could still process the incoming scores.
I have no idea how scalable this is.
So I'm building a website that calls a third-party API that can take from 20 seconds to 30 minutes to return a result. But I can't know this duration in advance, so I need to poll it frequently to check whether the work is done (returns "COMPLETE" and the result) or not (returns "IN_PROGRESS"). Also, this API might be called many times by many users at the same time.
So I created a Sidekiq worker that checks the API every 5 seconds until it receives "COMPLETE", and only then does it end. But I've read that Sidekiq should only be doing short-lived jobs, and I'm struggling to get my head around how I should do it. Also, I've been trying to search for an answer, but I suspect I don't know the right words to find what I'm looking for.
I'm sure there is a way to tell my workers to call the API once and, if the result is "IN_PROGRESS", end but make sure another worker will make another API call to check, and so on, until the result is "COMPLETE".
Also, I guess this is handy to better distribute the load in case many users demand the use of said API, because fewer workers can handle more of these short-lived jobs.
This is my worker, which I hope clarifies what I'm doing right now:
class ThingProgressWorker
  include Sidekiq::Worker

  def perform(id)
    @thing = Thing.find(id)
    @thing_api_call = ThingAPICall.new # This uses the ruby library of the API
    completed = false
    while completed == false
      result = @thing_api_call.get_result({ thing_job_name: @thing.job_name })
      if !result.include? "COMPLETED"
        completed = false
        sleep 5
      else
        completed = true
        @thing.status = "completed"
        @thing.save
        break
      end
    end
  end
end
So if the API takes ten minutes to go from "IN_PROGRESS" to "COMPLETED", this worker will be busy for that long, which I reckon is not advised at all.
I've been thinking about this for some hours now and can't figure out how to make each API call its own job without having a worker busy until the API is done.
The only solution I've thought of so far is having a master worker that calls another worker for each API call, but then I'll still have a worker busy for as long as the API takes to send the result.
I'd appreciate any help or directions!
Thanks in advance
Try to call the worker with a delay. For example:
class ThingProgressWorker
  include Sidekiq::Worker

  def perform(id)
    @thing = Thing.find(id)
    @thing_api_call = ThingAPICall.new # This uses the ruby library of the API
    result = @thing_api_call.get_result({ thing_job_name: @thing.job_name })
    if !result.include? "COMPLETED"
      ThingProgressWorker.perform_in(1.minute, id)
    else
      @thing.status = "completed"
      @thing.save
    end
  end
end
This will add the job to the queue, but it will not run immediately; it will run after the delay you specify.
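For completeness, here is a minimal sketch of how the chain might be started (the controller and the "in_progress" status are placeholders of mine, not from the original question): enqueue the first check with perform_async, and each run either finishes or reschedules itself with perform_in as shown above.
class ThingsController < ApplicationController
  def create
    # Hypothetical action; only Thing, job_name, status and ThingProgressWorker
    # come from the question itself.
    @thing = Thing.create!(job_name: params[:job_name], status: "in_progress")

    # Enqueue the first poll; the worker re-enqueues itself with
    # perform_in(1.minute, id) until the API reports "COMPLETED".
    ThingProgressWorker.perform_async(@thing.id)

    head :accepted
  end
end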
I am trying to build a QuizUp-like app and want to send a broadcast every 10 seconds with a random question, for 2 minutes. How do I do that using Rails? I am using Action Cable for sending the broadcasts. I could use rufus-scheduler for running an action every few seconds, but I am not sure if it makes sense to use it for my use case.
Simplest solution would be to fork a new thread:
Thread.new do
  duration = 2.minutes
  interval = 10.seconds
  number_of_questions_left = duration.seconds / interval.seconds

  while(number_of_questions_left > 0) do
    ActionCable.server.broadcast(
      "some_broadcast_id", { random_question: 'How are you doing?' }
    )
    number_of_questions_left -= 1
    sleep(interval)
  end
end
Notes:
This is only a simple solution, and you will actually end up with slightly more than 2 minutes of total run time, because each loop iteration takes very slightly longer than 10 seconds (the broadcast itself takes time, in addition to the sleep). If this discrepancy is not important, then the solution above is already sufficient (a drift-corrected variant is sketched after these notes).
Also, this kind of scheduler only lives in memory, as opposed to a dedicated background worker like Sidekiq. So, if the Rails process gets terminated, all currently running "looping" code is terminated as well, which may or may not be what you want.
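If the drift does matter, one way to compensate (a sketch of my own, not part of the original answer) is to sleep until a precomputed deadline instead of for a fixed interval:
Thread.new do
  interval = 10.seconds
  number_of_questions_left = 12
  next_tick = Time.current + interval

  while number_of_questions_left > 0 do
    ActionCable.server.broadcast(
      "some_broadcast_id", { random_question: 'How are you doing?' }
    )
    number_of_questions_left -= 1

    # Sleep only for the time remaining until the next deadline, so the
    # broadcast's own runtime does not accumulate as drift.
    sleep([next_tick - Time.current, 0].max)
    next_tick += interval
  end
end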
If using rufus-scheduler:
number_of_questions_left = 12
scheduler = Rufus::Scheduler.new

# `first_in` is set so that the first-time job runs immediately
scheduler.every '10s', first_in: 0.1 do |job|
  ActionCable.server.broadcast(
    "some_broadcast_id", { random_question: 'How are you doing?' }
  )
  number_of_questions_left -= 1
  job.unschedule if number_of_questions_left == 0
end
I need to convert videos in 4 threads
For example, I have an ActiveRecord model Video with records titled Video1, Video2, Video3, Video4, Video5.
So, I need to execute something like this
bundle exec script/video_converter start
where the script will process unconverted videos in 4 threads, for example
Video.where(state: 'unconverted').first.process
But when one of the 4 videos has been converted, the next video must be automatically added to the freed thread.
What is the best solution for this? The sidekiq gem? The daemons gem plus Ruby threads managed manually?
For now I am using this script:
THREAD_COUNT = 4
SLEEP_TIME = 5
logger = CONVERTATION_LOG
spawns = []

loop do
  videos = Video.where(state: 'unconverted').limit(THREAD_COUNT).reorder("ID DESC")
  videos.each do |video|
    spawns << Spawnling.new do
      result = video.process
      if result.nil?
        video.create_thumbnail!
      else
        video.failured!
      end
    end
  end
  Spawnling.wait(spawns)
  sleep(SLEEP_TIME)
end
But this script waits for all 4 videos and only then takes the next 4. I want that, as soon as one of the 4 videos has been converted, the next video is automatically picked up by the thread that has become free.
If your goal is to keep processing videos using just 4 threads (or however many Spawnling is configured to use - it supports both fork and thread), then you could use a Queue to hold all the video records to be processed, spawn 4 threads, and let them keep processing records one by one until the queue is empty.
require "rails"
require "spawnling"
# In your case, videos are read from DB, below array is for illustration
videos = ["v1", "v2", "v3", "v4", "v5", "v6", "..."]
THREAD_COUNT = 4
spawns = []
q = Queue.new
videos.each {|i| q.push(i) }
THREAD_COUNT.times do
spawns << Spawnling.new do
until q.empty? do
v = q.pop
# simulate processing
puts "Processing video #{v}"
# simulate processing time
sleep(rand(10))
end
end
end
Spawnling.wait(spawns)
This answer is inspired by this answer.
PS: I have added a few requires and defined the videos array to make the above code a self-contained, runnable example.
Per the docs, I thought it would be (for every day at 3pm):
daily.hour_of_day(15)
What I'm getting is a random mess. First, it's executing whenever I push to Heroku regardless of time, and then beyond that, seemingly randomly. So the latest push to Heroku was 1:30pm. It executed: twice at 1:30pm, once at 2pm, once at 4pm, once at 5pm.
Thoughts on what's wrong?
Full code (note this is for the Fist of Fury gem, but FoF is heavily influenced by Sidetiq so help from Sidetiq users would be great as well).
class Outstanding
  include SuckerPunch::Job
  include FistOfFury::Recurrent

  recurs { daily.hour_of_day(15) }

  def perform
    ActiveRecord::Base.connection_pool.with_connection do
      # Auto email lenders every other day if they have outstanding requests
      lender_array = Array.new
      Inventory.where(id: Borrow.where(status1: 1).all.pluck("inventory_id")).each { |i| lender_array << i.signup.id }
      lender_array.uniq!
      lender_array.each { |l| InventoryMailer.outstanding_request(l).deliver }
    end
  end
end
Maybe you should use:
recurrence { daily.hour_of_day(15) }
instead of recurs?
Nokogiri works fine for me in the console, but if I put it anywhere... Model, View, or Controller, it times out.
I'd like to use it in one of two ways...
Controller
def show
  @design = Design.find(params[:id])
  doc = Nokogiri::HTML(open(design_url(@design)))
  images = doc.css('.well img') ? doc.css('.well img').map { |i| i['src'] } : []
end
or...
Model
def first_image
  doc = Nokogiri::HTML(open("http://localhost:3000/blog/#{self.id}"))
  image = doc.css('.well img')[0] ? doc.css('.well img')[0]['src'] : nil
  self.update_attribute(:photo_url, image)
end
Both result in a timeout, though they work perfectly in the console.
When you run your Nokogiri code from the console, you're referencing your development server at localhost:3000. Thus, there are two instances running: one making the call (your console) and one answering the call (your server).
When you run it from within your app, you are referencing the app itself, which is causing an infinite loop since there is no available resource to respond to your call (that resource is the one making the call!). So you would need to be running multiple instances with something like Unicorn (or simply another localhost instance at a different port), and you would need at least one of those instances to be free to answer the Nokogiri request.
If you plan to run this in production, just know that this setup will require an available resource to answer the Nokogiri request, so you're essentially tying up 2 instances with each call. So if you have 4 instances and all 4 happen to make the call at the same time, your whole application is screwed. You'll probably experience pretty severe degradation with only 1 or 2 calls at a time as well...
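To make that concrete, here is a rough sketch (my own illustration, not from the original answer) of pointing the model's scraping method at a second local instance running on a different port, so the process making the request is not the same one that has to answer it. The port and the SCRAPE_HOST name are placeholders:
# Start a second server on another port first, e.g.:
#   rails server -p 3001
require 'open-uri'

# Placeholder: point this at whichever instance should answer the scrape.
SCRAPE_HOST = ENV.fetch("SCRAPE_HOST", "http://localhost:3001")

def first_image
  doc = Nokogiri::HTML(open("#{SCRAPE_HOST}/blog/#{self.id}"))
  image = doc.css('.well img')[0] ? doc.css('.well img')[0]['src'] : nil
  self.update_attribute(:photo_url, image)
end
The same caveat from above still applies: each call ties up a second instance while it renders the page.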
I'm not sure what the default timeout value is.
But you can specify a timeout value like below.
require 'net/http'

http = Net::HTTP.new('localhost', 3000)
http.open_timeout = 100
http.read_timeout = 100

Nokogiri.parse(http.get("/blog/#{self.id}").body)
Finally, you can find out what the problem is, since you can now control the timeout value.
So, with Tyler's advice I dug into what I was doing a bit more. Because of the disconnect that ckeditor has with the images, due to carrierwave and S3, I can't get any info directly from the uploader (at least it seems that way to me).
Instead, I'm sticking with Nokogiri, and it's working wonderfully. I realized what I was actually doing with the open() command, and it was completely unnecessary. Nokogiri parses HTML. I can give it HTML in the form of @design.content! Duh, on my part.
So, this is how I'm scraping my own site, to get the images associated with a blog entry:
designs_controller.rb
def create
  params[:design][:photo_url] = Nokogiri::HTML(params[:design][:content]).css('img').map { |i| i['src'] }[0]
  @design = Design.new(params[:design])
  if @design.save
    flash[:success] = "Design created"
    redirect_to designs_url
  else
    render 'designs/new'
  end
end
def show
  @design = Design.find(params[:id])
  @categories = @design.categories
  @tags = @categories.map { |c| c.name }
  @related = Design.joins(:categories).where('categories.name' => @tags).reject { |d| d.id == @design.id }.uniq

  set_meta_tags og: {
    title: @design.name,
    type: 'article',
    url: design_url(@design),
    image: Nokogiri::HTML(@design.content).css('img').map { |i| i['src'] },
    article: {
      published_time: @design.published_at.to_datetime,
      modified_time: @design.updated_at.to_datetime,
      author: 'Alphabetic Design',
      section: 'Designs',
      tag: @tags
    }
  }
end
The Update action has the same code for Nokogiri as the Create action.
Seems kind of obvious now that I'm looking at it, lol. I dwelled on this for longer than I'd like to admit...