In my Ruby script I am using the celluloid-zmq gem, and I am trying to run evaluate_response asynchronously inside the poller loop using:
async.evaluate_response(socket.read_multipart)
But if I remove the sleep from the loop, it somehow stops working: execution never reaches the evaluate_response method. If I put the sleep back inside the loop, it works perfectly.
require 'celluloid/zmq'

Celluloid::ZMQ.init

module Celluloid
  module ZMQ
    class Socket
      def socket
        @socket
      end
    end
  end
end

class Indefinite
  include Celluloid::ZMQ

  ## Readers
  attr_reader :dealersock, :pullsock, :pollers

  def initialize
    prepare_dealersock and prepare_pullsock and prepare_pollers
  end

  ## prepare DEALER SOCK
  def prepare_dealersock
    @dealersock = DealerSocket.new
    @dealersock.identity = "IDENTITY"
    @dealersock.connect("tcp://localhost:20482")
  end

  ## prepare PULL SOCK
  def prepare_pullsock
    @pullsock = PullSocket.new
    @pullsock.connect("tcp://localhost:20483")
  end

  ## prepare the Pollers
  def prepare_pollers
    @pollers = ZMQ::Poller.new
    @pollers.register_readable(dealersock.socket)
    @pollers.register_readable(pullsock.socket)
  end

  def run!
    loop do
      pollers.poll ## this is a blocking operation; never mind, we need it
      pollers.readables.each do |socket|
        ## socket.read_multipart is a blocking call; this should give Celluloid the chance to run other tasks in the meantime
        async.evaluate_response(socket.read_multipart)
      end
      ## If you remove the sleep, the async evaluate_response is never executed.
      ## sleep 0.2
    end
  end

  def evaluate_response(message)
    ## Hmmm, the code just never reaches here
    puts "got message: #{message}"
    ...
  end
end

## Code is invoked like this
Indefinite.new.run!
Any idea why this is happening?
The question was 100% changed, so my previous answer does not help.
Now, the issues are...
ZMQ::Poller is not part of Celluloid::ZMQ
You are directly using the ffi-rzmq bindings, and not using the Celluloid::ZMQ wrapping, which provides evented & threaded handling of the socket(s).
It would be best to make multiple actors -- one per socket -- or to just use Celluloid::ZMQ directly in one actor, rather than undermining it.
Your actor never gets time to work with the response
This part makes it a duplicate of:
Celluloid async inside ruby blocks does not work
The best answer is to use after or every and not loop ... which is dominating your actor.
You need to either:
Move evaluate_response to another actor.
Move each socket to their own actor.
This code needs to be broken up into several actors to work properly, with a main sleep at the end of the program. But before all that, try using after or every instead of loop, as in the sketch below.
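For example, a minimal sketch of the timer-based approach (the 0.1-second interval and the zero-timeout argument to poll are my assumptions, not from the original code):

def run!
  # Poll with a zero timeout inside a recurring timer instead of a tight
  # loop, so the actor's mailbox gets time to process the async calls.
  every(0.1) do
    pollers.poll(0) # non-blocking poll; returns immediately when nothing is readable
    pollers.readables.each do |socket|
      async.evaluate_response(socket.read_multipart)
    end
  end
end

Since every returns immediately, the caller then needs to keep the main thread alive, e.g. Indefinite.new.run! followed by a sleep.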
Related
I have 2 Sidekiq workers:
Foo:
# frozen_string_literal: true
class FooWorker
  include Sidekiq::Worker
  sidekiq_options queue: :foo

  def perform
    loop do
      File.open(File.join(Rails.root, 'foo.txt'), 'w') { |file| file.write('FOO') }
    end
  end
end
Bar:
# frozen_string_literal: true
class BarWorker
  include Sidekiq::Worker
  sidekiq_options queue: :bar

  def perform
    loop do
      File.open(File.join(Rails.root, 'bar.txt'), 'w') { |file| file.write('BAR') }
    end
  end
end
Both have pretty much the same functionality; they run on different queues, and the YAML file looks like this:
---
:queues:
  - foo
  - bar
development:
  :concurrency: 5
The problem is, even though both are running and showing on the Busy page of the Sidekiq UI, only one of them actually creates a file and writes content to it. Shouldn't Sidekiq be multi-threaded?
Update:
this happens only on my machine
I created a new project with rails new and got the same result
I cloned a colleague's project and ran his Sidekiq, and it works!!!
I used his Sidekiq version: not working!
New Update:
this also happens on my colleague's machine if he clones my project
if I run 2 jobs with a finite loop (like 10 times, do something with a sleep), the first job is executed and then the second, but after the second finishes and they start again, both run at the same time as expected
everyone who cloned the project from github.com/ArayB/sidekiq-test encountered the problem
It's not an issue with Sidekiq. It's an issue somewhere in Ruby/MRI/Thread/GIL. Google for more info, but my understanding is that sometimes threads aren't real threads (see "green threads") and only simulate threading. The important thing is that only one thread can execute at a time.
It's interesting that with only two threads the system isn't giving time to the second thread. No idea why, but it must realize its mistake when you run it again.
Interestingly, if you run the same app but instead fire off 10 TestWorkers (and tweak the output so you can tell the difference), Sidekiq will run all 10 "at once":
10.times {|i| TestWorker.perform_async(i) }
Here is the tweaked worker. Be sure to flush the output, because TTY buffering can also make the output misrepresent what the threads are actually doing.
class TestWorker
  include Sidekiq::Worker

  def perform(n)
    10.times do |i|
      puts "#{n} - #{i} - #{Time.current}"
      $stdout.flush
      sleep 1
    end
  end
end
Some interesting links:
https://en.wikipedia.org/wiki/Green_threads
http://ruby-doc.org/core-2.4.1/Thread.html#method-c-pass
https://github.com/ruby/ruby/blob/v2_4_1/thread.c
Does ruby have real multithreading?
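To see the GIL effect described above for yourself, here is a small benchmark sketch of my own, not from the original thread:

require 'benchmark'

# Two CPU-bound threads on MRI take roughly as long as doing the work
# twice sequentially, because the GIL lets only one thread run Ruby
# code at a time.
work = -> { 5_000_000.times { Math.sqrt(42) } }

sequential = Benchmark.realtime { 2.times { work.call } }
threaded   = Benchmark.realtime { 2.times.map { Thread.new(&work) }.each(&:join) }

puts format('sequential: %.2fs, threaded: %.2fs', sequential, threaded)

IO-bound work like the sleep in TestWorker releases the GIL, by contrast, which is why the 10 workers appear to run at once.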
I am currently enabling queueing in one of my apps written in Ruby. I use Sidekiq and have defined a worker class as below:
class Worker
  include Sidekiq::Worker

  def perform(params)
    sleep 10
    @@logger.info("--------------------------")
    @@logger.info("Request received")
    @@logger.info("--------------------------")
  end
end
I am calling it from my main entry point, 'api.rb':
post '/receive/?' do
  @@logger.debug("/retrieve endpoint received")
  Worker.perform_async @params
end
This works fine: each time the sleep finishes, the next queued task is started.
In my case, I need to dequeue and start the next queued item only when I decide; it will be triggered by an external event.
In my 'api.rb', I have added:
post '/response/?' do
  next_task
end
The way the code works is that '/receive' can queue 10 requests. The first request triggers a specific action (sending a POST command to a server).
I expect the remote server to send me back a request through '/response' to tell me that the action is finished. When this response is received, I use the 'next_task' API to remove the previous task, which was running and is now completed, and move on to the next queued one.
Any idea how to create a custom trigger to dequeue and start the new job? Is there a signal which would let me keep the Sidekiq framework from dequeuing until I send it?
Thanks
To delete a job in a Sidekiq queue, you would have to iterate the whole queue. It is not an idiomatic use of the queue.
I am afraid I don't understand what exactly you are trying to do. Just remember that you can store state outside of the Sidekiq queue, for example you can have a model for the Job:
post '/receive/?' do
  job = Job.create(@params)
  Worker.perform_in(10.seconds)
end
and then in the worker:
def perform
  job = Job.find_oldest_unexecuted
  if job
    job.execute!
  else
    # wait another 10 seconds
    Worker.perform_in(10.seconds)
  end
end
And in Job:
class Job < ActiveRecord::Base
  def self.find_oldest_unexecuted
    where(executed_at: nil).order(:id).first
  end

  def execute!
    # do what needs to be done here
    update_attribute(:executed_at, Time.zone.now)
  end
end
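The Job model above assumes a jobs table with at least an executed_at column; a hypothetical migration (the params column is my guess at how the request state might be stored, and depending on your Rails version you may need the versioned form ActiveRecord::Migration[5.0]) could look like:

class CreateJobs < ActiveRecord::Migration
  def change
    create_table :jobs do |t|
      t.text :params          # serialized request state for the worker
      t.datetime :executed_at # nil until the job has been executed
      t.timestamps
    end
  end
end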
I have a Rails application where users upload audio files. I want to send them to a third-party server, and I need to connect to the external server using WebSockets, so I need my Rails application to be a WebSocket client.
I'm trying to figure out how to properly set that up. I'm not committed to any gem just yet, but the 'faye-websocket' gem looks promising. I even found a similar answer in "Sending large file in websocket before timeout"; however, using that code doesn't work for me.
Here is an example of my code:
@message = Array.new
EM.run {
  ws = Faye::WebSocket::Client.new("wss://example_url.com")

  ws.on :open do |event|
    File.open('path/to/audio_file.wav', 'rb') do |f|
      ws.send(f.gets)
    end
  end

  ws.on :message do |event|
    @message << [event.data]
  end

  ws.on :close do |event|
    ws = nil
    EM.stop
  end
}
When I use that, I get an error from the recipient server:
No JSON object could be decoded
This makes sense, because I don't believe the data is properly formatted for faye-websocket. Their documentation says:
send(message) accepts either a String or an Array of byte-sized integers and sends a text or binary message over the connection to the other peer; binary data must be encoded as an Array.
I'm not sure how to accomplish that. How do I load binary into an array of integers with Ruby?
I tried modifying the send command to use the bytes method:
File.open('path/to/audio_file.wav', 'rb') do |f|
  ws.send(f.gets.bytes)
end
But now I receive this error:
Stream was 19 bytes but needs to be at least 100 bytes
I know my file is 286KB, so something is wrong here. I get confused as to when to use File.read vs File.open vs. File.new.
Also, maybe this gem isn't the best for sending binary data. Has anyone had success sending binary files in Rails with WebSockets?
Update: I did find a way to get this working, but it is terrible for memory. For other people who want to load small files, you can simply use File.binread and the unpack method:
ws.on :open do |event|
  f = File.binread 'path/to/audio_file.wav'
  ws.send(f.unpack('C*'))
end
However, if I use that same code on a mere 100MB file, the server runs out of memory; it depletes the entire available 1.5GB on my test server! Does anyone know how to do this in a memory-safe manner?
Here's my take on it:
# do only once when initializing Rails:
require 'iodine/client'
Iodine.force_start!

# this sets the callbacks.
# on_message is always required by Iodine.
options = {}
options[:on_message] = Proc.new do |data|
  # this will never get called
  puts "incoming data ignored? for:\n#{data}"
end
options[:on_open] = Proc.new do
  # believe it or not - this variable belongs to the websocket connection.
  @started_upload = true
  # set a task to send the file,
  # so the on_open initialization doesn't block incoming messages.
  Iodine.run do
    # read the file and write to the websocket.
    File.open('filename', 'r') do |f|
      buffer = String.new # recycle the String's allocated memory
      write f.read(65_536, buffer) until f.eof?
      @started_upload = :done
    end
    # close the connection
    close
  end
end
options[:on_close] = Proc.new do |data|
  # can we notify the user that the file was uploaded?
  if @started_upload == :done
    # we did it :-)
  else
    # what happened?
  end
end

# will not wait for a connection:
Iodine::Http.ws_connect "wss://example_url.com", options
# OR
# will wait for a connection, raising errors if failed.
Iodine::Http::WebsocketClient.connect "wss://example_url.com", options
It's only fair to mention that I'm Iodine's author, which I wrote for use in Plezi (a RESTful WebSocket real-time application framework you can use standalone or within Rails)... I'm super biased ;-)
I would avoid gets, because its result could be the whole file or a single byte, depending on the location of the next End Of Line (EOL) marker... read gives you better control over each chunk's size.
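Applying that advice to the faye-websocket code from the question, a memory-safe variant could look like the sketch below (the 64 KB chunk size is arbitrary, and it assumes the receiving server can accept the file as a stream of binary frames rather than one message):

require 'faye/websocket'
require 'eventmachine'

EM.run {
  ws = Faye::WebSocket::Client.new("wss://example_url.com")

  ws.on :open do |event|
    File.open('path/to/audio_file.wav', 'rb') do |f|
      # read and send the file in 64 KB binary frames, so only one
      # chunk is held in memory at a time
      while chunk = f.read(65_536)
        ws.send(chunk.bytes)
      end
    end
    ws.close
  end

  ws.on :close do |event|
    EM.stop
  end
}

Note that if the socket is slower than the disk, EventMachine may still buffer outgoing frames in memory, so very large files may need explicit back-pressure handling.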
I'm having an issue trying to get a timeout when connecting via TCPSocket to a remote resource that isn't available. It just hangs indefinitely without timing out. Ideally I'd want it to retry every 2 minutes or so, but the TCPSocket.new call seems to block. I've tried using timeout() but that doesn't do anything either. Trying the same call in an IRB instance works perfectly fine, but when it's in Rails, it fails. Does anyone have a workaround for this?
My code looks something as follows:
def self.connect!
  @@connection = TCPSocket.new IP, 4449
end

def self.send(cmd)
  puts "send "
  unless @@connection
    self.connect!
  end
  loop do
    begin
      @@connection.puts(cmd)
      return
    rescue IOError
      sleep(self.get_reconnect_delay)
      self.connect!
    end
  end
end
Unfortunately, there is currently no way to set timeouts on TCPSocket directly.
See http://bugs.ruby-lang.org/issues/5101 for the feature request. You will have to use the basic Socket class and set socket options.
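The usual workaround is connect_nonblock combined with IO.select; here is a minimal sketch (the helper name and the 5-second default are mine, not from the linked issue):

require 'socket'

def connect_with_timeout(host, port, timeout = 5)
  addr = Socket.getaddrinfo(host, nil).first
  sockaddr = Socket.pack_sockaddr_in(port, addr[3])
  socket = Socket.new(Socket.const_get(addr[0]), Socket::SOCK_STREAM, 0)
  begin
    socket.connect_nonblock(sockaddr)
  rescue IO::WaitWritable
    # the connect is in progress; wait until the socket is writable or we time out
    if IO.select(nil, [socket], nil, timeout)
      begin
        socket.connect_nonblock(sockaddr) # check the result of the connect
      rescue Errno::EISCONN
        # already connected -- success
      end
    else
      socket.close
      raise Errno::ETIMEDOUT
    end
  end
  socket
end

With this in place, the retry-every-2-minutes behavior can be built by rescuing Errno::ETIMEDOUT around the call.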
I have a Ruby process that listens on a given device. I would like to spin instances of it up and down for different devices from a Rails app. Everything I can find for Ruby daemons seems to be based around a set number of daemons running, or around background processing with message queues.
Should I just be doing this with Kernel.spawn and storing the PIDs in the database? It seems a bit hacky, but if there isn't an existing framework that allows me to bring daemons up and down, it seems I may not have much choice.
Instead of spawning another script and keeping the PIDs in the database, you can do it all within the same script, using fork, and keeping PIDs in memory. Here's a sample script - you add and delete "worker instances" by typing commands "add" and "del" in console, exiting with "quit":
@pids = []
@counter = 0

def add_process
  @pids.push(Process.fork {
    loop do
      puts "Hello from worker ##{@counter}"
      sleep 1
    end
  })
  @counter += 1
end

def del_process
  return false if @pids.empty?
  pid = @pids.pop
  Process.kill('SIGTERM', pid)
  true
end

def kill_all
  while del_process
  end
end

while cmd = gets.chomp
  case cmd.downcase
  when 'quit'
    kill_all
    exit
  when 'add'
    add_process
  when 'del'
    del_process
  end
end
Of course, this is just an example; for sending commands and/or monitoring instances, you can replace this simple gets loop with a small Sinatra app, a socket interface, named pipes, etc. A sketch of the Sinatra variant follows.
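For instance, a hypothetical Sinatra variant of that control interface (the routes and responses are made up for illustration) could look like this:

require 'sinatra'

# keep the PID list in a global so it is visible from the route handlers
# (top-level instance variables would not be shared with Sinatra's scope)
$pids = []

post '/workers' do
  $pids.push(Process.fork { loop { sleep 1 } }) # placeholder worker
  "worker added\n"
end

delete '/workers' do
  if (pid = $pids.pop)
    Process.kill('SIGTERM', pid)
    "worker removed\n"
  else
    "no workers running\n"
  end
end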