I have a Ruby process that listens on a given device. I would like to spin up/down instances of it for different devices from a Rails app. Everything I can find for Ruby daemons seems to be based around a fixed number of daemons running, or background processing with message queues.
Should I just be doing this with Kernel.spawn and storing the PIDs in the database? It seems a bit hacky, but if there isn't an existing framework that lets me bring daemons up and down, it seems I may not have much choice.
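For concreteness, the approach I have in mind looks roughly like this (the device_listener.rb script and the DeviceProcess model are placeholders):
# Rough sketch of the spawn-based idea; device_listener.rb and the
# DeviceProcess model are hypothetical placeholders.
def start_listener(device)
  pid = Kernel.spawn('ruby', 'device_listener.rb', device)
  Process.detach(pid) # reap the child so it doesn't linger as a zombie
  DeviceProcess.create!(device: device, pid: pid)
end

def stop_listener(device)
  record = DeviceProcess.find_by(device: device)
  return unless record
  Process.kill('TERM', record.pid)
  record.destroy
end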
Instead of spawning another script and keeping the PIDs in the database, you can do it all within the same script, using fork and keeping the PIDs in memory. Here's a sample script - you add and delete "worker instances" by typing the commands "add" and "del" in the console, and exit with "quit":
@pids = []
@counter = 0

def add_process
  @pids.push(Process.fork {
    loop do
      puts "Hello from worker ##{@counter}"
      sleep 1
    end
  })
  @counter += 1
end

def del_process
  return false if @pids.empty?
  pid = @pids.pop
  Process.kill('SIGTERM', pid)
  true
end

def kill_all
  while del_process
  end
end

while cmd = gets.chomp
  case cmd.downcase
  when 'quit'
    kill_all
    exit
  when 'add'
    add_process
  when 'del'
    del_process
  end
end
Of course, this is just an example; for sending commands and/or monitoring instances you can replace this simple gets loop with a small Sinatra app, a socket interface, named pipes, etc.
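For example, here's a minimal sketch of the same command loop driven by a named pipe instead of the console (the /tmp/worker_control path is arbitrary; assumes a Unix-like system):
PIPE_PATH = '/tmp/worker_control' # arbitrary path for this sketch
system('mkfifo', PIPE_PATH) unless File.exist?(PIPE_PATH)

loop do
  # Opening a FIFO for reading blocks until a writer connects,
  # e.g.: echo add > /tmp/worker_control
  File.open(PIPE_PATH, 'r') do |pipe|
    pipe.each_line do |cmd|
      case cmd.chomp.downcase
      when 'quit' then kill_all; exit
      when 'add'  then add_process
      when 'del'  then del_process
      end
    end
  end
end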
I am running Rails 3 with Ruby 2.3.3 on Puma with PostgreSQL. I have a config/initializers/twitter.rb file that starts a thread on boot with a streaming API for Twitter. When I use rails server to start my application, the Twitter streaming works and I can reach my website like normal. (If I do not put the streaming on a different thread, the streaming works but I cannot view my application in the browser, since the thread is blocked by the Twitter stream.) But when I use puma -C config/puma.rb to start my application, I get the following message telling me that my thread was found on startup and was put to sleep. How can I tell Puma to let me run this thread in the background on startup?
config/initializers/twitter.rb
### START TWITTER THREAD ### if production
if Rails.env.production?
  puts 'Starting Twitter Stream...'
  Thread.start {
    twitter_stream.user do |object|
      case object
      when Twitter::Tweet
        handle_tweet(object)
      when Twitter::DirectMessage
        handle_direct_message(object)
      when Twitter::Streaming::Event
        puts "Received Event: #{object.to_yaml}"
      when Twitter::Streaming::FriendList
        puts "Received FriendList: #{object.to_yaml}"
      when Twitter::Streaming::DeletedTweet
        puts "Deleted Tweet: #{object.to_yaml}"
      when Twitter::Streaming::StallWarning
        puts "Stall Warning: #{object.to_yaml}"
      else
        puts "It's something else: #{object.to_yaml}"
      end
    end
  }
end
config/puma.rb
workers Integer(ENV['WEB_CONCURRENCY'] || 2)
threads_count = Integer(ENV['RAILS_MAX_THREADS'] || 5)
threads threads_count, threads_count

preload_app!

rackup      DefaultRackup
port        ENV['PORT']     || 3000
environment ENV['RACK_ENV'] || 'development'

on_worker_boot do
  # Valid on Rails up to 4.1: the initializer method of setting `pool` size
  ActiveSupport.on_load(:active_record) do
    config = ActiveRecord::Base.configurations[Rails.env] ||
             Rails.application.config.database_configuration[Rails.env]
    config['pool'] = ENV['RAILS_MAX_THREADS'] || 5
    ActiveRecord::Base.establish_connection(config)
  end
end
Message on startup
2017-04-19T23:52:47.076636+00:00 app[web.1]: Connecting to database specified by DATABASE_URL
2017-04-19T23:52:47.115595+00:00 app[web.1]: Starting Twitter Stream...
2017-04-19T23:52:47.229203+00:00 app[web.1]: Received FriendList: --- !ruby/array:Twitter::Streaming::FriendList []
2017-04-19T23:52:47.865735+00:00 app[web.1]: [4] * Listening on tcp://0.0.0.0:13734
2017-04-19T23:52:47.865830+00:00 app[web.1]: [4] ! WARNING: Detected 1 Thread(s) started in app boot:
2017-04-19T23:52:47.865870+00:00 app[web.1]: [4] ! #<Thread:0x007f4df8bf6240@/app/config/initializers/twitter.rb:135 sleep> - /app/vendor/ruby-2.3.3/lib/ruby/2.3.0/openssl/buffering.rb:125:in `sysread'
2017-04-19T23:52:47.875056+00:00 app[web.1]: [4] - Worker 0 (pid: 7) booted, phase: 0
2017-04-19T23:52:47.865919+00:00 app[web.1]: [4] Use Ctrl-C to stop
2017-04-19T23:52:47.882759+00:00 app[web.1]: [4] - Worker 1 (pid: 11) booted, phase: 0
2017-04-19T23:52:48.148831+00:00 heroku[web.1]: State changed from starting to up
Thanks in advance for the help. I have looked at several other posts mentioning WARNING: Detected 1 Thread(s) started in app boot but the answers say to ignore the warning if the thread is not important. In my case, the thread is very important and I need this thread to not sleep.
From your code, I think you have a bigger issue on your hands than a sleeping thread: it stems partly from misleading names and partly from things that are rarely considered when relying on a web framework.
In the world of servers, "workers" refer to forked processes that perform server related tasks, often accepting new connections and handling web requests.
BUT - fork doesn't duplicate threads! - the new process (the worker) starts with only one single thread, which is a copy of the thread that called fork.
This is because processes don't share memory (normally). Whatever global data you have in a process is private to that process (i.e., if you save connected websocket clients in an array, that array is different for each "worker").
This can't be helped, it's part of how the OS and fork are designed.
So, the warning is not something you can circumvent - it's an indication of a design flaw in the app(!).
For example, in your current design (assuming the thread wasn't sleeping), the handle_tweet method will only be called for the original server process and it won't be called for any worker process.
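A quick sketch in plain Ruby demonstrates the point; a thread started before fork simply isn't running in the child:
# A thread started in the parent...
t = Thread.new { loop { sleep 1 } }

pid = fork do
  # ...is not running here: the child starts with only the forking thread.
  puts "threads in child:  #{Thread.list.size}"  # => 1
end

puts "threads in parent: #{Thread.list.size}"    # => 2
Process.wait(pid)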
If you're using pub/sub, you only need one twitter_stream connection for the whole app (no matter how many servers or workers your application has) - perhaps a twitter_stream process (or background app) will be better than a thread.
But if you're implementing handle_tweet in a process-specific way - i.e., by sending a message to every connected client saved in an array - you need to make sure every "worker" initiates a twitter_stream thread(!).
When I wrote Iodine (a different server than Puma), I handled these use cases using the Iodine.run method, which defers tasks for later. The "saved" task is performed only after the workers are initialized and the event loop starts running, so it runs in each process (allowing you to start a new thread in each process).
i.e.
Iodine.run do
  Thread.start do
    twitter_stream.user do |object|
      # ...
    end
  end
end
I assume Puma has a similar solution. From what I understand of the Puma Clustered-Mode documentation, adding the following block to your config/puma.rb might help:
# config/puma.rb
on_worker_boot do
  Thread.start do
    twitter_stream.user do |object|
      # ...
    end
  end
end
Good luck!
EDIT: relating to the comment about twitter_stream using ActiveRecord
From the comments, I gather that the twitter_stream callbacks both store data in the database and handle "push" events or notices.
Although these two concerns are connected, they are very different from each other.
For example, the twitter_stream callbacks should store data in the database only once. Even if your application grows to a billion users, each update only needs to be saved to the database once.
This means that the twitter_stream callbacks should have their own dedicated process that runs only once, possibly separate from the main application.
At first, as long as you limit your application to a single instance (only one server/application running), you might use fork together with the config/initializers/twitter.rb script, i.e.:
### START TWITTER PROCESS ### if production
if Rails.env.production?
  puts 'Starting Twitter Stream...'
  Process.fork do
    twitter_stream.user do |object|
      # ...
    end
  end
end
On the other hand, notifications should be addressed to a specific user on a specific connection owned by a specific process.
Hence, notifications should be a separate concern from the twitter_stream database updates, and they should be running in the background of every process, using the on_worker_boot (or Iodine.run) approach described above.
To achieve this, you might have on_worker_boot start a background thread that will listen to a pub/sub service such as Redis, while the twitter_stream callbacks "publish" updates to the pub/sub service.
This allows each process to review the update and check whether any of the connections it "owns" belong to a client that should be notified of the update.
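A rough sketch of that split, assuming the redis gem (the tweet_updates channel and both helper methods are hypothetical):
require 'redis'

# In the dedicated twitter_stream process: save once, publish once.
redis = Redis.new
twitter_stream.user do |object|
  save_to_database(object) # hypothetical helper; runs exactly once app-wide
  redis.publish('tweet_updates', object.to_json)
end

# In each Puma worker (e.g. inside on_worker_boot): subscribe on a separate
# connection and notify only the connections this particular process owns.
Thread.start do
  Redis.new.subscribe('tweet_updates') do |on|
    on.message do |_channel, message|
      notify_owned_connections(message) # hypothetical helper
    end
  end
end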
The way I'm reading your question, this doesn't look like an issue. A sleeping thread is different from a dead thread. Sleep just means that the thread is waiting idle, not consuming any CPU. If all else is hooked up properly, then as soon as the Twitter API detects an event, it should wake the thread up, run whatever handler you've defined, and then go right back to sleep. Sleeping isn't "running in the background," but it is "waiting for something to happen (e.g. someone tweets @me) so I can run in the background."
A quick example to demonstrate this:
2.4.0 :001 > t = Thread.new { TCPServer.new(1234).accept ; puts "Got a connection! Dying..." }
=> #<Thread:0x007fa3941fed90@(irb):1 sleep>
2.4.0 :002 > t
=> #<Thread:0x007fa3941fed90@(irb):1 sleep>
2.4.0 :003 > t
=> #<Thread:0x007fa3941fed90@(irb):1 sleep>
2.4.0 :004 > TCPSocket.new 'localhost', 1234
=> #<TCPSocket:fd 35>
2.4.0 :005 > Got a connection! Dying...
t
=> #<Thread:0x007fa3941fed90@(irb):1 dead>
Sleeping just means "waiting for action."
Puma is a thread-based server, and is very particular about spinning threads up in its boot process, hence the warning about a thread started at app boot.
For what it's worth, though, it's kind of weird to have a thread listening for updates from an API like that inside a web server. Maybe you should look into having a worker handle Twitter events using something like Resque? Or maybe ActionCable is relevant to your use case?
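If you went the Resque route, a minimal sketch might look like this (the HandleTweetJob class and :twitter queue name are hypothetical, not from the question):
# Hypothetical Resque job: the streaming listener runs as its own worker
# process and enqueues tweets instead of handling them in the web server.
class HandleTweetJob
  @queue = :twitter

  def self.perform(tweet_attrs)
    # runs in a Resque worker, outside the web server's threads
    puts "handling tweet #{tweet_attrs['id']}"
  end
end

# In the streaming process:
twitter_stream.user do |object|
  Resque.enqueue(HandleTweetJob, object.to_h) if object.is_a?(Twitter::Tweet)
end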
I have a Rails application running on the Puma server. Is there any way to see how many threads the application is currently using?
I was wondering about the same thing a while ago and came upon this issue. The author included the code they ended up using to collect those stats:
module PumaThreadLogger
  def initialize *args
    ret = super *args
    Thread.new do
      while true
        # Every X seconds, write out what the state of this dyno is in a format that Librato understands.
        sleep 5
        thread_count = 0
        backlog = 0
        waiting = 0
        # I don't do the logging or string stuff inside of the mutex. I want to get out of there as fast as possible
        @mutex.synchronize {
          thread_count = @workers.size
          backlog = @todo.size
          waiting = @waiting
        }
        # For some reason, even a single Puma server (not clustered) has two booted ThreadPools.
        # One of them is empty, and the other is actually doing work
        # The check above ignores the empty one
        if (thread_count > 0)
          # It might be cool if we knew the Puma worker index for this worker, but that didn't look easy to me.
          # The good news: By using the PID we can differentiate two different workers on two different dynos with the same name
          # (which might happen if one is shutting down and the other is starting)
          source_name = "#{Process.pid}"
          # If we have a dyno name, prepend it to the source to make it easier to group in the log output
          dyno_name = ENV['DYNO']
          if (dyno_name)
            source_name = "#{dyno_name}." + source_name
          end
          msg = "source=#{source_name} "
          msg += "sample#puma.backlog=#{backlog} sample#puma.active_connections=#{thread_count - waiting} sample#puma.total_threads=#{thread_count}"
          Rails.logger.info msg
        end
      end
    end
    ret
  end
end

module Puma
  class ThreadPool
    prepend PumaThreadLogger
  end
end
This code contains logic that is specific to Heroku, but the core of collecting @workers.size and logging it will work in any environment.
In my Ruby script I am using the celluloid-zmq gem, where I am trying to run evaluate_response asynchronously inside pollers using:
async.evaluate_response(socket.read_multipart)
But if I remove the sleep from the loop, somehow that's not working out; it never reaches the evaluate_response method. If I put a sleep inside the loop, it works perfectly.
require 'celluloid/zmq'

Celluloid::ZMQ.init

module Celluloid
  module ZMQ
    class Socket
      def socket
        @socket
      end
    end
  end
end

class Indefinite
  include Celluloid::ZMQ

  ## Readers
  attr_reader :dealersock, :pullsock, :pollers

  def initialize
    prepare_dealersock and prepare_pullsock and prepare_pollers
  end

  ## prepare DEALER SOCK
  def prepare_dealersock
    @dealersock = DealerSocket.new
    @dealersock.identity = "IDENTITY"
    @dealersock.connect("tcp://localhost:20482")
  end

  ## prepare PULL SOCK
  def prepare_pullsock
    @pullsock = PullSocket.new
    @pullsock.connect("tcp://localhost:20483")
  end

  ## prepare the Pollers
  def prepare_pollers
    @pollers = ZMQ::Poller.new
    @pollers.register_readable(dealersock.socket)
    @pollers.register_readable(pullsock.socket)
  end

  def run!
    loop do
      pollers.poll ## this is a blocking operation; never mind though, we need it
      pollers.readables.each do |socket|
        ## we know socket.read_multipart is a blocking call; this would give Celluloid the chance to run other tasks in the meantime.
        async.evaluate_response(socket.read_multipart)
      end
      ## If you remove the sleep, the async evaluate_response is never executed.
      ## sleep 0.2
    end
  end

  def evaluate_response(message)
    ## Hmm, the code just never reaches here
    puts "got message: #{message}"
    ...
    ...
    ...
    ...
  end
end

## Code is invoked like this
Indefinite.new.run!
Any idea why this is happening?
The question was 100% changed, so my previous answer does not help.
Now, the issues are...
ZMQ::Poller is not part of Celluloid::ZMQ
You are directly using the ffi-rzmq bindings, and not using the Celluloid::ZMQ wrapping, which provides evented & threaded handling of the socket(s).
It would be best to make multiple actors -- one per socket -- or to just use Celluloid::ZMQ directly in one actor, rather than undermining it.
Your actor never gets time to work with the response
This part makes it a duplicate of:
Celluloid async inside ruby blocks does not work
The best answer is to use after or every and not loop ... which is dominating your actor.
You need to either:
Move evaluate_response to another actor.
Move each socket to their own actor.
This code needs to be broken up into several actors to work properly, with a main sleep at the end of the program. But before all that, try using after or every instead of loop.
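For example, a rough sketch using every instead of loop (the 0.1-second interval is arbitrary, and the nonblocking poll(0) assumes the ffi-rzmq poller API):
class Indefinite
  include Celluloid::ZMQ

  def initialize
    # ... same socket and poller setup as before ...
    # Poll on a timer instead of a tight loop, so the actor's mailbox
    # gets a chance to run the queued async.evaluate_response calls.
    every(0.1) { check_sockets }
  end

  def check_sockets
    pollers.poll(0) # nonblocking poll; returns immediately if nothing is readable
    pollers.readables.each do |socket|
      async.evaluate_response(socket.read_multipart)
    end
  end

  def evaluate_response(message)
    puts "got message: #{message}"
  end
end

Indefinite.new
sleep # keep the main thread alive while the actor runs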
Situation:
In a typical cluster setup, I have 5 instances of Mongrel running behind Apache 2.
In one of my initializer files, I schedule a cron task using Rufus::Scheduler which basically sends out a couple of emails.
Problem:
The task runs 5 times, once for each Mongrel instance, and each recipient ends up getting 5 mails (despite the fact that I store logs of each sent mail and check the log before sending). Is it possible that since all 5 instances run the task at the exact same time, they end up reading the email logs before they are written?
I am looking for a solution that will make the tasks run only once. I also have a Starling daemon up and running which can be utilized.
The rooster rails plugin specifically addresses your issue. It uses rufus-scheduler and ensures the environment is loaded only once.
The way I am doing it right now:
1. Try to open a file in exclusive locked mode.
2. When the lock is acquired, check for messages in Starling.
3. If a message exists, another process has already scheduled the job; set the message back on the queue and exit.
4. If no message is found, schedule the job, set the message, and exit.
Here is the code that does it:
starling = MemCache.new("#{Settings[:starling][:host]}:#{Settings[:starling][:port]}")
mutex_filename = "#{RAILS_ROOT}/config/file.lock"
scheduler = Rufus::Scheduler.start_new

# The filelock method, taken from Ruby Cookbook
# This will ensure unblocking of the files
def flock(file, mode)
  success = file.flock(mode)
  if success
    begin
      yield file
    ensure
      file.flock(File::LOCK_UN)
    end
  end
  return success
end

# open_lock method, taken from Ruby Cookbook
# This will create and hold the locks
def open_lock(filename, openmode = "r", lockmode = nil)
  if openmode == 'r' || openmode == 'rb'
    lockmode ||= File::LOCK_SH
  else
    lockmode ||= File::LOCK_EX
  end
  value = nil
  # Kernel's open method gives an IO object, in our case a file
  open(filename, openmode) do |f|
    flock(f, lockmode) do
      begin
        value = yield f
      ensure
        f.flock(File::LOCK_UN) # Comment this line out on Windows.
      end
    end
    return value
  end
end

# The actual scheduler
open_lock(mutex_filename, 'r+') do |f|
  puts f.read
  digest_schedule_message = starling.get("digest_scheduler")
  if digest_schedule_message
    puts "Found digest message in Starling. Releasing lock. '#{Time.now}'"
    puts "Message: #{digest_schedule_message.inspect}"
    # Read the message and set it back, so that other processes can read it too
    starling.set "digest_scheduler", digest_schedule_message
  else
    # Schedule job
    puts "Scheduling digest emails now. '#{Time.now}'"
    scheduler.cron("0 9 * * *") do
      puts "Begin sending digests..."
      WeeklyDigest.new.send_digest!
      puts "Done sending digests."
    end
    # Add message in queue
    puts "Done Scheduling. Sending the message to Starling. '#{Time.now}'"
    starling.set "digest_scheduler", :date => Date.today
  end
end

# Sleep will ensure all instances have gone through their wait-acquire lock-schedule(or not) cycle
# This will ensure that on next reboot, starling won't have any stale messages
puts "Waiting to clear digest messages from Starling."
sleep(20)
puts "All digest messages cleared, proceeding with boot."
starling.get("digest_scheduler")
Why don't you use mod_passenger (Phusion)? I moved from Mongrel to Phusion and it worked perfectly (the switch took less than 5 minutes)!
I am trying to run multiple tasks, each of which accesses the database, and I am trying to run them in separate threads.
I played around and tried allow_concurrency (which I have set to true) and config.threadsafe!, but I get non-deterministic errors; for example, sometimes a class is missing, or a constant...
here is some code
grabbers = get_grabber_name_list
threads = []
grabbers.each { |grabber|
  threads << Thread.new {
    ARGV[0] = grabber
    if (@@last_run_timestamp[grabber.to_sym].blank? || (@@last_run_timestamp[grabber.to_sym] >= AbstractGrabber.aff_net_accounts(grabber, "grab_interval").seconds.ago))
      Rake::Task["aff_net:import:" + grabber].execute
      @@last_run_timestamp.merge!({grabber.to_sym => Time.now})
    end
  }
}
threads.each { |t| t.join }
thanks
I've recently implemented a Rails application that uses threads and made a few discoveries:
First, if you're writing to any arrays or hashes (i.e., complex types) shared between threads, wrap the writes in a mutex. It looks to me like hash and array access may not be thread-safe. It seems unlikely that hash/array element indexing itself isn't thread-safe, but all I know is that after I started protecting writes to shared data structures with a mutex, my problems disappeared.
Second, close your ActiveRecord connection when the thread terminates, otherwise you can end up creating a large number of stale connections. Here's a post about how to do this. I don't know if it still applies for Rails versions > 2.2 but after I started closing connections explicitly, my problems disappeared. The author suggests monkey-patching ActiveRecord to do this automatically but I decided to release connections explicitly in my code.
Here's a sample of code that's working for me:
mutex = Mutex.new
my_array = []
threads = []

1.upto(10) do |i|
  threads << Thread.new {
    begin
      do_some_stuff
      mutex.synchronize {
        # You'd think that each thread would only touch its own personal
        # array element but without a mutex, I run into problems.
        my_array[i] = some_computed_value
      }
    ensure
      ActiveRecord::Base.connection_pool.release_connection
    end
  }
end

threads.each { |t| t.join }
By the way, if you're using threads to take advantage of multi-core CPUs, you'll need to use JRuby. As far as I know, JRuby is the only implementation that can take advantage of native CPU threads. If you use threads so you can do other things while waiting on network connections or some other non-CPU task, this isn't an issue.
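A quick way to see this for yourself (a minimal benchmark sketch; the workload is arbitrary and the timings are illustrative):
require 'benchmark'

# CPU-bound work: on MRI the threaded version is no faster than the
# serial one because of the global interpreter lock; on JRuby the two
# threads can run on separate cores and finish in roughly half the time.
work = -> { 5_000_000.times { Math.sqrt(42) } }

serial = Benchmark.realtime { 2.times { work.call } }
threaded = Benchmark.realtime do
  2.times.map { Thread.new { work.call } }.each(&:join)
end

puts "serial:   #{serial.round(2)}s"
puts "threaded: #{threaded.round(2)}s"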
You should probably do this using background workers. There are a few options for background worker libraries, but my favourite is delayed_job (http://github.com/tobi/delayed_job).
It should be pretty easy to convert the code you posted into background jobs.
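For instance, a rough sketch of the conversion using delayed_job's classic enqueue API (the GrabJob struct is my invention):
# Hypothetical conversion of the grabber loop to delayed_job:
# any object responding to #perform can be enqueued.
GrabJob = Struct.new(:grabber) do
  def perform
    # assumes the Rake tasks are loaded in the worker process
    Rake::Task["aff_net:import:" + grabber].execute
  end
end

get_grabber_name_list.each do |grabber|
  Delayed::Job.enqueue(GrabJob.new(grabber))
end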