Multiple Worker Streams with Sidekiq, Redis, and Icecast in Rails

I'm wondering why I can only have one stream listening on a specific port in my Sidekiq/Redis/Icecast Rails streaming app. My files are as follows:
icecast.xml
<!-- You may have multiple <listen-socket> elements -->
<listen-socket>
  <port>35687</port>
  <shoutcast-mount>/hiphop</shoutcast-mount>
  <!-- <bind-address>127.0.0.1</bind-address> -->
  <!-- <shoutcast-mount>/stream</shoutcast-mount> -->
</listen-socket>
<listen-socket>
  <port>35690</port>
  <shoutcast-mount>/rock</shoutcast-mount>
</listen-socket>
Two workers in Rails:
class RadioWorker
  include Sidekiq::Worker
  sidekiq_options :queue => :radioworker

  def perform(*_args)
    prev_song = nil
    s = Shout.new                # ruby-shout instance
    s.mount = "/rock"            # our mountpoint
    s.charset = "UTF-8"
    s.port = 35690               # the port we've specified earlier
    s.host = 'localhost'         # hostname
    s.user = 'source'            # credentials
    s.pass = 'parolata'
    s.format = Shout::MP3        # format is MP3
    s.description = 'Geek Radio' # an arbitrary name
    s.connect
    # ... (rest of the streaming loop omitted in the question)
  end
end
class TestWorker
  include Sidekiq::Worker
  sidekiq_options :queue => :testworker

  def perform(*_args)
    prev_song = nil
    s = Shout.new                # ruby-shout instance
    s.mount = "/hiphop"          # our mountpoint
    s.charset = "UTF-8"
    s.port = 35687               # the port we've specified earlier
    s.host = 'localhost'         # hostname
    s.user = 'source'            # credentials
    s.pass = 'parolata'
    s.format = Shout::MP3        # format is MP3
    s.description = 'Geek Radio' # an arbitrary name
    s.connect
    # ... (rest of the streaming loop omitted in the question)
  end
end
Both workers have been set to perform_async in config/initializers/sidekiq.rb, and I have the Sidekiq queues defined in config/sidekiq.yml:
:queues:
  - ["radioworker", 1]
  - ["testworker", 1]
Currently, with a single instance of bundle exec sidekiq -C config/sidekiq.yml, it seems I can only listen to one of the two streams. How can I have both running at the same time and be able to listen to them?
Thanks in advance

All I had to do was run two Sidekiq processes: one with the param -q radioworker and one with -q testworker. Sorry if the problem description was vague. I was trying to run two processes with one thread each, not one process with two threads. Thanks for the help.
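For reference, the two commands look something like this (a sketch; the -c 1 concurrency flag is an assumption based on the one-thread-per-process setup described above):

bundle exec sidekiq -q radioworker -c 1
bundle exec sidekiq -q testworker -c 1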

Related

Get error message out of Sidekiq job

I want to get the exception's error message out of a Sidekiq job. When I set the back_trace option to true, it retries my job, but I want to exit the job when an error is raised and get the error message. Even just knowing whether the job ended successfully or failed would be enough.
def perform(text)
  begin
    fail StandardError, 'Error!'
  rescue
    fail 'EEE' # I want to get this error when I call the job
  end
end

# call
NormalJob.perform_async('test')
# I want to get the error here, after the call
If I were you, I would try the sidekiq-status gem. It has several options that can be helpful in such situations.
You can retrieve the status of your worker:
job_id = MyJob.perform_async(*args)
# :queued, :working, :complete or :failed, nil after expiry (30 minutes)
status = Sidekiq::Status::status(job_id)
Sidekiq::Status::queued?   job_id
Sidekiq::Status::working?  job_id
Sidekiq::Status::complete? job_id
Sidekiq::Status::failed?   job_id
You also have options for tracking progress, and for saving and retrieving data associated with a job:
class MyJob
  include Sidekiq::Worker
  include Sidekiq::Status::Worker # Important!

  def perform(*args)
    # your code goes here

    # the common idiom to track progress of your task
    total 100 # by default
    at 5, "Almost done"

    # a way to associate data with your job
    store vino: 'veritas'

    # a way of retrieving said data
    # remember that retrieved data is always String|nil
    vino = retrieve :vino
  end
end
job_id = MyJob.perform_async(*args)
data = Sidekiq::Status::get_all job_id
data # => {status: 'complete', update_time: 1360006573, vino: 'veritas'}
Sidekiq::Status::get job_id, :vino #=> 'veritas'
Sidekiq::Status::at job_id #=> 5
Sidekiq::Status::total job_id #=> 100
Sidekiq::Status::message job_id #=> "Almost done"
Sidekiq::Status::pct_complete job_id #=> 5
Another option is to use Sidekiq batches and their status API. This is what batches allow you to do:
batch = Sidekiq::Batch.new
batch.description = "Batch description (this is optional)"
batch.notify(:email, :to => 'me@example.org')
batch.jobs do
  rows.each { |row| RowWorker.perform_async(row) }
end
puts "Just started Batch #{batch.bid}"

b = Sidekiq::Batch.new(bid) # bid is a method on Sidekiq::Worker that gives access to the Batch ID associated with the job.
b.jobs do
  SomeWorker.perform_async(1)
  sleep 1
  # Uh oh, Sidekiq has finished all outstanding batch jobs
  # and fires the complete message!
  SomeWorker.perform_async(2)
end

status = Sidekiq::Batch::Status.new(bid)
status.total        # jobs in the batch => 98
status.failures     # failed jobs so far => 5
status.pending      # jobs which have not succeeded yet => 17
status.created_at   # => 2012-09-04 21:15:05 -0700
status.complete?    # if all jobs have executed at least once => false
status.join         # blocks until the batch is considered complete; note that some jobs might have failed
status.failure_info # an array of failed jobs
status.data         # a hash of data about the batch which can easily be converted to JSON for javascript usage
Batches can be used out of the box (note that they are a Sidekiq Pro feature).

How to specify markers in YAML file

I have a YAML file that is going to be parsed by two different machines, so I want to specify some sort of markers in the file to indicate which machine has the right to read a specific block.
As an example, I want block 1 to be parsed by machine 1 and block 2 to be parsed by machine 2:
# BLOCK 1 - Machine 1
-
  :id: 1234
  :worker: Foo1
  :opts:
    :ftpaccount: user1
    :limit: 10
# BLOCK 2 - Machine 2
-
  :id: 5678
  :worker: Foo2
  :opts:
    :ftpaccount: user2
    :limit: 10
How can I achieve something like this? How would you implement something similar? Thanks.
Treat the blocks as hash entries with the key being the hostname:
require 'yaml'

yaml = <<EOT
host1:
  # BLOCK 1 - Machine 1
  -
    :id: 1234
    :worker: Foo1
    :opts:
      :ftpaccount: user1
      :limit: 10
host2:
  # BLOCK 2 - Machine 2
  -
    :id: 5678
    :worker: Foo2
    :opts:
      :ftpaccount: user2
      :limit: 10
EOT
config = YAML.load(yaml)
# => {"host1"=>
#      [{:id=>1234,
#        :worker=>"Foo1",
#        :opts=>{:ftpaccount=>"user1", :limit=>10}}],
#     "host2"=>
#      [{:id=>5678,
#        :worker=>"Foo2",
#        :opts=>{:ftpaccount=>"user2", :limit=>10}}]}
At this point you can grab the chunk you need:
config['host1']
# => [{:id=>1234, :worker=>"Foo1", :opts=>{:ftpaccount=>"user1", :limit=>10}}]
config['host2']
# => [{:id=>5678, :worker=>"Foo2", :opts=>{:ftpaccount=>"user2", :limit=>10}}]
You don't even have to hard-code the hostname; you can ask the machine what its name is:
`hostname`.chomp # => "MyHost"
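Putting the two together, a minimal sketch (assuming the YAML above is saved as config.yml):

require 'yaml'

config = YAML.load_file('config.yml')
local_config = config[`hostname`.chomp] # only this machine's block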
Actually, I'd change the YAML a little, so it's a hash of hashes. As is, your YAML returns a hash of arrays of hashes, which, because of the array, makes it more awkward to use:
host1:
  # BLOCK 1 - Machine 1
  :id: 1234
  :worker: Foo1
  :opts:
    :ftpaccount: user1
    :limit: 10
host2:
  # BLOCK 2 - Machine 2
  :id: 5678
  :worker: Foo2
  :opts:
    :ftpaccount: user2
    :limit: 10
Results in:
config = YAML.load(yaml)
# => {"host1"=>
#      {:id=>1234, :worker=>"Foo1", :opts=>{:ftpaccount=>"user1", :limit=>10}},
#     "host2"=>
#      {:id=>5678, :worker=>"Foo2", :opts=>{:ftpaccount=>"user2", :limit=>10}}}
config['host1']
# => {:id=>1234, :worker=>"Foo1", :opts=>{:ftpaccount=>"user1", :limit=>10}}
config['host2']
# => {:id=>5678, :worker=>"Foo2", :opts=>{:ftpaccount=>"user2", :limit=>10}}
Finally, if your YAML file is complex, or long, or has repeated sections, seriously consider writing code that emits that file for you. Ruby makes it really easy to generate the YAML in a very smart way that automatically uses aliases. For instance:
require 'yaml'

SOME_COMMON_DATA = {
  'shared_db_dsn' => 'mysql://user:password@host/db'
}

HOST1 = 'foo.com'
HOST1_DATA = {
  HOST1 => {
    'id' => 1234,
    'worker' => 'Foo1',
    'opts' => {
      'ftpaccount' => 'user1',
      'limit' => 10
    },
    'dsn' => SOME_COMMON_DATA
  }
}

HOST2 = 'bar.com'
HOST2_DATA = {
  HOST2 => {
    'id' => 5678,
    'worker' => 'Foo2',
    'opts' => {
      'ftpaccount' => 'user2',
      'limit' => 10
    },
    'dsn' => SOME_COMMON_DATA
  }
}

data = {
  HOST1 => HOST1_DATA,
  HOST2 => HOST2_DATA,
}

puts data.to_yaml
# >> ---
# >> foo.com:
# >>   foo.com:
# >>     id: 1234
# >>     worker: Foo1
# >>     opts:
# >>       ftpaccount: user1
# >>       limit: 10
# >>     dsn: &1
# >>       shared_db_dsn: mysql://user:password@host/db
# >> bar.com:
# >>   bar.com:
# >>     id: 5678
# >>     worker: Foo2
# >>     opts:
# >>       ftpaccount: user2
# >>       limit: 10
# >>     dsn: *1
Notice how YAML gave the "dsn" data an anchor (&1) and referenced it in the second host's definition using an alias (*1). This can add up to serious space savings, depending on how you define your variables and build the data structure. See "Aliases and Anchors" for more information.
Also, I'd highly recommend avoiding the use of symbols for your hash keys. By doing so your YAML can be easily loaded by other languages, not just Ruby. At that point, your YAML becomes even more useful when building big systems.
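To illustrate the difference, with classic YAML.load behavior (note that newer Psych versions require permitted_classes: [Symbol] before they will load symbols at all):

YAML.load("id: 1234")  # => {"id"=>1234}  string key, loadable anywhere
YAML.load(":id: 1234") # => {:id=>1234}   Ruby symbol, awkward in other languages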
Here's a simple state machine that assembles a string based on the most recent matching comment in the yaml file. The YAML string is then loaded into the parser. If your files are really large, you could easily modify this to use Tempfile or some other IO class.
require 'yaml'

class YAMLSplitter
  attr_reader :flag, :mode, :raw

  def initialize(flag)
    @flag = flag
    @mode = :match
    @raw = ""
  end

  def parse(file)
    File.read(file).each_line do |line|
      process_line(line)
    end
    YAML.load(raw)
  end

  private

  def process_line(line)
    set_match_status(line)
    write_line(line) if match?
  end

  def set_match_status(line)
    if line.start_with?("#")
      if line.match(flag)
        match!
      else
        nomatch!
      end
    end
  end

  def write_line(line)
    puts "WRITE_LINE #{mode.inspect} #{line.inspect}"
    raw << line
  end

  def match?
    mode == :match
  end

  def match!
    @mode = :match
  end

  def nomatch!
    @mode = :nomatch
  end
end
YAML:
---
# machine 1
- 1
- 2
- 3
- 4
# machine 2
- 5
- 6
- 7
- 8
- 9
- 10
- 11
# machine 1
- 12
Execution:
splitter = YAMLSplitter.new('machine 1')
yaml = splitter.parse('test.yml')
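With the file above, only the items under the "# machine 1" comments survive, so:

yaml # => [1, 2, 3, 4, 12]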

Using Smpp in Ruby on Rails

I need to send and receive SMS messages via SMPP. I registered a number and received a systemID and password, but the connection fails. I added the 'ruby-smpp' gem to the project and used its gateway example, changing only the systemID and password values.
In the logs:
<- (BindTransceiver) len = 37 cmd = 9 status = 0 seq = 1364360797 (<systemID><password>)
Hex dump follows:
<- 00000000: 0000 0025 0000 0009 0000 0000 5152 7e5d | ...% ........ QR ~]
<- 00000010: 3531 3639 3030 0068 4649 6b4b 7d7d 7a00 | <systemID>.<password>.
<- 00000020: 0034 0001 00 | .4 ...
Starting enquire link timer (with 10s interval)
Delegate: transceiver unbound
In the console:
Connecting to SMSC ...
MT: Disconnected. Reconnecting in 5 seconds ..
MT: Disconnected. Reconnecting in 5 seconds ..
MT: Disconnected. Reconnecting in 5 seconds ..
MT: Disconnected. Reconnecting in 5 seconds ..
MT: Disconnected. Reconnecting in 5 seconds ..
Please tell me: do I maybe need to add or change something else in the config? Also, as I understand it, an SMPP connection works only with a specific IP address, but the logs on the server and on the local machine are the same.
class Gateway
  include KeyboardHandler

  # MT id counter.
  @@mt_id = 0

  # expose SMPP transceiver's send_mt method
  def self.send_mt(*args)
    @@mt_id += 1
    @@tx.send_mt(@@mt_id, *args)
  end

  def logger
    Smpp::Base.logger
  end

  def start(config)
    # The transceiver sends MT messages to the SMSC. It needs a storage with Hash-like
    # semantics to map SMSC message IDs to your own message IDs.
    pdr_storage = {}

    # Run EventMachine in a loop so we can reconnect when the SMSC drops our connection.
    puts "Connecting to SMSC..."
    loop do
      EventMachine::run do
        @@tx = EventMachine::connect(
          config[:host],
          config[:port],
          Smpp::Transceiver,
          config,
          self # delegate that will receive callbacks on MOs and DRs and other events
        )
        print "MT: "
        $stdout.flush
        # Start consuming MT messages (in this case, from the console)
        # Normally, you'd hook this up to a message queue such as Starling
        # or ActiveMQ via STOMP.
        EventMachine::open_keyboard(KeyboardHandler)
      end
      puts "Disconnected. Reconnecting in 5 seconds.."
      sleep 5
    end
  end

  # ruby-smpp delegate methods
  def mo_received(transceiver, pdu)
    logger.info "Delegate: mo_received: from #{pdu.source_addr} to #{pdu.destination_addr}: #{pdu.short_message}"
  end

  def delivery_report_received(transceiver, pdu)
    logger.info "Delegate: delivery_report_received: ref #{pdu.msg_reference} stat #{pdu.stat}"
  end

  def message_accepted(transceiver, mt_message_id, pdu)
    logger.info "Delegate: message_accepted: id #{mt_message_id} smsc ref id: #{pdu.message_id}"
  end

  def message_rejected(transceiver, mt_message_id, pdu)
    logger.info "Delegate: message_rejected: id #{mt_message_id} smsc ref id: #{pdu.message_id}"
  end

  def bound(transceiver)
    logger.info "Delegate: transceiver bound"
  end

  def unbound(transceiver)
    logger.info "Delegate: transceiver unbound"
    EventMachine::stop_event_loop
  end
end
module KeyboardHandler
  include EventMachine::Protocols::LineText2

  def receive_line(data)
    sender, receiver, *body_parts = data.split
    unless sender && receiver && body_parts.size > 0
      puts "Syntax: <sender> <receiver> <message body>"
    else
      body = body_parts.join(' ')
      puts "Sending MT from #{sender} to #{receiver}: #{body}"
      SampleGateway.send_mt(sender, receiver, body)
    end
    prompt
  end

  def prompt
    print "MT: "
    $stdout.flush
  end
end
In the initializers directory:
require 'eventmachine'
require 'smpp'
LOGFILE = Rails.root + "log/sms_gateway.log"
Smpp::Base.logger = Logger.new(LOGFILE)
In the script directory:
puts "Starting SMS Gateway. Please check the log at #{LOGFILE}"
config = {
  :host => '127.0.0.1',
  :port => 6000,
  :system_id => '<SystemID>',
  :password => '<Password>',
  :system_type => '', # default given according to SMPP 3.4 Spec
  :interface_version => 52,
  :source_ton => 0,
  :source_npi => 1,
  :destination_ton => 1,
  :destination_npi => 1,
  :source_address_range => '',
  :destination_address_range => '',
  :enquire_link_delay_secs => 10
}
gw = Gateway.new
gw.start(config)
This file lives in script/ and is run through rails runner.
Problem solved. Initially the host and port were specified incorrectly, since the SMPP server was not on the same machine as the Rails application. As for encoding, sending text typed in the Russian layout should be done as:
message = text.encode("UTF-16BE").force_encoding("BINARY")
Gateway.send_mt(sender, receiver, message, data_coding: 8)
And to receive it in the right form:
message = pdu.short_message.force_encoding('UTF-16BE').encode('UTF-8')

Work with two separate redis instances with sidekiq?

Good afternoon,
I have two separate, but related apps. They should both have their own background queues (read: separate Sidekiq & Redis processes). However, I'd like to occasionally be able to push jobs onto app2's queue from app1.
From a simple queue/push perspective, it would be easy to do this if app1 did not have an existing Sidekiq/Redis stack:
# In a process, far far away

# Configure client
Sidekiq.configure_client do |config|
  config.redis = { :url => 'redis://redis.example.com:7372/12', :namespace => 'mynamespace' }
end

# Push jobs without class definition
Sidekiq::Client.push('class' => 'Example::Workers::Trace', 'args' => ['hello!'])

# Push jobs overriding defaults
Sidekiq::Client.push('queue' => 'example', 'retry' => 3, 'class' => 'Example::Workers::Trace', 'args' => ['hello!'])
However given that I would already have called a Sidekiq.configure_client and Sidekiq.configure_server from app1, there's probably a step in between here where something needs to happen.
Obviously I could just take the serialization and normalization code straight from inside Sidekiq and manually push onto app2's redis queue, but that seems like a brittle solution. I'd like to be able to use the Client.push functionality.
I suppose my ideal solution would be something like:
SidekiqTWO.configure_client { remote connection..... }
SidekiqTWO::Client.push(job....)
Or even:
$redis_remote = remote_connection.....
Sidekiq::Client.push(job, $redis_remote)
Obviously a bit facetious, but that's my ideal use case.
Thanks!
One thing to note: according to the FAQ, "The Sidekiq message format is quite simple and stable: it's just a Hash in JSON format" (emphasis mine). I don't think sending JSON to Sidekiq is too brittle to do. Especially when you want fine-grained control over which Redis instance you send the jobs to, as in the OP's situation, I'd probably just write a little wrapper that would let me indicate a Redis instance along with the job being enqueued, as sketched below.
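A minimal sketch of such a wrapper, reusing the Sidekiq::Client.new(pool) pattern that appears in the answers further down (the URL, pool size, and helper name are assumptions):

require 'sidekiq'
require 'connection_pool'

# Hypothetical pool pointing at app2's Redis instance.
APP2_POOL = ConnectionPool.new(size: 5, timeout: 2) do
  Redis.new(url: 'redis://redis.example.com:7372/12')
end

# Push a job hash to app2's queue instead of the local one.
def push_to_app2(klass, args, queue = 'default')
  Sidekiq::Client.new(APP2_POOL).push(
    'class' => klass,
    'args'  => args,
    'queue' => queue
  )
end

push_to_app2('Example::Workers::Trace', ['hello!'])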
For Kevin Bedell's more general situation of round-robinning jobs across Redis instances, I'd imagine you don't want to control which Redis instance is used; you just want to enqueue and have the distribution managed automatically. It looks like only one person has requested this so far, and they came up with a solution that uses Redis::Distributed:
datastore_config = YAML.load(ERB.new(File.read(File.join(Rails.root, "config", "redis.yml"))).result)
datastore_config = datastore_config["defaults"].merge(datastore_config[::Rails.env])

if datastore_config[:host].is_a?(Array)
  if datastore_config[:host].length == 1
    datastore_config[:host] = datastore_config[:host].first
  else
    datastore_config = datastore_config[:host].map do |host|
      host_has_port = host =~ /:\d+\z/
      if host_has_port
        "redis://#{host}/#{datastore_config[:db] || 0}"
      else
        "redis://#{host}:#{datastore_config[:port] || 6379}/#{datastore_config[:db] || 0}"
      end
    end
  end
end
Sidekiq.configure_server do |config|
  config.redis = ::ConnectionPool.new(:size => Sidekiq.options[:concurrency] + 2, :timeout => 2) do
    redis = if datastore_config.is_a? Array
      Redis::Distributed.new(datastore_config)
    else
      Redis.new(datastore_config)
    end
    Redis::Namespace.new('resque', :redis => redis)
  end
end
Another thing to consider in your quest for high availability and failover is Sidekiq Pro, which includes reliability features: "The Sidekiq Pro client can withstand transient Redis outages. It will enqueue jobs locally upon error and attempt to deliver those jobs once connectivity is restored." Since Sidekiq is for background processes anyway, a short delay if a Redis instance goes down should not affect your application. If one of your two Redis instances goes down and you're using round robin, you've still lost some jobs unless you're using this feature.
As carols10cents says, it's pretty simple, but as I always like to encapsulate the capability and be able to reuse it in other projects, I updated an idea from a Hotel Tonight blog post. The following solution improves on Hotel Tonight's, which does not survive the Rails 4.1 & Spring preloader.
Currently I make do with adding the following files to lib/remote_sidekiq/:
remote_sidekiq.rb
class RemoteSidekiq
  class_attribute :redis_pool
end
remote_sidekiq_worker.rb
require 'sidekiq'
require 'sidekiq/client'

module RemoteSidekiqWorker
  def client
    pool = RemoteSidekiq.redis_pool || Thread.current[:sidekiq_via_pool] || Sidekiq.redis_pool
    Sidekiq::Client.new(pool)
  end

  def push(worker_name, attrs = [], queue_name = "default")
    client.push('args' => attrs, 'class' => worker_name, 'queue' => queue_name)
  end
end
You need to create an initializer that sets redis_pool:
config/initializers/remote_sidekiq.rb
url = ENV.fetch("REDISCLOUD_URL")
namespace = 'primary'
redis = Redis::Namespace.new(namespace, redis: Redis.new(url: url))
# ConnectionPool expects an Integer size; ENV values are strings
RemoteSidekiq.redis_pool = ConnectionPool.new(size: (ENV['MAX_THREADS'] || 6).to_i) { redis }
EDIT by Aleks:
In newer versions of Sidekiq, instead of the lines:
redis = Redis::Namespace.new(namespace, redis: Redis.new(url: url))
RemoteSidekiq.redis_pool = ConnectionPool.new(size: (ENV['MAX_THREADS'] || 6).to_i) { redis }
use lines:
redis_remote_options = {
  namespace: "yournamespace",
  url: ENV.fetch("REDISCLOUD_URL")
}
RemoteSidekiq.redis_pool = Sidekiq::RedisConnection.create(redis_remote_options)
You can then simply include the RemoteSidekiqWorker module wherever you want. Job done!
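For instance, a minimal usage sketch (the class, worker name, and arguments here are placeholders):

class AnyService
  include RemoteSidekiqWorker
end

AnyService.new.push('TraceWorker', ['hello!'], 'default')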
**** FOR LARGER ENVIRONMENTS ****
Adding in RemoteWorker models brings extra benefits:
You can reuse the RemoteWorkers everywhere, including in the system that has access to the target Sidekiq workers. This is transparent to the caller. To use the "RemoteWorkers" form within the target Sidekiq system itself, simply do not use an initializer, as it will default to using the local Sidekiq client.
Using RemoteWorkers ensures correct arguments are always sent in (the code = documentation).
Scaling up by creating more complicated Sidekiq architectures is transparent to the caller.
Here is an example RemoteWorker
class RemoteTraceWorker
  include RemoteSidekiqWorker
  include ActiveModel::Model

  attr_accessor :message
  validates :message, presence: true

  def perform_async
    if valid?
      push(worker_name, worker_args)
    else
      raise ActiveModel::StrictValidationFailed, errors.full_messages
    end
  end

  private

  def worker_name
    :TraceWorker.to_s
  end

  def worker_args
    [message]
  end
end
I came across this and ran into some issues because I'm using ActiveJob, which complicates how messages are read out of the queue.
Building on ARO's answer, you will still need the redis_pool setup:
remote_sidekiq.rb
class RemoteSidekiq
  class_attribute :redis_pool
end
config/initializers/remote_sidekiq.rb
url = ENV.fetch("REDISCLOUD_URL")
namespace = 'primary'
redis = Redis::Namespace.new(namespace, redis: Redis.new(url: url))
# ConnectionPool expects an Integer size; ENV values are strings
RemoteSidekiq.redis_pool = ConnectionPool.new(size: (ENV['MAX_THREADS'] || 6).to_i) { redis }
Now instead of the worker we'll create an ActiveJob Adapter to queue the request:
lib/active_job/queue_adapters/remote_sidekiq_adapter.rb
require 'sidekiq'

module ActiveJob
  module QueueAdapters
    class RemoteSidekiqAdapter
      def enqueue(job)
        # Sidekiq::Client does not support symbols as keys
        job.provider_job_id = client.push \
          "class"   => ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper,
          "wrapped" => job.class.to_s,
          "queue"   => job.queue_name,
          "args"    => [job.serialize]
      end

      def enqueue_at(job, timestamp)
        job.provider_job_id = client.push \
          "class"   => ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper,
          "wrapped" => job.class.to_s,
          "queue"   => job.queue_name,
          "args"    => [job.serialize],
          "at"      => timestamp
      end

      def client
        @client ||= ::Sidekiq::Client.new(RemoteSidekiq.redis_pool)
      end
    end
  end
end
I can use the adapter to queue the events now:
require 'active_job/queue_adapters/remote_sidekiq_adapter'

class RemoteJob < ActiveJob::Base
  self.queue_adapter = :remote_sidekiq
  queue_as :default

  def perform(_event_name, _data)
    fail "
      This job should not run here; intended to hook into
      ActiveJob and run in another system
    "
  end
end
I can now queue the job using the normal ActiveJob api. Whatever app reads this out of the queue will need to have a matching RemoteJob available to perform the action.
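For example (the event name and payload here are placeholders):

RemoteJob.perform_later('user_signed_up', 'user_id' => 42)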

Rails backgroundRB plugin need to schedule it and queue to database for persistancy

I'm trying to do the following:
Run a worker and a method within it every 15 minutes
Have a log of the job's last runtime in the database table bdrd_job_queue
What I've done:
I have a schedule that runs every 15 minutes in my backgroundrb.yml file.
The method call has a persistent_job.finish! call, but it's not working, because the persistent_job object is nil.
How can I ensure it's logged in the DB, but still automatically
scheduled from backgroundRB.yml?
I was finally able to do it.
The workaround is to schedule a task that will queue it to the database, scheduled to run right away.
In your worker ...
class NotificationWorker < BackgrounDRb::MetaWorker
  set_worker_name :notification_worker

  def create(args = nil)
  end

  def queue_notify_changes(args = nil)
    BdrbJobQueue.insert_job(:worker_name => 'notification_worker',
                            :worker_method => 'notify_new_changes_DAEMON',
                            :args => 'hello_world',
                            :scheduled_at => Time.now.utc,
                            :job_key => 'email_changes_notification_task')
  end

  def notify_new_changes_DAEMON
    # Do incredibly cool stuff here
  end
end
In the config file backgroundrb.yml
---
:backgroundrb:
  :ip: 0.0.0.0
  :port: 11006
  :environment: production
  :log: foreground
  :debug_log: true
  :persistent_disabled: false
  :persistent_delay: 10

:schedules:
  :notification_worker:
    :queue_notify_changes:
      :trigger_args: 0 0 0 * * *
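Note that trigger_args takes a cron-style schedule with a leading seconds field, so 0 0 0 * * * fires once a day at midnight. For the every-15-minutes requirement in the question, something like the following should work (an assumption; verify against your backgroundrb version):

:trigger_args: 0 0,15,30,45 * * * *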
