Good afternoon,
I have two separate, but related apps. They should both have their own background queues (read: separate Sidekiq & Redis processes). However, I'd like to occasionally be able to push jobs onto app2's queue from app1.
From a simple queue/push perspective, it would be easy to do this if app1 did not have an existing Sidekiq/Redis stack:
# In a process, far far away

# Configure client
Sidekiq.configure_client do |config|
  config.redis = { :url => 'redis://redis.example.com:7372/12', :namespace => 'mynamespace' }
end

# Push jobs without class definition
Sidekiq::Client.push('class' => 'Example::Workers::Trace', 'args' => ['hello!'])

# Push jobs, overriding defaults
Sidekiq::Client.push('queue' => 'example', 'retry' => 3, 'class' => 'Example::Workers::Trace', 'args' => ['hello!'])
However, given that I would already have called Sidekiq.configure_client and Sidekiq.configure_server from app1, there's probably a step in between where something needs to happen.
Obviously I could just take the serialization and normalization code straight from inside Sidekiq and manually push onto app2's redis queue, but that seems like a brittle solution. I'd like to be able to use the Client.push functionality.
I suppose my ideal solution would be something like:
SidekiqTWO.configure_client { remote connection..... }
SidekiqTWO::Client.push(job....)
Or even:
$redis_remote = remote_connection.....
Sidekiq::Client.push(job, $redis_remote)
Obviously a bit facetious, but that's my ideal use case.
Thanks!
So one thing to note is that according to the FAQ, "The Sidekiq message format is quite simple and *stable*: it's just a Hash in JSON format." Emphasis mine-- I don't think sending JSON to sidekiq is too brittle to do. Especially when you want fine-grained control over which Redis instance you send the jobs to, as in the OP's situation, I'd probably just write a little wrapper that would let me indicate a Redis instance along with the job being enqueued.
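For example, a minimal sketch of such a wrapper (assuming Sidekiq 3-6, where Sidekiq::Client.new accepts a ConnectionPool; the URL and pool size are placeholders):

require 'sidekiq'
require 'redis'
require 'connection_pool'

# A pool pointing at app2's Redis, separate from app1's global Sidekiq config
APP2_REDIS = ConnectionPool.new(size: 5, timeout: 2) do
  Redis.new(url: 'redis://redis.example.com:7372/12')
end

module App2Queue
  # Push a job hash to app2 without touching Sidekiq.configure_client
  def self.push(item)
    Sidekiq::Client.new(APP2_REDIS).push(item)
  end
end

App2Queue.push('class' => 'Example::Workers::Trace', 'args' => ['hello!'])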
For Kevin Bedell's more general situation of round-robining jobs into Redis instances, I'd imagine you don't want to control which Redis instance is used-- you just want to enqueue and have the distribution be managed automatically. It looks like only one person has requested this so far, and they came up with a solution that uses Redis::Distributed:
# Build the Redis connection config from config/redis.yml
datastore_config = YAML.load(ERB.new(File.read(File.join(Rails.root, "config", "redis.yml"))).result)
datastore_config = datastore_config["defaults"].merge(datastore_config[::Rails.env])

# If several hosts are configured, turn them into an array of redis:// URLs
# so they can be handed to Redis::Distributed below
if datastore_config[:host].is_a?(Array)
  if datastore_config[:host].length == 1
    datastore_config[:host] = datastore_config[:host].first
  else
    datastore_config = datastore_config[:host].map do |host|
      host_has_port = host =~ /:\d+\z/
      if host_has_port
        "redis://#{host}/#{datastore_config[:db] || 0}"
      else
        "redis://#{host}:#{datastore_config[:port] || 6379}/#{datastore_config[:db] || 0}"
      end
    end
  end
end

Sidekiq.configure_server do |config|
  config.redis = ::ConnectionPool.new(:size => Sidekiq.options[:concurrency] + 2, :timeout => 2) do
    # Redis::Distributed shards keys across the array of URLs
    redis = if datastore_config.is_a? Array
      Redis::Distributed.new(datastore_config)
    else
      Redis.new(datastore_config)
    end
    Redis::Namespace.new('resque', :redis => redis)
  end
end
Another thing to consider in your quest to get high-availability and fail-over is to get Sidekiq Pro which includes reliability features: "The Sidekiq Pro client can withstand transient Redis outages. It will enqueue jobs locally upon error and attempt to deliver those jobs once connectivity is restored." Since sidekiq is for background processes anyway, a short delay if a Redis instance goes down should not affect your application. If one of your two Redis instances goes down and you're using round robin, you've still lost some jobs unless you're using this feature.
As carols10cents says, it's pretty simple, but since I always like to encapsulate the capability and be able to reuse it in other projects, I updated an idea from a Hotel Tonight blog post. The following solution improves upon Hotel Tonight's, which does not survive Rails 4.1 & the Spring preloader.
Currently I make do with adding the following files to lib/remote_sidekiq/:
remote_sidekiq.rb
class RemoteSidekiq
  class_attribute :redis_pool
end
remote_sidekiq_worker.rb
require 'sidekiq'
require 'sidekiq/client'
module RemoteSidekiqWorker
  def client
    pool = RemoteSidekiq.redis_pool || Thread.current[:sidekiq_via_pool] || Sidekiq.redis_pool
    Sidekiq::Client.new(pool)
  end

  def push(worker_name, attrs = [], queue_name = "default")
    client.push('args' => attrs, 'class' => worker_name, 'queue' => queue_name)
  end
end
You need to create an initializer that sets redis_pool:
config/initializers/remote_sidekiq.rb
url = ENV.fetch("REDISCLOUD_URL")
namespace = 'primary'
redis = Redis::Namespace.new(namespace, redis: Redis.new(url: url))
RemoteSidekiq.redis_pool = ConnectionPool.new(size: Integer(ENV['MAX_THREADS'] || 6)) { redis }
EDIT by Aleks:
In newer versions of Sidekiq, instead of the lines:
redis = Redis::Namespace.new(namespace, redis: Redis.new(url: url))
RemoteSidekiq.redis_pool = ConnectionPool.new(size: Integer(ENV['MAX_THREADS'] || 6)) { redis }
use lines:
redis_remote_options = {
  namespace: "yournamespace",
  url: ENV.fetch("REDISCLOUD_URL")
}
RemoteSidekiq.redis_pool = Sidekiq::RedisConnection.create(redis_remote_options)
You can then simply include the RemoteSidekiqWorker module wherever you want. Job done!
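For example (TraceNotifier is a hypothetical caller class; the worker name must match a worker defined in the target Sidekiq system):

class TraceNotifier
  include RemoteSidekiqWorker
end

# Pushes a 'TraceWorker' job onto the remote 'default' queue
TraceNotifier.new.push('TraceWorker', ['hello!'], 'default')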
**** FOR LARGER ENVIRONMENTS ****
Adding in RemoteWorker Models adds extra benefits:
You can reuse the RemoteWorkers everywhere, including in the system that has access to the target sidekiq workers. This is transparent to the caller. To use the RemoteWorkers from within the target sidekiq system, simply do not use an initializer; it will then default to using the local Sidekiq client.
Using RemoteWorkers ensures correct arguments are always sent in (the code = documentation).
Scaling up by creating more complicated Sidekiq architectures is transparent to the caller.
Here is an example RemoteWorker
class RemoteTraceWorker
  include RemoteSidekiqWorker
  include ActiveModel::Model

  attr_accessor :message
  validates :message, presence: true

  def perform_async
    if valid?
      push(worker_name, worker_args)
    else
      raise ActiveModel::StrictValidationFailed, errors.full_messages
    end
  end

  private

  def worker_name
    :TraceWorker.to_s
  end

  def worker_args
    [message]
  end
end
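Calling it then looks just like a local worker (assuming a TraceWorker is defined in the target Sidekiq system):

RemoteTraceWorker.new(message: 'something happened').perform_async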
I came across this and ran into some issues because I'm using ActiveJob, which complicates how messages are read out of the queue.
Building on ARO's answer, you will still need the redis_pool setup:
remote_sidekiq.rb
class RemoteSidekiq
  class_attribute :redis_pool
end
config/initializers/remote_sidekiq.rb
url = ENV.fetch("REDISCLOUD_URL")
namespace = 'primary'
redis = Redis::Namespace.new(namespace, redis: Redis.new(url: url))
RemoteSidekiq.redis_pool = ConnectionPool.new(size: Integer(ENV['MAX_THREADS'] || 6)) { redis }
Now instead of the worker we'll create an ActiveJob Adapter to queue the request:
lib/active_job/queue_adapters/remote_sidekiq_adapter.rb
require 'sidekiq'

module ActiveJob
  module QueueAdapters
    class RemoteSidekiqAdapter
      def enqueue(job)
        # Sidekiq::Client does not support symbols as keys
        job.provider_job_id = client.push \
          "class" => ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper,
          "wrapped" => job.class.to_s,
          "queue" => job.queue_name,
          "args" => [ job.serialize ]
      end

      def enqueue_at(job, timestamp)
        job.provider_job_id = client.push \
          "class" => ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper,
          "wrapped" => job.class.to_s,
          "queue" => job.queue_name,
          "args" => [ job.serialize ],
          "at" => timestamp
      end

      def client
        @client ||= ::Sidekiq::Client.new(RemoteSidekiq.redis_pool)
      end
    end
  end
end
I can use the adapter to queue the events now:
require 'active_job/queue_adapters/remote_sidekiq_adapter'

class RemoteJob < ActiveJob::Base
  self.queue_adapter = :remote_sidekiq
  queue_as :default

  def perform(_event_name, _data)
    fail "
      This job should not run here; intended to hook into
      ActiveJob and run in another system
    "
  end
end
I can now queue the job using the normal ActiveJob api. Whatever app reads this out of the queue will need to have a matching RemoteJob available to perform the action.
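For example, from the sending app (the event name and payload here are illustrative):

RemoteJob.perform_later('user_signed_up', 'user_id' => 42)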
I have a rails app where an action never finishes and then times out.
The flow is illustrated below:
1. My Rails app's action is called
2. The action POSTs some data to another app
3. The other app needs something to complete the computation and calls a different action of the Rails app than the first
4. The other app gets a response and finishes the computation
5. The other app responds to the Rails app's POST request
6. The view is rendered accordingly
Now the issue: the other app never gets a response from the main app. After the Rails app's request times out, however, the response is sent (too late, of course), so I think it is somehow queued.
I don't understand how to fix this. I use Rails 5 and Puma, which should be able to handle parallel calls. It's also not a local issue; the same happens in prod.
I use the recommended puma.rb config from Heroku:
workers Integer(ENV['WEB_CONCURRENCY'] || 2)
threads_count = Integer(ENV['RAILS_MAX_THREADS'] || 5)
threads threads_count, threads_count
preload_app!
rackup DefaultRackup
port ENV['PORT'] || 3000
environment ENV['RACK_ENV'] || 'development'
on_worker_boot do
  # Worker specific setup for Rails 4.1+
  # See: https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server#on-worker-boot
  ActiveRecord::Base.establish_connection
end
What do I do to fix this queueing?
Controller:
# New method
def live_preview_page
  preview_locale = params[:preview_locale]
  date = params[:date] # The date to preview
  page_id = params[:id]
  return if preview_locale.nil? || preview_locale =~ /not/ || date.nil?
  all_templates = Template.all.order('name ASC') # Maybe move to render_live_editor_page
  if date == "all"
    active_modules = @page.page_modules.order(rank: :asc)
  else
    active_modules = @page.page_modules.order(rank: :asc).to_a.valid_for(date: date.to_date)
    puts "Active modules: #{active_modules.count}"
  end
  active_modules_json = active_modules.each do |content_module|
    content_module.body = YAML.load(content_module.body).to_json
  end
  response = helpers.render_preview(active_modules, all_templates, preview_locale)
  renderer = ContentRenderer.new
  actionController = ActionController::Base.new
  rendered_helper = actionController.render_to_string(
    partial: '/pages/preview-helper-snippet', locals: {
      all_templates: all_templates, # For select when creating new modules
      modulesData: active_modules_json, # For rendering the JSON containing the data for the editor
      current_page: @page.id,
      localeLinks: renderer.generateStgPreviewURLs(SettingService.get_named_locales, @page.id),
      locale: preview_locale,
      all_locales: SettingService.locales_for_live_editor,
      all_sites_and_locales: SettingService.get_sites_and_locales
    })
  proxy_service = ProxyService.new
  proxy_service.get_page do |error, page_wrapper|
    # Note: Issue is that Vapor app generates warnings inline template: encountered \r in middle of line, treated as a mere space
    rendered_body_with_helper = response.body.force_encoding("UTF-8") + rendered_helper
    decorated_page = page_wrapper.gsub("__WIDGET__", rendered_body_with_helper)
    render inline: decorated_page
    return
  end
end
Helper
def render_preview(active_modules, all_templates, preview_locale)
  req = Request.new
  preview_body = {
    modules: active_modules,
    templates: all_templates,
    sites: SettingService.get_sites,
    configuration: {
      locale: preview_locale,
      site: "DE"
    }
  }
  req.send_request(
    url: "#{ENV["RENDER_SERVICE_URL"]}/preview",
    body: preview_body,
    options: {
      type: :post,
      json: true,
      username: ENV["RENDER_SERVICE_BASIC_AUTH_USERNAME"],
      password: ENV["RENDER_SERVICE_BASIC_AUTH_PASSWORD"]
    }
  ) do |response_code, response|
    return response
  end
end
Request is just a thin wrapper
require "uri"
require "net/http"
class Request
# Yields resonse_code (int), response
# Parameters besides url: are optional
def send_request(url:, body: {}, header: {}, options: {})
uri = URI.parse(url)
http = Net::HTTP.new(uri.host, uri.port)
if options.key? :type
case options[:type]
when :get
request = Net::HTTP::Get.new(uri.request_uri, header)
when :post
request = Net::HTTP::Post.new(uri.request_uri, header)
end
else
# Default is GET
request = Net::HTTP::Get.new(uri.request_uri, header)
end
if options.key?(:username) && options.key?(:password)
request.basic_auth options[:username], options[:password]
end
unless body.class == String
body = body.to_json.to_s
end
request.body = body unless body.empty?
puts request.body
# SSL is default
if options.key? :ssl
http.use_ssl = options[:ssl]
else
http.use_ssl = Rails.configuration.force_ssl
#http.verify_mode = OpenSSL::SSL::VERIFY_NONE
end
if options.key? :json
request.add_field("Content-Type", "application/json")
end
response = http.request(request)
yield response.code.to_i, response
end
end
The following answer is (probably) not the answer you want - but it's the answer you need:
The best way to fix this is to avoid the loop in the request/response logic (where the rails app calls itself through the other app).
Concurrency might help delay the onset of the issue, but the issue will always occur as long as the loop exists.
For example, assume you have 100 requests from clients to the Rails app.
Rails will call the other app and the other app's request will be queued as request number 101.
This can be solved with 100 threads (for example, 10 workers with 10 threads each)...
But what will your app do with 200 client requests?
This cycle is endless: the more clients you have, the more concurrency you require before you experience a DoS.
The only solution is to avoid the loop to begin with.
Either break it up into 3 apps or (better yet) avoid dependencies between microservices.
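A rough sketch of what breaking the loop could look like (all names here are illustrative, not the OP's actual code): the action only enqueues the work and returns immediately, and the other app reports back to a separate callback endpoint instead of being awaited inside the original request.

class PreviewsController < ApplicationController
  # Step 1: respond immediately; no other-app round trip inside this request
  def create
    RenderRequestJob.perform_later(params[:id]) # hypothetical background job
    head :accepted
  end

  # Step 2: the other app calls back here on an independent request,
  # so no thread sits blocked waiting for it
  def callback
    PreviewResult.create!(page_id: params[:id], body: request.raw_post) # hypothetical model
    head :ok
  end
end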
I am trying to set up the Zendesk API in my app. I have decided to go with the API client that was built by Zendesk.
I have set up the initializer object to load the client.
config/initializers/zendesk.rb
require 'zendesk_api'
client = ZendeskAPI::Client.new do |config|
  # Mandatory:
  config.url = Rails.application.secrets[:zendesk][:url]

  # Basic / Token Authentication
  config.username = Rails.application.secrets[:zendesk][:username]
  config.token = Rails.application.secrets[:zendesk][:token]

  # Optional:

  # Retry uses middleware to notify the user
  # when hitting the rate limit, sleep automatically,
  # then retry the request.
  config.retry = true

  # Logger prints to STDERR by default, to e.g. print to stdout:
  require 'logger'
  config.logger = Logger.new(STDOUT)

  # Changes Faraday adapter
  # config.adapter = :patron

  # Merged with the default client options hash
  # config.client_options = { :ssl => false }

  # When getting the error 'hostname does not match the server certificate'
  # use the API at https://yoursubdomain.zendesk.com/api/v2
end
This is pretty much copy-pasted from the site, but I have decided on using the token + username combination.
I then created a service object that I pass a JSON object to and have it construct tickets. This service object is called from a controller.
app/services/zendesk_notifier.rb
class ZendeskNotifier
  attr_reader :data

  def initialize(data)
    @data = data
  end

  def create_ticket
    options = { :comment => { :value => data[:reasons] }, :priority => "urgent" }
    if for_operations?
      options[:subject] = "Ops to get additional info for CC"
      options[:requester] = { :email => 'originations@testing1.com' }
    elsif school_in_usa_or_canada?
      options[:subject] = "SRM to communicate with student"
      options[:requester] = { :email => 'srm@testing2.com' }
    else
      options[:subject] = "SRM to communicate with student"
      options[:requester] = { :email => 'srm_row@testing3.com' }
    end
    ZendeskAPI::Ticket.create!(client, options)
  end

  private

  def for_operations?
    data[:delegate] == 1
  end

  def school_in_usa_or_canada?
    ["US", "CA"].include?(data[:campus_country])
  end
end
But now I am getting
NameError - undefined local variable or method `client' for #<ZendeskNotifier:0x007fdc7e5882b8>:
app/services/zendesk_notifier.rb:20:in `create_ticket'
app/controllers/review_queue_applications_controller.rb:46:in `post_review'
I thought that the client was the same one defined in my config initializer. Somehow I think this is a different object now. I have tried looking at their documentation for more information, but I am lost as to what this is.
If you want to use the client that is defined in the initializer you would need to make it global by changing it to $client. Currently you have it set up as a local variable.
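For example (the same configuration block as in the question, just assigned to a global variable):

# config/initializers/zendesk.rb
$client = ZendeskAPI::Client.new do |config|
  config.url = Rails.application.secrets[:zendesk][:url]
  config.username = Rails.application.secrets[:zendesk][:username]
  config.token = Rails.application.secrets[:zendesk][:token]
end

# app/services/zendesk_notifier.rb
ZendeskAPI::Ticket.create!($client, options)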
I used a slightly different way of initializing the client, copying from this example rails app using the standard Zendesk API gem:
https://github.com/devarispbrown/zendesk_help_rails/blob/master/app/controllers/application_controller.rb
As danielrsmith noted, the client variable is out of scope. You could instead have an initializer like this:
config/initializers/zendesk_client.rb:
class ZendeskClient < ZendeskAPI::Client
  def self.instance
    @instance ||= new do |config|
      config.url = Rails.application.secrets[:zendesk][:url]
      config.username = Rails.application.secrets[:zendesk][:username]
      config.token = Rails.application.secrets[:zendesk][:token]
      config.retry = true
      config.logger = Logger.new(STDOUT)
    end
  end
end
Then return the client elsewhere by client = ZendeskClient.instance (abridged for brevity):
app/services/zendesk_notifier.rb:
class ZendeskNotifier
  attr_reader :data

  def initialize(data)
    @data = data
    @client = ZendeskClient.instance
  end

  def create_ticket
    options = { :comment => { :value => data[:reasons] }, :priority => "urgent" }
    ...
    ZendeskAPI::Ticket.create!(@client, options)
  end
  ...
end
Hope this helps.
I have two websocket clients, and I want to exchange information between them.
Let's say I have two instances of socket servers: the first retrieves private information, filters it, and sends it to the second one.
require 'em-websocket'

EM.run do
  EM::WebSocket.run(host: '0.0.0.0', port: 19108) do |manager_emulator|
    # retrieve information. After that I need to send it to another port (9108)
  end

  EM::WebSocket.run(host: '0.0.0.0', port: 9108) do |fake_manager|
    # I need to send filtered information here
  end
end
I've tried to do something, but I just ended up with messy code and I don't know how to implement this functionality.
I'm not sure how you would do that using EM.
I'm assuming you will need to have the fake_manager listen to an event triggered by the manager_emulator.
It would be quite easy if you were using a websocket web-app framework. For instance, with the Plezi web-app framework you could write something like this:
# try the example from your terminal.
# use http://www.websocket.org/echo.html in two different browsers to observe:
#
# Window 1: http://localhost:3000/manager
# Window 2: http://localhost:3000/fake
require 'plezi'

class Manager_Controller
  def on_message data
    FakeManager_Controller.broadcast :_send, "Hi, fake! Please do something with: #{data}\r\n- from Manager."
    true
  end

  def _send message
    response << message
  end
end

class FakeManager_Controller
  def on_message data
    Manager_Controller.broadcast :_send, "Hi, manager! This is yours: #{data}\r\n- from Fake."
    true
  end

  def _send message
    response << message
  end
end

class HomeController
  def index
    "use http://www.websocket.org/echo.html in two different browsers to observe this demo in action:\r\n" +
      "Window 1: http://localhost:3000/manager\r\nWindow 2: http://localhost:3000/fake\r\n"
  end
end

# # optional Redis URL: automatic broadcasting across processes or machines:
# ENV['PL_REDIS_URL'] = "redis://username:password@my.host:6379"

# starts listening with default settings, on port 3000
listen

# Setup routes:
# They are automatically converted to the RESTful route: '/path/(:id)'
route '/manager', Manager_Controller
route '/fake', FakeManager_Controller
route '/', HomeController

# exit terminal to start server
exit
Good Luck!
P.S.
If you're going to keep to EM, you might consider using Redis to push and subscribe to events between the two ports.
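For example, a rough sketch using the em-hiredis gem for the pub/sub glue (an assumption on my part; the 'filtered' channel name is arbitrary):

require 'em-websocket'
require 'em-hiredis'

EM.run do
  publisher = EM::Hiredis.connect

  # manager_emulator: filter the incoming data, then publish it
  EM::WebSocket.run(host: '0.0.0.0', port: 19108) do |manager_emulator|
    manager_emulator.onmessage do |msg|
      publisher.publish('filtered', msg) # filtering omitted for brevity
    end
  end

  # fake_manager: forward everything published on the channel
  EM::WebSocket.run(host: '0.0.0.0', port: 9108) do |fake_manager|
    fake_manager.onopen do
      subscriber = EM::Hiredis.connect
      subscriber.pubsub.subscribe('filtered') { |msg| fake_manager.send(msg) }
    end
  end
end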
I've found a way to do it with the em-websocket gem! You just need to define variables outside of the EventMachine block. Something like this:
require 'em-websocket'

message_sender = nil

EM.run do
  # message sender
  EM::WebSocket.run(host: '0.0.0.0', port: 19108) do |ws|
    ws.onopen  { message_sender = ws }
    ws.onclose { message_sender = nil }
  end

  # message receiver
  EM::WebSocket.run(host: '0.0.0.0', port: 9108) do |ws|
    ws.onmessage { |msg| message_sender.send(msg) if message_sender }
  end
end
Hey I am attempting to spawn a sidekiq worker that connects to a completely separate Redis database. I know with 3.0's connection pooling this is possible, and I have been able to successfully push a job onto the correct Redis DB, but the problem is the Sidekiq web UI is not showing these jobs in the queue (I have mounted a separate Rack app for this that points exclusively to the other Redis DB). The "Busy" tab in the admin interface also shows my sidekiq workers that I have pointed at this DB, with correct PIDs.
Here's my sidekiq.rb:
Sidekiq.configure_server do |config|
  if ENV['REDIS_DB'] == "2"
    config.redis = { :url => "redis://#{SIDEKIQ_HOST}:6379/2", :namespace => 'drip' }
  else
    config.redis = { :url => "redis://#{SIDEKIQ_HOST}:6379", :namespace => 'drip' }
  end
end

Sidekiq.configure_client do |config|
  if ENV['REDIS_DB'] == "2"
    config.redis = { :url => "redis://#{SIDEKIQ_HOST}:6379/2", :namespace => 'drip' }
  else
    config.redis = { :url => "redis://#{SIDEKIQ_HOST}:6379", :namespace => 'drip' }
  end
end
My use case is that I need to have fine-grained control over the jobs that go into the second database, so I need the workers configured precisely so they are only using as many resources as I need them to. I only want workers configured in this way to pick up these jobs.
The Web UI cannot be mounted multiple times to point to multiple Redises in a single process. You have to run multiple web processes with REDIS_DB set too.
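For example, the second web process could boot from its own rackup file (a sketch based on the sidekiq.rb above; SIDEKIQ_HOST comes from the OP's config):

# config_db2.ru -- run with: REDIS_DB=2 rackup config_db2.ru
require 'sidekiq'
require 'sidekiq/web'

Sidekiq.configure_client do |config|
  config.redis = { :url => "redis://#{SIDEKIQ_HOST}:6379/#{ENV['REDIS_DB']}", :namespace => 'drip' }
end

run Sidekiq::Web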
I'm trying to do the following:
Run a Worker and a method within it every 15 minutes
Have a log of the job's last runtime in the database table bdrd_job_queue.
What I've done:
I have a schedule every 15 minutes in my backgroundRB.yml file
The method has a persistent_job.finish! call, but it's not working because the persistent_job object is nil.
How can I ensure it's logged in the DB, but still automatically scheduled from backgroundRB.yml?
I was finally able to do it.
The workaround is to schedule a task that will queue it to the database, scheduled to run right away.
In your worker ...
class NotificationWorker < BackgrounDRb::MetaWorker
  set_worker_name :notification_worker

  def create(args = nil)
  end

  def queue_notify_changes(args = nil)
    BdrbJobQueue.insert_job(:worker_name => 'notification_worker',
                            :worker_method => 'notify_new_changes_DAEMON',
                            :args => 'hello_world',
                            :scheduled_at => Time.now.utc,
                            :job_key => 'email_changes_notification_task')
  end

  def notify_new_changes_DAEMON
    # Do incredibly cool stuff here
  end
end
In the config file backgroundrb.yml
---
:backgroundrb:
  :ip: 0.0.0.0
  :port: 11006
  :environment: production
  :log: foreground
  :debug_log: true
  :persistent_disabled: false
  :persistent_delay: 10

:schedules:
  :notification_worker:
    :queue_notify_changes:
      :trigger_args: 0 0 0 * * *
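Note that :trigger_args: takes a cron-style expression with a leading seconds field, so 0 0 0 * * * fires once a day at midnight. For the every-15-minutes schedule from the question, something like 0 0,15,30,45 * * * should work (assuming the parser accepts comma-separated lists, as standard cron does).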