rails: deploy workers for delayed_job

Are there any good practices for setting up a queue to work with delayed_job in Rails?
To be more precise: I intend to ping some webhooks from my Rails API. Using delayed_job, the pseudocode could look like:
get :ping do
  present ping: :pong # Grape style

  # Bad, synchronous idea: this waits for the tracker's response before moving on
  MyAwesomeTracker.send(event: "ping")

  # Better: push it onto a queue using delayed_job
  MyAwesomeTracker.delay.send(event: "ping")
end
Now whether I use delayed_job or Resque, I'm able to send events into the queue, which is great.
The actual question: are there any good practices for deploying workers whenever I deploy my API?
What about worker failures? Is there any environment where a worker can be restarted after a crash/failure?
I've seen that a worker can be launched by running rake some_command, but what I'm wondering is how to set up an environment where a simple cap production deploy would set up both the API application and some workers that listen to the queue.
Thanks in advance!
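For the deploy part, a common pattern is to restart workers from a Capistrano hook. Below is a minimal sketch, assuming Capistrano 3 and the bin/delayed_job daemon script generated by rails generate delayed_job; the :worker role and the worker count are assumptions to adapt to your setup:

# config/deploy.rb -- a minimal sketch, assuming Capistrano 3 and the
# bin/delayed_job daemon script from `rails generate delayed_job`.
# The :worker role and the worker count of 2 are assumptions.
namespace :delayed_job do
  desc 'Restart delayed_job workers after each deploy'
  task :restart do
    on roles(:worker) do
      within release_path do
        with rails_env: fetch(:rails_env) do
          execute :bundle, :exec, 'bin/delayed_job', 'restart', '-n', '2'
        end
      end
    end
  end
end

# Hook the restart into `cap production deploy`
after 'deploy:published', 'delayed_job:restart'

For crash recovery, the usual approach is to put the workers under a process supervisor such as monit, god, or systemd, which watches the daemon's PID file and relaunches the worker after a failure.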

Related

How does Redis work with Rails and Sidekiq

Problem: need to send e-mails from Rails asynchronously.
Environment: Windows 7, Ruby 2.0, Rails 4.1, Sidekiq, Redis
After setting everything up, starting Sidekiq, and starting Redis, I can see the mail request queued in Redis through MONITOR:
1414256204.699674 "exec"
1414256204.710675 "multi"
1414256204.710675 "sadd" "queues" "default"
1414256204.710675 "lpush" "queue:default" "{\"retry\":true,\"queue\":\"default\",\"class\":\"Sidekiq::Extensions::DelayedMailer\",\"args\":[\"---\\n- !ruby/class 'UserMailer'\\n- :async_reminder\\n- - 673\\n\"],\"jid\":\"d4024c0c219201e5d1649c54\",\"enqueued_at\":1414256204.709674}"
But the mailer method never seems to get executed. The mail doesn't get sent and none of the log messages show up.
How does Redis know to execute the job on the queue, and does something else need to be set up in the environment for it to know where the application resides?
Is delayed_job a better solution?
I started redis in one window, bundle exec sidekiq in another window, and rails server in a third window.
How does an item on the redis queue get picked up and processed? Is sidekiq both putting things on the redis queue and checking to see if something was added that needs to be processed?
Redis is used just for storage. It stores jobs to be done. It does not execute anything. DelayedJob uses your database for job storage instead of Redis.
Rails process pushes new jobs to Redis.
Sidekiq process pops jobs from Redis and executes them.
In your MONITOR output, you should see LPUSH commands when Rails sends mail. You should also see BRPOP commands from Sidekiq.
You need to make sure that both Rails and Sidekiq processes use the same Redis server, database number, and namespace (if any). It's a frequent problem that they don't.
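To make that concrete, both sides can be pointed at the same Redis explicitly in an initializer. A minimal sketch, where the URL is an assumption to adjust for your environment:

# config/initializers/sidekiq.rb -- a minimal sketch; the URL is an
# assumption, but it must be identical for client (Rails) and server (Sidekiq).
require 'sidekiq'

redis_config = { url: 'redis://localhost:6379/0' }

Sidekiq.configure_client do |config|
  config.redis = redis_config # used by the Rails process when enqueuing
end

Sidekiq.configure_server do |config|
  config.redis = redis_config # used by the Sidekiq process when popping jobs
end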

Do application server workers spawn threads?

The application servers used by Ruby web applications that I know of have the concept of worker processes. For example, Unicorn sets this in its unicorn.rb configuration file, and for Mongrel it is called servers, usually set in your mongrel_cluster.yml file.
My two questions about it:
1) Does every worker/server act as a web server and spawn a process/thread/fiber each time it receives a request, or does it block when a new request comes in if another one is already being handled?
2) Is this different from application server to application server (like Unicorn, Mongrel, Thin, WEBrick...)?
This is different from app server to app server.
Mongrel (at least as of a few years ago) would have several worker processes, and you would use something like Apache to load-balance between them; each listened on a different port. Each Mongrel worker had its own queue of requests, so if it was busy when Apache handed it a new request, that request would sit in its queue until the worker finished its current one. Occasionally we saw problems where a very long request (generating a report) would have other requests pile up behind it, even if other Mongrel workers were much less busy.
Unicorn has a master process and just needs to listen on one port, or a unix socket, and uses only one request queue. That master process only assigns requests to worker processes as they become available, so the problem we had with Mongrel is much less of an issue. If one worker takes a really long time, it won't have requests backing up behind it specifically, it just won't be available to help with the master queue of requests until it finishes its report or whatever the big request is.
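To make the single-queue model concrete, here is a minimal sketch of a unicorn.rb; the socket path and counts are assumptions:

# config/unicorn.rb -- a minimal sketch of the master/worker model
# described above; the socket path and counts are assumptions.
worker_processes 4                       # forked workers, all sharing one listen socket
listen '/tmp/unicorn.sock', backlog: 64  # single queue; free workers pick requests off it
timeout 30                               # master kills workers stuck longer than this
preload_app true                         # load the app once in the master, then fork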
WEBrick shouldn't even be considered; it's designed to run as just one worker in development, reloading everything all the time.
Off the top of my head, so don't take this as "truth":
Ruby (MRI) servers:
Unicorn, Passenger, and Mongrel all use 'workers', which are separate processes. All of these workers are started when you launch the master process, and they persist until the master process exits. If you have 10 workers and they are all handling requests, then request 11 will be blocked waiting for one of them to complete.
WEBrick only runs a single process as far as I know, so request 2 would be blocked until request 1 finishes.
Thin: I believe it uses evented I/O to handle HTTP, but it is still a single-process server.
JRuby servers:
Trinidad and TorqueBox are multi-threaded and run on the JVM.
See also Puma: multi-threaded, for use with JRuby or Rubinius (see the sketch below).
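For completeness, a minimal sketch of a threaded Puma configuration; the thread and worker counts are assumptions:

# config/puma.rb -- a minimal sketch; counts are assumptions.
threads 1, 16 # min/max threads per process
workers 2     # forked worker processes (MRI); on JRuby you'd rely on threads alone, since it can't fork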
I think GitHub best explains unicorn in their (old, but valid) blog post https://github.com/blog/517-unicorn.
I think it puts backlogged requests in a queue.

Single-dyno setup on Heroku for Rails with both WebSocket and worker queue

At the moment I have a small web application running on Heroku with a single dyno. This dyno runs a Rails app on Unicorn with a single worker queue.
config/unicorn.rb:
worker_processes 1
timeout 180

@resque_pid = nil

before_fork do |server, worker|
  # spawn a single job worker alongside the web worker
  @resque_pid ||= spawn("bundle exec rake jobs:work")
end
I would like to add WebSocket functionality, but from what I have read, Unicorn is not one of the web servers that supports faye-websocket. There is the Rainbows! web server, which is based on Unicorn, but I'm unclear whether it's possible for me to switch and keep my spawn for the queue worker.
I suppose with more than a single dyno one could just add another dyno to run a Rainbows! web server for the WebSocket part, right? That is unfortunately not an option at the moment. Is there a way to get it working with a single dyno for my setup?
If not, what other options are available to get information from server to client, e.g. based on asynchronous work being completed? I'm using polling for other things in the application, i.e. to start an asynchronous job that is handled by the worker process; upon completion, the polling client (browser) sees a completion flag. This works, but I'd like to improve it if possible.
I'm open to hearing about your experiences and suggestions. Thanks in advance!
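One hedged option: since Rainbows! is built on Unicorn and reuses its configuration DSL, the before_fork spawn can in principle be kept while adding an evented backend for faye-websocket. A minimal sketch, with the backend choice and connection count as assumptions:

# config/rainbows.rb -- a minimal sketch; Rainbows! reuses Unicorn's
# configuration DSL, so the existing hooks should carry over.
Rainbows! do
  use :EventMachine      # evented backend, as used in faye-websocket's examples
  worker_connections 100 # concurrent connections per worker (assumption)
end

worker_processes 1
timeout 180

@resque_pid = nil
before_fork do |server, worker|
  # same single-dyno trick: spawn the job worker alongside the web worker
  @resque_pid ||= spawn("bundle exec rake jobs:work")
end

The Procfile's web entry would then start Rainbows! instead of Unicorn, e.g. bundle exec rainbows -c config/rainbows.rb -p $PORT (again, an assumption to verify against your setup).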

Enqueue and run jobs in Sidekiq from two Rails servers

I have two servers: a web server (front-end) and an analytics (back-end) server. I need to pass a job from the front-end server to the back-end server through Sidekiq.
My hack is:
Install Sidekiq on both the web server and the back-end server. I now have a front-end Sidekiq and a back-end Sidekiq.
Configure the front-end Sidekiq so that it points to the Redis server of the back-end Sidekiq. In other words, the two Sidekiqs share the same Redis database server.
Now, I need to enqueue a job from the front-end Sidekiq, then execute the code from the back-end Sidekiq.
How should I go about doing it?
Sidekiq is a distributed messaging queue, and the whole purpose of it is for use cases like the one you described. Just set up a queue for the front-end to read and a queue for the back-end to read. When you pick a job up from the front-end queue, re-insert it into the back-end queue.
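In practice the front-end can also push a job straight onto the back-end's queue by class name, since Sidekiq serializes only the class name and arguments to Redis. A minimal sketch, where AnalyticsWorker and the queue name are hypothetical:

# Front-end (web server): enqueue by name; the worker class is not loaded here.
Sidekiq::Client.push(
  'class' => 'AnalyticsWorker', # hypothetical worker defined only on the back-end
  'queue' => 'analytics',
  'args'  => [42, 'page_view']
)

# Back-end (analytics server): the worker that actually executes the job.
class AnalyticsWorker
  include Sidekiq::Worker
  sidekiq_options queue: 'analytics'

  def perform(user_id, event)
    # ... run the analytics work ...
  end
end

The back-end Sidekiq process is then started with bundle exec sidekiq -q analytics so that only it picks those jobs up.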

Ignoring unregister_worker in resque

My rails app consists of an API server and an Admin server. Both make use of resque jobs.
In my local development environment each of them has a Procfile, and they use the same Redis server. The problem I have is that I cannot control which server picks up which job. Each server has distinct queues, but jobs seem to be picked up in the 'unregister_worker' routine.
/ruby-1.9.3-p194/gems/resque-1.24.1/lib/resque/worker.rb:459:in `unregister_worker'
I see this in my call stack, and it leads to a Ruby error since the worker doesn't know the class.
How can I tell Resque to ignore an `unregister_worker'? Just to clarify, I don't have a worker task associated with the '*' queue.
As a workaround I could run two Redis servers locally, as I do in my production environment, but I'd like to avoid that if I can.
Redis namespaces do exactly that. In config/initializers/resque.rb, specify the namespace of the queue:
Resque.redis.namespace = "resque:admin_server"
Found the solution here:
How do you start resque to use Resque.redis.namespace?
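Concretely, each app sets a distinct namespace in its own initializer so its workers only ever see its own queues. A minimal sketch; the namespace strings are assumptions:

# API app: config/initializers/resque.rb
Resque.redis.namespace = "resque:api_server"

# Admin app: config/initializers/resque.rb
Resque.redis.namespace = "resque:admin_server"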
