Ignoring unregister_worker in Resque

My Rails app consists of an API server and an Admin server, and both make use of Resque jobs.
In my local development environment each of them has a Procfile, and they share the same Redis server. The problem is that I cannot control which server picks up which job. Each server has distinct queues, but jobs from the other app still seem to get picked up and fail in the 'unregister_worker' routine:
/ruby-1.9.3-p194/gems/resque-1.24.1/lib/resque/worker.rb:459:in `unregister_worker'
I see this in my call stack, and it leads to a Ruby error since the worker doesn't know the job's class.
How can I tell Resque to ignore an 'unregister_worker'? Just to clarify, I don't have a worker task associated with the '*' queue.
As a workaround I could run two Redis servers locally, as I do in my production environment, but I'd like to avoid that if I can.

Redis namespaces do exactly that. In 'config/initializers/resque.rb', specify the namespace for the app's queues:
Resque.redis.namespace = "resque:admin_server"
Found the solution here:
How do you start resque to use Resque.redis.namespace?
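A minimal sketch of that initializer for the Admin app, assuming the API app gets its own distinct namespace (host, port, and namespace strings are assumptions):

# config/initializers/resque.rb in the Admin app - a sketch, not the exact thread code
require 'redis'
require 'resque'

Resque.redis = Redis.new(host: 'localhost', port: 6379)
Resque.redis.namespace = 'resque:admin_server'

# The API app gets its own initializer with a different namespace,
# e.g. 'resque:api_server', so each app's workers only see their own queues.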

Related

How does Redis work with Rails and Sidekiq

Problem: need to send e-mails from Rails asynchronously.
Environment: Windows 7, Ruby 2.0, Rails 4.1, Sidekiq, Redis
After setting everything up, starting Sidekiq and starting Redis, I can see the mail request queued to Redis through the monitor:
1414256204.699674 "exec"
1414256204.710675 "multi"
1414256204.710675 "sadd" "queues" "default"
1414256204.710675 "lpush" "queue:default" "{\"retry\":true,\"queue\":\"default\",\"class\":\"Sidekiq::Extensions::DelayedMailer\",\"args\":[\"---\\n- !ruby/class 'UserMailer'\\n- :async_reminder\\n- - 673\\n\"],\"jid\":\"d4024c0c219201e5d1649c54\",\"enqueued_at\":1414256204.709674}"
But the mailer method never seems to get executed. The mail doesn't get sent and none of the log messages show up.
How does Redis know to execute the job on the queue, and does something else need to be set up in the environment for it to know where the application resides?
Is delayed_job a better solution?
I started redis in one window, bundle exec sidekiq in another window, and rails server in a third window.
How does an item on the redis queue get picked up and processed? Is sidekiq both putting things on the redis queue and checking to see if something was added that needs to be processed?
Redis is used just for storage. It stores jobs to be done. It does not execute anything. DelayedJob uses your database for job storage instead of Redis.
Rails process pushes new jobs to Redis.
Sidekiq process pops jobs from Redis and executes them.
In your MONITOR output, you should see LPUSH commands when Rails sends mail. You should also see BRPOP commands from Sidekiq.
You need to make sure that both Rails and Sidekiq processes use the same Redis server, database number, and namespace (if any). It's a frequent problem that they don't.
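For example, a single initializer loaded by both processes keeps them pointed at the same Redis (a minimal sketch; the Redis URL is an assumption):

# config/initializers/sidekiq.rb - a minimal sketch; the Redis URL is an assumption
redis_config = { url: ENV.fetch('REDIS_URL', 'redis://localhost:6379/0') }

Sidekiq.configure_server do |config|
  config.redis = redis_config # used by the sidekiq process that pops and runs jobs
end

Sidekiq.configure_client do |config|
  config.redis = redis_config # used by the Rails process that pushes jobs
end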

Worker "dyno" in AWS Elastic Beanstalk

Amazon Web Services now has a worker tier in Elastic Beanstalk, but it nevertheless confuses those of us who come from the days of the Heroku worker dyno.
As a comparison, in Heroku one can configure two dynos (something like a processor?), one for web and one for worker. The web dyno handles any request and normally times out at 15 secs. Thus, if you have a request that lasts longer than that, your request will simply time out, although not be terminated per se. In that case, you should use a worker, and your web dyno should visit the endpoint several times per minute (maybe) to check if there is any result to be brought back to the user. To make either a worker or a web dyno, all you need to do is slide the slider and you are good to go. Sometimes you may need a Procfile, but there is nothing fancy, really difficult, or confusing about it.
In AWS Elastic Beanstalk, from the moment you run eb init you are asked whether the environment is Standard or Worker. If you choose Standard, it seems there is no way to make it act as a worker as well.
In our situation, the worker and the standard web app live under one application. So, how can we use an Elastic Beanstalk instance both as a worker and as a standard web server? Our worker uses Sidekiq and Redis. Please point us to any guidance or help us with this.
AWS Elastic Beanstalk has two types of Environments - Web tier and Worker tier.
Web tier environments are meant for web applications - http/https request processing. You get one or more EC2 instances behind a load balancer. You can get other resources like database per your requirement. You can choose the platform you wish e.g. Ruby, Python, Java, Node.js, PHP, Docker.
Worker environments are meant for asynchronous message processing. When you create a worker environment you do not have a load balancer; all your EC2 instances are in an autoscaling group. All these instances run a daemon which polls a single SQS queue for messages. When the daemon pulls a message from the SQS queue, it sends an HTTP POST request to localhost:80. You can configure the port, but the important thing is that the daemon posts the message as an HTTP request on localhost. Your worker application is actually a web application that receives the POST request and processes the message. After the message is successfully processed, the worker daemon expects your web application running on localhost to return an HTTP 200 OK response; the daemon then deletes the message from the SQS queue. You can write your worker application for any platform, just like standard web server applications - Ruby, Python, Java, Node.js, PHP, Docker.
Based on my understanding of your use case, I would recommend creating two Elastic Beanstalk environments - one Standard and one Worker environment. The Standard web server receives HTTP requests, processes them synchronously, and puts the relevant data in an SQS queue. The second environment is a worker, and the daemon running on this environment polls that SQS queue for messages. Your second environment is a web application that is NOT open to the internet. The worker daemon posts the messages as HTTP requests to your worker environment. Thus you can process long-running workloads asynchronously using this second worker environment.
With worker environments you can use your own queues or Elastic Beanstalk can generate a queue for you. You can configure parameters like message visibility timeout, http connections based on your requirements or you can use the defaults.
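For illustration, the web environment's enqueue step might look roughly like the sketch below; the region, queue URL, and payload are assumptions, and the aws-sdk-sqs gem is assumed to be available:

# In the web-tier app (e.g. inside the register() handler) - a hedged sketch
require 'json'
require 'aws-sdk-sqs'

sqs = Aws::SQS::Client.new(region: 'us-east-1')
sqs.send_message(
  queue_url: 'https://sqs.us-east-1.amazonaws.com/123456789012/my-worker-queue', # assumed queue
  message_body: { user_id: 673, action: 'register' }.to_json
)
# The worker environment's daemon later pulls this message from SQS
# and POSTs its body to the worker application on localhost.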
Below are some links that may be useful for you:
http://aws.amazon.com/blogs/aws/background-task-handling-for-aws-elastic-beanstalk/
http://blogs.aws.amazon.com/application-management/post/Tx1Y8QSQRL1KQZC/Elastic-Beanstalk-Video-Tutorial-Worker-Tier
https://stackoverflow.com/a/23942498/161628
Does this meet your requirements? Please let me know if you have further questions.
Update
You need to upload your source code to two places - once for the worker environment and once for the web server environment. If someone were starting from scratch they might have two separate code bases, but in your case I think it should be perfectly fine to have a single code base shared between the two environments. Suppose your web request arrives at '/register'; the register() method in your application can post messages to an SQS queue and be done with the HTTP request. Your worker environment will then poll the SQS queue and post messages over HTTP on localhost to the URL '/async_register', which will invoke a method async_register() in your application and do the asynchronous processing. These two methods can live in the same source code bundle, shared by both the worker and the web server environment. The code paths taken by the worker and the web server will differ: the web server environment invokes register() and the worker environment invokes async_register().
Another caveat is that HTTP requests sent by the worker daemon on localhost will contain the HTTP header "User-Agent": "aws-sqsd/1.1". So in your web application you can have a single listener for POST requests on "/register" and, depending on whether this header is present, invoke register() or async_register() internally.
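A rough sketch of that single listener in a Rails controller; the controller, route, and method names are hypothetical:

# app/controllers/registrations_controller.rb - hypothetical names, sketch only
class RegistrationsController < ApplicationController
  # POST /register is hit both by end users and by the worker daemon
  def create
    if request.user_agent.to_s.start_with?('aws-sqsd')
      async_register # the daemon re-posting an SQS message on localhost
    else
      register # a normal web request from a user
    end
  end

  private

  def register
    # enqueue the relevant data to SQS and respond quickly
    head :ok
  end

  def async_register
    # do the long-running work; a 200 tells the daemon to delete the SQS message
    head :ok
  end
end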
Also, if you want to share the code base between the two environments, you only need to upload the code to one place. Your environments are logically grouped into applications, so you can have a single application. You upload your source code to this application using the "CreateApplicationVersion" API call. Suppose you upload an application version with label "v1". You can now create a worker environment and a web server environment under the same application. When you create an environment, you need to provide a version to deploy to it; in this case you can deploy "v1" to both environments, so both share the same source code. When you have a new version "v2", you upload it and then update both environments to version "v2".
The same version of the source code can be deployed to both environments. They will be running on different EC2 instances because one environment is dedicated for responding to web requests and one environment is dedicated for responding to asynchronous web requests (worker).

rails: deploy workers for delayed_job

Are there any good practices for setting up a queue to work with delayed_job in Rails?
To be more precise: I intend to ping some webhooks from my Rails API. Using delayed_job, the pseudocode could look like:
get :ping do
  present ping: :pong # Grape style
  # Bad, synchronous idea:
  MyAwesomeTracker.track(event: "ping") # this waits for the tracker server's answer before going on
  # Better: put it in a queue using delayed_job:
  MyAwesomeTracker.delay.track(event: "ping") # this goes to the queue and returns immediately
end
Now, whether I use delayed_job or Resque, I'm able to send events into the queue, which is great.
The actual question: are there any good practices for deploying workers whenever I deploy my API?
What about worker failures? Is there any environment where a worker can be restarted after a crash/failure?
I've seen that a worker can be launched by running rake some_command, but what I'm wondering is how to set up an environment where a simple cap production deploy would set up both the API application and some workers that listen to the queue.
Thanks in advance!
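One possible approach, not taken from this thread: define a Capistrano task that restarts the workers after each deploy, assuming the daemons-based bin/delayed_job script generated by `rails generate delayed_job` is present; the :worker role and worker count are assumptions. For crash recovery, a process supervisor such as monit or systemd is usually layered on top, since Capistrano only acts at deploy time.

# config/deploy.rb - a hedged sketch for Capistrano 3
namespace :delayed_job do
  desc 'Restart delayed_job workers'
  task :restart do
    on roles(:worker) do # the :worker role is an assumption
      within release_path do
        with rails_env: fetch(:rails_env) do
          execute :bundle, :exec, 'bin/delayed_job', 'restart', '-n', '2'
        end
      end
    end
  end
end

# hook the restart into the deploy flow
after 'deploy:published', 'delayed_job:restart'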

Single-dyno setup on Heroku for Rails with both WebSocket and worker queue

At the moment I have a small web application running on Heroku with a single dyno. This dyno runs a Rails app on Unicorn with a single worker queue.
config/unicorn.rb:
worker_processes 1
timeout 180
@resque_pid = nil
before_fork do |server, worker|
  @resque_pid ||= spawn("bundle exec rake jobs:work")
end
I would like to add WebSocket functionality, but from what I have read, Unicorn is not one of the web servers that support faye-websocket. There is the Rainbows! web server, which is based on Unicorn, but I'm unclear whether it's possible for me to switch and keep my spawn for the queue worker.
I suppose that with more than a single dyno one could just add another dyno to run a Rainbows! web server for the WebSockets part, right? That is unfortunately not an option at the moment. Is there a way to get this working with a single dyno for my setup?
If not, what other options are available to get information from the server to the client, e.g. when asynchronous work is completed? I'm using polling for other things in the application, i.e. to start an asynchronous job that is handled by the worker process; upon completion the polling client (browser) sees a completion flag. This works, but I'd like to improve it if possible.
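Roughly, the completion-flag polling I describe looks like this sketch (model and route names are just placeholders):

# a placeholder sketch of the status endpoint the browser polls
class JobStatusesController < ApplicationController
  # GET /job_statuses/:id - polled by the client every few seconds
  def show
    job = BackgroundJob.find(params[:id]) # hypothetical model tracking the async work
    render json: { completed: job.completed? }
  end
end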
I'm open to hearing about your experiences and suggestions. Thanks in advance!

Not able to make Resque work

I'm trying to make Resque work with my project, but unfortunately it seems that, for some reason, Resque is not able to write to Redis.
Redis seems to be configured correctly: I'm able to connect with redis-cli and issue commands, and it runs on port 6379 as configured inside my Rails 3.0.5 app.
When I enqueue something with Resque the job appears to be queued, but nothing actually seems to happen on Redis (0 clients connected in my Redis logs).
When I restart the console, the queue is empty, with no workers running.
Everything fails silently: I have nothing in my Rails logs, nothing on the console, and nothing if I start a worker - it just (obviously) doesn't find any job to perform.
https://gist.github.com/867620
Any suggestions on how to fix or debug this?
The problem was that I was including resque_spec in the bundle.
Obviously, resque_spec was stubbing Resque.enqueue, making my mistake very stupid and very difficult to spot.
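A minimal way to avoid this kind of surprise is to keep the stubbing gem confined to the test group (a sketch):

# Gemfile - resque_spec stubs Resque.enqueue, so keep it test-only
group :test do
  gem 'resque_spec'
end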
