Here are some questions I have on ActiveJob:
Say I've queued up n jobs on a Sidekiq queue via ActiveJob. On my EC2 instance, I've set Puma to have 4 workers with 5 threads each. Does this mean up to 20 jobs will run concurrently? Will each thread pick up a queued job when it's idle and process it? I tried this setting, but it still seems to process jobs serially, one at a time. Are there more settings I would need to change?
Regarding concurrency: how would I set up additional EC2 instances dedicated to working the job queue itself?
Regarding the queues themselves: is there a way to manage / inspect the queue from within Rails, or should I rely on Sidekiq's web interface to look at the queue?
Sidekiq has a good Wiki. As for your questions:
Sidekiq (like other background job implementations) follows the producer-queue-consumer pattern, where the producer is your Rails app(s), the queue is Redis, and the consumer is the Sidekiq worker(s). All three entities are completely independent applications, which may run on different servers. So neither Puma nor the Rails application can affect Sidekiq's concurrency at all.
Sidekiq concurrency is a topic that goes far beyond an SO answer; you can find long posts by googling "scaling Sidekiq workers". In short: yes, you can run separate EC2 instance(s), set up Redis, and tune the Sidekiq worker count, per-worker concurrency, the Ruby runtime, queue concurrency and priority, and so on.
Edited: Sidekiq has per-worker configuration (usually sidekiq.yml), but the number of worker processes is managed by system tools such as Upstart or systemd. Or you can buy Sidekiq Pro/Enterprise, which adds many features (like sidekiqswarm).
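For reference, the per-process options such as thread count and queue weights typically live in that file; a minimal sketch (values are illustrative):

```yaml
# config/sidekiq.yml — per-process settings (illustrative values)
:concurrency: 10        # threads per Sidekiq process
:queues:
  - [critical, 2]       # weighted: critical is checked twice as often
  - [default, 1]
```

The number of such processes is then whatever your init system (or sidekiqswarm) starts.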
From wiki: Sidekiq API
Related
I have spawned a few thousand workers in Sidekiq in the low queue. Apart from these, there are a lot of other workers in the low and other queues as well. I have access to the Sidekiq admin dashboard and can view the queues and the workers running in them, but I need to scroll a lot to find information about the workers I'm interested in.
Is there a way to get information just about the status of the instances of a particular worker I'm interested in?
The Busy page is merely a reflection of the data available in the Workers API, so you can roll your own report.
https://github.com/mperham/sidekiq/wiki/API#workers
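To make "roll your own report" concrete, here is a sketch. Sidekiq::Workers is an Enumerable of (process_id, thread_id, work) triples, so a report is just a filter; the helper name below is made up, and the payload shape can differ between Sidekiq versions, so verify against yours:

```ruby
# Illustrative helper: filter the [process_id, thread_id, work] triples
# that Sidekiq::Workers yields down to a single job class.
def busy_jobs_for(workers, job_class)
  workers.select do |_process_id, _thread_id, work|
    work.dig("payload", "class") == job_class
  end
end

# Plain arrays stand in for Sidekiq::Workers.new here, so the helper can
# be tried without Redis:
sample = [
  ["proc-1", "tid-1", { "queue" => "low", "payload" => { "class" => "HardJob" } }],
  ["proc-1", "tid-2", { "queue" => "low", "payload" => { "class" => "EasyJob" } }]
]
```

Against a live Sidekiq you would pass `Sidekiq::Workers.new` instead of `sample`.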
I have the following tasks to do in a rails application:
Download a video
Trim the video with FFMPEG between a given duration (Eg.: 00:02 - 00:09)
Convert the video to a given format
Move the converted video to a folder
Since I wanted to make this happen in background jobs, I used 1 resque worker that processes a queue.
For the first job, I created a queue like this:
@queue = :download_video, which does its task, and at the end of the task moves on to the next one by calling Resque.enqueue(ConvertVideo, name, itemId). In this way, I have created a chain of queues that are enqueued when one task is finished.
This is very wrong, since if the first job starts to enqueue the other jobs (one from another), then everything gets blocked with 1 worker until the first list of queued jobs is finished.
How should this be optimised? I tried adding more workers to this way of enqueueing jobs, but the results are wrong and unpredictable.
Another aspect is that each job saves a status in the database, and I need the jobs to be processed in the right order.
Should each worker do a single job from above and have at least 4 workers? If I double the amount to 8 workers, would it be an improvement?
Have you considered using Sidekiq?
As the Sidekiq documentation says:
resque uses redis for storage and processes messages in a single-threaded process. The redis requirement makes it a little more difficult to set up, compared to delayed_job, but redis is far better as a queue than a SQL database. Being single-threaded means that processing 20 jobs in parallel requires 20 processes, which can take a lot of memory.
sidekiq uses redis for storage and processes jobs in a multi-threaded process. It's just as easy to set up as resque but more efficient in terms of raw processing speed. Your worker code does need to be thread-safe.
So you should have two kinds of jobs: download video and convert video. Any number of download jobs can run in parallel (you can limit that if you want), with each result stored in one queue (the "in-between" queue) before being picked up by multiple convert jobs, also running in parallel.
I hope that helps. This link explains the Sidekiq best practices quite well: https://github.com/mperham/sidekiq/wiki/Best-Practices
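The "in-between queue" idea can be sketched with nothing but the Ruby standard library; Thread::Queue stands in for the Sidekiq queues, and string transformations stand in for the real download/convert work:

```ruby
downloads = Queue.new   # stage-1 input
converts  = Queue.new   # the "in-between" queue
results   = Queue.new   # final output

5.times { |i| downloads << "video-#{i}" }
3.times { downloads << :done }   # one stop signal per download worker

# Several download workers drain the first queue in parallel and feed
# the in-between queue.
download_workers = 3.times.map do
  Thread.new do
    while (video = downloads.pop) != :done
      converts << "#{video}.raw"   # stand-in for the downloaded file
    end
  end
end
download_workers.each(&:join)

2.times { converts << :done }    # one stop signal per convert worker

# Convert workers then drain the in-between queue in parallel.
convert_workers = 2.times.map do
  Thread.new do
    while (file = converts.pop) != :done
      results << "#{file}.mp4"   # stand-in for the converted file
    end
  end
end
convert_workers.each(&:join)
# results now holds the 5 converted files
```

In real Sidekiq the two stages would run concurrently as separate job classes on separate queues; the sequential joins here are only for a deterministic sketch.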
As @Ghislaindj noted, Sidekiq might be an alternative - largely because it offers plugins that control execution ordering.
See this list:
https://github.com/mperham/sidekiq/wiki/Related-Projects#execution-ordering
Nonetheless, yes, you should be using different queues and more workers, each dedicated to a specific queue. So you have one set of workers all working on the :download_video queue, and other workers attached to the :convert_video queue, etc.
If you want to continue using Resque another approach would be to use delayed execution, so when you enqueue your subsequent jobs you specify a delay parameter.
Resque.enqueue_in(10.seconds, ConvertVideo, name, itemId)
The down-side to using delayed execution in Resque is that it requires the resque-scheduler package, so you're introducing a new dependency:
https://github.com/resque/resque-scheduler
For comparison Sidekiq has delayed execution natively available.
Have you considered merging all four tasks into just one? In that case you can have any number of workers, and any one of them will do the whole job. It will work very predictably; you can even estimate how long a task will take to finish. You also avoid the problem of one subtask taking longer than all the others and piling up in the queue.
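As a sketch of that option, the whole pipeline becomes one method running the four steps in order. The step methods below are stand-in stubs that just build strings (a real job would shell out to FFMPEG and move files with FileUtils):

```ruby
# Stand-in stubs for the real steps, so the flow is visible.
def download(url)
  "#{url}.mp4"
end

def trim(path, from, to)
  "#{path}[#{from}-#{to}]"
end

def convert(path, format)
  "#{path}.#{format}"
end

def move(path, dest)
  File.join(dest, File.basename(path))  # returns the final location
end

# One job body doing all four steps in order, as suggested above.
def process_video(url, from, to, format, dest)
  path = download(url)          # 1. download the video
  path = trim(path, from, to)   # 2. trim to the given duration
  path = convert(path, format)  # 3. convert to the target format
  move(path, dest)              # 4. move to the destination folder
end
```

A worker that runs `process_video` never leaves a half-finished chain in the queue, which is exactly the predictability argument above.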
I'm confused about where I should have a script polling an AWS SQS queue inside a Rails application.
If I use a thread inside the web app, it will probably use CPU cycles to listen to this queue forever, affecting performance.
And if I reserve a single Heroku worker dyno, it costs $34.50 per month. Does it make sense to pay that price just to poll a single queue, or is a worker not the right tool for this?
The script code:
What it does: listens for converted PDFs, gets the response, and creates the object in a Postgres database.
queue = AWS::SQS::Queue.new(SQSADDR['my_queue'])
queue.poll do |msg|
  ...
  id = received_message['document_id']
  @document = Document.find(id)
  @document.converted_at = Time.now
  ...
end
I need help!! Thanks
You have three basic options:
Do background work as part of a worker dyno. This is the easiest, most straightforward option because it's the thing that's most appropriate. Your web processes handle incoming HTTP requests, and your worker process handles the SQS messages. Done.
Do background work as part of your web dyno. This might mean spinning up another thread (and dealing with the issues that can cause in Rails), or it might mean forking a subprocess to do background processing. Whatever happens, bear in mind the 512 MB limit of RAM consumed by a dyno, and since I'm assuming you have only one web dyno, be aware that dyno idling means your app likely isn't running 24x7. Also, this option smells bad because it's generally against the spirit of the 12-factor app.
Do background work as one-off processes. Make e.g. a rake handle_sqs task that processes the queue and exits once it's empty. Heroku Scheduler is ideal: have it run once every 20 minutes or something. You'll pay for the one-off dyno for as long as it runs, but since that's only a few seconds if the queue is empty, it costs less than an always-on worker. Alternately, your web app could use the Heroku API to launch a one-off process, programmatically running the equivalent heroku run rake handle_sqs.
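The shape of option 3 (process until the queue is empty, then exit) can be sketched with the standard library; Thread::Queue stands in for the SQS client, and the real rake task would instead poll SQS and stop when no message arrives:

```ruby
inbox = Queue.new   # stand-in for the SQS queue
3.times { |i| inbox << { "document_id" => i } }

processed = []
until inbox.empty?
  msg = inbox.pop
  processed << msg["document_id"]  # stand-in for updating the Document
end
# The task returns here, so a one-off dyno running it stops billing.
```

Wrapped in a `rake handle_sqs` task, this is what Heroku Scheduler or `heroku run` would invoke.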
I've just inherited a Rails project; previously it ran on a typical *nix server. The decision was made to move it to Heroku for the client, and it's up to me to get the background processes working.
Currently it uses Whenever to schedule daily events (email etc.) and to fire up the delayed_job queue on boot.
Heroku's documentation provides an example of a custom clock process using Clockwork. Going by this example, can I use it with Whenever? Any pitfalls I might come across? Will I need to create a separate worker dyno?
Scheduled Jobs and Custom Clock Processes in Ruby with Clockwork
Yes -- Heroku's Cedar stack lets you run whatever you want.
The basic building block of the Cedar stack is the dyno. Each dyno gets an ephemeral copy of your application, 512 MB of RAM, and a bunch of shared CPU time. Web dynos are expected to bind an HTTP server to the port specified in the $PORT environment variable, since that's where Heroku will send HTTP requests, but other than that, web dynos are identical to other types of dynos.
Your application tells Heroku how to run its various components by defining them in the Procfile. (See Declaring and Scaling Process Types with Procfile.) The Clock Processes article demonstrates a pattern where you use a worker (i.e. non-web) dyno to enqueue work based on arbitrary criteria. Again, you can do whatever you want here -- just define it in a Procfile and Heroku will happily run it. If you go with a clock process (e.g. a 24x7 whenever), you'll be using a whole dyno ($0.05/hour) to do nothing but schedule work.
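To make that concrete, a Procfile for the clock pattern might look like this (entry names and file paths are illustrative):

```
web: bundle exec puma -C config/puma.rb
clock: bundle exec clockwork lib/clock.rb
worker: bundle exec rake jobs:work
```

Here the clock dyno only enqueues work on schedule, and the worker dyno actually runs it.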
In your case, I'd consider switching from Whenever to Heroku Scheduler. Scheduler is basically a Heroku-run cron, where the crontab entries are "spin up a dyno and run this command". You'll still pay $0.05/hour for the extra dynos, but unlike the clock + worker setup, you'll only pay for the time they actually spend running. It cleanly separates periodic tasks from the steady-state web + worker traffic, and it's usually significantly cheaper too.
The only other word of warning is that running periodic tasks in distributed systems is complex and has complex failure modes. Some of the platform incidents (corresponding with the big EC2 outages) have resulted in things like 2 simultaneous clock processes and duplicate scheduler runs. If you're doing something that needs to run serially (like emailing people once a day), consider guarding it with RDBMS locking, and double-checking that it's actually been ~23 hours since your daily job.
Heroku Scheduler is often a bad option for production use because it's unreliable and will sometimes skip running its tasks.
The good news is that if you run a job-queue dyno with Sidekiq, there are scheduling plugins for it, e.g. sidekiq-cron. With that you can use the same dyno for scheduling. And if you don't have a job worker yet, it's worth setting one up just for scheduling if you need your tasks to run reliably.
P.S. If you happen to run Delayed::Job for job queuing, there are scheduling plugins for it too, e.g. this one.
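For example, sidekiq-cron can read its schedule from a hash, commonly kept in a YAML file (the job name and class below are hypothetical):

```yaml
# config/schedule.yml
daily_digest:
  cron: "0 8 * * *"       # every day at 08:00
  class: "DailyDigestJob"
  queue: default
```

Per the sidekiq-cron README, you would load this in an initializer with Sidekiq::Cron::Job.load_from_hash, and the jobs then run inside your existing Sidekiq dyno.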
I have 16 Resque queues, and when I look at the memory allocation for these queues, each shows about 4% of memory even when all the queues are empty. So, out of 100% of my memory, nearly 64% is taken up just by loading the environment, as far as I can tell.
My doubts are:
1. Does each of these Resque workers load the complete application into memory separately?
2. If yes, can I change the Resque configuration so that all the workers share a single copy of the environment in memory?
Thanks in advance
I think you are out of luck if you're using Resque. I believe this is why Sidekiq was developed as a nearly drop-in replacement for Resque. The author of Sidekiq wrote a blog post describing how he improved Resque's memory usage. Here's a little bit from the Sidekiq FAQs:
Why Sidekiq over Multi-threaded Resque?
Back at Carbon Five I worked on improving Resque to use threads instead of forking. That project was the basis for Sidekiq. I would suggest using Sidekiq over that fork of Resque for a few reasons:

MT Resque was a one-off for a Carbon Five client and is not supported. There are a number of bugs that were not solved, e.g. the web UI's display of worker threads, because they were not important to the client.

Sidekiq was built from the ground up to use threads via Celluloid.

Sidekiq has middleware, which lets you do cool things in the job lifespan. Resque doesn't support middleware like this natively.
In short, MT Resque: a quick hack to save one client a lot of money, Sidekiq: the well designed solution for the same problem.