In my Ruby on Rails application I need to execute 50 background jobs in parallel. Each job creates a TCP connection to a different server, fetches some data, and updates an ActiveRecord object.
I know of different solutions for this task, but none of them runs the jobs in parallel. For example, delayed_job (DJ) would be a great solution if only it could execute all jobs in parallel.
Any ideas? Thanks.
It is actually possible to run multiple delayed_job workers.
From http://github.com/collectiveidea/delayed_job:
# Runs two workers in separate processes.
$ RAILS_ENV=production script/delayed_job -n 2 start
$ RAILS_ENV=production script/delayed_job stop
So, in theory, you could just execute:
$ RAILS_ENV=production script/delayed_job -n 50 start
This will spawn 50 processes; however, I'm not sure whether that's advisable, depending on the resources of the system you're running this on.
An alternative option would be to use threads. Simply spawn a new thread for each of your jobs.
One thing to bear in mind with this method is that ActiveRecord is not thread-safe. You can make it thread-safe using the following setting:
ActiveRecord::Base.allow_concurrency = true
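For illustration, a rough sketch of the thread approach might look like this. Server, fetch_data, and the payload column are placeholders for your own models and TCP code, and checking out a connection per thread via the connection pool (on Rails versions that have one) keeps ActiveRecord happy:

threads = Server.all.map do |server|
  Thread.new do
    ActiveRecord::Base.connection_pool.with_connection do
      data = fetch_data(server.host)          # placeholder: your TCP fetch
      server.update_attribute(:payload, data) # write the result back
    end
  end
end
threads.each(&:join)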
Some thoughts...
Just because you need to read 50 sites and naturally want some parallel work does not mean that you need 50 processes or threads. You need to balance the slowdown and overhead. How about having 10 or 20 processes each read a few sites?
Depending on which Ruby you are using, be careful about green threads; you may not get the parallelism you want.
You might want to structure it like a reverse, client-side inetd, and use connect_nonblock and IO.select to get the parallel connections you want by making all the servers respond in parallel. You don't really need parallel processing of the results, you just need to get in line at all the servers in parallel, because that is where the latency really is.
So, something like this from the socket library...extend it for multiple outstanding connections...
require 'socket'
include Socket::Constants

socket = Socket.new(AF_INET, SOCK_STREAM, 0)
sockaddr = Socket.sockaddr_in(80, 'www.google.com')

begin
  socket.connect_nonblock(sockaddr)
rescue Errno::EINPROGRESS
  # the connect is in flight; wait until the socket is writable, then finish it
  IO.select(nil, [socket])
  begin
    socket.connect_nonblock(sockaddr)
  rescue Errno::EISCONN
    # already connected
  end
end

socket.write("GET / HTTP/1.0\r\n\r\n")

# here perhaps insert IO.select. You may not need multiple threads OR multiple
# processes with this technique, but if you do, insert them here

results = socket.read
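One possible extension to multiple outstanding connections might look like the following (the host list is just a placeholder). The point is that every connect is started before any of them is waited on, so the latency at all the servers overlaps:

require 'socket'

hosts = ['www.google.com', 'www.example.com']   # placeholder list of servers

# start every connect without blocking
pending = hosts.map do |host|
  sockaddr = Socket.sockaddr_in(80, host)
  socket = Socket.new(Socket::AF_INET, Socket::SOCK_STREAM, 0)
  begin
    socket.connect_nonblock(sockaddr)
  rescue Errno::EINPROGRESS
    # in flight; IO.select below will tell us when it's ready
  end
  [socket, sockaddr, host]
end

# finish each connect as it becomes writable and send the request
pending.each do |socket, sockaddr, host|
  IO.select(nil, [socket])
  begin
    socket.connect_nonblock(sockaddr)
  rescue Errno::EISCONN
    # already connected
  end
  socket.write("GET / HTTP/1.0\r\nHost: #{host}\r\n\r\n")
end

# read the responses; the waiting at all the servers happened in parallel
responses = pending.map { |socket, _, _| socket.read }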
Since you're working with Rails, I would advise you to use delayed_job for this rather than splitting off into threads or forks. The reason being that dealing with timeouts and such while the browser is waiting can be a real pain. There are two approaches you can take with DJ.
The first is to spawn 50+ workers. Depending on your environment this may be a pretty memory-heavy solution, but it works great. Then when you need to run your job, just make sure you create 50 unique jobs. If there is too much memory bloat and you want to do things this way, make a separate environment that is stripped down specifically for your workers.
The second way is to create a single job that uses Curl::Multi to run your 50 concurrent TCP requests. You can find out more about this here: http://curl-multi.rubyforge.org/. That way, you could have one background process running all of your TCP requests in parallel.
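As a rough sketch of that idea, something along these lines should work. Note that this uses the Curl::Multi interface from the curb gem, so the exact API of the curl-multi gem linked above may differ, and the URL list is a placeholder:

require 'curb'

urls = ['http://example.com/a', 'http://example.com/b']   # your 50 endpoints

multi = Curl::Multi.new
urls.each do |url|
  easy = Curl::Easy.new(url)
  easy.on_complete { |curl| puts "#{curl.url}: #{curl.response_code}" }
  multi.add(easy)
end
multi.perform   # runs all the transfers concurrently in one process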
Related
I am currently developing a Rails application which takes a long list of links as input, scrapes them using a background worker (Resque), then serves the results to the user. However, in some cases there are numerous URLs, and I would like to be able to make multiple requests in parallel so that it takes much less time, rather than waiting for one request to a page to complete, scraping it, and moving on to the next one.
Is there a way to do this in heroku/rails? Where might I find more information?
I've come across resque-pool, but I'm not sure whether it would solve this issue and/or how to implement it. I've also read about using different types of servers to run Rails in order to make concurrency possible, but I don't know how to modify my current setup to take advantage of this.
Any help would be greatly appreciated.
Don't use Resque. Use Sidekiq instead.
Resque runs in a single-threaded process, meaning the workers run synchronously, while Sidekiq runs in a multithreaded process, meaning the workers run asynchronously/simultaneously in different threads.
Make sure you assign one URL to scrape per worker; there's no benefit if a single worker scrapes multiple URLs sequentially.
With Sidekiq, you can pass the link to a worker, e.g.
LINKS = [...]

LINKS.each do |link|
  ScrapeWorker.perform_async(link)
end
perform_async doesn't actually execute the job right away. Instead, the link is put into a queue in Redis along with the worker class name, and so on; later (possibly only milliseconds later) a worker is assigned to execute each queued job in its own thread by running the perform instance method of ScrapeWorker. Sidekiq will also retry a job if an exception occurs during its execution.
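For completeness, the worker itself could look something like this (fetch_page and save_results are placeholders for your scraping and persistence code):

class ScrapeWorker
  include Sidekiq::Worker

  def perform(link)
    page = fetch_page(link)      # placeholder: your HTTP fetch / scrape
    save_results(link, page)     # placeholder: write results to the database
  end
end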
PS: You don't have to pass the link itself to the worker. You can store the links in a table and pass the ids of the records to the workers instead.
More info about sidekiq
Adding these two lines to your code will also let you wait until the last job is complete before proceeding:
This line ensures that your program waits until at least one job is enqueued before checking that all jobs are completed, to avoid misinterpreting an empty queue as the completion of all jobs:
sleep(0.2) until Sidekiq::Queue.new.size > 0 || Sidekiq::Workers.new.size > 0
This line ensures your program waits until all jobs are done:
sleep(0.5) until Sidekiq::Workers.new.size == 0 && Sidekiq::Queue.new.size == 0
I'm working with Amazon SQS queues and I have a class that consumes the messages on the queue. I am trying to get the messages consumed as close to real time as possible, so I need the consuming code to run endlessly. There will be messages on the queue consistently for more than half the day.
There are a few solutions I have come across to run this endlessly and I am wondering if there is a best practice for this type of need.
Option 1
On the web server use delayed_job or sidekiq to run the process continuously in the background.
Option 2
Have a separate server run a Ruby application dedicated to consuming the messages.
Option 3
Placing the SQS consumer in a rake task and using a system call to fire off the task in the background.
Any insight is appreciated!
You can use shoryuken.
It will consume your messages continuously for as long as your queue has messages.
shoryuken -r your_worker.rb -C shoryuken.yml \
-l log/shoryuken.log -p shoryuken.pid -d
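A minimal your_worker.rb for that command might look something like this (the queue name and the way the message body is handled are just assumptions):

class YourWorker
  include Shoryuken::Worker
  shoryuken_options queue: 'your-sqs-queue', auto_delete: true

  def perform(sqs_msg, body)
    # process the SQS message body here
    puts body
  end
end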
As you've probably already discovered, there isn't one obvious right way™ to handle this kind of thing. It depends a lot on what work you do for each job, the size of your app and infrastructure, and your personal preferences on APIs, message queuing philosophies, and architecture.
That said, I'd probably lean towards option 2 based on your description. Sidekiq and delayed_job don't speak SQS, and while you could teach them with something like sidekiq-sqs, it sounds like you might outgrow them pretty quick. Unless you need your Rails environment available to your workers, you'd have better luck separating your queue consumers into distinct applications, which makes it easy to scale horizontally just by starting more processes. It also allows you to further decouple the workers from your Rails app, which can make things easier to deploy and administer.
Option 3 is a non-starter IMO. You'll want to have a daemon running to process jobs as they come in, and if rake has to load your environment on each job, things are going to get sloooow.
I'm new to Rails and multithreading and am curious about how to achieve the following in the most elegant way.
I couldn't find any nice tutorials which explained in detail what's the best design decision for the following task:
I have a couple of HTTP requests which will be run for a user in the background, for example, parsing a couple of websites to get some information like the HTTP response code and response time, then returning the results. For performance reasons, I decided to split the total number of URLs to parse into batches of 25 each, execute each batch in a thread, join these, and write the result to a database.
I decided to use the following gem (http://rubygems.org/gems/thread) to ensure that there's a maximum number of threads that are run simultaneously. So far so good.
The problem is, if two users start their analysis in parallel, the maximum number of threads is twice the maximum size of my thread pool.
My solution (imho) is to create a worker daemon which runs on its own and waits for jobs from the clients.
My question is, what's the best way to achieve this in Rails?
Maybe create a Rake task and use it as a daemon (see: "Daemonising a rake task"), and (how?) add jobs to it?
Thank you very much in advance!
I'd build a queue in a table in the database, and a bit of code that is periodically started by cron, which walks that table, passing requests to Typhoeus and Hydra.
Here's how the author summarizes the gem:
Like a modern code version of the mythical beast with 100 serpent heads, Typhoeus runs HTTP requests in parallel while cleanly encapsulating handling logic.
As users add requests, append them to the table. You'll want fields like:
A "processed" field so you can tell which were handled in case the system goes down.
A "success" field so you can tell which requests were processed successfully, so you can retry if they failed.
A "retry_count" field so you can retry up to "n" times, then flag that URL as unreachable.
A "next_scan_time" field that says when the URL should be scanned again so you don't DOS a site by hitting it continuously.
Typhoeus and Hydra are easy to use, and do make it easy to handle multiple requests.
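A sketch of what the cron-started code might do with Typhoeus and Hydra (the URLs would come from your queue table, and max_concurrency is just an example value):

require 'typhoeus'

urls = ['http://example.com/a', 'http://example.com/b']   # rows from your queue table

hydra = Typhoeus::Hydra.new(max_concurrency: 20)
urls.each do |url|
  request = Typhoeus::Request.new(url, followlocation: true)
  request.on_complete do |response|
    # update the row here: processed, success, retry_count, next_scan_time
    puts "#{url}: #{response.code} in #{response.total_time}s"
  end
  hydra.queue(request)
end
hydra.run   # blocks until every queued request has finished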
There are a bunch of libraries for Rails that can manage queues of long-running background jobs for you. Here are a few:
Sidekiq uses Redis for job storage and supports multiple worker threads.
Resque also uses Redis and a single worker thread.
delayed_job manages a job queue through ActiveRecord (or Mongoid).
Once you've chosen one, I'd recommend using Foreman to simplify launching multiple daemons at once.
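With Foreman, a Procfile along these lines lets you start the web server and the workers together with a single foreman start (the process names and commands are illustrative; swap in whichever queue library you pick):

web: bundle exec rails server -p $PORT
worker: bundle exec sidekiq -c 10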
Ok, so this is probably evil, however.. here's the question! I want to run a pretty lightweight app on a shared environment (site5). Ideally I would like to use delayed_job for the ease of queueing the mails (~200+ every so often). However, being a shared environment they don't want background processes running all the time (fair enough).
So, my plan, such as it is, is to queue the mails using delayed job, and then every hour or something, spin up a cron job, send a few emails (10 or something small) and then kill the process. And repeat.
Q) Is there a rake jobs:works:1 equivalent task that'd be easy to set up? A pointer would be handy.
I'm quite open to "this is a terrible idea, don't even go there" being the answer.. in which case I might look at another queuing strategy... (or heroku hire-fire perhaps..)
You can get delayed job to process only a certain number of jobs by doing:
Delayed::Worker.new.work_off(10)
You could fire a script to do that from cron or use "rails runner":
rails runner -e production 'Delayed::Worker.new.work_off(10)'
I guess the main issue with whether it is a good idea or not is working out what value is actually high enough to make sure you process all your jobs in a reasonable time frame. You've also got the overhead of firing up the Rails environment every time you want to process, or even check whether you should process, any jobs. That might cause problems in a shared environment if they are particularly strict on spikes of memory or CPU usage.
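If you go the cron route, the invocation can be wrapped in a small rake task so the crontab entry stays simple. A sketch, where the task name, batch size, and schedule are just examples:

# lib/tasks/jobs.rake
namespace :jobs do
  desc 'Work off a small batch of delayed jobs, then exit'
  task :work_off => :environment do
    Delayed::Worker.new.work_off(10)
  end
end

# crontab entry (every hour)
# 0 * * * * cd /path/to/app && RAILS_ENV=production bundle exec rake jobs:work_off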
Why not skip the 'workers' (which are just daemons that look for work, else sleep) and have your cron job fire a custom rake task that does 10.times { MailerJob.first.perform }?
You'd just need to require your app on the line before that so it's loaded, of course.
I just got last month's Heroku bill, and the scheduled rake tasks were a relatively heavy burden. We are pretty early in our development process, so we just developed some rake tasks to get the job done recently and didn't pay much attention to their optimization.
Now we want to improve their performance and their Heroku processing-hours usage. We use New Relic to monitor the web app's performance, but apparently these rake tasks are ignored by default, and it's unclear how to override that.
Has anyone had a similar problem? How can I track the scheduled tasks in close to real time to monitor performance, optimize, and avoid surprise bills?
Whilst you can't really monitor rake tasks that well, there are a few little things you can do. One is the use of logging. Output start and end times of tasks to logs, and you can then see what's been happening duration wise. If you couple this with something like the Papertrail add-on then you can do additional interrogation later on.
As for running the jobs themselves, there are a couple of ways you can run background processes, depending on how they need to run:
If you need to run jobs on a schedule, there are a few options available. Firstly there's the Heroku Scheduler, which is pretty good, but doesn't guarantee executions will happen. Normally you would use this to kick off a rake task which brings up a one-off dyno for the duration of the task, so you need to ensure in development that these tasks are as efficient as possible.
Alternatively, if you're looking at jobs that need a little more control, consider using a clock process. Essentially this is a dyno running 24/7 that does nothing but kick off other jobs at preset intervals and times. This would normally be done using the clockwork gem. The downside of this approach is that you need to pay for the clock process running all the time.
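A minimal clock.rb with clockwork might look like this (the job names, intervals, and the Delayed::Job-backed handlers are just examples):

require 'clockwork'
require './config/boot'
require './config/environment'

module Clockwork
  every(10.minutes, 'feeds.import') { Delayed::Job.enqueue ImportFeedsJob.new }
  every(1.day, 'digest.send', at: '00:00') { Delayed::Job.enqueue SendDigestJob.new }
end

You'd then run the clock as its own process, e.g. bundle exec clockwork clock.rb.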
A third approach, and one that might work, is delayed_job with its run_at option, which lets you queue a job to be run in the future (and jobs can re-queue themselves). There are a few issues with this in that a failure can kill the whole chain, and you need a full-time worker running to process them all.
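A sketch of that self-requeueing pattern (RecurringTask and the interval are hypothetical, and the exact enqueue signature varies between delayed_job versions):

class RecurringTask
  def perform
    # do the periodic work here
  ensure
    # schedule the next run, forming the chain mentioned above
    Delayed::Job.enqueue RecurringTask.new, run_at: 1.hour.from_now
  end
end

# kick off the chain once
Delayed::Job.enqueue RecurringTask.new, run_at: 10.minutes.from_now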
Therefore, in order to minimize your bills, ensure that your rake tasks are as performant and reliable as possible, and then choose the scheduling option that suits you. If you're looking at schedules plus user-created events, delayed_job might be the best option. If you're looking at a few tasks running periodically, go with the scheduler. If you're looking at running lots of time-critical jobs on a regular basis, go with clockwork.
Either way, you should be able to constrain a fair amount of processing into just one or two processes depending on your approach.
I know this question is almost 10 years old, but there is a new way!
You can now monitor your Heroku Scheduler jobs using One-off Dyno Metrics. This Heroku add-on gathers metrics for all detached one-off dynos running in your Heroku app. It was created to be an extension of Heroku's Application Metrics and works out of the box.
When you are running on Heroku Cedar there is a way to get a free setup for your workers. This is no answer to your monitoring question, but it might be interesting anyway: http://blog.nofail.de/2011/07/heroku-cedar-background-jobs-for-free/
You can force the New Relic agent to start in your rake tasks and report their performance data.
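For example, something along these lines inside the task (newrelic_rpm provides manual_start and shutdown; the task name and body are placeholders):

require 'newrelic_rpm'

task :nightly_import => :environment do
  NewRelic::Agent.manual_start
  begin
    # ... the actual work of the scheduled task ...
  ensure
    NewRelic::Agent.shutdown   # flush the collected data before the dyno exits
  end
end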
Not the answer to the specific question, but...
One method of reducing overhead is using the Unicorn server to get multiple workers working on one dyno. It depends on your setup, but most people who've taken the time to test it can comfortably get 3-4 worker processes running concurrently. It's a huge boost in clearing queues or tasks. Just be careful not to max out the allocated memory for the dyno.