I have a tinder style app that allows users to rate events. After a user rates an Event, a background resque job runs that re-ranks the other events based on user's feedback.
This background job takes about 10 seconds and it runs about 20 times a minute per user.
Using a simple example: if I have 10 users on the app at any given time, and I never want a job to be left waiting, what's the optimal way to do this?
I'm confused about Dynos, resque pools, and redis connections. Can someone help me understand the difference? Is there a way to calculate this?
Not sure you're asking the right question. Your real question is "how can I get better performance?" Not "how many dynos?" Just adding dynos won't necessarily give you better performance. More dynos give you more memory...so if your app is running slowly because you're running out of available memory (i.e. you're running on swap), then more dynos could be the answer. If those jobs take 10 seconds each to run, though...memory probably isn't your actual problem. If you want to monitor your memory usage, check out a visualization tool like New Relic.
There are a lot of approaches to solving your problem, but I would start with the code that you wrote. Posting some code on SO might help us understand why that job takes 10 seconds (post some code!). 10 seconds is a long time, so optimizing the queries inside that job would almost surely help.
Another piece of low-hanging fruit: switch from Resque to Sidekiq for your background jobs. It's really easy to use, you'll use less memory, and you should see an instant bump in performance.
Dynos: These are individual virtual/physical servers. Think of them as being the same as EC2 instances.
Redis Connections: Individual connections to the Redis Instance.
Resque Pool: A gem that allows you to run workers concurrently on the same dyno/instance.
First of all, it’s worth looking for ways in which you can improve the performance of the job itself. You might be able to get it below ten seconds by using low level model caching or optimizing your algorithm.
In terms of working out how many workers you would need, take the number of runs per minute (20), times the number of seconds each run takes (10), times the number of users (10). That gives you the number of seconds of work generated every minute: 20 * 10 * 10 = 2000. Divide that by 60 and you get 33.3 worker-minutes of work arriving per minute. So if you had 34 workers, and these numbers stayed consistent, they should be able to keep on top of things.
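As a back-of-the-envelope check, here is that same calculation as a tiny Ruby snippet (the numbers are the assumptions above, not measured values):

# Rough estimate of how many always-busy workers keep the queue from backing up.
runs_per_minute_per_user = 20   # jobs enqueued per user per minute (assumed)
seconds_per_job          = 10   # current job duration (assumed)
concurrent_users         = 10

work_seconds_per_minute = runs_per_minute_per_user * seconds_per_job * concurrent_users
workers_needed = (work_seconds_per_minute / 60.0).ceil

puts workers_needed # => 34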
That said, you shouldn't be in a position where you need to run 34 or more workers for just 10 concurrent users of a ranking algorithm. That's going to get expensive very quickly.
Optimise your algorithm, try to add more caching, and give Sidekiq a try too. In my experience, Sidekiq can process a queue up to 10 times faster than Resque. It depends on what your job is doing and how you utilize each tool, but it's worth checking out. See Sidekiq vs Resque.
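For illustration, a minimal Sidekiq worker that could replace the Resque re-ranking job might look like the sketch below (the class, queue, and service-object names are hypothetical, not from the original app):

class RerankEventsWorker
  include Sidekiq::Worker
  sidekiq_options queue: :ranking, retry: 3

  def perform(user_id)
    user = User.find(user_id)
    # Re-rank events based on this user's latest feedback
    EventRanker.new(user).rerank!   # hypothetical service object
  end
end

You would then enqueue it with RerankEventsWorker.perform_async(current_user.id) after a rating is saved.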
Re-ranking other events is a bad idea.
You should consider adding total_points and average_points columns to the events table and letting the ranking be decided by ORDER BY queries, like this:
class Event < ActiveRecord::Base
  has_many :feedbacks

  # Highest-scoring events first (assuming more points = better rank)
  scope :rank_by_total,   -> { order(total_points: :desc) }
  scope :rank_by_average, -> { order(average_points: :desc) }
end

class Feedback < ActiveRecord::Base
  belongs_to :event

  after_create :update_points

  def update_points
    total = event.feedbacks.sum(:points)
    avg   = event.feedbacks.average(:points)
    event.update(total_points: total, average_points: avg)
  end
end
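With those scopes in place, fetching a ranked list is a single query (a sketch; the limit is arbitrary):

# Top 20 events by total points, computed entirely in the database
Event.rank_by_total.limit(20)

# Or by average rating
Event.rank_by_average.limit(20)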
So, how many workers/dynos do you need?
You don't need to worry about dynos or workers for this problem. No matter how many dynos with more processing power you throw at it, the re-ranking solution will still take a good amount of time once your events table becomes huge. So try changing your solution the way I have described.
Related
We're building a web app where users will be uploading potentially large files that will need to be processed in the background. The task involves calling 3rd-party APIs, so each job can take several hours to complete. We're using DelayedJob to run the background jobs.

With every user kicking off a background job, each of which will take a few hours to finish, that will add up to a lot of background jobs very quickly. I am wondering what would be the best way to set up the deployment for this? We're currently hosted on DigitalOcean. I've kicked off 10 DelayedJob workers. Each one (when idle) takes up 157MB; when actively running it utilizes around 900MB. Our user base right now is pretty small so it's not an issue, but it will be one soon. So on a 4GB droplet, I can probably run only 2 or 3 workers at a time.

How should we approach this issue? Should we be looking at using DigitalOcean's API to auto-spin cheap droplets on demand? Should we subscribe to high-memory droplets on a monthly basis instead? If we go with auto-spinning droplets, should we stick with DigitalOcean, or would Heroku make more sense? Or is the entire approach wrong and should we be approaching it from an entirely different direction? Any help/advice would be very much appreciated.
Thanks!
It sounds like you are limited by memory on the number of workers that you can run on your DigitalOcean host.
If you are worried about scaling, I would focus on making the workers as efficient as possible. Have you done any benchmarking to understand where the 900MB of memory is being allocated? I'm not sure what the nature of these jobs is, but you mentioned large files. Are you reading the contents of these files into memory, or are you streaming them? Are you using a database with SQL you can tune? Are you making many small API calls when you could be using a batch endpoint? Are you assigning intermediary variables that must then be garbage collected? Can you compress the files before you send them?
Look at the job structure itself. I've found that background jobs work best with many smaller jobs rather than one larger job. This allows execution to happen in parallel, and be more load balanced across all workers. You could even have a job that generates other jobs. If you need a job to orchestrate callbacks when a group of jobs finishes there is a DelayedJobGroup plugin at https://github.com/salsify/delayed_job_groups_plugin that allows you to invoke a final job only after the sibling jobs complete. I would aim for an execution time of a single job to be under 30 seconds. This is arbitrary but it illustrates what I mean by smaller jobs.
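As a rough sketch of that fan-out pattern with DelayedJob (the class names, models, and chunk size here are made up for illustration):

# A parent job that splits one big upload into many small chunk jobs
class ProcessUploadJob < Struct.new(:upload_id)
  CHUNK_SIZE = 500 # rows per child job; tune so each job stays well under ~30 seconds

  def perform
    upload = Upload.find(upload_id)                 # hypothetical model
    (0...upload.row_count).step(CHUNK_SIZE) do |offset|
      Delayed::Job.enqueue ProcessChunkJob.new(upload_id, offset, CHUNK_SIZE)
    end
  end
end

# Each child job handles a small, bounded slice of the work
class ProcessChunkJob < Struct.new(:upload_id, :offset, :limit)
  def perform
    upload = Upload.find(upload_id)
    upload.rows(offset: offset, limit: limit).each do |row|  # hypothetical accessor
      ThirdPartyClient.process(row)                          # hypothetical API call
    end
  end
end

The parent job finishes quickly, the children run in parallel across whatever workers you have, and the load balances itself.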
Some hosting providers like Amazon offer spot instances, where you pay a lower price for servers that do not have guaranteed availability. These pair well with the many-smaller-jobs approach I mentioned earlier.
Finally, Ruby might not be the right tool for the job. There are faster languages, and if you are limited by memory or CPU, you might consider writing these jobs and their workers in another language like JavaScript, Go, or Rust. These can pair well with a Ruby stack, but offload computationally expensive subroutines to faster languages.
Lastly, like many scaling issues, if you have more money than time, you can always throw more hardware at it. At least for a while.
I think memory and time are the bigger problems for you. You should use the Sidekiq gem for this process, because it will take less time and consume less memory to do the same job; it uses Redis, a key-value database, as its backend. If the problem continues, go with JavaScript.
This is the first time that I've actually run into timing issues regarding the task I have to tackle. I need to do a calculation (running against a webservice) with approximately 7M records. This would take more than 180hrs, so I was thinking about running multiple instances of the webservice on EC2 and just running rake tasks in parallel.
Since I have never done this before, I was wondering what needs to be considered.
More precisely:
What's the maximum number of rake tasks I can run (is there any limit at all besides my own machine's power)?
What's the maximum number of concurrent connections to a Postgres 9.3 db?
Are there any things to be considered when running multiple active_record.save actions at the same time?
I am looking forward to hearing your thoughts.
Best,
Phil
rake instances
Every time you run rake, you are running a new instance of your ruby server, with all associated memory and related load-dependency usages. Look in your Rakefile for the inits.
your number of instances is limited by the memory and CPU used
you must profile each instance's memory and CPU usage to know how many can be run
you could write a program to monitor and calculate what's possible, but heuristics will work better for one-off, and first experiments.
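A rough sketch of how the work could be split across several rake instances (the task name, model, and webservice call are hypothetical; the shard index and count are passed in from the shell):

# lib/tasks/recalculate.rake
namespace :recalculate do
  desc "Process one shard of the ~7M records, e.g. rake recalculate:shard[0,8]"
  task :shard, [:index, :total] => :environment do |_t, args|
    index = Integer(args[:index])
    total = Integer(args[:total])

    # find_each keeps memory flat by loading records in batches
    Record.where("id % ? = ?", total, index).find_each(batch_size: 1_000) do |record|
      result = WebserviceClient.calculate(record)  # the slow external call
      record.update!(result: result)
    end
  end
end

You could then start, say, eight of these in parallel (one per shard index) from separate shells or a process manager, and watch top to see whether memory or CPU becomes the bottleneck first.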
datastore
heuristically explore your database capacity, too.
watch for write-locks that create blocking
watch for slow reads due to missing indices
look at your postgres configs to see concurrency limits, cache size, etc.
.save
each rake task is its own ruby server, so multiple active_record.save actions impact:
blocking/waiting due to write-locking
one instance reading 'old' data that another instance's .save has since updated
operational complexity
the number of records (7MM) is just a multiplier for all of the operations that occur upon each record. The operational complexity is the source of limitation, since theoretically, running 7MM workers would solve the problem in the minimum timescale
if 180hr is accurate (dubious), then (180 * 60 * 60 * 1000) / 7,000,000 ≈ 92.57 ms per record.
Look for any shared-resource that is an IO blocker.
look for any common calculation that you can do in advance and cache. A lookup beats a calc.
errata
leave headroom for base OS processes. These will vary by environment; you mention AWS, but it's best to learn conceptually how to monitor any system for activity
run top in a separate screen/terminal while the rakes are running.
Prefer to run two tops in different screens: sort one by memory, sort the other by CPU
have a way to monitor the rakes
watch for events that bubble up the top processes.
if you do this long / well enough, you've profiled your headroom
run more rakes to fill your headroom
don't overrun your memory or you'll get swapping
You may want to consider beanstalk instead, but my guess is you'll find that more complicated than learning all these good foundations, first.
So I have a Resque worker that calls an API, the issue is the API has a rate limit of 2 requests per second.
Is there a way to add a delay between each job processed in a specific queue?
P.S. the queue could have thousands of pending jobs.
Why not sleep for a given amount of time at the end of the process? Well, perhaps you want your resque worker to be doing something useful instead. In CPU time, half a second is a lot of time - you could have done something useful there, like process a job from another queue that's not rate limited.
I have this same problem myself, so I'm motivated to find a solution. It seems like there are two easy-ish ways to do it. The first idea is to use resque scheduler and pre-compute the time to run the job at before inserting it. This seems error-prone to me. The second is to use a gem like https://github.com/flyerhzm/resque-restriction (disclaimer: just found it through some googling. haven't used it yet) and rate-limit as you pull jobs off the queue. Seems like a robust solution in theory. Note that if you can't execute the job yet, it never comes off the queue, so you'll pull something else instead - much more efficient use of your workers.
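A minimal sketch of the first idea, using resque-scheduler and pre-computing a 0.5s spacing so no more than 2 jobs become runnable per second (the job class and API client are hypothetical):

require 'resque-scheduler'

class ApiCallJob
  @queue = :api_calls

  def self.perform(record_id)
    ThirdPartyApi.fetch(record_id)  # the rate-limited call
  end
end

# Pre-compute the offsets at enqueue time -- the error-prone part mentioned above
record_ids.each_with_index do |record_id, i|
  Resque.enqueue_in(i * 0.5, ApiCallJob, record_id)
end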
Per my comment, I'd recommend just performing a sleep for a given number of seconds at the end of each Resque job's perform method.
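In its simplest form that looks like this (a hypothetical job class; sleeping 0.5s keeps each worker at or under 2 requests per second):

class ApiCallJob
  @queue = :api_calls

  def self.perform(record_id)
    ThirdPartyApi.fetch(record_id)  # the rate-limited call
    sleep 0.5                       # throttle: at most 2 jobs per second per worker
  end
end

Note that this only limits each worker; if you run several workers on this queue, the combined rate can still exceed the API limit.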
I'm looking for some advice here. My school's student section registration process is online and involves around 6,000 students.
Seating is assigned on a first-come, first-served basis. Every year they open the site at noon and floods of people get on and try to register as fast as possible to get good seats. Every year, without fail, the server crashes and everyone is mad.
After several years of being frustrated myself, I've offered to redo their registration system.
My plan is to rewrite it in Ruby on Rails and use Heroku for hosting.
Does a heroku dyno only handle one request at a time?
Heroku scales up to 50 dynos. Will that be enough to handle around 6,000 users with about 5 pageviews per transaction in a short amount of time, say a half hour?
Any helpful strategies or tips you can give me before I dive into this project?
Does a heroku dyno only handle one request at a time?
Yes. Heroku dynos are single threaded.
Heroku scales up to 50 dynos. Will that be enough to handle around 6,000 users with about 5 pageviews per transaction in a short amount of time, say a half hour?
This depends on how fast your page loads. For argument's sake, let's pretend it takes 2s per page request (as per Google Analytics' recommendation) and you need to serve 6,000 users x 5 page views / 30 minutes = 1,000 page views per minute.
At 2s per page load, one single dyno would load 30 page views per minute. At 50 dynos, this would be 1500 page views per minute. This would obviously allow you to exceed your overall goal and leave you some room for error, but if all 6000 users are hitting the page at once then a single Heroku app may not be able to keep up depending on your timeout. You would need to implement a user queue system - explained below.
Any helpful strategies or tips you can give me before I dive into this project?
All that said, a 2s load time may vary depending on the assets your page needs to load, how much it needs to interact with the database, its queries, caching, etc. Your pages could also potentially serve much faster.
You also need to worry about the initial hit of all the users. This could be taken care of via a first come first serve queue system - similar to that used by Ticketmaster if you've ever used their site. This could be accomplished via AWS SQS or your preference of queue system.
With a user queue and caching of your assets and common database queries, you should be able to accomplish this with 50 or less dynos.
EDIT: I'm taking your word for it that Heroku will run 50 web dynos. They show 24 as max on their pricing page, but I cannot find any info one way or another.
Does a heroku dyno only handle one request at a time?
It depends on web server you use (https://devcenter.heroku.com/articles/dynos#dynos-and-requests). If you want more concurrency within a dyno, I'd suggest taking a look at something like Puma.
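For example, a config/puma.rb along the lines of Heroku's recommended setup lets each dyno serve several requests concurrently (the worker and thread counts below are just illustrative defaults):

# config/puma.rb
workers Integer(ENV.fetch("WEB_CONCURRENCY", 2))     # OS processes per dyno
threads_count = Integer(ENV.fetch("RAILS_MAX_THREADS", 5))
threads threads_count, threads_count                 # min, max threads per process

preload_app!
port        ENV.fetch("PORT", 3000)
environment ENV.fetch("RACK_ENV", "development")

on_worker_boot do
  # Re-establish database connections in each forked worker
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord)
end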
Heroku scales up to 50 dynos. Will that be enough to handle around 6,000 users with about 5 pageviews per transaction in a short amount of time, say a half hour?
Any helpful strategies or tips you can give me before I dive into this project?
You can have more than 50 dynos. A specific answer for your app is going to be way better than a guess or generalization. Run a load test against your site (e.g., using Blitz) and find out the real numbers. Costs for add-ons are pro-rated per second, so you only pay for the period you have it installed; just make sure you uninstall or downgrade Blitz once you've finished your test.
Currently, my Nokogiri script iterates through Google's SERPs until it finds the position of the target website. It does this for each keyword for each website that each user specifies (users are capped on the number of websites & keywords they can track).
Right now, it's run in a rake task that's hard-scheduled every day and batches all scrapes at once by looping through all the websites in the database. But I'm concerned about scalability and about swarming Google with a batch of requests.
I'd like a solution that scales and can run these scrapes over the course of the day. I'm not sure what kind of solution is available or what I'm really looking for.
Note: The amount of websites/keywords change from day to day as users add and delete their websites and keywords. I don't mean to make this question too superfluous, but is this the kind of thing Beanstalkd/Stalker (job queuing) can be used for?
You will have to balance two issues: scalability for lots of users versus Google shutting you down for scraping in violation of their terms of use.
So your system will need to be able to distribute tasks across various different IPs to conceal your bulk scraping, which suggests at least two levels of queuing: one to manage all the jobs and send them to each separate IP for searching and collecting results, and a queue on each separate machine to hold the requested searches until they are executed and the results returned.
I have no idea what Google's thresholds are (I am sure they don't advertise them), but exceeding them and getting cut off would obviously be devastating for what you are trying to do, so your simple looping rake task is exactly what you shouldn't do once you pass a certain number of users.
So yes, use a queue of some sort, but realize that you probably have a different goal from the typical goal of a queue: you want to deliberately delay jobs, rather than offload work to avoid UI delays. So you will be seeking ways to slow the queue down rather than have it execute job after job as they arrive.
So based on a cursory inspection of DelayedJob and BackgroundJobs it looks like DelayedJob has what you would need with the run_at attribute. But I am only speculating here and I am sure an expert would have more to say.
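For instance, run_at lets you pre-compute a spread-out schedule when you enqueue the scrapes (a sketch; the model, job class, and 24-hour spread are hypothetical):

# Spread today's scrapes evenly across 24 hours instead of batching them all at once
keywords = Keyword.all.to_a                       # hypothetical model of tracked keywords
interval = (24.0 * 60 * 60) / keywords.size       # seconds between scrapes

keywords.each_with_index do |keyword, i|
  Delayed::Job.enqueue(SerpPositionJob.new(keyword.id),   # hypothetical Struct-based job
                       run_at: Time.now + (i * interval))
end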
If I'm understanding correctly, it sounds like one of these tools might fit the bill:
Delayed_job: https://github.com/tobi/delayed_job
or
BackgroundJobs: http://codeforpeople.rubyforge.org/svn/bj/trunk/README
I've used both of them, and found them easy to work with.
There are definitely some background job libraries that might work.
delayed_job: https://github.com/collectiveidea/delayed_job (beware of the unmaintained branch from tobi!)
resque: https://github.com/defunkt/resque
However, you might think about just scheduling a cron job that runs more times during the day and processes fewer items per run.
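That could be as simple as a crontab entry that calls a rake task which only picks up a slice of the pending websites on each run (the task name, path, and schedule here are illustrative):

# Run the scrape batch every 2 hours, processing only a slice of the sites each time
0 */2 * * * cd /var/www/myapp && bundle exec rake scrape:batch RAILS_ENV=production >> log/cron.log 2>&1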
SaaS solution: http://momentapp.com/ "Launch delayed jobs with scheduled http requests" - disclaimer a) in beta b) I am not affiliated with this service